Locator: 48765COLOSSUS.
When it comes to AI, there are two camps:
- it's all hype, or it's simply another commodity; or,
- it's real, and right now it's all about Nvidia (NVDA).
I'm in the second camp.
Right now, the most exciting states for growth with a "can-do" attitude:
- Texas
- Tennessee
- possibly, North Carolina
- possibly, Florida
Others that might join those four:
- Utah
- Georgia
With the right leadership, the following could be contenders:
- New Mexico
- Colorado
- Pennsylvania
- Ohio -- a bit of a stretch
States that have deliberately dropped out of the race:
- California
- Oregon
- Washington state
Musk's Colossus, Memphis, Tennessee:
Musk's xAI: huge, huge, huge. Biggest supercomputer in the world. Repeat: biggest supercomputer in the world.
Needs a stand-alone post. Memphis, TN. Fastest-built, biggest data center ever. Has raised as much as $6 billion in new capital. New post-raise valuation seen at $50 billion+. Amazing. Uses only Nvidia chips. Great relationship between Jensen Huang and Elon Musk. They're gonna call it Colossus.
Bullets:
- Memphis, Tennessee
- largest supercomputer in the known universe
- 100,000 Nvidia GPUs, to expand to 200,000 Nvidia GPUs
- will expand with the newest GPUs
- partnered with Jen-Hsun Huang
- launched in September 2024; one month later, already expanding
- will be the fastest a supercomputer facility has ever been built
- ambitious growth, occurring merely 122 days after the facility’s initial announcement on June 5, 2024
The article at the link:
Meet the mammoth AI supercomputer featuring 100,000 Nvidia H100 GPUs, poised for a significant performance boost.
Built in Memphis, Tennessee, in just four months, Colossus utilizes Nvidia’s H100 GPUs, offering up to nine times the speed of the previous A100 models, with each H100 GPU delivering up to 2,000 teraflops of performance. The initial setup integrates 100,000 H100 GPUs, with plans to add 50,000 more H100s alongside 50,000 new H200 GPUs.
Photo at the link: Elon Musk, co-founder and CEO of Tesla Motors Inc., with Jen-Hsun Huang, CEO of Nvidia Corporation at the GPU Technology Conference in San Jose, California.
On October 4, 2024, the Greater Memphis Chamber approved the expansion of xAI’s AI training facility and supercomputer, Colossus. Launched just a month prior, Colossus quickly gained recognition as the world’s largest GPU supercomputer, initially featuring 100,000 Nvidia GPUs.
Led by Elon Musk, xAI aims to double Colossus’s capacity to 200,000 GPUs, including 50,000 advanced H200s. This ambitious growth, occurring merely 122 days after the facility’s initial announcement on June 5, showcases the rapid pace and scale typical of Musk’s ventures.
The combined array could theoretically achieve about 497.9 exaflops (497,900,000 teraflops), setting new benchmarks in supercomputing power. While this enhanced capacity far exceeds current supercomputing records, real-world performance may be limited by system integration, communication overhead, power consumption, and cooling challenges.
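For anyone who wants to sanity-check that 497.9-exaflop figure, here's a minimal back-of-the-envelope sketch in Python. The 2,000-teraflop H100 number comes from the article; the H200 per-GPU figure is my own placeholder assumption, since the article doesn't supply one:

# Back-of-the-envelope check on the article's 497.9-exaflop claim.
H100_TFLOPS = 2_000          # per the article: "up to 2,000 teraflops" per H100
H200_TFLOPS_ASSUMED = 4_000  # assumption only; the article gives no H200 figure

h100_count = 150_000         # 100,000 installed + 50,000 planned
h200_count = 50_000          # planned

total_tflops = h100_count * H100_TFLOPS + h200_count * H200_TFLOPS_ASSUMED
total_exaflops = total_tflops / 1_000_000  # 1 exaflop = 1,000,000 teraflops

print(f"Theoretical peak: {total_exaflops:,.1f} exaflops ({total_tflops:,} teraflops)")
print(f"Implied blended rate: {497.9e6 / 200_000:,.1f} teraflops per GPU")

At a flat 2,000 teraflops per GPU, 200,000 GPUs would pencil out to only 400 exaflops, so the article's 497.9 figure works out to a blended ~2,490 teraflops per GPU, evidently crediting the H200s with higher per-GPU throughput.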
Colossus also surpasses major competitors, with xAI’s planned 200,000 GPUs outstripping Google AI’s 90,000 and Meta AI’s 70,000 GPUs. The expansion aims to boost xAI’s ability to develop and refine AI models like Grok 3, which is set to compete with GPT-5, OpenAI’s highly anticipated breakthrough in language model technology. Grok 3, xAI’s most advanced chatbot, is expected to launch in December 2024.
Maps:
- here.