Locator: 48732META.
This is to serve Mark Zuckerberg's "own interests."
July 14 (Reuters) - Mark Zuckerberg said on Monday that Meta Platforms would spend hundreds of billions of dollars to build several massive AI data centers for superintelligence, intensifying his pursuit of a technology he has chased with a talent war for top engineers.
Restating the key claims:
- multi-gigawatt ...
- one supercluster will rival Manhattan in size
- that's a lot of blades (rough math below)
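A back-of-envelope sketch in Python for the blade count. Every input here is an assumption, not a Meta figure: 2 GW stands in for "multi-gigawatt", ~70% of site power is assumed to reach IT load, and each blade is assumed to draw ~1 kW.

# Back-of-envelope blade count. All inputs are assumptions, not Meta figures.
site_power_w = 2_000_000_000   # 2 GW placeholder for "multi-gigawatt"
it_fraction = 0.7              # assumed fraction of site power reaching IT load
blade_w = 1_000                # assumed average draw per server blade (W)
blades = site_power_w * it_fraction / blade_w
print(f"~{blades:,.0f} blades")  # -> ~1,400,000 blades

Even with generous assumptions, a multi-gigawatt site lands in the range of a million-plus blades.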
********************************
Review
Tech:
- AI is tracked here.
- Supercomputers are tracked here.
- LDCs are tracked here.
Type of AI data center:
- small (edge / enterprise): ~100 TB = 0.1 mega GB
- medium (cloud region): ~1-10 PB = 1-10 mega GB
- hyperscale (e.g., AWS, MSFT, Google, META, Nvidia): 10-100+ PB = 10-100+ mega GB
1 PB (petabyte) = one million GB = 1 mega GB
1 EB (exabyte) = 1,000 PB = 1 billion GB
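A quick Python sanity check of the conversions above; the tier sizes are the rough figures from these notes, not measured values (decimal units throughout, so 1 PB = 1,000,000 GB).

# Sanity-check the storage-tier math above (decimal units).
TB = 1_000            # GB per terabyte
PB = 1_000_000        # GB per petabyte ("1 mega GB" in these notes)
EB = 1_000_000_000    # GB per exabyte

tiers = {
    "small (edge / enterprise)": 100 * TB,   # ~100 TB
    "medium (cloud region)": 10 * PB,        # upper end of 1-10 PB
    "hyperscale": 100 * PB,                  # upper end of 10-100+ PB
}
for name, gb in tiers.items():
    print(f"{name}: {gb:,} GB = {gb / PB:g} mega GB")
print(f"1 EB = {EB // PB:,} PB = {EB:,} GB")  # 1,000 PB = 1 billion GB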
Real-world examples (2024 - 2025):
- Nvidia's Blackwell-powered AI supercomputer
- tens of thousands of GPUs (Nvidia)
- requires 50 - 100+ PB storage for training LLMs
- OpenAI training GPT-4:
- estimates hundreds of petabytes of data processed across computer clusters
- Meta's AI Research SuperCluster:
- 16,000 GPUs (Nvidia); over 1 EB (1 billion GB) of storage projected when fully deployed (quick math below)
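Illustrative arithmetic on the RSC figures above, just dividing the two numbers in the note:

# Implied per-GPU storage from the RSC numbers above (illustrative only).
storage_gb = 1_000_000_000   # ~1 EB expressed in GB
gpus = 16_000
per_gpu_gb = storage_gb / gpus
print(f"~{per_gpu_gb:,.0f} GB per GPU (~{per_gpu_gb / 1_000:.1f} TB)")
# -> ~62,500 GB per GPU (~62.5 TB)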
Bottom line:
- The average modern AI-focused data center—especially for large models—easily exceeds 1 million GB (1 PB) of storage, with hyperscalers using tens to hundreds of petabytes.