Monday, July 14, 2025

AI: META -- Mark Zuckerberg "ALL IN" -- More To Follow -- We'll Have The Story Later Today -- But This Is Huge

Locator: 48732META.

This is to serve Mark Zuckerberg's "own interests."

Link at Reuters.

  • Meta building several multi-gigawatt AI data centers
  • company has formed Superintelligence Labs to unify AI efforts
  • CEO Zuckerberg has been leading a talent war for top engineers
  • one of Meta's AI superclusters will rival Manhattan in size -- are investors even paying attention?

July 14 (Reuters) - Mark Zuckerberg said on Monday that Meta Platforms would spend hundreds of billions of dollars to build several massive AI data centers for superintelligence, intensifying his pursuit of a technology he has chased with a talent war for top engineers.

Saying it again:

  • multi-gigawatt ...
  • one supercluster will rival Manhattan in size --
  • that's a lot of blades

********************************
Review

Tech:

Type of AI data center:

  • small (edge / enterprise): ~ 100 TB = 0.1 mega GB
  • medium (cloud region): ~ 1 - 10 PB = 1 - 10 mega GB
  • hyperscale (e.g., AWS, MSFT, Google, META, Nvidia): 10 - 100+ PB = 10 - 100+ mega GB

1 PB (petabyte) = one million GB = 1 mega GB
1 EB (exabyte) = 1,000 PB = 1 billion GB
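
For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch of the unit conversions above. The tier names and figures are just the ones from the list; nothing here comes from Meta's actual specs.

```python
# Storage unit conversions (decimal units, as used in data center specs):
# 1 PB = 1,000 TB = 1,000,000 GB; 1 EB = 1,000 PB = 1,000,000,000 GB
GB_PER_TB = 1_000
GB_PER_PB = 1_000_000
GB_PER_EB = 1_000_000_000

# Tier figures from the list above, expressed in plain GB
tiers = [
    ("small (edge / enterprise), ~100 TB", 100 * GB_PER_TB),   # 0.1 million GB
    ("medium (cloud region), ~1 PB",       1 * GB_PER_PB),     # 1 million GB
    ("medium (cloud region), ~10 PB",      10 * GB_PER_PB),    # 10 million GB
    ("hyperscale, ~100 PB",                100 * GB_PER_PB),   # 100 million GB
    ("1 EB, for reference",                1 * GB_PER_EB),     # 1 billion GB
]

for name, gb in tiers:
    print(f"{name}: {gb:,} GB")
```

Running it prints each tier in plain GB, which is the quickest way to see why "1 PB = 1 million GB" is the conversion that matters at hyperscale.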

Real-world examples (2024 - 2025):

  • Nvidia's Blackwell-powered AI supercomputer
    • tens of thousands of GPUs (Nvidia)
    • requires 50 - 100+ PB storage for training LLMs
  • OpenAI training GPT-4:
    • estimates suggest hundreds of petabytes of data processed across compute clusters
  • Meta's AI Research SuperCluster:
    • 16,000 GPUs (Nvidia), over 1 EB (1 billion GB) of storage projected when fully deployed
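
A rough back-of-the-envelope division (mine, not Reuters' or Meta's) using the RSC figures above shows how much cluster storage that works out to per GPU:

```python
# Back-of-the-envelope: average storage per GPU for Meta's AI Research
# SuperCluster, using the figures cited above (16,000 GPUs, ~1 EB projected).
GB_PER_EB = 1_000_000_000      # 1 exabyte = 1 billion GB (decimal units)

rsc_gpus = 16_000              # Nvidia GPUs in the RSC
rsc_storage_gb = 1 * GB_PER_EB # ~1 EB projected at full deployment

per_gpu_gb = rsc_storage_gb / rsc_gpus
print(f"~{per_gpu_gb:,.0f} GB (~{per_gpu_gb / 1_000:,.1f} TB) of storage per GPU")
# -> ~62,500 GB, i.e. roughly 62.5 TB of cluster storage per GPU
```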

Bottom line:

  • The average modern AI-focused data center—especially for large models—easily exceeds 1 million GB (1 PB) of storage, with hyperscalers using tens to hundreds of petabytes.