Locator: 49777TECH.
Before we get started: another reason I like ChatGPT -- it points me to articles I otherwise would have missed. In this case, "EDA Vendors," Forbes, April 29, 2025 -- earlier this year, and yet this is how fast things are moving. Link here.
This page still needs to be formatted and proofread; there will be a lot of content and typographical errors, but I wanted to get it posted immediately for various reasons.
Note: this was all typed out, one letter / one character at a time. There was absolutely no "cut and paste" on this page (except the one URL).
Below the fold, way down below, is an extensive listing and explanation of the Google support system.
1. I sent this exact list to ChatGPT to see how/if AWS differs. The answer was interesting.
2. I then asked what suppliers are the true chokepoints vs interchangeable.
The true choke points:
- ASML (EUV lithography)
- TSMC (leading-edge foundry)
- Nvidia (for now)
- Synopsys + Cadence (EDA duopoly)
- ARM (ISA licensing)
Soft choke points:
- HBM memory suppliers (SK Hynix, Micron, Samsung)
- Advanced packaging (CoWoS, Amkor, ASE)
- Optical components (Lumentum, Coherent)
- High-speed networking silicon (Broadcom, Marvell)
Largely interchangeable (relatively speaking):
- Storage (Western Digital, Seagate)
- Data center infrastructure (Vertiv, Schneider, Eaton)
- Foundry-adjacent services
A key insight most people miss:
- hyperscalers do not fear competition from each other.
- They fear capacity denial at choke points.
That's why:
- everyone courts TSMC
- everyone invests in custom silicon
- everyone hoards HBM
- everyone diversifies optics suppliers even at high cost
The real power map looks like this:
- ASML --> TSMC --> EDA --> ARM --> Nvidia/HBM --> Hyperscalers --> Everyone else
Tech jargon: EDA, link here.
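An aside from me: the choke-point idea is really a graph property. Here's a minimal sketch in Python -- purely illustrative, my own framing, and the names (POWER_MAP, reachable) are made up -- that models the power map above as a directed dependency graph. A choke point is just a node whose removal cuts every path from the most upstream supplier (ASML) down to the hyperscalers.

```python
# Illustrative sketch: the power map above as a directed dependency graph.
# Edges point downstream (supplier --> customer). A "choke point" is a
# node whose removal leaves the hyperscalers unreachable from ASML.

POWER_MAP = {
    "ASML": ["TSMC"],
    "TSMC": ["EDA"],
    "EDA": ["ARM"],
    "ARM": ["Nvidia/HBM"],
    "Nvidia/HBM": ["Hyperscalers"],
    "Hyperscalers": ["Everyone else"],
    "Everyone else": [],
}

def reachable(graph, start, removed=frozenset()):
    """Return the set of nodes reachable from `start`, skipping `removed`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

# Knock out each intermediate supplier and see whether the hyperscalers
# can still be reached from the top of the chain.
for node in ["TSMC", "EDA", "ARM", "Nvidia/HBM"]:
    cut_off = "Hyperscalers" not in reachable(POWER_MAP, "ASML", {node})
    print(f"{node:12} choke point: {cut_off}")
```

In a pure chain like this, every intermediate node is a choke point. The "largely interchangeable" suppliers would show up as parallel edges in a fuller graph; knock one out and the paths still hold, which is exactly why they have no pricing power.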
Why this matters (especially for investors and policy makers):
- pricing power lives at choke points
- geopolitical risk lives at choke points
- innovation bottlenecks live at choke points
- most "AI arms race" headlines are downstream noise
This was my next question, a completely different question, because so much of the above relates to AI LDCs, LLMs, and cloud development and management, WHEREAS Apple, as a consumer electronics company, is completely different. But you mentioned the importance of CUSTOM SILICON. Apple seems to have been the leader in custom silicon. Any thoughts on that?
The short answer: that's a great pivot -- and it actually clarifies the entire custom-silicon story rather than changing it. Apple isn't an exception to the rule: Apple is the proof case.
- Apple understood custom silicon before "AI infrastructure" existed.
- Apple's custom silicon succeeded where others failed:
- vertical integration
- volume certainty
- long-term patience
Apple has the same choke points: same list, different leverage. Apple still depends on:
- TSMC (absolutely)
- ASML (indirectly)
- ARM (ISA)
- Cadence / Synopsys (EDA)
Apple does not escape the choke points; Apple addresses them differently:
- consumes them earlier
- locks them longer
- integrates them more deeply
Apple is TSMC's most strategically important customer.
Then a long discussion on Apple vs the hyperscalers with regard to AI. Bottom line, the irony: the hyperscalers are becoming more like Apple.
Then a long discussion on why Apple's lead is durable. The reason: it is very, very difficult for others to copy what Apple does.
Then the bottom line: another long discussion, but at the end:
- Apple is not different from the custom silicon story -- Apple is the clearest version of it.
*********************************
The Alphabet Empire
From Shay: link here.
The infrastructure behind the Alphabet empire:
AI Chips
- Broadcom helps Google design custom TPUs so it can lower AI chip costs and avoid Nvidia pricing.
- TSMC: the only foundry currently capable of producing Google's leading-edge TPUs at scale with acceptable yields.
- ARM licenses the CPU architecture Google uses alongside TPUs for AI inference and control.
- Cadence Design Systems sells the software Google uses to design each new generation of AI chips.
- Synopsys provides chip testing / IP so Google can ship complex TPUs without failures.
- Amkor Technology packages TPUs + memory together so Google can run them at data-center scale.
AI Networking
- Astera Labs supports high-speed rack-level connectivity as Google scales TPU pods.
- Marvell Technology supplies custom networking chips inside Google's AI data center hardware.
- Arista Networks supplies switches that route traffic inside Google's AI clusters.
- Ciena Corp moves data between Google's data centers over long distances.
AI Utility
- Cipher Mining supplies energy-backed sites supporting large AI workloads.
- TeraWulf operates power-dense infrastructure where large-scale compute can be deployed.
AI Memory
- Micron adds DRAM + HBM supply as Google expands AI inference.
- Western Digital stores the massive datasets Google uses to train AI models.
AI Optics
- Lumentum Holdings supplies optical components used inside Google's AI data centers.
- Coherent Corp provides lasers needed for high-speed optical data transmission.
AI Power
- Vertiv Holdings provides cooling infrastructure that keeps Google's AI hardware online.
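One more illustrative sketch from me, tying the two halves of this page together: tag each of Google's suppliers above with the choke-point tier from the list at the top of the page. The tier assignments come straight from that list; suppliers that list didn't tier (Astera Labs, Arista, Ciena, Cipher Mining, TeraWulf) are marked "untiered". Again, the names in the code are made up for illustration.

```python
# Cross-reference: Google's supplier map tagged with the choke-point
# tiers from the list at the top of this page. Anything that list
# didn't mention defaults to "untiered".

TIER = {
    "TSMC": "true", "ARM": "true", "Synopsys": "true", "Cadence": "true",
    "Micron": "soft", "Amkor": "soft", "Lumentum": "soft",
    "Coherent": "soft", "Broadcom": "soft", "Marvell": "soft",
    "Western Digital": "interchangeable", "Vertiv": "interchangeable",
}

GOOGLE_SUPPLIERS = {
    "AI Chips": ["Broadcom", "TSMC", "ARM", "Cadence", "Synopsys", "Amkor"],
    "AI Networking": ["Astera Labs", "Marvell", "Arista", "Ciena"],
    "AI Utility": ["Cipher Mining", "TeraWulf"],
    "AI Memory": ["Micron", "Western Digital"],
    "AI Optics": ["Lumentum", "Coherent"],
    "AI Power": ["Vertiv"],
}

# Print each layer with every supplier's tier alongside it.
for layer, suppliers in GOOGLE_SUPPLIERS.items():
    tags = [f"{s} ({TIER.get(s, 'untiered')})" for s in suppliers]
    print(f"{layer}: {', '.join(tags)}")
```

Read this way, the AI Chips layer is where Google is most exposed: four of its six suppliers there sit at true choke points.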
