
Monday, April 29, 2024

CPUs, GPUs, Cores, Threads, And All That Jazz -- Part 2 -- April 29, 2024

Locator: 47103TECH.

Updates

June 21, 2024:

Pet peeve: when knowledgeable (?) financial analysts refer to "chips" as commodities. 

Tech: wow, things move quickly. 

It was very fortuitous that I did a "deep dive" into CPUs, GPUs, and all that jazz (see original post below) back on April 29, 2024 -- wow, that was less than two months ago. In less than two months, we have gone from talking about 7-nm chips to talking about 3-nm CPUs, GPUs, and NPUs, along with cores, and now we have performance cores and efficiency cores. And "they're" all chips. And anyone who says chips are commodities displays a level of ignorance that is mind-boggling. I'm not saying that the fortunes of individual chip makers won't change over the years, but chip makers are not in the commodity business.

Phones: is this what it comes down to? This needs to be fact-checked:

  • Apple iPhones: in-house chips, currently A-series; ARM-based.
  • Android: Qualcomm Snapdragon; ARM-based.
  • Huawei: Kirin series; Chinese-designed chips made by Semiconductor Manufacturing International Corp (SMIC), a Chinese company.

Battery: years ago, Steve Jobs said his biggest challenge was batteries. 

To the best of my knowledge, there have not been many advances in battery technology -- at least there are not many headlines.
My hunch: at some point, Apple pivoted. Realizing that solving the battery problem (SUPPLY) was not going to happen any time soon, they turned to DEMAND. That meant specialized chips, more efficient chips, smaller chips, chips that can be placed closer together, and chips that work better together (are optimized for each other). All of those factors could reduce DEMAND and thereby lengthen the intervals between charging. The holy grail? Twenty-four hours of high-demand use (gaming and movies) on the smallest iPhones (with a side benefit: less heat). Specialized chips: CPUs, GPUs, NPUs. More efficient chips: performance cores and efficiency cores. Smaller chips: 3-nm. Placed on one chip: SoC. Optimized: in-house, synergistic software.
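To make the DEMAND side concrete: battery life is roughly battery capacity divided by average power draw, so every watt shaved off the average stretches the interval between charges. A minimal sketch of that arithmetic, with hypothetical numbers (the battery capacity, baseline draw, and assumed 40-percent savings are mine, not Apple's):

# Back-of-the-envelope: runtime (hours) = battery capacity (Wh) / average draw (W).
# All figures are hypothetical, for illustration only.

BATTERY_WH = 15.0  # assumed battery capacity, watt-hours

def runtime_hours(avg_draw_watts: float) -> float:
    """Hours of use at a given average power draw."""
    return BATTERY_WH / avg_draw_watts

baseline_draw = 1.5                        # assumed average draw, watts, with everything on performance cores
improved_draw = baseline_draw * (1 - 0.4)  # assume efficiency cores + 3-nm + SoC integration trim 40 percent

print(f"Baseline: {runtime_hours(baseline_draw):.1f} hours")  # 10.0 hours
print(f"Improved: {runtime_hours(improved_draw):.1f} hours")  # 16.7 hours

The specific numbers don't matter; the point is that every item above -- specialized chips, efficiency cores, 3-nm, SoC, optimized software -- attacks the denominator.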

Original Post

So, humor me. We might learn something. I know nothing about this, but it could be an interesting rabbit hole to explore.

This goes back to one of my better blogs on "CPUs, GPUs, cores, threads, and all that jazz." 

In the first sentence I mentioned FinFET and GAAFET but, at the time, didn't read further (TL;DR). Tonight I was curious, so I followed the link. Fortunately, the link still worked and the article was still accessible.

Now I understand "FinFET" and, to some extent, "GAAFET."

This is not a trivial subject for an investor. This article was posted April 17, 2023, one year ago. This paragraph jumped out at me:

The leading semiconductor manufacturers (TSMC in Taiwan, Samsung in Korea, and Intel in the U.S.) are on the cusp of introducing a major change in transistor morphology. [Remember: this was back in 2023 -- last year.]

This new configuration is called GAA, GAAFET (Gate All Around), RibbonFET, MBC, MBCFET (Multi-Bridge-Channel), Nanosheet transistor, or Nanowire transistor, depending on the author. In this article, I will use the term GAAFET.

This seems important to know if one wants a bit of insight into investing in this arena.

Time to get the "reader's digest" version of all this. To wiki we go: link here. It will take a while to go through all of this, but look at just this one section:

The first FinFET transistor type was called a "Depleted Lean-channel Transistor" (DELTA) transistor, which was first fabricated in Japan by Hitachi Central Research Laboratory's Digh Hisamoto, Toru Kaga, Yoshifumi Kawamoto and Eiji Takeda in 1989.

The gate of the transistor can cover and electrically contact the semiconductor channel fin on both the top and the sides, or only on the sides. The former is called a tri-gate transistor and the latter a double-gate transistor. A double-gate transistor can optionally have each side connected to two different terminals or contacts. This variant is called a split transistor. This enables more refined control of the operation of the transistor.

Indonesian engineer Effendi Leobandung, while working at the University of Minnesota, published a paper with Stephen Y. Chou at the 54th Device Research Conference in 1996 outlining the benefit of cutting a wide CMOS transistor into many channels with narrow width to improve device scaling and increase device current by increasing the effective device width.

This structure is what a modern FinFET looks like. Although some device width is sacrificed by cutting it into narrow fins, the conduction along the side walls of the narrow fins more than makes up for the loss, for tall fins.

The device had a 35 nm channel width and 70 nm channel length.

The potential of Digh Hisamoto's research on DELTA transistors drew the attention of the Defense Advanced Research Projects Agency (DARPA), which in 1997 awarded a contract to a research group at the University of California, Berkeley to develop a deep sub-micron transistor based on DELTA technology.

The group was led by Hisamoto [Hitachi] along with TSMC's Chenming Hu. The team made the following breakthroughs between 1998 and 2004.[16]

  • 1998 – N-channel FinFET (17 nm) – Digh Hisamoto, Chenming Hu, Tsu-Jae King Liu, Jeffrey Bokor, Wen-Chin Lee, Jakub Kedzierski, Erik Anderson, Hideki Takeuchi, Kazuya Asano
  • 1999 – P-channel FinFET (sub-50 nm) – Digh Hisamoto, Chenming Hu, Xuejue Huang, Wen-Chin Lee, Charles Kuo, Leland Chang, Jakub Kedzierski, Erik Anderson, Hideki Takeuchi
  • 2001 – 15 nm FinFET – Chenming Hu, Yang-Kyu Choi, Nick Lindert, P. Xuan, S. Tang, D. Ha, Erik Anderson, Tsu-Jae King Liu, Jeffrey Bokor
  • 2002 – 10 nm FinFET – Shibly Ahmed, Scott Bell, Cyrus Tabery, Jeffrey Bokor, David Kyser, Chenming Hu, Tsu-Jae King Liu, Bin Yu, Leland Chang
  • 2004 – High-κ/metal gate FinFET – D. Ha, Hideki Takeuchi, Yang-Kyu Choi, Tsu-Jae King Liu, W. Bai, D.-L. Kwong, A. Agarwal, M. Ameen

They coined the term "FinFET" (fin field-effect transistor) in a December 2000 paper, using it to describe a non-planar, double-gate transistor built on an SOI substrate.
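Stepping out of the excerpt for a moment: the reason fins help is that the gate wraps the channel on the sides (and, in a tri-gate device, the top as well), so a tall, narrow fin delivers far more effective channel width than the silicon footprint it occupies. A rough sketch of that arithmetic, using the usual approximation and hypothetical fin dimensions (not taken from the article or the wiki entry):

# Effective channel width of a single fin, per the standard approximation:
#   tri-gate (gate on top and both sides):  W_eff = 2 * H_fin + W_fin
#   double-gate (gate on the sides only):   W_eff = 2 * H_fin
# The fin dimensions below are hypothetical, in nanometers.

def tri_gate_width(h_fin: float, w_fin: float) -> float:
    return 2 * h_fin + w_fin

def double_gate_width(h_fin: float) -> float:
    return 2 * h_fin

h_fin, w_fin = 50.0, 7.0  # assumed fin height and width

print(f"Tri-gate:    {tri_gate_width(h_fin, w_fin):.0f} nm of effective width")  # 107 nm
print(f"Double-gate: {double_gate_width(h_fin):.0f} nm of effective width")      # 100 nm
print(f"Footprint:   {w_fin:.0f} nm")                                            # 7 nm

That is the "side wall more than makes up for the loss" point in the excerpt: a 7-nm-wide, 50-nm-tall fin contributes roughly 107 nm of conducting width.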

I'll come back to this later, but probably won't post any more blogs on it for a while (if ever).

The reason this is important to me is to sort out how Broadcom (AVGO) fits into this puzzle. Why? Because of this in a Motley Fool article:

While many investors consider Nvidia the quintessential artificial intelligence (AI) stock, almost every company listed above is well positioned to monetize AI.

For instance, Microsoft, Alphabet, and Amazon are the three largest cloud computing companies in the world, meaning they are gatekeepers of AI infrastructure and platform services. Broadcom is the leader in application specific integrated circuits (ASICs), meaning it helps companies like Meta Platforms and Alphabet design custom AI chips.

Similarly, Tesla is designing full self-driving software and its supercomputer (Dojo) is purpose-built for training computer vision systems; both represent potentially significant revenue streams. Finally, Advanced Micro Devices is the second largest supplier of data center GPUs, though it trails Nvidia's market share by about 90 percentage points.

And how did I happen on that Motley Fool article? I was checking up on QQQ. I'll talk about QQQ later in another blog, but I posted a bit earlier this evening.

I typed out the entire "portfolio" of QQQ, which was an incredibly worthwhile exercise. I can't post the QQQ portfolio because it's too long, but I highly recommend investors interested in the semiconductor sector spend some time looking at the QQQ fund.



