In the "chip" sector, what is most important to me now is where Apple fits in with regard to "chips" and data centers.
So, the first question: what is meant by "AI chips"?
Link here for the answer. From 2020. Great, great article. The full article will be archived for future reference.
AI chips include graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) that are specialized for AI. General-purpose chips like central processing units (CPUs) can also be used for some simpler AI tasks, but CPUs are becoming less and less useful as AI advances. (Section V(A).)
Like general-purpose CPUs, AI chips gain speed and efficiency (that is, they are able to complete more computations per unit of energy consumed) by incorporating huge numbers of smaller and smaller transistors, which run faster and consume less energy than larger transistors. But unlike CPUs, AI chips also have other, AI-optimized design features. These features dramatically accelerate the identical, predictable, independent calculations required by AI algorithms. They include executing a large number of calculations in parallel rather than sequentially, as in CPUs; calculating numbers with low precision in a way that successfully implements AI algorithms but reduces the number of transistors needed for the same calculation; speeding up memory access by, for example, storing an entire AI algorithm in a single AI chip; and using programming languages built specifically to efficiently translate AI computer code for execution on an AI chip. (Section V and Appendix B.)
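The "low precision" idea above can be made concrete with a toy sketch. This is my own illustration, not from the article: it mimics quantization, where 32-bit floating-point weights are stored as 8-bit integers, cutting storage (and transistor work per calculation) roughly four-fold at the cost of a small rounding error.

```python
# Toy sketch of low-precision ("quantized") storage -- my own illustration,
# not from the article. Store weights as 8-bit integers instead of floats.
weights = [0.12, -0.53, 0.91, -0.07]

scale = max(abs(w) for w in weights) / 127          # map the largest weight to 127
quantized = [round(w / scale) for w in weights]     # each fits in 8 bits (-128..127)
dequantized = [q * scale for q in quantized]        # approximate recovery

print(quantized)                                    # small integers, 1/4 the bits
print(max(abs(w - d) for w, d in zip(weights, dequantized)))  # rounding error
```

The recovered values differ from the originals by at most half a quantization step, which is typically small enough that the AI algorithm still works, exactly the trade-off the article describes.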
Different types of AI chips are useful for different tasks. GPUs are most often used for initially developing and refining AI algorithms; this process is known as “training.” FPGAs are mostly used to apply trained AI algorithms to real-world data inputs; this is often called “inference.” ASICs can be designed for either training or inference. (Section V(A).)
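The training/inference split can also be sketched in a few lines. Again my own illustration, assuming a one-parameter linear model: training is the repetitive, parallelizable fitting loop (GPU territory), while inference is the cheap application of the finished model to new inputs (where FPGAs and ASICs are often used).

```python
# Training: repeatedly adjust a parameter to fit known data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]    # inputs x with targets y = 2x
w = 0.0                                        # model: predict y = w * x
for _ in range(200):                           # many identical, predictable steps
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                           # gradient-descent update

# Inference: apply the trained parameter to a new input.
print(w)            # converges to ~2.0
print(w * 5.0)      # prediction for x = 5.0, ~10.0
```

Training runs the update step thousands or millions of times over large datasets; inference runs the trained model once per input. That asymmetry is why different chips can win at each stage.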
So, that's a start.
Wiki's page on transistor count is the next most important page.
On that wiki page, the two most important tables are GPUs and FPGA chips.
As an Apple investor, I want to see Apple in the GPU arena. And maybe to some extent, FPGA.
Right now:
- CPUs: owned by Apple silicon.
- GPUs: owned by Nvidia. AMD a distant second.
- FPGAs: Xilinx (AMD acquired Xilinx in 2022).
The next big question: can the revolutionary Apple M4 be considered a GPU? Here we go:
WSJ, May 6, 2024: Apple is developing AI chips for data centers, seeking edge in arms race.
Let's see if we've learned anything -- what do Aaron Tilley and Yang Jie have to say in that WSJ article?
Over the past decade, Apple has emerged as a leading player designing chips for iPhones, iPads, Apple Watch and Mac computers. The server project, which is internally code-named Project ACDC—for Apple Chips in Data Center—will bring this talent to bear for the company’s servers, according to people familiar with the matter.

Not at all helpful.
Project ACDC has been in the works for several years and it is uncertain when the new chip will be unveiled, if ever. Apple has promised many new AI products and announcements at its Worldwide Developer Conference in June.
An Apple spokesman declined to comment.
Apple has been closely working with its chip-making partner Taiwan Semiconductor Manufacturing Co. to design and initiate production of such chips, yet it remains uncertain whether they have yielded a definitive result, some of the people said.

For Apple’s server chip, the component will likely be focused on running AI models—what is known as inference—rather than on training AI models, where chip maker Nvidia will likely continue to dominate, according to some of the people.
Tim Cook needs to be very clear, very specific what he means by AI, generative AI, "AI chips," CPUs vs GPUs, Apple Silicon, and how the brand new, incredible Apple Silicon M4 chip fits in. I'm not holding my breath.
All for now; much more to explore.