Wednesday, April 15, 2026

AI Is A Bubble! Long Live The Bubble -- April 15, 2026

Locator: 50534TULIPS.

Allbirds announces stunning pivot from shoes to AI; stock explodes 175%. This has been breaking news for CNBC for the last fifteen minutes. Wow, I'm glad I no longer watch CNBC on television / streaming.

Put this in perspective:

Tulip mania in the Dutch Golden Age peaked rapidly between 1634 and February 1637. While tulip prices had been rising for years, the most intense speculative frenzy occurred during the winter of 1636–1637, with prices collapsing almost overnight in early February 1637.

***************
The Book Page

Why Machines Learn: The Elegant Math Behind Modern AI, Anil Ananthaswamy, c. 2024 / 2025.

This is why the AI revolution is not even closely / remotely near the end. If Americans don't want LDCs in their backyard, the revolution will continue unabated in China.

This is why this book is so good. Notes are updated here.

Chapter 11: The Eyes of a Machine

Almost all accounts of the history of deep neural networks for computer vision acknowledge the seminal work done by neurophysiologists David Hubel and Torsten Wiesel, co-founders of the Department of Neurobiology at Harvard in the early 1960s and joint winners of the 1981 Nobel Prize in Physiology or Medicine.

Page 375: enter GPUs stage right. LOL.

"Recognizing high-res images required large neural networks, and training such networks meant having to crunch numbers, mainly in the form of matrix manipulations. To make the process go faster, much of this number crunching required a form of parallel computing, but the central processing units (CPUs) of computers of the 1990s weren't up to the task. However, saviors were on the horizon in the form of graphical processing units (GPUs), which were originally designed as hardware-on-a-chip dedicated to rendering 3D graphics (gaming during Covid).

"GPUs proved central to changing the face of deep learning. One of the earliest indications of this change came in 2010, from Jürgen Schmidhuber and colleagues, when they trained multi-layer perceptrons with as many as nine hidden layers and about 12 million parameters or weights, to classify ... images.

"But the use of GPUs to overcome the challenge ... doesn't begin to hint at the power of these processors... we have to shift focus to Hinton's lab in Toronto, where Hinton and two graduate students, Alex Krizhevsky and Ilya Sutskever ... built the first massive ... [specialized] neural networks. These two showed once and for all that conventional methods for image recognition were never going to catch up. The network came to be called AlexNet." 
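The "matrix manipulations" in that passage are concrete: a network layer's forward pass is just a matrix multiply plus a simple nonlinearity, and that multiply is exactly the embarrassingly parallel arithmetic GPUs were built for. A minimal NumPy sketch (dimensions are illustrative, not AlexNet's actual layer sizes):

```python
import numpy as np

# Toy forward pass for one hidden layer: the matrix math at the
# heart of training image classifiers. Shapes here are made up
# for illustration (4 flattened 28x28 "images", 256 hidden units).
rng = np.random.default_rng(0)

batch = rng.standard_normal((4, 784))      # 4 input images, flattened
weights = rng.standard_normal((784, 256))  # one layer's learned parameters
bias = np.zeros(256)

# Matrix multiply + bias, then ReLU nonlinearity.
activations = np.maximum(batch @ weights + bias, 0.0)
print(activations.shape)  # (4, 256)
```

Every entry of that output matrix can be computed independently, which is why offloading the multiply to thousands of GPU cores made such networks trainable.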

And then look at this. Never, never, ever stop reading.

Page 376: The large network required GPUs; by then, these came equipped with software called CUDA, a programming interface that allowed engineers to use GPUs for general-purpose tasks beyond their intended use as graphics accelerators.

Not everyone accepted these new developments!

Hinton recalls trying to persuade Microsoft to buy GPUs for a common project, but Microsoft balked. 

The CEO of Microsoft in 2000 was Steve Ballmer.

2002: Sutskever, barely 17, joined the University of Toronto.

He was still in his second year of undergraduate studies when he knocked on Hinton's door. "The math is so simple."

2009: a problem big enough to challenge neural networks appears. That year, Stanford University professor Fei-Fei Li and her students presented a paper at the Computer Vision and Pattern Recognition (CVPR) conference.


Yann LeCun's group at Bell Labs. Page 379.