Gemini is even on my incredibly old iPad. I’ve never had so much fun.
Now to see if Gemini is embedded in Siri. Wow, wow, wow!
This is not an investment site.
Locator: 50537TROLLING.
For four weeks, the mainstream media trolled Trump, MAGA, and the majority of Americans. Today, crickets.
S&P 500 hit a new intra-day record.
Folks are now opening correspondence from their financial institutions for the first time in six weeks.
Locator: 50536SWING.
I said this some time ago: the barrel of last resort. Saudi Arabia is no longer the swing producer of oil; Qatar is no longer the "go-to" LNG exporter.
Now, Javier Blas, link here:
US crude and refined product exports hit a record 12.744 million bopd last week -- EIA.
Locator: 50535INVESTING.
The fallacy of cash, or "keeping your powder dry."
Folks who didn't buy the dip during the past four weeks are right back where they started, before the war.
Folks who selectively bought the dip during the war are doing incredibly well today.
S&P 500.
All-time high: 7,002.28 -- midday, January 28, 2026.
Today, mid-morning trading: 7,016.00.
Well, that looks like a new record -- a new intra-day record. The closing record is 6,978.60, set January 27, 2026.
*************************
AI
From Why Machines Learn: The Elegant Math Behind Modern AI, Anil Ananthaswamy, c. 2024 / 2025.
Sometime in 2020, researchers at OpenAI, a San Francisco-based AI company, were training a deep neural network to learn, among other things, how to add two numbers.
It was a seemingly trivial problem, but a necessary step toward understanding how to get the AI to do analytical reasoning. A team member who was training the neural network went on vacation and forgot to stop the training algorithm.
When he came back, he found to his astonishment that the neural network had learned a general form of the addition problem. It's as if the machine had understood something deeper about the problem than simply memorizing answers for the sets of numbers on which it was being trained.
HAL: "Hi, Dave. I hope you had a great vacation. While you were gone, to save you some time, I developed a program to add numbers that works better than anything you or your team has ever done. By the way, I've programmed your lab door to lock itself when you come in."
Dave: "Open the door, HAL."
Arthur C. Clarke to Stanley Kubrick: I see a movie here.
In the time-honored tradition of serendipitous scientific discoveries, the team had stumbled upon a strange, new property of deep neural networks that they called "grokking," a word invented by the American author Robert Heinlein in his book Stranger in a Strange Land.
"Grokking is meant to be about not just understanding, but kind of internalizing and becoming the information." Their small neural network had seemingly grokked the data.
Grokking is just one of many odd behaviors demonstrated by deep neural networks. Another has to do with the size of these networks. The networks are so huge that standard ML theory says such networks shouldn't work the way they do. Pages 382 - 384.
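For readers who want to see the phenomenon for themselves, here is a minimal sketch of a grokking experiment in Python (PyTorch). The task (modular addition), the network size, and the hyperparameters are my own illustrative choices, not OpenAI's actual configuration -- the published grokking experiments (Power et al., 2022) used a similar recipe on small algorithmic datasets.

import torch
import torch.nn as nn

torch.manual_seed(0)
P = 97  # the task: predict (a + b) mod P

# All P*P input pairs and their answers.
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

def encode(p):
    # One-hot encode the two operands side by side.
    x = torch.zeros(len(p), 2 * P)
    x[torch.arange(len(p)), p[:, 0]] = 1.0
    x[torch.arange(len(p)), P + p[:, 1]] = 1.0
    return x

# Train on half the pairs, hold out the rest.
perm = torch.randperm(len(pairs))
cut = len(perm) // 2
Xtr, ytr = encode(pairs[perm[:cut]]), labels[perm[:cut]]
Xte, yte = encode(pairs[perm[cut:]]), labels[perm[cut:]]

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
# Weight decay is the usual ingredient that eventually nudges the
# network from memorizing answers toward the general rule.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(Xtr), ytr)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            tr = (model(Xtr).argmax(1) == ytr).float().mean().item()
            te = (model(Xte).argmax(1) == yte).float().mean().item()
        # The grokking signature: train accuracy hits 100% early while
        # test accuracy sits near chance, then jumps much later.
        print(f"step {step:6d}  train {tr:.2f}  test {te:.2f}")

Leave it running long enough -- the "vacation" in the anecdote -- and in runs like this the test column eventually snaps from near-zero to near-perfect, long after training accuracy has saturated.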
Locator: 50534TULIPS.
Allbirds announces stunning pivot from shoes to AI; stock explodes 175%. This has been breaking news on CNBC for the last fifteen minutes. Wow, I'm glad I no longer watch CNBC on television / streaming.
Put this in perspective:
Tulip mania in the Dutch Golden Age peaked rapidly between 1634 and February 1637. While tulip prices had been rising for years, the most intense speculative frenzy occurred during the winter of 1636–1637, with prices collapsing almost overnight in early February 1637.
***************
The Book Page
Why Machines Learn: The Elegant Math Behind Modern AI, Anil Ananthaswamy, c. 2024 / 2025.
This is why the AI revolution is not even remotely near the end. If Americans don't want LDCs in their backyard, the revolution will continue unabated in China.
This is why this book is so good. Notes are updated here.
Chapter 11: The Eyes of a Machine
Almost all accounts of the history of deep neural networks for computer vision acknowledge the seminal work done by neurophysiologists David Hubel and Torsten Wiesel, co-founders of the Department of Neurobiology at Harvard in the early 1960s and joint winners of the 1981 Nobel Prize in Physiology or Medicine.
Page 375: enter GPUs stage right. LOL.
"Recognizing high-res images required large neural networks, and training such networks meant having to crunch numbers, mainly in the form of matrix manipulations. To make the process go faster, much of this number crunching required a form of parallel computing, but the central processing units (CPUs) of computers of the 1990s weren't up to the task. However, saviors were on the horizon in the form of graphical processing units (GPUs), which were originally designed as hardware-on-a-chip dedicated to rending 3D graphics (gaming during Covid).
"GPUs proved central to changing the face of deep learning. One of the earliest indications of this change came in 2010, from Jürgen Schmidhube and colleagues, when they trained multi-layer perceptrons with as many as nine hidden layers and about 12 million parameters or weights, to classify ... images.
"But the use of GPUs to overcome the challenge ... doesn't begin to hint at the power of these processors... we have to shift focus to Hinton's lab in Toronto, where Hinton and two graduate students, Alex Krizhevsky and Ilya Sutskever ... built the first massive ... [specialized] neural networks. These two showed once and for all that conventional methods for image recognition were never going to catch up. The network came to be called AlexNet."
And then look at this. Never, never, ever stop reading.
Page 376: The large network required GPUs; by then, these came equipped with software called CUDA, a programming interface that allowed engineers to use GPUs for general-purpose tasks beyond their intended use as graphics accelerators.
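To make "general-purpose tasks" concrete, here is a small sketch -- assuming PyTorch, which talks to the GPU through CUDA under the hood -- timing the dense matrix multiplications that dominate neural-net training. The matrix size and repetition count are arbitrary.

import time
import torch

def time_matmul(device, n=4096, reps=10):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)  # warm-up run
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels launch asynchronously
    t0 = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / reps

print(f"CPU: {time_matmul('cpu'):.4f} s per 4096x4096 multiply")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s per 4096x4096 multiply")

On typical hardware the GPU column comes in one to two orders of magnitude faster -- which is the whole story of page 375 in one number.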
Not everyone accepted these new developments!
Hinton recalls trying to persuade Microsoft to buy GPUs for a joint project, but Microsoft balked.
The CEO of Microsoft at the time was Steve Ballmer.
2002: Sutskever, barely 17, joined the University of Toronto.
He was still in his second year of undergraduate studies when he knocked on Hinton's door. "The math is so simple."
2009: a problem big enough to pose questions of neural networks appears. That year, Stanford University professor Fei-Fei Li and her students presented a paper at the Computer Vision and Pattern Recognition (CVPR) conference.
Yann LeCun's group at Bell Labs. Page 379.