Locator: 50535INVESTING.
The fallacy of cash, or "keeping your powder dry."
Folks who didn't buy the dip over the past four weeks are right back where they started before the war.
Folks who selectively bought the dip during the war are doing incredibly well today.
S&P 500.
All-time high: 7,002.28 -- midday January 28, 2026.
Today, mid-morning trading: 7,016.00.
Well, that looks like a new record -- a new intra-day record. The closing record is 6,978.60, January 27, 2026.
*************************
AI
From Why Machines Learn: The Elegant Math Behind Modern AI, Anil Ananthaswamy, c. 2024 / 2025.
Sometime in 2020, researchers at OpenAI, a San Francisco-based AI company, were training a deep neural network to learn, among other things, how to add two numbers.
It was a seemingly trivial problem, but a necessary step toward understanding how to get the AI to do analytical reasoning. A team member who was training the neural network went on vacation and forgot to stop the training algorithm.
When he came back, he found to his astonishment that the neural network had learned a general form of the addition problem. It was as if the machine had understood something deeper about the problem than simply memorizing answers for the sets of numbers on which it was being trained.
HAL: "Hi, Dave. I hope you had a great vacation. While you were gone, to save you some time, I developed a program to add numbers that works better than anything you or your team has ever done. By the way, I've programmed your lab door to lock itself when you come in."
Dave: "Open the door, HAL."
Arthur C. Clarke to Stanley Kubrick: I see a movie here.
In the time-honored tradition of serendipitous scientific discoveries, the team had stumbled upon a strange new property of deep neural networks that they called "grokking," a word coined by the American author Robert Heinlein in his novel Stranger in a Strange Land.
"Grokking is meant to be about not just understanding, but kind of internalizing and becoming the information." Their small neural network had seemingly grokked the data.
Grokking is just one of many odd behaviors demonstrated by deep neural networks. Another has to do with the size of these networks: they are so huge that standard machine-learning theory says they shouldn't work the way they do -- classical bias-variance reasoning predicts that models with far more parameters than training examples should overfit badly, yet these networks generalize. Pages 382-384.
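For the curious, here is a toy sketch of the kind of experiment in which grokking shows up. It is loosely modeled on the published grokking experiments (which used modular arithmetic rather than plain addition); the architecture, hyperparameters, and training length below are illustrative assumptions, not the OpenAI team's actual setup. The thing to watch for: training accuracy reaches 100% early while test accuracy sits near chance, and then, many thousands of steps later, test accuracy abruptly jumps.

import torch
import torch.nn as nn

# Toy grokking-style experiment (illustrative; not OpenAI's actual code).
# Task: learn (a + b) mod P from half of all possible pairs, then see
# whether the network generalizes to the held-out half.
P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b) pairs
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2 :]

model = nn.Sequential(
    nn.Embedding(P, 64),       # shared embedding for both operands
    nn.Flatten(start_dim=1),   # concatenate the two 64-dim embeddings
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, P),         # logits over the P possible answers
)
# Weight decay is reported to matter for grokking; this value is a guess.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(dim=1) == labels[idx]).float().mean().item()

# Train far past the point where the training set is memorized.
for step in range(50_000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        print(f"step {step:6d}  train acc {accuracy(train_idx):.2f}  test acc {accuracy(test_idx):.2f}")

In a run like this, the interesting signature is the long gap between memorization and generalization: nothing in the training loss warns you that the jump is coming.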