Thursday, May 7, 2026

Development Of Machine Learning -- Since 1943 -- Perceptrons -- Rosenblatt -- Machine Learning -- For The Archives -- May 7, 2026

Locator: 50739MACHINELEARNING. 

Probably the most difficult book I've read in the past several years and yet the most interesting:  

Why Machines Learn: The Elegant Math Behind Modern AI, Anil Ananthaswamy, c. 2024/2025. 

One of many concepts new to me in that book: perceptrons. Wiki.

Did aliens visit the earth under the cover of WWII?

This is simply so incredible that it defies explanation and understanding. Again, it's as if an alien dropped in from another universe to jump-start humans' "invention" of machine learning. From the linked wiki entry:

The artificial neuron and artificial neural network were invented in 1943 by Warren McCulloch and Walter Pitts in their seminal paper "A Logical Calculus of the Ideas Immanent in Nervous Activity."
In 1957, Frank Rosenblatt was at the Cornell Aeronautical Laboratory. He simulated the perceptron on an IBM 704.
Later, he obtained funding from the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center to build a custom-made computer, the Mark I Perceptron.
It was first publicly demonstrated on 23 June 1960. The machine was "part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters." 
Rosenblatt described the details of the perceptron in a 1958 paper. 
His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, and R, which stand for "projection," "association," and "response." He presented at the first international symposium on AI, Mechanisation of Thought Processes, which took place in November 1958.
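
The remarkable thing is how little machinery the idea needs. Here's a minimal sketch of the perceptron learning rule in Python -- my own toy illustration, not the Mark I: the OR-gate data, the 0.1 learning rate, and the epoch count are all arbitrary choices for the example.

# A toy perceptron in the spirit of Rosenblatt's 1958 learning rule.
# My own minimal illustration, not the Mark I: the OR-gate dataset,
# the learning rate, and the epoch count are arbitrary choices.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Learn weights w and bias b so that step(w.x + b) matches y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # The "response" unit: fire (1) if the weighted sum crosses 0.
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            # Rosenblatt's rule: adjust weights only on a mistake.
            error = yi - pred
            w += lr * error * xi
            b += lr * error
    return w, b

# Logical OR is linearly separable, so the rule converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
for xi in X:
    print(xi, 1 if np.dot(w, xi) + b > 0 else 0)  # prints 0, 1, 1, 1

The whole trick is in the update: the weights move only when the machine guesses wrong, and for linearly separable data like this, that is provably enough to converge.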

"Attention Is All You Need," a 2017 research paper that has its own wiki entry

"Attention Is All You Need" is a 2017 research paper in machine learning authored by eight scientists and engineers working at Google. 
The paper introduced a new deep learning architecture known as the transformer, based on the attention mechanism proposed in 2014 by Bahdanau et al. The transformer approach it describes has become the main architecture of a wide variety of artificial intelligence systems, including large language models. At the time, the focus of the research was on improving Seq2seq techniques for machine translation, but the authors went further in the paper, foreseeing the technique's potential for other tasks like question answering and what is now known as multimodal generative AI.
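
For what it's worth, the "attention" in the title boils down to one formula: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. Below is a minimal NumPy sketch of that scaled dot-product attention as the paper defines it; the toy shapes and random inputs are my own choices, and a real transformer adds learned projections, multiple heads, and stacked layers on top of this.

# Scaled dot-product attention, the core operation of the transformer
# in "Attention Is All You Need" (Vaswani et al., 2017):
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# The 4-token, 8-dimensional shapes below are toy values for the sketch.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # How strongly each query matches each key, scaled by sqrt(d_k)
    # so the softmax doesn't saturate as dimensions grow.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns match scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query vectors
K = rng.normal(size=(4, 8))  # 4 key vectors
V = rng.normal(size=(4, 8))  # 4 value vectors
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)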

And then look at this:

As of 2025, the paper has been cited more than 173,000 times, placing it among the top ten most-cited papers of the 21st century.
After the paper was published by Google, each of the authors left the company to join other companies or to found startups.

And that brings us to this: an overview from Medium, January 21, 2026 -- just a few months ago. Link here.