Saturday, January 27, 2024

Story Of The Day? January 27, 2024

Locator: 46661TECH.

Link here

From Forbes, September 11, 2023:

To understand the significance of Dojo, one needs to examine the existing milieu of AI processors and supercomputing. Conventional supercomputers, typified by NVIDIA's A100 GPUs, IBM's Summit, or HPE’s Cray Exascale, have been vital in scientific research, complex simulations, and big data analytics. However, these systems are primarily designed for a broad array of tasks rather than optimized for a singular purpose like the real-world data-driven AI computer vision that Tesla is designing Dojo for.

Tesla’s Dojo promises to revolutionize the AI processing landscape by focusing solely on improving the company's FSD capabilities. With this vertical integration, Tesla aims to construct an ecosystem that encompasses hardware, data, and practical application—a trifecta that could usher in a new era of supercomputing, explicitly designed for real-world data processing.

Historically, Tesla relied on NVIDIA's GPU clusters to train its neural networks for autopilot systems. Despite the lack of clarity in performance metrics such as single-precision or double-precision floating-point calculations, Tesla claimed to operate a computing cluster that stood as the "fifth-largest supercomputer in the world" as of 2021. Details are hard to come by, but various commentators have put Tesla's hoard of NVIDIA A100 GPUs at over 10,000 units, which by any measure gives Tesla one of the largest training systems globally, and the company has been at this for at least two years now.
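The quoted figures can be sanity-checked with simple arithmetic. A back-of-the-envelope sketch, assuming NVIDIA's published per-GPU A100 peak rates (19.5 TFLOPS single precision, 9.7 TFLOPS double precision, non-tensor-core) and the commentators' estimate of 10,000 GPUs; the cluster size is an estimate, not a Tesla-confirmed figure:

```python
# Back-of-the-envelope aggregate throughput for an assumed 10,000-GPU A100 fleet.
# Per-GPU peaks are NVIDIA's published A100 figures (non-tensor-core).
A100_FP32_TFLOPS = 19.5   # single-precision peak per GPU
A100_FP64_TFLOPS = 9.7    # double-precision peak per GPU
gpus = 10_000             # commentators' estimate, not an official Tesla number

fp32_petaflops = gpus * A100_FP32_TFLOPS / 1_000  # TFLOPS -> PFLOPS
fp64_petaflops = gpus * A100_FP64_TFLOPS / 1_000

print(f"Aggregate FP32 peak: {fp32_petaflops:.0f} PFLOPS")
print(f"Aggregate FP64 peak: {fp64_petaflops:.0f} PFLOPS")
```

The two answers differ by a factor of two, which is exactly why the FP32-versus-FP64 ambiguity in Tesla's "fifth-largest supercomputer" claim matters.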

From Wikipedia:

Tesla Dojo is a supercomputer designed and built by Tesla for computer vision video processing and recognition. It will be used for training Tesla's machine learning models to improve its Full Self-Driving advanced driver-assistance system. According to Tesla, it went into production in July 2023. Dojo's goal is to efficiently process millions of terabytes of video data captured from real-life driving situations from Tesla's 4+ million cars. This goal led to a considerably different architecture than conventional supercomputer designs.

Tesla operates several massively parallel computing clusters for developing its Autopilot advanced driver assistance system. Its primary unnamed cluster, using 5,760 Nvidia A100 graphics processing units (GPUs), was touted by Andrej Karpathy in 2021 at the fourth International Joint Conference on Computer Vision and Pattern Recognition (CCVPR 2021) to be "roughly the number five supercomputer in the world" at approximately 81.6 petaflops, based on scaling the performance of the Nvidia Selene supercomputer, which uses similar components. However, the performance of the primary Tesla GPU cluster has been disputed, as it was not clear whether this was measured using single-precision or double-precision floating point numbers (FP32 or FP64). Tesla also operates a second 4,032-GPU cluster for training and a third 1,752-GPU cluster for automatic labeling of objects.

The primary unnamed Tesla GPU cluster has been used for processing one million video clips, each ten seconds long, taken from Tesla Autopilot cameras operating in Tesla cars in the real world, running at 36 frames per second. Collectively, these video clips contained six billion object labels, with depth and velocity data; the total size of the data set was 1.5 petabytes. This data set was used for training a neural network intended to help Autopilot computers in Tesla cars understand roads. By August 2022, Tesla had upgraded the primary GPU cluster to 7,360 GPUs.
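Karpathy's "roughly the number five supercomputer" figure can be reproduced by linear scaling from Selene, as the excerpt describes. A rough sketch, assuming Selene's November 2020 Top500 result of about 63.46 petaflops (Rmax) on 4,480 A100 GPUs; those Selene reference numbers are my assumption from the public Top500 list, not from the excerpt:

```python
# Linearly scale Selene's measured Linpack performance to Tesla's GPU count.
selene_pflops = 63.46   # Selene Rmax, Nov 2020 Top500 (assumed reference point)
selene_gpus = 4_480     # A100 GPUs in Selene (assumed)
tesla_gpus = 5_760      # primary Tesla cluster size, per the excerpt

estimate = selene_pflops * tesla_gpus / selene_gpus
print(f"Scaled estimate: {estimate:.1f} petaflops")  # ~81.6 petaflops
```

Under those assumptions the scaled estimate lands almost exactly on the 81.6 petaflops quoted above, which suggests this is how the figure was derived.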
Supercomputer TOP500 list, Wikipedia.

Datacenterdynamics, 80 petaflops.

Bottom line: this is the kind of investment every EV manufacturer will have to make -- and there are no fewer than a dozen EV manufacturers -- or else buy the information from Elon Musk.


Disclaimer: this is not an investment site. Do not make any investment, financial, job, career, travel, or relationship decisions based on what you read here or think you may have read here. 

All my posts are done quickly: there will be content and typographical errors. If anything on any of my posts is important to you, go to the source. If/when I find typographical / content errors, I will correct them. 

Reminder: I am inappropriately exuberant about the US economy and the US market.
