World's Fastest AI Chip with 4 Trillion Transistors

• https://www.nextbigfuture.com, by Brian Wang

The WSE-3 delivers twice the performance of the previous record-holder, the Cerebras WSE-2, at the same power draw and for the same price. Purpose-built for training the industry's largest AI models, the 5nm, 4-trillion-transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimized compute cores.

Key Specs:

4 trillion transistors
900,000 AI cores
125 petaflops of peak AI performance
44GB on-chip SRAM
5nm TSMC process
External memory: 1.5TB, 12TB, or 1.2PB
Trains AI models of up to 24 trillion parameters
Cluster size of up to 2048 CS-3 systems
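
Dividing those headline figures through gives a sense of scale. A quick back-of-envelope sketch in Python, using only the peak numbers quoted above (sustained throughput will depend on the workload):

```python
# Back-of-envelope arithmetic from the quoted WSE-3 specs.
# Peak figures only; real sustained throughput depends on utilization.
peak_flops = 125e15   # 125 petaflops of peak AI performance
cores = 900_000       # AI-optimized compute cores
transistors = 4e12    # 4 trillion transistors

print(f"~{peak_flops / cores / 1e9:.0f} GFLOPs peak per core")    # ~139
print(f"~{transistors / cores / 1e6:.1f}M transistors per core")  # ~4.4
# Note: much of the transistor budget serves the 44GB of on-chip SRAM
# and the interconnect fabric, not just the compute cores.
```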

With a huge memory system of up to 1.2 petabytes, the CS-3 is designed to train next-generation frontier models 10x larger than GPT-4 and Gemini. Models of 24 trillion parameters can be stored in a single logical memory space without partitioning or refactoring, dramatically simplifying the training workflow and accelerating developer productivity. Training a one-trillion-parameter model on the CS-3 is as straightforward as training a one-billion-parameter model on GPUs.
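
A rough capacity check makes the single-memory-space claim plausible. In the sketch below, the per-parameter byte counts are generic mixed-precision training assumptions, not Cerebras-published figures:

```python
# Does a 24-trillion-parameter model fit in the 1.2 PB memory tier?
params = 24e12            # 24 trillion parameters
external_memory = 1.2e15  # 1.2 PB external memory option

budget = external_memory / params  # 50 bytes available per parameter
# Assumed mixed-precision training state (not from the article):
# fp16 weights (2) + fp32 master weights (4) + fp32 gradients (4)
# + two fp32 Adam moments (8) = ~18 bytes per parameter.
assumed_state = 2 + 4 + 4 + 8

print(f"budget: {budget:.0f} B/param, needed: ~{assumed_state} B/param")
print("fits:", assumed_state <= budget)  # True, with headroom for activations
```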

The CS-3 is built for both enterprise and hyperscale needs. Compact four-system configurations can fine-tune 70B models in a day, while at full scale, using 2048 systems, Llama 70B can be trained from scratch in a single day, an unprecedented feat for generative AI.
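
The single-day figure is at least consistent with the peak math. The sketch below uses the common 6*N*D transformer-FLOPs rule of thumb; the token count and utilization are assumptions, not numbers from the article:

```python
# Rough training-time estimate: Llama 70B from scratch on 2048 CS-3 systems.
params = 70e9                      # Llama 70B
tokens = 20 * params               # assumed Chinchilla-style ratio (~1.4T tokens)
train_flops = 6 * params * tokens  # ~5.9e23 FLOPs via the 6*N*D rule of thumb

cluster_peak = 2048 * 125e15       # full-scale cluster at 125 petaflops each
utilization = 0.25                 # assumed sustained fraction of peak

hours = train_flops / (cluster_peak * utilization) / 3600
print(f"~{hours:.1f} hours")       # ~2.6 hours, comfortably within a day
```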

