
1.2 Trillion Transistors on a Wafer-Scale AI Chip

• https://www.nextbigfuture.com, by Brian Wang

It is 56x larger than any other chip. It delivers more compute, more memory, and more communication bandwidth. This enables AI research at previously-impossible speeds and scale.

The Cerebras Wafer Scale Engine is 46,225 square millimeters, with 1.2 trillion transistors and 400,000 AI-optimized cores.

By comparison, the largest Graphics Processing Unit is 815 square millimeters and has 21.1 billion transistors.
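The headline "56x" claim can be sanity-checked directly from the two spec comparisons above; a quick sketch using only the figures cited in this article:

```python
# Published specs cited above: Cerebras Wafer Scale Engine (WSE)
# versus the largest contemporary GPU.
wse_area_mm2 = 46_225
gpu_area_mm2 = 815
wse_transistors = 1.2e12
gpu_transistors = 21.1e9

# Both ratios land close to the "56x larger" figure in the article.
area_ratio = wse_area_mm2 / gpu_area_mm2              # ~56.7x
transistor_ratio = wse_transistors / gpu_transistors  # ~56.9x

print(f"Area ratio: {area_ratio:.1f}x")
print(f"Transistor ratio: {transistor_ratio:.1f}x")
```

Notably, the silicon-area ratio and the transistor-count ratio agree closely, which is what you would expect for two chips built on comparable process nodes.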

Andrew Feldman and the Cerebras team have built the wafer-scale integrated chip, successfully solving the problems of yield, power delivery, cross-reticle connectivity, packaging, and more. The result is a claimed 1,000x performance improvement over what is currently available, with 3,000x more high-speed on-chip memory and 10,000x more memory bandwidth.

The chip relies on a complex water-cooling system: an irrigation-style network that carries away the extreme heat generated by a chip drawing 15 kilowatts of power.
