News Link • Science, Medicine and Technology

IBM overcomes von Neumann bottleneck for AI hundreds of times faster...

• https://www.nextbigfuture.com, Brian Wang

The IBM Research AI team demonstrated deep neural network (DNN) training with large arrays of analog memory devices at the same accuracy as a Graphics Processing Unit (GPU)-based system. This is a major step toward the kind of hardware accelerators needed for the next AI breakthroughs. Why? Because delivering the future of AI will require vastly expanding the scale of AI computation.

Above – Crossbar arrays of non-volatile memories can accelerate the training of fully connected neural networks by performing computation at the location of the data.

This new approach allows deep neural networks to run hundreds of times faster than with GPUs, using hundreds of times less energy.
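To see why computation at the location of the data is so fast, consider a minimal sketch of a crossbar matrix-vector multiply: weights are stored as device conductances, the input vector is applied as voltages, and each column wire sums currents, producing every dot product of a fully connected layer in one analog step. The Python simulation below is illustrative only; the array size, noise model, and function names are assumptions, not IBM's actual hardware or software.

```python
import numpy as np

def analog_matvec(weights, x, noise_std=0.01):
    """Simulate a crossbar matrix-vector multiply.

    Weights sit in the array as conductances; the input vector is applied
    as row voltages, and each column wire accumulates current (Ohm's law
    plus Kirchhoff's current law), yielding one dot product per column in
    a single step. Gaussian noise stands in for analog device imperfections.
    """
    ideal = x @ weights                        # what a digital matvec would compute
    noise = np.random.normal(0.0, noise_std, ideal.shape)
    return ideal + noise

# Illustrative fully connected layer: 784 inputs -> 256 hidden units
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(784, 256))      # conductance-encoded weights
x = rng.normal(0.0, 1.0, size=784)             # input activations as voltages
h = np.maximum(analog_matvec(W, x), 0.0)       # ReLU applied to the analog output
print(h.shape)                                  # (256,)
```

In a digital system the weights would have to be fetched from memory for every multiply-accumulate; in the crossbar they never move, which is where the claimed energy and speed advantages come from.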

IBM built key features of a neural net directly into silicon, which can make it hundreds of times more efficient. Hundreds-fold improvements in energy efficiency and training speed for fully connected layers are worth further effort.
