
GPUs for AI Today, Then Phase-Change Memory and Neuromorphic, Then Quantum

• https://www.nextbigfuture.com, Brian Wang

The DDL (distributed deep learning) algorithms "train" on visual and audio data, and more GPUs should mean faster learning. To date, IBM has set a record 95 percent scaling efficiency (meaning training speeds up almost in proportion as more GPUs are added) using 256 GPUs across 64 "Minsky" Power systems, while reaching 33.8 percent recognition accuracy on a set of 7.5 million images.
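For reference, scaling efficiency compares the measured throughput of a multi-GPU run against the ideal of perfect linear speedup. Here is a minimal sketch of that calculation; the throughput figures are hypothetical, chosen only to illustrate how a 95 percent figure arises, and are not IBM's data.

```python
def scaling_efficiency(single_gpu_throughput, n_gpus, measured_throughput):
    """Fraction of the ideal linear speedup that a multi-GPU run achieves."""
    ideal = n_gpus * single_gpu_throughput   # perfect linear scaling
    return measured_throughput / ideal

# Hypothetical throughput figures, for illustration only (not IBM's data):
# one GPU handles 100 images/s; 256 GPUs together handle 24,320 images/s.
eff = scaling_efficiency(single_gpu_throughput=100,
                         n_gpus=256,
                         measured_throughput=24_320)
print(f"scaling efficiency: {eff:.0%}")  # -> scaling efficiency: 95%
```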

Distributed deep learning has progressed at a rate of about 2.5 times per year since 2009, when GPUs went from video game graphics accelerators to deep learning model trainers.
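As a back-of-the-envelope check, a sustained 2.5x-per-year rate compounds quickly. The sketch below assumes the rate holds steadily from 2009 through 2017, the approximate timeframe of IBM's DDL results; the endpoint year is an assumption.

```python
# Assumption: the ~2.5x-per-year improvement rate holds from 2009 through 2017.
rate = 2.5
years = 2017 - 2009
print(f"~{rate ** years:,.0f}x cumulative improvement over {years} years")
# -> ~1,526x cumulative improvement over 8 years
```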

What technology do we need to develop in order to continue this rate of progress and go beyond the GPU?
