You can see it in the lines showing flat clock speeds and flat performance per clock. But you already knew it from the lack of real performance improvement in laptops and desktops over the past several years. GPUs, by contrast, have kept pace with Moore's law in terms of performance improvement.
Google's Tensor Processing Units (TPUs) are on average 15x to 30x faster at executing Google's typical machine learning workloads than a standard CPU/GPU combination (Intel Haswell processors paired with Nvidia K80 GPUs). The TPUs also deliver 30x to 80x higher TeraOps/Watt, and with faster memory in future revisions, those numbers will probably increase.
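To make the TeraOps/Watt comparison concrete, here is a minimal sketch of how such an efficiency ratio is computed. The throughput and power figures below are illustrative placeholders chosen to land in the same ballpark as public spec sheets, not Google's measured numbers:

```python
# Minimal sketch: comparing accelerators by throughput per watt.
# All numbers below are assumed for illustration, not measured figures.

def teraops_per_watt(teraops: float, watts: float) -> float:
    """Throughput efficiency: tera-operations per second, per watt."""
    return teraops / watts

# Hypothetical spec-sheet-style numbers (assumptions):
cpu = teraops_per_watt(teraops=2.6, watts=145)   # a Haswell-class server CPU
gpu = teraops_per_watt(teraops=8.7, watts=300)   # a K80-class GPU board
tpu = teraops_per_watt(teraops=92.0, watts=75)   # a TPU-class custom ASIC

# Relative efficiency of the ASIC over the CPU and GPU baselines:
print(f"TPU vs CPU: {tpu / cpu:.0f}x")
print(f"TPU vs GPU: {tpu / gpu:.0f}x")
```

With these assumed inputs, the ratios fall inside the 30x to 80x range quoted above, which is the whole argument in one division: an ASIC wins not by raw speed alone but by doing far more work per joule.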
Google says it started looking into how it could use GPUs, FPGAs, and custom ASICs (which is essentially what the TPUs are) in its data centers back in 2006.
For compute-heavy applications and businesses, custom ASICs (or, at a minimum, GPUs) are needed to stay competitive.