NVIDIA’s months-old Volta GPU architecture is already setting ground-breaking benchmark results, and the jump from the previous-gen Pascal-based Tesla P100 to the new Volta-based Tesla V100 is dramatic.
Deep learning training performance climbs 12x, from 10 TFLOPs on the P100 to a freakin’ is-it-real 120 TFLOPs of ‘DL training’ throughput on the V100. NVIDIA has boosted memory bandwidth on the Tesla V100 as well, with 900GB/sec available, up from 720GB/sec on the Tesla P100. NVLink 2.0 is also featured, pushing interconnect bandwidth from 160GB/sec to a huge 300GB/sec (1.9x), while L1 cache grows from 1.3MB on the Tesla P100 to 10MB (a 7.7x increase).
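As a quick sanity check on those multipliers, here is a minimal Python sketch that works out the generation-over-generation ratios purely from the figures quoted above (the spec values are the ones cited in this article, not pulled from NVIDIA’s datasheets):

    # Rough generational uplift calculator using the spec figures quoted in this article.
    # Each entry maps a spec name to a (P100, V100) pair; units are noted in the key.
    specs = {
        "DL training (TFLOPs)":    (10, 120),
        "Memory bandwidth (GB/s)": (720, 900),
        "NVLink bandwidth (GB/s)": (160, 300),
        "L1 cache (MB)":           (1.3, 10),
    }

    for name, (p100, v100) in specs.items():
        # Print the raw values and the V100-over-P100 speedup factor.
        print(f"{name}: {p100} -> {v100} ({v100 / p100:.1f}x)")

Running it reproduces the 12x, 1.9x, and 7.7x figures above, plus the 1.25x memory bandwidth bump.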
The new NVIDIA Tesla V100 has been put through Geekbench 4’s compute tests, posting an out-of-this-world score of 743,537… the closest result is a P100-based system at just 320,031.
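Putting those two Geekbench compute scores side by side with the same kind of quick ratio (scores are the ones quoted in this article):

    # Compare the quoted Geekbench 4 compute scores from this article.
    v100_score = 743_537
    p100_score = 320_031

    # The V100 result comes out to roughly 2.3x the P100-based system's score.
    print(f"V100 vs P100: {v100_score / p100_score:.1f}x")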
Even HP’s impressive Z8 G4 workstation PC is only capable of 278,706 points, and that system rocks 9 x PCIe slots with Quadro GP100 cards inside. All in all, NVIDIA’s new Tesla V100 is a compute MONSTER, and nothing else on the market comes close. AMD is radically behind here, and until its new Vega-based Radeon Instinct graphics cards begin shipping, NVIDIA continues to reign supreme.