Nvidia Debuts Tesla P100 Accelerator With 15B Transistors for AI, Deep Learning
Nvidia CEO Jen-Hsun Huang used the opening keynote of the company’s annual GPU Technology Conference to announce a massive new processor designed specifically for deep learning. The Tesla P100 is the first shipping product to use Nvidia’s new Pascal architecture, and it packs 15.3 billion transistors, which the company says makes it the largest chip ever fabricated.
The Tesla P100 is built on a new 16nm FinFET manufacturing process and uses 16GB of HBM2 graphics memory integrated onto the same chip substrate, yielding memory bandwidth of up to 720GB/s. Peak performance is rated at 21.2 teraflops for half-precision operations, 10.6 teraflops for single-precision, and 5.3 teraflops for double-precision workloads. Up to eight Tesla P100 chips can be interconnected using Nvidia’s NVLink interconnect.
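As a quick back-of-envelope check (not from the original article), the quoted peak figures follow the usual 2:1 throughput ratio between precisions, since each halving of operand width doubles the number of operations per cycle:

```python
# Tesla P100 peak throughput figures quoted by Nvidia, in teraflops.
fp64_tflops = 5.3   # double precision
fp32_tflops = 10.6  # single precision
fp16_tflops = 21.2  # half precision

# Each step down in precision doubles peak throughput.
assert fp32_tflops == 2 * fp64_tflops
assert fp16_tflops == 2 * fp32_tflops

print(fp16_tflops / fp64_tflops)  # -> 4.0
```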
Nvidia claims the Tesla P100 delivers more than 12x the performance of its previous-generation Maxwell architecture in neural network training scenarios. Specific applications, such as the AMBER molecular dynamics code, are said to run faster on a single Tesla P100 server node than on 48 dual-socket CPU server nodes, according to Nvidia.
Huang also said the company has decided to go “all in on AI”, and that deep learning and artificial intelligence are its fastest-growing business areas. He named several areas of research, including finding a cure for cancer and understanding climate change, that require computing resources able to scale almost without limit.
Massachusetts General Hospital has established a clinical data centre that will use Nvidia’s AI training technology to diagnose diseases, initially in the fields of radiology and pathology, and will draw on its archive of 10 billion medical images to train a deep learning neural network.
The Tesla P100 will initially be available in Nvidia’s new DGX-1 “deep learning supercomputer” in June, and in servers from various manufacturers beginning in mid-2017. The DGX-1 will house eight Tesla P100 chips for a combined 170 teraflops of half-precision performance, and is claimed to deliver the deep learning throughput of 250 conventional x86 servers in a single 3U server enclosure.
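The DGX-1’s headline number is simply the per-GPU half-precision figure multiplied across its eight accelerators; a quick check (not from the original article) shows how the 170-teraflop total rounds out:

```python
# Combined half-precision throughput of a DGX-1: eight P100s at 21.2 TFLOPS each.
p100_fp16_tflops = 21.2
gpus_per_dgx1 = 8

total_tflops = p100_fp16_tflops * gpus_per_dgx1  # 169.6, quoted as ~170
print(round(total_tflops))  # -> 170
```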