Deep learning, self-driving cars, and AI are all enormous topics these days, with companies like Nvidia, IBM, AMD, and Intel all throwing their hats into the ring. Now Cray, which helped pioneer the very concept of a supercomputer, is also bringing its own solutions to market.
Cray announced a pair of new systems: the Cray CS-Storm 500GT and the CS-Storm 500NX. Both are designed to work with Nvidia's Pascal-based Tesla GPUs, but they offer different feature sets and capabilities. The CS-Storm 500GT supports up to 8x 450W or 10x 400W accelerators, including Nvidia's Tesla P40 or P100 GPU accelerators. Add-in boards like Intel's Knights Landing and FPGAs built by Nallatech are also supported in this system, which uses PCI Express for its peripheral interconnect. The 500GT platform uses Intel's Skylake Xeon processors.
The Cray CS-Storm 500NX, by contrast, supports up to eight P100 GPUs and taps Nvidia's NVLink connector rather than PCI Express. Xeon Phi and Nallatech devices aren't listed as compatible with this system architecture. Full specifications for each are listed below:
The CS-Storm 500NX uses NVLink, which is why Cray can list it as supporting up to eight P100 SXM2 GPUs without needing eight PCIe 3.0 slots (just in case that was unclear).
“Customer demand for AI-capable infrastructure is growing quickly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly scalable and tuned infrastructure.”
The surge in self-driving cars, AI, and deep learning technology could be a huge boon to companies like Cray, which once dominated the supercomputing industry. Cray went from an early leader in the space to a shadow of its former self after a string of acquisitions and unsuccessful products in the late 1990s and early 2000s. From 2004 forward, the company has enjoyed more success, with a number of high-profile design wins using AMD, Intel, and Nvidia.
So far, Nvidia has emerged as the overall leader in HPC workload accelerators. Of the 86 systems listed as using an accelerator on the TOP500 list, 60 of them use Fermi, Kepler, or Pascal (Kepler is the clear winner, with 50 designs). The next-closest competitor is Intel, with 21 Xeon Phi wins.
AMD has made plans to enter these markets with deep learning accelerators based on its Polaris and Vega architectures, but those chips haven't actually launched yet. By all accounts, these are the killer growth markets for the industry as a whole, and they help explain why even some game developers like Blizzard want to get in on the AI craze. As compute resources shift toward Amazon, Microsoft, and other cloud service providers, the companies that can supply the hardware these workloads run on will be best positioned for the future. Smartphones and tablets didn't really work out for Nvidia or Intel (making AMD's decision to stay out of those markets look very, very wise in retrospect), but both are well positioned to capitalize on these new dense-server trends. AMD is clearly playing catch-up on the CPU and GPU front, but Ryzen should deliver strong server performance when Naples launches later this quarter.