Servers with Nvidia's Tesla P100 GPU will ship next year

The new GPU will provide a serious performance boost to servers and supercomputers

Nvidia's fastest GPU yet, the new Tesla P100, will be available in servers next year, the company said.

Dell, Hewlett Packard Enterprise, Cray and IBM will start taking orders for servers with the Tesla P100 in the fourth quarter of this year, Nvidia CEO Jen-Hsun Huang said during a keynote at the GPU Technology Conference in San Jose, California.

The servers will start shipping in the first quarter of next year, Huang said Tuesday.

The GPU will also ship to companies that design hyperscale servers in-house, and later to contract manufacturers. It will be available for those in-house "cloud servers" by the end of the year, Huang said.

Nvidia is targeting the GPUs at deep-learning systems, in which algorithms correlate and classify data. Such systems could help self-driving cars, robots and drones identify objects. The goal is to shorten training time so that the accuracy of results improves more quickly.
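To make the idea concrete, here is a minimal sketch of the kind of workload involved: training a small image classifier with the model and data moved onto a GPU. It assumes PyTorch and synthetic data, neither of which is mentioned in the article; the framework, model and sizes are illustrative only.

```python
# Minimal sketch of GPU-accelerated deep-learning training.
# Assumes PyTorch (not mentioned in the article) and synthetic data;
# the point is only that the model and each batch are moved to the GPU.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tiny convolutional classifier for 32x32 RGB images, 10 object classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for labeled training images.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))

for epoch in range(5):
    # Transfer the batch to GPU memory before the forward pass.
    x, y = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```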

Nvidia's GPUs are widely used in supercomputers today. Two of the world's 10 fastest supercomputers use Nvidia GPUs, according to a list compiled by Top500.org.

The Tesla P100 is based on Nvidia's new Pascal architecture, which introduces several features that could improve overall server performance.

The GPU has 15 billion transistors, and its peak floating point performance is 21.2 teraflops at half precision.
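For context, that headline number can be roughly reproduced from the chip's published specifications. The sketch below assumes 3584 CUDA cores and a boost clock of about 1.48GHz, figures taken from Nvidia's spec sheet rather than from this article, and treats the 21.2-teraflop figure as the half-precision rate.

```python
# Back-of-the-envelope check of the 21.2-teraflop figure.
# Assumes publicly listed P100 specs (not stated in the article):
# 3584 CUDA cores and a ~1.48 GHz boost clock.
cuda_cores = 3584
boost_clock_hz = 1.48e9           # ~1480 MHz
flops_per_core_per_cycle = 2      # one fused multiply-add per cycle

fp32_tflops = cuda_cores * boost_clock_hz * flops_per_core_per_cycle / 1e12
fp16_tflops = fp32_tflops * 2     # Pascal runs half precision at twice the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")   # ~10.6
print(f"FP16: {fp16_tflops:.1f} TFLOPS")   # ~21.2
```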

The chip was made using the 16-nanometer FinFET process. Chips are stacked on top of each other, allowing Nvidia to cram in more features.

The Tesla P100 has HBM2 (High Bandwidth Memory 2), which delivers bandwidth of 256GBps (gigabytes per second), twice that of its predecessor, HBM.

A new NVLink interface can transfer data at 160GBps (gigabytes per second), five times faster than PCI-Express.
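Those bandwidth figures translate directly into transfer times. The sketch below compares how long a hypothetical 4GB block of data would take to move over each link, taking the quoted numbers at face value and assuming PCI-Express 3.0 x16 at roughly 32GBps as the baseline implied by the "five times faster" claim.

```python
# Rough transfer-time comparison using the quoted figures as sustained rates.
# The 4 GB data size and the PCIe 3.0 x16 baseline are assumptions for illustration.
data_gb = 4.0
links = {
    "HBM2 (per stack)": 256.0,   # GB/s, from the article
    "NVLink": 160.0,             # GB/s, from the article
    "PCIe 3.0 x16": 32.0,        # GB/s, baseline implied by "five times faster"
}
for name, gbps in links.items():
    print(f"{name}: {data_gb / gbps * 1000:.1f} ms to move {data_gb} GB")
```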

However, questions remain about how servers will accommodate NVLink. IBM has said its Power architecture will support NVLink, but servers with Intel chips use PCI-Express to connect GPUs to motherboards.

At the conference, however, Nvidia showed a supercomputer called the DGX-1 running on Intel Xeon chips with the Tesla P100 GPU.
