NVIDIA Tesla V100 PCIe Professional Graphics Card

Product status: Released
Overview
Manufacturer
NVIDIA
Original Series
Tesla Volta
Release Date
May 10th, 2017
Graphics Processing Unit
GPU Model
GV100
Architecture
Volta
Fabrication Process
12 nm FFN
Die Size
815 mm2
Transistor Count
21.1B
Transistor Density
25.9M transistors/mm2
CUDA Cores
5120
SMs
80
GPCs
6
TMUs
320
Clocks
Base Clock
1455 MHz
Boost Clock
1455 MHz
Memory Clock
880 MHz
Effective Memory Clock
1760 Mbps
Memory Configuration
Memory Size
16384 MB
Memory Type
HBM2
Memory Bus Width
4096-bit
Memory Bandwidth
901.1 GB/s
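The bandwidth figure follows directly from the bus width and the effective memory clock; a quick sanity check of the arithmetic:

```python
# Peak memory bandwidth = bus width (bits) x effective data rate (Gbps) / 8 bits per byte
bus_width_bits = 4096        # HBM2: four stacks x 1024-bit
effective_rate_gbps = 1.76   # 880 MHz double data rate -> 1760 Mbps per pin

bandwidth_gbs = bus_width_bits * effective_rate_gbps / 8
print(f"{bandwidth_gbs:.1f} GB/s")  # -> 901.1 GB/s
```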

Physical
Interface
PCI-Express 3.0 x16
Power Connectors
1 × 6-pin, 1 × 8-pin
TDP/TBP
300 W
Recommended PSU
600 W
API Support
DirectX
12.0
Vulkan
1.0
OpenGL
4.5
OpenCL
2.1
Shader Model
5.0

Performance
Texture Fillrate
465.6 GTexel/s
FP32 (Single-Precision) Performance
14.9 TFLOPS
Performance per W
49.7 GFLOPS/W
Performance per mm2
18.3 GFLOPS/mm2
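These figures are all derived from the core counts, clocks, and physical specs listed above; a short sketch of the arithmetic (using the 300 W TDP/TBP as listed):

```python
cuda_cores = 5120
tmus = 320
boost_clock_ghz = 1.455
tbp_w = 300
die_mm2 = 815

# FP32 throughput: each CUDA core retires one FMA (2 FLOPs) per clock
fp32_gflops = 2 * cuda_cores * boost_clock_ghz   # 14899.2 GFLOPS (~14.9 TFLOPS)

# Texture fillrate: one texel per TMU per clock
fillrate_gtexel = tmus * boost_clock_ghz         # 465.6 GTexel/s

print(round(fp32_gflops / tbp_w, 1))             # -> 49.7 GFLOPS/W
print(round(fp32_gflops / die_mm2, 1))           # -> 18.3 GFLOPS/mm2
```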





Model                          Cores                  Clocks (Core / Memory)   Memory
NVIDIA DGX-1V                  40960 unified cores    1455 / 880 MHz           128 GB HBM2
NVIDIA HGX-1                   40960 unified cores    1455 / 880 MHz           128 GB HBM2
NVIDIA DGX Station             20480 unified cores    1455 / 880 MHz           64 GB HBM2
NVIDIA Tesla V100 PCIe         5120 unified cores     1455 / 880 MHz           16 GB HBM2
NVIDIA Tesla V100 SXM2         5120 unified cores     1455 / 880 MHz           16 GB HBM2
NVIDIA Tesla V100 FHHL PCIe    5120 unified cores     1200 / 880 MHz           16 GB HBM2

WELCOME TO THE ERA OF AI
Every industry wants intelligence. Within their ever-growing lakes of data lie insights that could revolutionize entire industries: personalized cancer therapy, predicting the next big hurricane, and virtual personal assistants conversing naturally. These opportunities can become a reality when data scientists are given the tools they need to realize their life’s work.

NVIDIA® Tesla® V100 is the world’s most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

AI TRAINING
From recognizing speech to training virtual personal assistants and teaching autonomous cars to drive, data scientists are taking on increasingly complex challenges with AI. Solving these kinds of problems requires training exponentially more complex deep learning models in a practical amount of time.
With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink™ connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
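Each Tensor Core executes a fused 4×4 matrix multiply-accumulate, D = A×B + C, with FP16 inputs and FP32 accumulation. A minimal NumPy sketch of that primitive (illustrative only; `tensor_core_mma` is a hypothetical name, not the CUDA API):

```python
import numpy as np

def tensor_core_mma(a, b, c):
    """Emulate one Tensor Core op on 4x4 tiles: D = A x B + C.
    Inputs A and B are rounded to FP16; products are accumulated
    with C at FP32 precision, as on Volta Tensor Cores."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Widen back to FP32 so the multiply-accumulate runs at FP32 precision
    return a16.astype(np.float32) @ b16.astype(np.float32) + c.astype(np.float32)

d = tensor_core_mma(np.ones((4, 4)), np.ones((4, 4)), np.zeros((4, 4)))
print(d.dtype)  # float32
```

At 640 Tensor Cores, 64 FMAs (4×4×4) per core per clock, 2 FLOPs per FMA, and a 1455 MHz boost clock, this primitive accounts for the quoted deep learning throughput: 640 × 64 × 2 × 1.455 ≈ 119.2 TFLOPS, rounded to 120 in NVIDIA's materials.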

AI INFERENCE
To connect us with the most relevant information, services, and products, hyperscale companies have started to tap into AI. However, keeping up with user demand is a daunting challenge. For example, the world’s largest hyperscale company recently estimated that they would need to double their data center capacity if every user spent just three minutes a day using their speech recognition service.
Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks. With AI at its core, Tesla V100 GPU delivers 30X higher inference performance than a CPU server. This giant leap in throughput and efficiency will make the scale-out of AI services practical.

HIGH PERFORMANCE COMPUTING (HPC)
HPC is a fundamental pillar of modern science. From predicting weather to discovering drugs to finding new energy sources, researchers use large computing systems to simulate and predict our world. AI extends traditional HPC by allowing researchers to analyze large volumes of data for rapid insights where simulation alone cannot fully predict the real world.
Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads. Every researcher and engineer can now afford an AI supercomputer to tackle their most challenging work.

NVIDIA Launches Revolutionary Volta GPU Platform, Fueling Next Era of AI and High Performance Computing

Volta-Based Tesla V100 Data Center GPU Shatters 100 Teraflops Barrier of Deep Learning

NVIDIA today launched Volta™ — the world’s most powerful GPU computing architecture, created to drive the next wave of advancement in artificial intelligence and high performance computing.

The company also announced its first Volta-based processor, the NVIDIA® Tesla® V100 data center GPU, which brings extraordinary speed and scalability for AI inferencing and training, as well as for accelerating HPC and graphics workloads.

“Artificial intelligence is driving the greatest technology advances in human history,” said Jensen Huang, founder and chief executive officer of NVIDIA, who unveiled Volta at his GTC keynote. “It will automate intelligence and spur a wave of social progress unmatched since the industrial revolution.

“Deep learning, a groundbreaking AI approach that creates computer software that learns, has insatiable demand for processing power. Thousands of NVIDIA engineers spent over three years crafting Volta to help meet this need, enabling the industry to realize AI’s life-changing potential,” he said.

Volta, NVIDIA’s seventh-generation GPU architecture, is built with 21 billion transistors and delivers the equivalent performance of 100 CPUs for deep learning.

It provides a 5x improvement over Pascal™, the current-generation NVIDIA GPU architecture, in peak teraflops, and 15x over the Maxwell™ architecture, launched two years ago. This performance surpasses by 4x the improvements that Moore’s law would have predicted.

Demand for accelerating AI has never been greater. Developers, data scientists and researchers increasingly rely on neural networks to power their next advances in fighting cancer, making transportation safer with self-driving vehicles, providing new intelligent customer experiences and more.

Data centers need to deliver exponentially greater processing power as these networks become more complex. And they need to efficiently scale to support the rapid adoption of highly accurate AI-based services, such as natural language virtual assistants, and personalized search and recommendation systems.

Volta will become the new standard for high performance computing. It offers a platform for HPC systems to excel at both computational science and data science for discovering insights. By pairing CUDA® cores and the new Volta Tensor Core within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPUs for traditional HPC.

Breakthrough Technologies
The Tesla V100 GPU leapfrogs previous generations of NVIDIA GPUs with groundbreaking technologies that enable it to shatter the 100 teraflops barrier of deep learning performance. They include:

  • Tensor Cores designed to speed AI workloads. Equipped with 640 Tensor Cores, V100 delivers 120 teraflops of deep learning performance, equivalent to the performance of 100 CPUs.
  • New GPU architecture with over 21 billion transistors. It pairs CUDA cores and Tensor Cores within a unified architecture, providing the performance of an AI supercomputer in a single GPU.
  • NVLink™ provides the next generation of high-speed interconnect linking GPUs, and GPUs to CPUs, with up to 2x the throughput of the prior generation NVLink.
  • 900 GB/sec HBM2 DRAM, developed in collaboration with Samsung, achieves 50 percent more memory bandwidth than previous generation GPUs, essential to support the extraordinary computing throughput of Volta.
  • Volta-optimized software, including CUDA, cuDNN and TensorRT™ software, which leading frameworks and applications can easily tap into to accelerate AI and research.

Ecosystem Support for Volta
Volta has received broad industry support from leading companies and organizations around the world:

“NVIDIA and AWS have worked together for a long time to help customers run compute-intensive AI workloads in the cloud. We launched the first GPU-optimized cloud instance in 2010, and introduced last year the most powerful GPU instance available in the cloud. AWS is home to some of today’s most innovative and creative AI applications, and we look forward to helping customers continue to build incredible new applications with the next generation of our general-purpose GPU instance family when Volta becomes available later in the year.”
— Matt Garman, vice president of Compute Services, Amazon Web Services

“We express our congratulations to NVIDIA’s latest release of Volta. From Baidu Cloud to Intelligent Driving, Baidu has been strengthening its efforts in building an open AI platform. Together with NVIDIA, we believe we will accelerate the development and application of the global AI technology and create more opportunities for the whole society.”
— Yaqin Zhang, president, Baidu

“NVIDIA and Facebook have been great partners and we are excited about the contributions NVIDIA has made to Facebook’s Caffe2 and PyTorch. We look forward to the AI advances NVIDIA’s new high-performing Volta graphics architecture will enable.”
— Mike Schroepfer, chief technology officer, Facebook

“NVIDIA’s GPUs deliver significant performance boosts for Google Cloud Platform customers. GPUs are an important part of our infrastructure, offering Google and our enterprise customers extra computational power for machine learning or high performance computing and data analysis. Volta’s performance improvements will make GPUs even more powerful and we plan to offer Volta GPUs on GCP.”
— Brad Calder, vice president of Engineering for Google Cloud Platform, Google

“Microsoft and NVIDIA have partnered for years on AI technologies, including Microsoft Azure N-series, Project Olympus and Cognitive Toolkit. The new Volta architecture will unlock extraordinary new capabilities for Microsoft customers.”
— Harry Shum, executive vice president of Microsoft AI and Research Group, Microsoft

“Oak Ridge National Laboratory will begin assembling our next-generation leadership computing system, Summit, this summer. Summit is powered by Volta GPUs and will be the top supercomputer in the U.S. for scientific discovery when completed in 2018. It will keep the U.S. at the forefront of scientific research and help the Department of Energy address complex challenges with computational science and AI-assisted discovery.”
— Jeff Nichols, associate laboratory director of the Computing and Computational Sciences Directorate, Oak Ridge National Laboratory

“A large variety of our products, including voice technology in WeChat, photo and video technology in QQ and Qzone, and the deep learning platform based on Tencent Cloud, already rely on AI. We believe Volta will provide unprecedented computing power for our AI developers, and we’re excited to open up those capabilities soon from Tencent Cloud to more clients.”
— Dowson Tong, senior executive vice president, Tencent