NVIDIA Tesla V100 SXM2 Professional Graphics Card

Product status: Released
Overview
Manufacturer
NVIDIA
Original Series
Tesla Volta
Release Date
May 10th, 2017
Graphics Processing Unit
GPU Model
GV100
Architecture
Volta
Fabrication Process
12 nm FFN
Die Size
815 mm2
Transistors Count
21.1B
Transistors Density
25.9M transistors/mm2
CUDA Cores
5120
SMs
80
GPCs
6
TMUs
320
Clocks
Base Clock
1455 MHz
Boost Clock
1455 MHz
Memory Clock
880 MHz
Effective Memory Clock
1760 Mbps
Memory Configuration
Memory Size
16384 MB
Memory Type
HBM2
Memory Bus Width
4096-bit
Memory Bandwidth
901.1 GB/s
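
The listed bandwidth follows directly from the bus width and the effective per-pin data rate above. As a minimal arithmetic sketch (host-only code, compilable with nvcc or any C++ compiler; the constants are the values from this table):

#include <cstdio>

int main() {
    const double bus_width_bits = 4096.0;   // HBM2 interface width from this page
    const double effective_mbps = 1760.0;   // effective per-pin data rate from this page
    // GB/s = bits transferred per second across the whole bus, divided by 8 bits per byte
    double bandwidth_gbs = bus_width_bits * effective_mbps / 8.0 / 1000.0;
    printf("Memory bandwidth: %.1f GB/s\n", bandwidth_gbs);   // prints ~901.1
    return 0;
}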

Physical
Interface
NVLink 2.0
Power Connectors
-
TDP/TBP
300 W
Recommended PSU
600 W
API Support
DirectX
12.0
Vulkan
1.0
OpenGL
4.5
OpenCL
2.1
Shader Model
5.0

Performance
Texture Fillrate
465.6 GTexel/s
Single Precision (FP32) Performance
14.9 TFLOPS
Performance per W
49.7 GFLOPS/W
Performance per mm2
18.3 GFLOPS/mm2
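
For reference, a minimal sketch of how the Performance figures above derive from the spec values listed earlier on this page (boost clock, CUDA cores, TMUs, TDP, die size); host-only code, compilable with nvcc or any C++ compiler:

#include <cstdio>

int main() {
    const double boost_mhz  = 1455.0;   // boost clock
    const double cuda_cores = 5120.0;   // CUDA cores
    const double tmus       = 320.0;    // texture units
    const double tdp_w      = 300.0;    // board power
    const double die_mm2    = 815.0;    // die size

    double gtexel_s    = tmus * boost_mhz / 1000.0;           // texture fillrate, ~465.6 GTexel/s
    double fp32_tflops = 2.0 * cuda_cores * boost_mhz / 1e6;  // FMA counts as 2 ops, ~14.9 TFLOPS
    double gflops_w    = fp32_tflops * 1000.0 / tdp_w;        // ~49.7 GFLOPS/W
    double gflops_mm2  = fp32_tflops * 1000.0 / die_mm2;      // ~18.3 GFLOPS/mm2

    printf("%.1f GTexel/s  %.1f TFLOPS  %.1f GFLOPS/W  %.1f GFLOPS/mm2\n",
           gtexel_s, fp32_tflops, gflops_w, gflops_mm2);
    return 0;
}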





Model                          Cores                  Clocks (GPU / Memory)   Memory
NVIDIA DGX-1V                  40960 Unified Cores    1455 / 880 MHz          128 GB HBM2
NVIDIA HGX-1                   40960 Unified Cores    1455 / 880 MHz          128 GB HBM2
NVIDIA DGX Station             20480 Unified Cores    1455 / 880 MHz          64 GB HBM2
NVIDIA Tesla V100 PCIe         5120 Unified Cores     1455 / 880 MHz          16 GB HBM2
NVIDIA Tesla V100 SXM2         5120 Unified Cores     1455 / 880 MHz          16 GB HBM2
NVIDIA Tesla V100 FHHL PCIe    5120 Unified Cores     1200 / 880 MHz          16 GB HBM2

WELCOME TO THE ERA OF AI
Every industry wants intelligence. Within their ever-growing lakes of data lie insights that can provide the opportunity to revolutionize entire industries: personalized cancer therapy, predicting the next big hurricane, and virtual personal assistants conversing naturally. These opportunities can become a reality when data scientists are given the tools they need to realize their life’s work.

NVIDIA® Tesla® V100 is the world’s most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

AI TRAINING
From recognizing speech to training virtual personal assistants and teaching autonomous cars to drive, data scientists are taking on increasingly complex challenges with AI. Solving these kinds of problems requires training exponentially more complex deep learning models in a practical amount of time.
With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink™ connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
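
As an illustration of the Tensor Core path referred to above, here is a minimal, hedged CUDA sketch (not from the datasheet): one warp issuing a single 16x16x16 FP16 multiply with FP32 accumulation through the WMMA API, which is the granularity at which Volta Tensor Cores operate. It assumes CUDA 9 or later and compilation with nvcc -arch=sm_70.

#include <cuda_fp16.h>
#include <mma.h>
#include <cstdio>

using namespace nvcuda;

// Fill both 16x16 FP16 input tiles with 1.0 on the device.
__global__ void fill_ones(half *a, half *b) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 16 * 16) {
        a[i] = __float2half(1.0f);
        b[i] = __float2half(1.0f);
    }
}

// One warp computes C = A * B + C for a single 16x16 tile on a Tensor Core.
__global__ void tensor_core_mma(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);           // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // FP16 multiply, FP32 accumulate
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b;
    float *c;
    cudaMallocManaged(&a, 16 * 16 * sizeof(half));
    cudaMallocManaged(&b, 16 * 16 * sizeof(half));
    cudaMallocManaged(&c, 16 * 16 * sizeof(float));

    fill_ones<<<1, 256>>>(a, b);
    tensor_core_mma<<<1, 32>>>(a, b, c);   // a single warp drives the WMMA op
    cudaDeviceSynchronize();

    printf("c[0] = %.1f (expected 16.0 for all-ones inputs)\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

In practice, libraries such as cuBLAS and cuDNN issue these operations automatically for FP16 workloads; the sketch only shows the underlying primitive.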

AI INFERENCE
To connect us with the most relevant information, services, and products, hyperscale companies have started to tap into AI. However, keeping up with user demand is a daunting challenge. For example, the world’s largest hyperscale company recently estimated that it would need to double its data center capacity if every user spent just three minutes a day using its speech recognition service.
Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks. With AI at its core, the Tesla V100 GPU delivers 30X higher inference performance than a CPU server. This giant leap in throughput and efficiency will make the scale-out of AI services practical.

HIGH PERFORMANCE COMPUTING (HPC)
HPC is a fundamental pillar of modern science. From predicting weather to discovering drugs to finding new energy sources, researchers use large computing systems to simulate and predict our world. AI extends traditional HPC by allowing researchers to analyze large volumes of data for rapid insights where simulation alone cannot fully predict the real world.
Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads. Every researcher and engineer can now afford an AI supercomputer to tackle their most challenging work.