NVIDIA HGX-1 Professional Computing Solution

8x Tesla V100

Product status: Official | Last Update: 2020-05-14
Overview
Manufacturer
NVIDIA
Original Series
Tesla Volta
Release Date
May 10th, 2017
Graphics Processing Unit
GPU Model
8× GV100
Architecture
Volta
Fabrication Process
12 nm FinFET
Die Size
8× 815 mm²
Transistor Count
8× 21.1B
Transistor Density
25.9M transistors/mm²
CUDA Cores
8× 5120 (40960)
Tensor Cores
8× 640 (5120)
SM
8× 80 (640)
GPCs
8× 6 (48)
TMUs
8× 320 (2560)
Clocks
Base Clock
1200 MHz
Boost Clock
1455 MHz
Memory Clock
880 MHz
Effective Memory Clock
1760 Mbps
Memory Configuration
Memory Size
8× 16384 (131072) MB
Memory Type
HBM2
Memory Bus Width
8× 4096 (32768)-bit
Memory Bandwidth
8× 901.1 (7208.8) GB/s
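The listed bandwidth can be sanity-checked from the bus width and effective memory clock, assuming the conventional formula (bus width in bits × per-pin data rate ÷ 8 bits per byte); the aggregate figure above (7208.8 GB/s) is the rounded per-GPU value times eight:

```python
# Sanity-check the listed HBM2 bandwidth from bus width and data rate.
bus_width_bits = 4096        # per-GPU HBM2 bus width
data_rate_gbps = 1.76        # 880 MHz double data rate = 1760 Mbps per pin

per_gpu_gb_s = bus_width_bits * data_rate_gbps / 8
total_gb_s = 8 * per_gpu_gb_s

print(per_gpu_gb_s)  # 901.12 GB/s per GPU
print(total_gb_s)    # 7208.96 GB/s aggregate (listed as 8 × 901.1 = 7208.8)
```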
Physical
Interface
SXM 2.0
API Support
DirectX
12.0
Vulkan
1.0
OpenGL
4.5
OpenCL
2.1

Performance
Texture Fillrate
3.7 TTexel/s
FP16 Performance
238.4 TFLOPS
FP32 Performance
119.2 TFLOPS
FP64 Performance
59.6 TFLOPS
FP32 Perf. per mm²
18.3 GFLOPS/mm²
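The throughput figures above follow from the core counts and boost clock, assuming GV100's standard per-clock rates (2 FP32 FLOPs per CUDA core per clock via FMA, half-rate FP64, double-rate FP16, one texel per TMU per clock); a quick check:

```python
# Derive the listed aggregate throughput figures from cores and boost clock.
gpus = 8
cuda_cores = 5120   # per GPU
tmus = 320          # per GPU
boost_ghz = 1.455

fp32_tflops = gpus * cuda_cores * boost_ghz * 2 / 1000  # 2 FLOPs/core/clock (FMA)
fp64_tflops = fp32_tflops / 2                           # GV100: half-rate FP64
fp16_tflops = fp32_tflops * 2                           # GV100: double-rate FP16
tex_ttexel = gpus * tmus * boost_ghz / 1000             # 1 texel/TMU/clock

print(round(fp32_tflops, 1))  # 119.2
print(round(fp64_tflops, 1))  # 59.6
print(round(fp16_tflops, 1))  # 238.4
print(round(tex_ttexel, 1))   # 3.7
```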




Model                          Cores   Boost Clock   Memory Clock   Memory Config.
NVIDIA DGX-2                   81920   1455 MHz      1.8 Gbps       8192 GB HBM2 4096-bit
NVIDIA HGX-2                   81920   1455 MHz      1.8 Gbps       4096 GB HBM2 4096-bit
NVIDIA DGX-1V                  40960   1455 MHz      1.8 Gbps       1024 GB HBM2 4096-bit
NVIDIA HGX-1                   40960   1455 MHz      1.8 Gbps       1024 GB HBM2 4096-bit
NVIDIA DGX Station             20480   1455 MHz      1.8 Gbps       256 GB HBM2 4096-bit
NVIDIA Tesla V100S PCIe 32GB   5120    1600 MHz      2.2 Gbps       32 GB HBM2 4096-bit
NVIDIA Tesla V100 SXM2 32GB    5120    1533 MHz      1.8 Gbps       32 GB HBM2 4096-bit
NVIDIA Tesla V100 SXM3 350W    5120    1533 MHz      1.8 Gbps       32 GB HBM2 4096-bit
NVIDIA Tesla V100 SXM3 450W    5120    1533 MHz      1.8 Gbps       32 GB HBM2 4096-bit
NVIDIA Quadro GV100            5120    -             1.7 Gbps       32 GB HBM2 4096-bit
NVIDIA Tesla V100 PCIe 32GB    5120    1367 MHz      1.8 Gbps       32 GB HBM2 4096-bit
NVIDIA TITAN V CEO Edition     5120    1455 MHz      1.7 Gbps       32 GB HBM2 4096-bit
NVIDIA Tesla V100 SXM2         5120    1533 MHz      1.8 Gbps       16 GB HBM2 4096-bit
NVIDIA Tesla V100 PCIe         5120    1368 MHz      1.8 Gbps       16 GB HBM2 4096-bit
NVIDIA Tesla V100 FHHL PCIe    5120    1367 MHz      1.8 Gbps       16 GB HBM2 4096-bit
NVIDIA TITAN V                 5120    1455 MHz      1.7 Gbps       12 GB HBM2 3072-bit

HGX-1 HYPERSCALE GPU ACCELERATOR FOR AI CLOUD COMPUTING

Powered by NVIDIA® Tesla® GPUs and NVIDIA NVLink high-speed interconnect technology, the HGX-1 arrives as AI workloads, from autonomous driving and personalized healthcare to superhuman voice recognition, are taking off in the cloud.

Purpose-built for cloud computing, the HGX-1 enclosure architecture delivers revolutionary performance with unprecedented configurability and future-proofing. It taps the power of eight NVIDIA Tesla GPUs interconnected in the NVLink hybrid cube mesh topology introduced with the NVIDIA DGX-1, for class-leading performance. Its innovative PCIe switching architecture lets a CPU connect dynamically to any number of GPUs. Cloud service providers that standardize on a single HGX-1 infrastructure can therefore offer customers a range of CPU and GPU machine instances, while the standard NVLink fabric architecture allows the rich GPU software ecosystem to accelerate AI and other workloads.

With HGX-1, hyperscale data centers can provide optimal performance and flexibility for virtually any accelerated workload, including deep learning training, inference, advanced analytics, and high-performance computing. For deep learning, it delivers up to 100X faster performance than legacy CPU-based servers, at an estimated one-fifth the cost for AI training and one-tenth the cost for AI inference.

With its modular design, the HGX-1 is suited for deployment in existing data center racks worldwide, giving hyperscale data centers a quick, simple path to AI readiness. The HGX-1 is architected to be Tesla V100-ready and to extract the full AI performance that Tesla V100 provides. With a simple drop-in upgrade, Tesla V100 GPUs make it an even more flexible and powerful cloud computing platform.