AMD Instinct MI100 Professional Graphics Card

Product status: Official | Last Update: 2022-10-24
Overview
Manufacturer
AMD
Original Series
Instinct
Launch Date
November 16th, 2020
PCB Code
109-D34317-10-03
Board Model
AMD D343
Graphics Processing Unit
GPU Model
Arcturus
Architecture
CDNA
Fabrication Process
7 nm FF (TSMC N7)
Die Size
750 mm2
Transistor Count
25.6B
Transistor Density
34.1M transistors/mm2
Stream Processors
7680
Compute Units
120
Clocks
Boost Clock
1504 MHz
Memory Clock
1200 MHz
Effective Memory Clock
2400 Mbps
Memory Configuration
Memory Size
32768 MB
Memory Type
HBM2
Memory Bus Width
4096-bit
Memory Bandwidth
1,228.8 GB/s
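
The bandwidth figure follows from the memory configuration above: HBM2 at a 1200 MHz memory clock transfers data on both clock edges (2.4 Gbps per pin) across the 4096-bit interface. A minimal sketch of the arithmetic in Python (variable names are illustrative):

    # Peak memory bandwidth implied by the HBM2 configuration above
    memory_clock_mhz = 1200      # HBM2 memory clock
    ddr_factor = 2               # data transferred on both clock edges -> 2.4 Gbps per pin
    bus_width_bits = 4096        # total interface width

    per_pin_gbps = memory_clock_mhz * ddr_factor / 1000
    bandwidth_gb_s = per_pin_gbps * bus_width_bits / 8   # bits per second -> bytes per second
    print(f"{bandwidth_gb_s:.1f} GB/s")                  # 1228.8 GB/s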

Physical
Interface
PCI-Express 4.0 x16
Slot Width
2-slot
Power Connectors
2× 8-pin
TDP/TBP
300 W
Recommended PSU
600 W
API Support
DirectX
12.2
Vulkan
1.2
OpenGL
4.6
OpenCL
3.0

Performance
Peak FP32
23.1 TFLOPS
FP32 Perf. per Watt
77 GFLOPS/W
FP32 Perf. per mm2
30.8 GFLOPS/mm2
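
These figures are consistent with the shader count, boost clock, board power and die size listed above, assuming the conventional count of two FLOPs per fused multiply-add per stream processor per clock. A short Python sketch of the arithmetic (illustrative only):

    # Peak FP32 throughput and the derived efficiency figures
    stream_processors = 7680
    boost_clock_ghz = 1.504
    flops_per_clock = 2          # one fused multiply-add counted as two FLOPs

    peak_fp32_tflops = stream_processors * boost_clock_ghz * flops_per_clock / 1000
    print(f"{peak_fp32_tflops:.1f} TFLOPS")                    # ~23.1 TFLOPS
    print(f"{peak_fp32_tflops * 1000 / 300:.0f} GFLOPS/W")     # ~77 GFLOPS/W at 300 W TDP
    print(f"{peak_fp32_tflops * 1000 / 750:.1f} GFLOPS/mm2")   # ~30.8 GFLOPS/mm2 on a 750 mm2 die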




Model                           Cores   Boost Clock   Memory Clock   Memory Config.
AMD Instinct MI300X             19456   2100 MHz      5.2 Gbps       192 GB, 8192-bit
AMD Instinct MI250X             14080   1700 MHz      3.2 Gbps       128 GB, 8192-bit
AMD Instinct MI250              13312   1700 MHz      3.2 Gbps       128 GB, 8192-bit
AMD Instinct MI100              7680    1504 MHz      2.4 Gbps       32 GB HBM2, 4096-bit
AMD Instinct MI210              6656    1700 MHz      3.2 Gbps       64 GB, 4096-bit
AMD Radeon Instinct MI60        4096    1800 MHz      2 Gbps         32 GB HBM2, 4096-bit
AMD Radeon Instinct MI25        4096    1501 MHz      1.9 Gbps       16 GB HBM2, 2048-bit
AMD Vega Cube                   4096    1501 MHz      1.9 Gbps       16 GB HBM2, 2048-bit
AMD Radeon Instinct MI8         4096    1000 MHz      1 Gbps         4 GB HBM1, 4096-bit
AMD Radeon Instinct MI50 32GB   3840    1725 MHz      2 Gbps         32 GB HBM2, 4096-bit
AMD Radeon Instinct MI50        3840    1746 MHz      2 Gbps         16 GB HBM2, 4096-bit
AMD Radeon Instinct MI6         2304    1237 MHz      7 Gbps         16 GB GDDR5, 256-bit

AMD Announces World’s Fastest HPC Accelerator for Scientific Research¹

AMD Instinct™ MI100 accelerators revolutionize high-performance computing (HPC) and AI with industry-leading compute performance

First GPU accelerator with new AMD CDNA architecture engineered for the exascale era

SANTA CLARA, Calif., Nov. 16, 2020 (GLOBE NEWSWIRE) — AMD (NASDAQ: AMD) today announced the new AMD Instinct™ MI100 accelerator – the world’s fastest HPC GPU and the first x86 server GPU to surpass the 10 teraflops (FP64) performance barrier.1 Supported by new accelerated compute platforms from Dell, Gigabyte, HPE, and Supermicro, the MI100, combined with AMD EPYC™ CPUs and the ROCm™ 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era.

Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads2. With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in FP16 theoretical peak floating point performance for AI training workloads compared to AMD’s prior generation accelerators.3
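
As a rough check on the "nearly 7x" claim, one plausible reading is a comparison against the Radeon Instinct MI50; the 26.5 TFLOPS FP16 baseline below is an assumption, since the release does not name the exact prior-generation part:

    # Generational FP16 comparison; the MI50 baseline is an assumption, as the
    # release does not identify the prior-generation accelerator.
    mi100_fp16_matrix_tflops = 184.6   # from the MI100 specification table below
    mi50_fp16_tflops = 26.5            # commonly listed FP16 peak for Radeon Instinct MI50
    print(f"{mi100_fp16_matrix_tflops / mi50_fp16_tflops:.1f}x")   # ~7.0x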

“Today AMD takes a major step forward in the journey toward exascale computing as we unveil the AMD Instinct MI100 – the world’s fastest HPC GPU,” said Brad McCredie, corporate vice president, Data Center GPU and Accelerated Processing, AMD. “Squarely targeted toward the workloads that matter in scientific computing, our latest accelerator, when combined with the AMD ROCm open software platform, is designed to provide scientists and researchers a superior foundation for their work in HPC.”

Open Software Platform for the Exascale Era

The AMD ROCm developer software provides the foundation for exascale computing. As an open source toolset consisting of compilers, programming APIs and libraries, ROCm is used by exascale software developers to create high performance applications. ROCm 4.0 has been optimized to deliver performance at scale for MI100-based systems. ROCm 4.0 has upgraded the compiler to be open source and unified to support both OpenMP® 5.0 and HIP. The PyTorch and TensorFlow frameworks, which have been optimized with ROCm 4.0, can now achieve higher performance with MI100.7,8 ROCm 4.0 is the latest offering for HPC, ML and AI application developers, allowing them to create performance-portable software.
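
As a small illustration of that portability, PyTorch built against ROCm exposes AMD accelerators through the same torch.cuda device API used elsewhere, so framework code typically runs unchanged on an MI100. The snippet below is a minimal sketch assuming a ROCm-enabled PyTorch installation, not an excerpt from AMD's documentation:

    import torch

    # On a ROCm build of PyTorch, AMD accelerators are exposed through the familiar
    # torch.cuda device API, so existing framework code runs unchanged on an MI100.
    if torch.cuda.is_available():
        device = torch.device("cuda")
        print(torch.cuda.get_device_name(0))    # reports the installed accelerator
        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)
        c = a @ b                                # matrix multiply runs on the GPU
        print(c.shape)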

“We’ve received early access to the MI100 accelerator, and the preliminary results are very encouraging. We’ve typically seen significant performance boosts, up to 2-3x compared to other GPUs,” said Bronson Messer, director of science, Oak Ridge Leadership Computing Facility. “What’s also important to recognize is the impact software has on performance. The fact that the ROCm open software platform and HIP developer tool are open source and work on a variety of platforms, it is something that we have been absolutely almost obsessed with since we fielded the very first hybrid CPU/GPU system.”

Key capabilities and features of the AMD Instinct MI100 accelerator include:

  • All-New AMD CDNA Architecture – Engineered to power AMD GPUs for the exascale era and at the heart of the MI100 accelerator, the AMD CDNA architecture offers exceptional performance and power efficiency
  • Leading FP64 and FP32 Performance for HPC Workloads – Delivers industry leading 11.5 TFLOPS peak FP64 performance and 23.1 TFLOPS peak FP32 performance, enabling scientists and researchers across the globe to accelerate discoveries in industries including life sciences, energy, finance, academics, government, defense and more.1
  • All-New Matrix Core Technology for HPC and AI – Supercharged performance for a full range of single and mixed precision matrix operations, such as FP32, FP16, bFloat16, Int8 and Int4, engineered to boost the convergence of HPC and AI (a brief mixed-precision sketch follows this list).
  • 2nd Gen AMD Infinity Fabric™ Technology – Instinct MI100 provides ~2x the peer-to-peer (P2P) peak I/O bandwidth over PCIe® 4.0 with up to 340 GB/s of aggregate bandwidth per card with three AMD Infinity Fabric™ Links.4 In a server, MI100 GPUs can be configured with up to two fully-connected quad GPU hives, each providing up to 552 GB/s of P2P I/O bandwidth for fast data sharing.4
  • Ultra-Fast HBM2 Memory – Features 32GB high-bandwidth HBM2 memory at a clock rate of 1.2 GHz and delivers an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory.5
  • Support for Industry’s Latest PCIe® Gen 4.0 – Designed with the latest PCIe Gen 4.0 technology support providing up to 64GB/s peak theoretical transport data bandwidth from CPU to GPU.6
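
Below is a minimal sketch of the mixed-precision pattern the Matrix Cores target, using PyTorch as a stand-in; the framework, data type and tensor shapes are illustrative assumptions, and any ROCm-supported framework that lowers matrix multiplies to the hardware would serve equally well:

    import torch

    # Mixed-precision matrix multiply: low-precision inputs with the result carried
    # forward in FP32, the pattern the MI100 Matrix Cores are designed to accelerate.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(1024, 1024, device=device, dtype=torch.bfloat16)
    b = torch.randn(1024, 1024, device=device, dtype=torch.bfloat16)
    c = (a @ b).float()      # bfloat16 multiply, result promoted to FP32
    print(c.dtype, c.shape)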

Available Server Solutions
The AMD Instinct MI100 accelerators are expected by the end of the year in systems from major OEM and ODM partners in the enterprise markets, including:

Dell
“Dell EMC PowerEdge servers will support the new AMD Instinct MI100, which will enable faster insights from data. This would help our customers achieve more robust and efficient HPC and AI results rapidly,” said Ravi Pendekanti, senior vice president, PowerEdge Servers, Dell Technologies. “AMD has been a valued partner in our support for advancing innovation in the data center. The high-performance capabilities of AMD Instinct accelerators are a natural fit for our PowerEdge server AI & HPC portfolio.”

Gigabyte
“We’re pleased to again work with AMD as a strategic partner offering customers server hardware for high performance computing,” said Alan Chen, assistant vice president in NCBU, GIGABYTE. “AMD Instinct MI100 accelerators represent the next level of high-performance computing in the data center, bringing greater connectivity and data bandwidth for energy research, molecular dynamics, and deep learning training. As a new accelerator in the GIGABYTE portfolio, our customers can look to benefit from improved performance across a range of scientific and industrial HPC workloads.”

Hewlett Packard Enterprise (HPE)
“Customers use HPE Apollo systems for purpose-built capabilities and performance to tackle a range of complex, data-intensive workloads across high-performance computing (HPC), deep learning and analytics,” said Bill Mannel, vice president and general manager, HPC at HPE. “With the introduction of the new HPE Apollo 6500 Gen10 Plus system, we are further advancing our portfolio to improve workload performance by supporting the new AMD Instinct MI100 accelerator, which enables greater connectivity and data processing, alongside the 2nd Gen AMD EPYC™ processor. We look forward to continuing our collaboration with AMD to expand our offerings with its latest CPUs and accelerators.”

Supermicro
“We’re excited that AMD is making a big impact in high-performance computing with AMD Instinct MI100 GPU accelerators,” said Vik Malyala, senior vice president, field application engineering and business development, Supermicro. “With the combination of the compute power gained with the new CDNA architecture, along with the high memory and GPU peer-to-peer bandwidth the MI100 brings, our customers will get access to great solutions that will meet their accelerated compute requirements and critical enterprise workloads. The AMD Instinct MI100 will be a great addition for our multi-GPU servers and our extensive portfolio of high-performance systems and server building block solutions.”

MI100 Specifications

Compute Units: 120
Stream Processors: 7680
FP64 TFLOPS (Peak): Up to 11.5
FP32 TFLOPS (Peak): Up to 23.1
FP32 Matrix TFLOPS (Peak): Up to 46.1
FP16/FP16 Matrix TFLOPS (Peak): Up to 184.6
INT4 | INT8 TOPS (Peak): Up to 184.6
bFloat16 TFLOPS (Peak): Up to 92.3
HBM2 ECC Memory: 32 GB
Memory Bandwidth: Up to 1.23 TB/s
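
For orientation, the peak figures above can be reproduced from the compute-unit count and clock under a few assumed throughput ratios (FP64 at half the FP32 vector rate, FP32 Matrix at twice it, FP16 Matrix and INT4/INT8 at eight times it, bFloat16 at four times it). The ratios and the 1502 MHz peak engine clock used here are inferred from the published peaks rather than taken from AMD documentation; a short Python sketch:

    # Reconstructing the published peak figures; the per-clock ratios and the
    # 1502 MHz peak engine clock are assumptions inferred from those peaks.
    compute_units = 120
    sp_per_cu = 64
    peak_clock_ghz = 1.502

    fp32_vector = compute_units * sp_per_cu * peak_clock_ghz * 2 / 1000
    print(f"FP32 vector  ~{fp32_vector:.1f} TFLOPS")        # ~23.1
    print(f"FP64         ~{fp32_vector / 2:.1f} TFLOPS")    # ~11.5
    print(f"FP32 Matrix  ~{fp32_vector * 2:.1f} TFLOPS")    # ~46.1
    print(f"FP16 Matrix  ~{fp32_vector * 8:.1f} TFLOPS")    # ~184.6
    print(f"bFloat16     ~{fp32_vector * 4:.1f} TFLOPS")    # ~92.3
    print(f"INT4 | INT8  ~{fp32_vector * 8:.1f} TOPS")      # ~184.6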