
NVIDIA DGX A100 Datasheet




NVIDIA DGX A100: THE UNIVERSAL SYSTEM FOR AI INFRASTRUCTURE

SYSTEM SPECIFICATIONS

GPUs: 8x NVIDIA A100 Tensor Core GPUs
GPU Memory: 320 GB total
Performance: 5 petaFLOPS AI; 10 petaOPS INT8
NVIDIA NVSwitches: 6
System Power Usage: max
CPU: Dual AMD Rome 7742, 128 cores total, GHz (base), GHz (max boost)
System Memory: 1 TB
Networking: 8x Single-Port Mellanox ConnectX-6 VPI, 200 Gb/s HDR InfiniBand

The Challenge of Scaling Enterprise AI

Every business needs to transform using artificial intelligence (AI), not only to survive, but to thrive in challenging times. However, the enterprise requires a platform for AI infrastructure that improves upon traditional approaches, which historically involved slow compute architectures siloed by analytics, training, and inference workloads. The old approach created complexity, drove up costs, constrained speed of scale, and was not ready for modern AI. Enterprises, developers, data scientists, and researchers need a new platform that unifies all AI workloads, simplifying infrastructure and accelerating ROI.

The Universal System for Every AI Workload

NVIDIA DGX A100 is the universal system for all AI workloads, from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor, replacing legacy compute infrastructure with a single, unified system.

SYSTEM SPECIFICATIONS (continued)

Networking (continued): 1x Dual-Port Mellanox ConnectX-6 VPI, 10/25/50/100/200 Gb/s Ethernet
Storage: OS: 2x NVMe drives; Internal Storage: 15 TB (4x NVMe drives)
Software: Ubuntu Linux OS

DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. This ensures that the largest and most complex jobs are supported, along with the simplest and smallest. Running the DGX software stack with optimized software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for both single-node deployments and large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
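The Multi-Instance GPU partitioning described above is driven from NVIDIA's `nvidia-smi` tool. As a minimal sketch (profile IDs vary by GPU model and driver, so confirm them with `-lgip` on your own system; `19` is assumed here to be the 1g.5gb profile on an A100 40GB), an administrator might carve one A100 into right-sized slices like this:

```shell
# Enable MIG mode on GPU 0 (requires root; may require a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU and driver support
sudo nvidia-smi mig -lgip

# Create two GPU instances from profile 19 (assumed 1g.5gb);
# -C also creates a default compute instance inside each
sudo nvidia-smi mig -cgi 19,19 -C

# The new MIG devices now appear as individually schedulable GPUs
nvidia-smi -L
```

Workloads can then be pinned to a single slice by passing its MIG device UUID via `CUDA_VISIBLE_DEVICES`, which is how right-sized resources are assigned to specific jobs.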

SYSTEM SPECIFICATIONS (continued)

System Weight: 271 lbs (123 kgs)
Packaged System Weight: 315 lbs (143 kgs)
System Dimensions: Height: in (mm); Width: in (mm) MAX; Length: in (mm) MAX
Operating Temperature Range: 5°C to 30°C (41°F to 86°F)

Direct Access to NVIDIA DGXperts

NVIDIA DGX A100 is more than a server; it is a complete hardware and software platform built upon the knowledge gained from the world's largest DGX proving ground, NVIDIA DGX SATURNV, and backed by thousands of DGXperts at NVIDIA. DGXperts are AI-fluent practitioners who offer prescriptive guidance and design expertise to help fast-track AI transformation.

They've built a wealth of know-how and experience over the last decade to help maximize the value of your DGX investment. DGXperts help ensure that critical applications get up and running quickly, and stay running smoothly, for dramatically improved time to insights.

NVIDIA DGX A100 | DATA SHEET | MAY20

Fastest Time to Solution

NVIDIA DGX A100 features eight NVIDIA A100 Tensor Core GPUs, providing users with unmatched acceleration, and is fully optimized for NVIDIA CUDA-X software and the end-to-end NVIDIA data center solution stack. NVIDIA A100 GPUs bring a new precision, TF32, which works just like FP32 while providing 20X higher FLOPS for AI vs. the previous generation, and, best of all, no code changes are required to get this speedup. And when using NVIDIA's automatic mixed precision, A100 offers an additional 2X boost to performance with just one additional line of code, using FP16 precision. The A100 GPU also has class-leading memory bandwidth, measured in terabytes per second (TB/s), a greater than 70% increase over the last generation. Additionally, the A100 GPU has significantly more on-chip memory, including a 40 MB Level 2 cache that is nearly 7X larger than the previous generation, maximizing compute performance.

[Chart] DGX A100 Delivers 6X the Training Performance
NLP: BERT-Large — NVIDIA DGX A100 (TF32): 1,289 Seq/s; 8x V100 (FP32): 216 Seq/s
BERT pre-training throughput using PyTorch, including (2/3) Phase 1 and (1/3) Phase 2 | Phase 1 Seq Len = 128, Phase 2 Seq Len = 512 | V100: DGX-1 with 8x V100 using FP32 precision | DGX A100: DGX A100 with 8x A100 using TF32 precision

DGX A100 also debuts the next generation of NVIDIA NVLink, which doubles the GPU-to-GPU direct bandwidth to 600 gigabytes per second (GB/s), almost 10X higher than PCIe Gen 4, and a new NVIDIA NVSwitch that is 2X faster than the last generation. This unprecedented power delivers the fastest time to solution, allowing users to tackle challenges that weren't possible or practical before.

[Chart] DGX A100 Delivers 172X the Inference Performance
Inference: peak compute — NVIDIA DGX A100: 10 petaOPS; CPU server: 58 TOPS
CPU server: 2x Intel Platinum 8280 using INT8 | DGX A100: DGX A100 with 8x A100 using INT8 with Structural Sparsity
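As a quick arithmetic check, the 6X and 172X multiples in the training and inference charts above follow directly from the throughput figures quoted there (taking 1 petaOPS = 1,000 TOPS):

```python
# Speedup multiples implied by the datasheet's own benchmark figures.
training_speedup = 1289 / 216       # BERT-Large seq/s: DGX A100 (TF32) vs. 8x V100 (FP32)
inference_speedup = 10 * 1000 / 58  # peak INT8 compute: 10 petaOPS vs. a 58 TOPS CPU server

print(round(training_speedup))   # ~6, matching the quoted 6X
print(round(inference_speedup))  # ~172, matching the quoted 172X
```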

[Chart] DGX A100 Delivers 13X the Data Analytics Performance
Analytics: PageRank — 4x DGX A100: 688 billion graph edges/s; CPU cluster: 52 billion graph edges/s
3,000x CPU servers vs. 4x DGX A100 | Published Common Crawl data set: 128B edges

The World's Most Secure AI System for Enterprise

NVIDIA DGX A100 delivers the most robust security posture for your AI enterprise, with a multi-layered approach that secures all major hardware and software components. Stretching across the baseboard management controller (BMC), CPU board, GPU board, self-encrypted drives, and secure boot, DGX A100 has security built in, allowing IT to focus on operationalizing AI rather than spending time on threat assessment and mitigation.

Unmatched Data Center Scalability with Mellanox

With the fastest I/O architecture of any DGX system, NVIDIA DGX A100 is the foundational building block for large AI clusters like NVIDIA DGX SuperPOD, the enterprise blueprint for scalable AI infrastructure. DGX A100 features eight single-port Mellanox ConnectX-6 VPI HDR InfiniBand adapters for clustering and one dual-port ConnectX-6 VPI Ethernet adapter for storage and networking, all capable of 200 Gb/s. The combination of massive GPU-accelerated compute with state-of-the-art networking hardware and software optimizations means DGX A100 can scale to hundreds or thousands of nodes to meet the biggest challenges, such as conversational AI and large-scale image classification.

Proven Infrastructure Solutions Built with Trusted Data Center Leaders

In combination with leading storage and networking technology providers, we offer a portfolio of infrastructure solutions that incorporate the best of the NVIDIA DGX POD reference architecture. Delivered as fully integrated, ready-to-deploy offerings through our NVIDIA Partner Network, these solutions make data center AI deployments simpler and faster for IT.

To learn more about NVIDIA DGX A100, visit

© 2020 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA DGX A100, NVLink, DGX SuperPOD, DGX POD, and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation. All company and product names are trademarks or registered trademarks of the respective owners with which they are associated. Features, pricing, availability, and specifications are all subject to change without notice. MAY20

