We are an official
ASUS GOLD PARTNER
BEST of the REST
Next-Gen AI server (GPU)
Unleashed.
ASUS ESC AI POD
NVIDIA® GB200 NVL72
36 NVIDIA Grace CPUs
72 NVIDIA Blackwell Tensor Core GPUs
5th Gen NVIDIA NVLink technology
Supports trillion-parameter LLM inference and training
Scale-up ecosystem-ready
ASUS Infrastructure Deployment Manager
End-to-end services
The NVIDIA Blackwell GPU Breakthrough
The ASUS ESC AI POD contains 72 NVIDIA Blackwell GPUs, each packing 208 billion transistors. Every NVIDIA Blackwell GPU pairs two reticle-limited dies, connected by a 10 terabyte-per-second (TB/s) chip-to-chip interconnect, into a single unified GPU.
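As a rough, back-of-the-envelope sketch (illustrative only, not an official ASUS or NVIDIA figure), the rack-level totals implied by the numbers above can be tallied like this:

```python
# Back-of-the-envelope totals for one ESC AI POD (GB200 NVL72) rack,
# using only the per-GPU figures quoted above. Illustrative, not official.

GPUS_PER_RACK = 72            # NVIDIA Blackwell GPUs per NVL72 rack
TRANSISTORS_PER_GPU = 208e9   # 208 billion transistors per Blackwell GPU
DIE_TO_DIE_BW_TBPS = 10       # chip-to-chip link between the two reticle-limited dies

total_transistors = GPUS_PER_RACK * TRANSISTORS_PER_GPU
print(f"Transistors across the rack: {total_transistors:.3e}")    # ~1.5e13 (about 15 trillion)
print(f"Die-to-die bandwidth inside each GPU: {DIE_TO_DIE_BW_TBPS} TB/s")
```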
LLM inference and energy efficiency: token-to-token latency (TTL) = 50 milliseconds (ms) real time, first token latency (FTL) = 5 s, 32,768-token input / 1,024-token output, NVIDIA HGX™ H100 scaled over InfiniBand (IB) vs. GB200 NVL72. Training: 1.8T-parameter MoE model, 4,096x HGX H100 scaled over IB vs. 456x GB200 NVL72 scaled over IB. Cluster size: 32,768
A database join and aggregation workload with Snappy / Deflate compression, derived from the TPC-H Q4 query. Custom query implementations for x86, a single H100 GPU, and a single GPU from GB200 NVL72 vs. Intel Xeon 8480+ (a simplified sketch of this query pattern follows these notes)
Projected performance subject to change
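For readers unfamiliar with the shape of that workload, here is a minimal, hypothetical pandas sketch of a TPC-H Q4-style join and aggregation on synthetic data; it illustrates the query pattern only and is not the custom x86/GPU implementation used in the comparison.

```python
# Minimal, illustrative sketch of a TPC-H Q4-style workload:
# filter orders by date, semi-join against late lineitems, then aggregate.
# Synthetic data; NOT the benchmark's custom x86/GPU implementation.
import pandas as pd

orders = pd.DataFrame({
    "o_orderkey": [1, 2, 3, 4],
    "o_orderdate": pd.to_datetime(["1993-07-05", "1993-08-10", "1993-09-20", "1993-12-01"]),
    "o_orderpriority": ["1-URGENT", "2-HIGH", "2-HIGH", "3-MEDIUM"],
})
lineitem = pd.DataFrame({
    "l_orderkey": [1, 2, 3, 4],
    "l_commitdate": pd.to_datetime(["1993-07-10", "1993-08-15", "1993-09-25", "1993-12-10"]),
    "l_receiptdate": pd.to_datetime(["1993-07-20", "1993-08-12", "1993-09-30", "1993-12-05"]),
})

# Orders placed within one quarter...
in_quarter = orders[(orders.o_orderdate >= "1993-07-01") & (orders.o_orderdate < "1993-10-01")]
# ...that have at least one lineitem received after its commit date (semi-join)
late_keys = lineitem[lineitem.l_commitdate < lineitem.l_receiptdate]["l_orderkey"].unique()
result = (
    in_quarter[in_quarter.o_orderkey.isin(late_keys)]
    .groupby("o_orderpriority")
    .size()
    .rename("order_count")
)
print(result)
```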
Get more than HARDWARE
with the ASUS Software-driven Solution
Outshining its competitors, ASUS specializes in crafting tailored data center solutions and providing end-to-end services, from hybrid servers to edge-computing deployments. We don't stop at hardware: we go the extra mile by offering software solutions for enterprises. Our software-driven approach encompasses system verification and remote deployment, ensuring seamless operations that speed up AI development.
Super LIGHTNING Fast
5th-gen NVLink technology in NVIDIA GB200 NVL72
The NVIDIA NVLink Switch features 144 ports with a switching capacity of 14.4 TB/s, allowing nine switches to interconnect the NVLink ports of all 72 NVIDIA Blackwell GPUs within a single NVLink domain.
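As a quick sanity check on those figures, here is a back-of-the-envelope sketch. It assumes the commonly published fifth-gen NVLink count of 18 ports per Blackwell GPU, which is not stated above:

```python
# Back-of-the-envelope check of the NVL72 NVLink fabric figures quoted above.
# Assumption (not stated above): each Blackwell GPU exposes 18 fifth-gen NVLink ports.
GPUS = 72
LINKS_PER_GPU = 18                   # assumed per-GPU NVLink port count
SWITCHES = 9
PORTS_PER_SWITCH = 144
SWITCH_CAPACITY_TBPS = 14.4

gpu_side_links = GPUS * LINKS_PER_GPU             # 1,296 GPU-facing links
switch_side_ports = SWITCHES * PORTS_PER_SWITCH   # 1,296 switch ports, matching the links
aggregate_bw_tbps = SWITCHES * SWITCH_CAPACITY_TBPS  # ~129.6 TB/s across the NVLink domain

print(gpu_side_links, switch_side_ports, aggregate_bw_tbps)
```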