NVIDIA H100 Tensor Core GPU
Exceptional performance, scalability, and security for every data center.
REDUCING LEAD TIMES
COST SAVINGS
LIFETIME WARRANTY
SUSTAINABLE SOLUTIONS
Take an Order-of-Magnitude Leap in Accelerated Computing
The NVIDIA H100 Tensor Core GPU delivers exceptional performance, scalability, and security for every workload. With NVIDIA® NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by 30X over the previous generation.
Multiple Parts Available & Ready to Ship with Lifetime Warranty
See Part List Below for a Sampling of Our Inventory
Don’t see what you’re looking for? Our inventory is always changing; please contact us for current stock.
| Manufacturer | Part # | Description | QTY |
|---|---|---|---|
| NVIDIA | H100 | NVIDIA H100 Tensor Core GPU | 800+ |
Ready for Enterprise AI?
NVIDIA H100 GPUs for mainstream servers come with a five-year software subscription to the NVIDIA AI Enterprise software suite, including enterprise support, simplifying AI adoption with the highest performance. This ensures organizations have access to the AI frameworks and tools they need to build H100-accelerated AI workflows such as AI chatbots, recommendation engines, vision AI, and more.
Securely Accelerate Workloads From Enterprise to Exascale
NVIDIA H100 GPUs feature fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, extending NVIDIA’s AI leadership with up to 9X faster training and an incredible 30X inference speedup on large language models. For high-performance computing (HPC) applications, H100 triples the FP64 floating-point operations per second (FLOPS) and adds dynamic programming (DPX) instructions to deliver up to 7X higher performance. With second-generation Multi-Instance GPU (MIG), built-in NVIDIA confidential computing, and the NVIDIA NVLink Switch System, H100 securely accelerates all workloads for every data center, from enterprise to exascale.
NVIDIA H100
Accelerate Every Workload, Everywhere
The NVIDIA H100 is an integral part of the NVIDIA data center platform. Built for AI, HPC, and data analytics, the platform accelerates over 3,000 applications, and is available everywhere from data center to edge, delivering both dramatic performance gains and cost-saving opportunities.
Explore the Technology Breakthroughs of NVIDIA Hopper
NVIDIA H100 Tensor Core GPU
Built with 80 billion transistors using a cutting-edge TSMC 4N process custom tailored for NVIDIA’s accelerated compute needs, H100 features major advances to accelerate AI, HPC, memory bandwidth, interconnect, and communication at data center scale.
Transformer Engine
The Transformer Engine uses software and Hopper Tensor Core technology designed to accelerate training for models built from the world’s most important AI model building block, the transformer. Hopper Tensor Cores can apply mixed FP8 and FP16 precisions to dramatically accelerate AI calculations for transformers.
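To make the precision trade-off concrete, here is a rough illustration (not from this page) of why FP8 doubles effective throughput over FP16. The format parameters are assumptions based on published descriptions of the E4M3 FP8 format Hopper supports:

```python
# Rough comparison of FP8 (E4M3) and FP16 numeric properties.
# Format parameters are assumptions from published descriptions of
# NVIDIA's E4M3 variant, not figures quoted on this page.

def fp16_max():
    # FP16: 5 exponent bits (bias 15), 10 mantissa bits
    return (2 - 2**-10) * 2**15          # 65504.0

def fp8_e4m3_max():
    # E4M3: 4 exponent bits (bias 7), 3 mantissa bits; NVIDIA's
    # variant reclaims most of the top exponent for finite values,
    # giving a max normal of 1.75 * 2**8
    return 1.75 * 2**8                   # 448.0

print(f"FP16 max:     {fp16_max():>8.1f} (2 bytes/value)")
print(f"FP8 E4M3 max: {fp8_e4m3_max():>8.1f} (1 byte/value)")
# Halving the bytes per value doubles the values moved per unit of
# memory bandwidth -- the core of the FP8 throughput gain, at the
# cost of a much narrower numeric range.
```

The narrower FP8 range is why the Transformer Engine mixes FP8 with FP16, keeping range-sensitive operations in the wider format.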
NVLink Switch System
The NVLink Switch System enables the scaling of multi-GPU input/output (IO) across multiple servers at 900 gigabytes per second (GB/s) bidirectional per GPU, over 7X the bandwidth of PCIe Gen5. The system supports clusters of up to 256 H100s and delivers 9X higher bandwidth than InfiniBand HDR on the NVIDIA Ampere architecture.
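The "over 7X" claim follows directly from the two bandwidth figures quoted above:

```python
# Bandwidth figures from the text: NVLink at 900 GB/s bidirectional
# per GPU vs. PCIe Gen5 at 128 GB/s.
nvlink_gbs = 900
pcie_gen5_gbs = 128

ratio = nvlink_gbs / pcie_gen5_gbs
print(f"NVLink / PCIe Gen5 = {ratio:.2f}x")  # ~7.03x
```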
NVIDIA Confidential Computing
NVIDIA H100 brings high performance security to workloads with confidentiality and integrity. Confidential Computing delivers hardware-based protection for data and applications in use.
Second-Generation Multi-Instance GPU (MIG)
The Hopper architecture’s second-generation MIG supports multi-tenant, multi-user configurations in virtualized environments, securely partitioning the GPU into isolated, right-sized instances to maximize quality of service (QoS) for 7X more secured tenants.
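In practice, MIG partitioning is driven through `nvidia-smi`. The sketch below is a hypothetical admin workflow, not a procedure from this page; it requires root on a MIG-capable GPU, and available profile names vary by GPU and driver, so list them first rather than trusting the ones shown here:

```shell
# Hypothetical MIG setup sketch (assumes root and a MIG-capable GPU
# such as H100; profile names below are illustrative -- check the
# real ones with `nvidia-smi mig -lgip` first).

# 1. Enable MIG mode on GPU 0 (a GPU reset or reboot may be required)
sudo nvidia-smi -i 0 -mig 1

# 2. List the GPU instance profiles the driver offers
sudo nvidia-smi mig -lgip

# 3. Create seven 1g.10gb GPU instances; -C also creates a default
#    compute instance inside each GPU instance
sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# 4. Verify the resulting GPU instances
nvidia-smi mig -lgi
```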
DPX Instructions
Hopper’s DPX instructions accelerate dynamic programming algorithms by 40X compared to CPUs and 7X compared to NVIDIA Ampere architecture GPUs. This leads to dramatically faster times in disease diagnosis, real-time routing optimizations, and graph analytics.
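DPX targets dynamic programming recurrences built from min/max and add operations. Smith-Waterman local alignment, widely used in genomics pipelines, is a canonical example of the class; this pure-Python sketch illustrates the recurrence itself, not DPX or its speed:

```python
# Smith-Waterman local alignment: a classic dynamic programming
# recurrence of the kind DPX instructions accelerate in hardware.
# Scoring parameters here are illustrative defaults.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Each cell is a max over a handful of neighbors -- the
            # min/max-plus pattern DPX is built for.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("TTACG", "GGTTA"))  # 6: the shared "TTA" run
```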
Technical Specifications
| Form Factor | H100 SXM | H100 PCIe | H100 NVL¹ |
|---|---|---|---|
| FP64 | 34 teraFLOPS | 26 teraFLOPS | 68 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| FP32 | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| TF32 Tensor Core | 989 teraFLOPS² | 756 teraFLOPS² | 1,979 teraFLOPS² |
| BFLOAT16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS² |
| FP16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS² |
| FP8 Tensor Core | 3,958 teraFLOPS² | 3,026 teraFLOPS² | 7,916 teraFLOPS² |
| INT8 Tensor Core | 3,958 TOPS² | 3,026 TOPS² | 7,916 TOPS² |
| GPU memory | 80GB | 80GB | 188GB |
| GPU memory bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s³ |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | 300–350W (configurable) | 2x 350–400W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each |
| Form factor | SXM | PCIe, dual-slot air-cooled | 2x PCIe, dual-slot air-cooled |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs | Partner and NVIDIA-Certified Systems with 2–4 pairs |
| NVIDIA AI Enterprise | Add-on | Included | Included |
1. Preliminary specifications. May be subject to change. Specifications shown for 2x H100 NVL PCIe cards paired with NVLink Bridge.
2. With sparsity.
3. Aggregate HBM bandwidth.
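Footnote 2 matters when comparing figures: the "with sparsity" numbers assume 2:4 structured sparsity, which doubles Tensor Core throughput, so dense throughput is half the quoted value. A quick sanity check against the table's own H100 SXM column (an illustration, not NVIDIA-published arithmetic):

```python
# "With sparsity" Tensor Core figures (teraFLOPS) from the H100 SXM
# column above; dense throughput is half of each, since 2:4
# structured sparsity doubles effective Tensor Core throughput.
sparse_tflops = {"TF32": 989, "FP16": 1979, "FP8": 3958}

dense_tflops = {fmt: tf / 2 for fmt, tf in sparse_tflops.items()}
print(dense_tflops)  # {'TF32': 494.5, 'FP16': 989.5, 'FP8': 1979.0}
```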
The TELECOMCAULIFFE Difference
TELECOMCAULIFFE, a PICS Telecom team, is a leading global distributor of telecom products and network equipment, offering an expansive selection of new, surplus, and refreshed products from over 1,000 manufacturers. When you buy from the TELECOMCAULIFFE team, you’ll experience a wide range of benefits that not all OEMs or vendors offer.
- 7 Experienced, Dynamic Professionals Working to Ensure the Best Pricing, Quickest Turnaround, and Highest Quality Equipment
- Standard Lifetime Warranty with Advanced Replacement
- State-of-the-Art Test Facilities and Repair Services
- 24/7 Online Customer Service
- Dedicated Marketing Professional to Sell Your Surplus Equipment
- Follow-Up Process to Ensure Customer Satisfaction