Accelerate AI and Simulation
Power next-generation AI inference, digital-twin, and simulation workloads with 5th-generation Tensor Cores supporting FP4 precision—delivering faster model execution and higher efficiency.
Render photorealistic 3D environments and real-time visualizations with 4th-generation Ray Tracing Cores and neural rendering, enabling advanced design, VFX, and XR experiences.
Train and serve large AI models or handle ultra-detailed visual projects using 96 GB GDDR7 with ECC memory and 1.6 TB/s bandwidth—ideal for complex, memory-intensive workloads.
Our RTX Pro 6000 GPU is powered by the NVIDIA Blackwell architecture, delivering exceptional performance for AI inference, generative AI, visualization, and simulation workloads. With 96 GB of GDDR7 ECC memory, 24 064 CUDA cores, and 5th-generation Tensor Cores supporting FP4 precision, it achieves new levels of speed and efficiency.
This GPU is ideal for LLM inference, digital-twin simulation, 3D rendering, and scientific research, offering scalable performance and reliability for modern data-driven enterprises.
Use NVIDIA RTX Pro 6000 server edition to bring complex designs and industrial projects to life.
Architects, engineers, and manufacturers can create highly realistic building and product visualizations, simulate lighting and materials, and explore 3D models in real time using tools such as Autodesk Revit, NVIDIA Omniverse, V-Ray, or Unreal Engine.
Accelerate AI-driven content creation—from large language model (LLM) inference to intelligent video processing and automation.
With dedicated NVENC/NVDEC engines, RTX Pro 6000 enables real-time video analytics, automated editing, and streaming for sectors such as media production, marketing, and broadcasting.
Run detailed mechanical, fluid-dynamics, or electromagnetic simulations for product testing or research using software like ANSYS, COMSOL, or OpenFOAM.
RTX Pro 6000’s 96 GB of ultra-fast GDDR7 memory and high bandwidth support large datasets and quick iteration—perfect for automotive, aerospace, and energy innovation.
Our RTX Pro 6000 instances deliver outstanding value for next-generation AI and graphics performance. Select between 1 and 8 GPUs with NVMe SSD storage up to 10 TiB, depending on your compute needs.
| Instance | RAM | CPU Cores | GPU Cards | Min Local Storage | Max Local Storage | Price / Hour ({{ currency | uppercase }}) |
|---|---|---|---|---|---|---|
| Small | 120 GB | 36 Cores | 1 GPU | 100 GB | 2 TiB | {{ prices.opencompute.gpurtx6000.small[currency] | number:8 }} |
| Medium | 240 GB | 72 Cores | 2 GPUs | 100 GB | 3 TiB | {{ prices.opencompute.gpurtx6000.medium[currency] | number:8 }} |
| Large | 480 GB | 144 Cores | 4 GPUs | 100 GB | 5 TiB | {{ prices.opencompute.gpurtx6000.large[currency] | number:8 }} |
| Huge | 960 GB | 288 Cores | 8 GPUs | 100 GB | 10 TiB | {{ prices.opencompute.gpurtx6000.huge[currency] | number:8 }} |
Local Storage is not included in the displayed instance price; it is billed at {{ prices.opencompute.volume[currency] | number:8 }} {{ currency | uppercase }} per GiB-hour.
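As a rough sketch of how the billing composes (the rates below are placeholder values for illustration only; the actual per-currency prices are shown in the table above):

```python
# Illustrative cost estimate for a GPU instance plus separately billed local storage.
# These rates are made-up examples, NOT actual Exoscale prices;
# real prices depend on the selected currency and instance size (see table above).
INSTANCE_PER_HOUR = 2.50       # hypothetical instance rate
STORAGE_PER_GIB_HOUR = 0.0001  # hypothetical local-storage rate per GiB-hour

def estimated_cost(hours: float, storage_gib: int) -> float:
    """Total cost: instance time plus local storage billed per GiB-hour."""
    return hours * (INSTANCE_PER_HOUR + storage_gib * STORAGE_PER_GIB_HOUR)

# e.g. a 30-day month (720 h) with 2 TiB (2048 GiB) of local storage
print(round(estimated_cost(720, 2048), 2))
```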
Instances need to be shut down for hypervisor and platform updates, as they cannot be live-migrated.
RTX Pro 6000 Huge Instances are only available on dedicated hypervisors.
Please note that GPU instances require account validation: access is provided with priority to established businesses, and is granted after a manual screening process.
This GPU will be available soon. Contact us to reserve capacity or request early access.
Combine Exoscale’s simplicity with the power of NVIDIA RTX Pro 6000 GPUs. With a Docker-based template, you can access the full potential of the NVIDIA GPU Cloud (NGC) and significantly reduce time to solution.
NVIDIA GPU Cloud (NGC) provides a selected set of GPU-optimized software for artificial intelligence applications, visualizations, and HPC. The NGC Catalog includes containers, pre-trained models, Helm charts for Kubernetes deployments, and specific AI toolkits with SDKs.
The NGC Catalog works with both Exoscale Compute Instances and the Exoscale Scalable Kubernetes Service (SKS).
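As a minimal sketch, assuming Docker and the NVIDIA Container Toolkit are present on the instance, pulling and running a GPU-optimized container from the NGC Catalog looks like this (the image tag is an example; check the catalog for current releases):

```shell
# Pull a GPU-optimized PyTorch container from the NGC Catalog
# (tag is an example; browse the catalog for current versions)
docker pull nvcr.io/nvidia/pytorch:24.08-py3

# Run it with all GPUs passed through to the container;
# --gpus all requires the NVIDIA Container Toolkit on the host
docker run --gpus all --rm -it nvcr.io/nvidia/pytorch:24.08-py3 nvidia-smi
```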
Enjoy 96 GB of next-generation GDDR7 ECC memory, providing the capacity needed for large AI models, digital-twin environments, and detailed visualizations.
Each RTX Pro 6000 GPU runs with dedicated pass-through access, ensuring predictable latency and performance across demanding, data-intensive workflows.
Built on NVIDIA’s Blackwell architecture, combining 5th-generation Tensor Cores and 4th-generation RT Cores for higher efficiency and real-time ray tracing.
Perfect for generative AI, simulation, and visualization, the NVIDIA RTX Pro 6000 balances compute and graphics capabilities for versatile professional use.
Deliver up to 130 TFLOPS FP32 and 5.6 PFLOPS FP4 AI compute power with 24 064 CUDA cores, ensuring exceptional speed for training, inference, and complex rendering.
As a sovereign European Cloud Provider, Exoscale ensures all your data is stored in the country of your chosen zone, fully GDPR compliant.
| Description | Specifications |
|---|---|
| Graphics Card | NVIDIA RTX Pro 6000 Server Edition |
| CUDA Cores per card | 24 064 |
| Tensor Cores per card | 752 (5th Gen) |
| Ray Tracing Cores per card | 188 (4th Gen) |
| GPU memory per card | 96 GB GDDR7 with ECC |
| GPU cards | 1-8 |
| CPU cores | 36-288 |
| RAM per instance | 120-960 GB |
| SSD NVMe local storage | max. 10 TiB |
| Zones | DE-FRA-1, HR-ZAG-1, CH-DK-2 |
| Works with | Compute, SKS, Docker, NVIDIA |
When powering advanced AI inference, 3D visualization, or simulation workloads in the cloud, precision and speed are essential. Our GPU RTX Pro 6000 instances, built on NVIDIA Blackwell technology, enable teams across Europe to accelerate innovation—scaling complex projects efficiently and securely on Exoscale’s sovereign cloud platform.
Discover more GPU Instances for Cloud Computing to power diverse compute, graphics, and AI tasks. Fully integrated with the Exoscale ecosystem.
Powered by NVIDIA A30. Perfect for AI inference, high-performance computing (HPC), and data-analytics workloads.
Based on NVIDIA Tesla V100. Ideal for deep learning, neural-network training, and advanced AI workloads.
Powered by NVIDIA A40, the all-rounder for AR/VR, complex simulations, rendering, AI, and more.
NVIDIA GeForce RTX 3080 Ti is excellent for deep-learning model training, image processing, NLP, and more. 100% liquid-cooled with heat-reuse technology.
Entry-level all-rounder leveraging NVIDIA RTX A5000. Fully liquid-cooled with heat-reuse for sustainable accelerated computing. Great for AR/VR, simulations, rendering, and AI.
NVIDIA HGX B300: next-generation performance for AI and HPC. Expect FP4 precision, massive GPU memory, and unmatched throughput.
The RTX Pro 6000 is built for AI inference, 3D rendering, scientific computing, and simulation. With 96 GB of memory and new 5th-gen Tensor Cores, it accelerates complex AI pipelines, digital-twin models, and real-time graphics workloads across industries.
Compared to Ampere-generation GPUs like A5000 and A40, the NVIDIA RTX Pro 6000 (Blackwell Server Edition) offers dramatically higher throughput, double the memory capacity, and advanced FP4 precision for LLM and generative AI workloads. It’s ideal for enterprises transitioning from Ampere to Blackwell performance.
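A back-of-the-envelope illustration of why FP4 matters for large models: fewer bits per weight directly shrink the memory needed to hold the parameters (weights only; activations and KV cache add further overhead):

```python
def weights_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 70-billion-parameter model:
print(weights_memory_gb(70, 16))  # FP16 -> 140.0 GB, exceeds a 96 GB card
print(weights_memory_gb(70, 4))   # FP4  ->  35.0 GB, fits on a single card
```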
Yes. NVIDIA RTX Pro 6000 GPU instances integrate seamlessly with Exoscale SKS (Hosted and Managed Kubernetes), Object Storage, Block Storage, and DBaaS, enabling complete AI and rendering pipelines within our cloud ecosystem.
Built on the NVIDIA Blackwell architecture, the RTX Pro 6000 represents a generational leap in AI and visualization technology. With 5th-generation Tensor Cores, FP4 precision, and 96 GB GDDR7 with ECC memory, it delivers record efficiency for large-language-model inference, simulation, and real-time rendering—while consuming less power per computation than previous generations.