
Key benefits of our Nvidia A30 GPU

Cost-Effective AI Inference & Training

Accelerate AI model inference for applications like conversational AI and natural language processing (NLP). The A30 is suitable for small to medium-scale deep learning training, fine-tuning, and transfer learning, offering a practical option for budget-conscious projects.

Reliable High Performance Computing Workloads

Support your scientific and engineering simulations such as energy modeling and computational fluid dynamics (CFD). The A30 provides dependable performance for traditional HPC tasks, making it a viable choice for established workflows.

Flexible Data Analytics & Processing

Efficiently process and analyze large datasets. The A30 performs well in GPU-accelerated ETL operations, real-time analytics, and machine learning tasks, offering a balanced approach to data processing.

GPU A30 based on NVIDIA A30

Our GPU A30 is powered by the NVIDIA A30 accelerator, built on the Ampere architecture to deliver balanced performance for AI inference, training, data analytics, and HPC workloads. With 24 GB of HBM2 memory and support for Multi-Instance GPU (MIG), it enables efficient resource utilization across a wide range of use cases.


This GPU is optimized for real-world applications such as NLP, computer vision, ETL operations, and scientific computing. It is ideal for teams seeking scalable AI performance with low power consumption and strong cost-efficiency.
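As a quick sanity check, the sketch below lists the GPUs (or MIG slices) the driver exposes and confirms that PyTorch can use them. It assumes the NVIDIA driver and a CUDA-enabled PyTorch build are already installed on the instance; device names and counts depend on your MIG configuration.

```python
# Minimal sketch: verify that the A30 (or its MIG slices) is visible from Python.
# Assumes the NVIDIA driver and a CUDA-enabled PyTorch build are installed.
import subprocess

import torch

# List physical GPUs and any MIG devices exposed by the driver.
try:
    print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)
except FileNotFoundError:
    print("nvidia-smi not found - is the NVIDIA driver installed?")

# Confirm that PyTorch can see and use at least one CUDA device.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"device {i}: {props.name}, {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device visible - check drivers and MIG settings.")
```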

Start your GPU A30

High-performance computing at scale

GPU A30 enables data scientists and researchers to accelerate tasks like energy modeling, genomics, and computational fluid dynamics (CFD) simulations. Major frameworks such as TensorFlow and PyTorch run efficiently, making the NVIDIA A30 a versatile choice for production and research environments.
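As a quick illustration, the sketch below runs a double-precision matrix multiply on the GPU, the kind of kernel at the heart of many CFD and energy-modeling codes. It assumes PyTorch with CUDA support is installed on the instance; the matrix sizes are illustrative only.

```python
# Minimal sketch: an FP64 matrix multiply on the GPU.
# Assumes PyTorch with CUDA support; sizes are illustrative only.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# FP64 matters for many scientific workloads; the A30 supports it natively.
a = torch.randn(4096, 4096, dtype=torch.float64, device=device)
b = torch.randn(4096, 4096, dtype=torch.float64, device=device)

c = a @ b  # runs on the A30 when CUDA is available
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for the kernel to finish before printing
print(c.shape, c.dtype, device)
```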

Efficient AI Inference and Training

The A30 GPU is optimized for AI workloads like natural language processing, conversational AI, computer vision tasks, and recommendation systems. It delivers consistent performance for small to medium-scale training, fine-tuning, and transfer learning while maintaining efficient resource usage.
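For instance, the following sketch runs batched inference under automatic mixed precision, which maps well to the A30's Tensor Cores. It assumes PyTorch and torchvision with CUDA support are installed; the untrained ResNet-50 is a stand-in for whichever NLP or vision model you deploy.

```python
# Minimal sketch: batched inference with automatic mixed precision.
# Assumes PyTorch and torchvision are installed; ResNet-50 is a stand-in model.
import torch
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet50(weights=None).eval().to(device)  # untrained stand-in model

batch = torch.randn(32, 3, 224, 224, device=device)

# bfloat16 autocast uses the A30's Tensor Cores when running on CUDA.
with torch.inference_mode(), torch.autocast(device_type=device, dtype=torch.bfloat16):
    logits = model(batch)

print(logits.shape)  # torch.Size([32, 1000])
```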

Accelerated Data Analytics

As a proven Ampere-generation GPU, the NVIDIA A30 offers effective acceleration for mainstream data analytics. It reliably speeds up ETL operations and data processing with frameworks like Apache Spark and RAPIDS, making it a pragmatic, cost-effective choice for data engineering teams looking to improve large-scale pipeline performance.
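As an illustration, here is a minimal sketch of a GPU-accelerated ETL step with RAPIDS cuDF. It assumes the RAPIDS libraries are installed (for example via the NGC RAPIDS container); the input file and column names are hypothetical.

```python
# Minimal sketch: a GPU-accelerated ETL step with RAPIDS cuDF.
# "events.csv" and its columns are hypothetical placeholders.
import cudf

df = cudf.read_csv("events.csv")  # load directly into GPU memory

# Typical ETL: filter, derive a column, then aggregate - all on the GPU.
df = df[df["amount"] > 0]
df["amount_chf"] = df["amount"] * df["fx_rate"]

summary = (
    df.groupby("customer_id")["amount_chf"]
      .sum()
      .sort_values(ascending=False)
)
print(summary.head(10))
```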

Why GPU A30 on Exoscale

Dedicated compute instances
  • Shared or dedicated Hypervisors
  • Large SSD Storage
  • No resource sharing
GPU product
  • Cutting-edge GPU A30 technology
  • Latest NVIDIA A30 cards
  • Direct GPU pass-through access
Cloud orchestration
  • Complete cloud platform integration
  • Full Terraform automation support
  • Comprehensive API management support

Discover the best cost-to-performance ratio

Our NVIDIA GPU A30 provides a solid balance between cost and performance. Choose from four options, with 1 to 4 GPUs and SSD storage from 100 GB up to 1.6 TB, depending on the chosen instance type.

Instance   RAM      CPU Cores   GPU Cards   Min Local Storage   Max Local Storage   Price / Hour ({{ currency | uppercase }})
Small      56 GB    12 Cores    1 GPU       100 GB              800 GB              {{ prices.opencompute.gpua30.small[currency] | number:8 }}
Medium     90 GB    16 Cores    2 GPU       100 GB              1.2 TB              {{ prices.opencompute.gpua30.medium[currency] | number:8 }}
Large      120 GB   24 Cores    3 GPU       100 GB              1.6 TB              {{ prices.opencompute.gpua30.large[currency] | number:8 }}
Huge       225 GB   48 Cores    4 GPU       100 GB              1.6 TB              {{ prices.opencompute.gpua30.huge[currency] | number:8 }}
  1. Local Storage is not included in the displayed Instance price, and has a cost of {{ prices.opencompute.volume[currency] | number:8 }} {{ currency | uppercase }} / GiB hour.

  2. Instances need to be shut down for hypervisor and platform updates, as they cannot be live-migrated.

  3. Available in the following Zones: CH-GVA-2

  4. Please note that GPU instances require account validation: access is provided with priority to established businesses, and is granted after a manual screening process.
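To estimate what a configuration will actually cost, combine the hourly instance price from the table with the local storage price from footnote 1. The sketch below uses hypothetical placeholder prices; substitute the values shown above for your currency.

```python
# Minimal sketch: effective hourly and monthly cost of an instance plus its
# local storage, per footnote 1. The prices below are hypothetical placeholders.
INSTANCE_PRICE_PER_HOUR = 1.50        # placeholder: hourly price of the chosen type
STORAGE_PRICE_PER_GIB_HOUR = 0.0001   # placeholder: local storage price per GiB-hour
storage_gib = 800                     # e.g. the Small type's 800 GB maximum

hourly = INSTANCE_PRICE_PER_HOUR + storage_gib * STORAGE_PRICE_PER_GIB_HOUR
monthly = hourly * 24 * 30            # rough 30-day month

print(f"~{hourly:.4f} per hour, ~{monthly:.2f} per month")
```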

Level Up with NGC on Exoscale GPUs

Combine the simplicity and scalability of the Exoscale Cloud with the power of NVIDIA A30. With our Docker-based template, you can access the full potential of the NVIDIA GPU Cloud (NGC) and significantly reduce time to solution.


NVIDIA GPU Cloud (NGC) provides a curated set of GPU-optimized software for artificial intelligence, visualization, and HPC. The NGC Catalog includes containers, pre-trained models, Helm charts for Kubernetes deployments, and AI toolkits with SDKs.


The NGC Catalog works with both Exoscale Compute Instances and Exoscale Hosted & Managed Kubernetes. The image template for Kubernetes worker nodes embeds a container runtime optimized for GPU A30 cards, so you can start any application from the catalog without manually managing drivers and CUDA versions.
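As an illustration of how little setup is involved, the sketch below schedules an NGC container onto a GPU worker node with the official Kubernetes Python client. It assumes a cluster whose GPU nodes expose the nvidia.com/gpu resource; the image tag is a hypothetical placeholder, so pick a current one from the NGC Catalog.

```python
# Minimal sketch: run an NGC container on a GPU worker node via the official
# Kubernetes Python client. Assumes GPU nodes expose the "nvidia.com/gpu"
# resource; the image tag is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig for the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ngc-pytorch-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="pytorch",
                image="nvcr.io/nvidia/pytorch:24.08-py3",  # hypothetical tag
                command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one A30 card
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```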

Learn more

GPU A30 Features

Technical Specifications NVIDIA A30 GPU

Description              Specifications
Graphics Card            NVIDIA A30
CUDA Cores per card      3584
Tensor Cores per card    224
GPU memory per card      24 GB
GPU cards                1-4
CPU cores                12-48
RAM per instance         56-225 GB
SSD local storage        max. 1.6 TB (SATA)
Zone                     CH-GVA-2
Works with               Compute, SKS, NVIDIA Docker

Resources

Portal

Get started in our integrated environment with just a few clicks.
Choose from a Wide Selection of Officially Supported Templates

Trusted by engineers: Accelerate AI & analytics with NVIDIA A30

When running demanding AI inference, data analytics, or HPC workloads in the cloud, performance and reliability matter. Our GPU A30 instances, powered by the NVIDIA A30, help teams across Europe scale efficiently and cost-effectively with Exoscale.

Contact us

More GPU Instances

Discover more GPU Instances for Cloud Computing to power diverse compute, graphics, and AI tasks. Fully integrated with the Exoscale ecosystem.

GPU2 on Exoscale

NVIDIA V100

Our Tesla V100-based GPU is recommended for deep learning, neural networks, AI, and more.

Discover
GPU3 on Exoscale

NVIDIA A40

Our NVIDIA A40-based GPU is the all-rounder for AR, VR, simulations, rendering, AI, and more.

Discover
GPU A5000 on Exoscale

NVIDIA A5000

Our A5000-based GPU is an entry-level all-rounder for AR, VR, simulations, rendering, AI, and more. GPU A5000 is 100% liquid-cooled and leverages a heat-reuse platform for sustainable accelerated computations.

Discover
GPU 3080ti on Exoscale

NVIDIA RTX 3080ti

Our 3080ti-based GPU is an all-rounder for deep learning model training, image processing, natural language processing, and more. GPU 3080ti is 100% liquid-cooled and leverages a heat-reuse platform for sustainable accelerated computations.

Discover
GPU B300 on Exoscale

NVIDIA B300 - Coming Soon

NVIDIA Blackwell B300 GPUs are coming soon to Exoscale! Prepare to supercharge your AI and HPC workloads with unparalleled performance, featuring 4-bit floating point (FP4) precision and massive GPU memory, building on our commitment to deliver the most powerful cloud infrastructure.

Contact Us

Frequently asked questions about NVIDIA A30

What is the NVIDIA A30?

The NVIDIA A30 is a powerful GPU built for AI inference, data analytics, and high-performance computing. With 24 GB of high-bandwidth memory and support for multi-instance GPU (MIG) workloads, it delivers an efficient balance of performance and flexibility—ideal for teams running diverse and demanding tasks in the cloud.

What is the difference between the NVIDIA A30 and A40 GPUs?

The GPU A30 and A40 are designed for different workloads. Our GPU A30 is optimized for compute tasks like AI and HPC, featuring 24 GB of high-bandwidth HBM2 memory and supporting Multi-Instance GPU (MIG). The A40 targets professional visualization and rendering with 48 GB of GDDR6 memory, but lacks the MIG feature that makes our A30 instances ideal for efficiently running diverse, parallel workloads.