Cost-Effective AI Inference & Training

Accelerate AI model inference for applications like conversational AI and natural language processing (NLP). The A30 is suitable for small to medium-scale deep learning training, fine-tuning, and transfer learning, offering a practical option for budget-conscious projects.

Reliable High Performance Computing Workloads

Support your scientific and engineering simulations such as energy modeling and computational fluid dynamics (CFD). The A30 provides dependable performance for traditional HPC tasks, making it a viable choice for established workflows.

Flexible Data Analytics & Processing

Efficiently process and analyze large datasets. The A30 performs well in GPU-accelerated ETL operations, real-time analytics, and machine learning tasks, offering a balanced approach to data processing.

GPU A30 based on NVIDIA A30

Our GPU A30 instances are powered by the NVIDIA A30 accelerator, built on the Ampere architecture to deliver balanced performance for AI inference, training, data analytics, and HPC workloads. With 24 GB of HBM2 memory and support for Multi-Instance GPU (MIG), it enables efficient resource utilization across a wide range of use cases.


This GPU is optimized for real-world applications such as NLP, computer vision, ETL operations, and scientific computing, making it ideal for teams seeking scalable AI performance with low power consumption and strong cost-efficiency.

Start your GPU A30

High performance computing at scale

GPU A30 enables data scientists and researchers to accelerate tasks like energy modeling, genomics, and computational fluid dynamics (CFD) simulations. Major frameworks such as TensorFlow and PyTorch run efficiently, making it a versatile choice for production and research environments.

Efficient AI Inference and Training

This GPU is optimized for AI workloads like natural language processing, conversational AI, computer vision tasks, and recommendation systems. It delivers consistent performance for small to medium-scale training, fine-tuning, and transfer learning while maintaining efficient resource usage.

RAM CPU Cores GPU Cards Min Local Storage Max Local Storage Price / Hour ({{ currency | uppercase }})
Small 56 GB 12 Cores 1 GPU 100 GB 800 GB {{ prices.opencompute.gpua30.small[currency] | number:8 }}
Medium 90 GB 16 Cores 2 GPU 100 GB 1.2 TB {{ prices.opencompute.gpua30.medium[currency] | number:8 }}
Large 120 GB 24 Cores 3 GPU 100 GB 1.6 TB {{ prices.opencompute.gpua30.large[currency] | number:8 }}
Huge 225 GB 48 Cores 4 GPU 100 GB 1.6 TB {{ prices.opencompute.gpua30.huge[currency] | number:8 }}
  [1] Local Storage is not included in the displayed instance price and is billed at {{ prices.opencompute.volume[currency] | number:8 }} {{ currency | uppercase }} per GiB per hour.

  [2] Instances need to be shut down for hypervisor and platform updates, as they cannot be live-migrated.

  [3] Available in the following zones: CH-GVA-2

  [4] Please note that GPU instances require account validation: access is granted with priority to established businesses, after a manual screening process.
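To illustrate how the hourly instance price composes with the local-storage charge from footnote [1], here is a minimal Python sketch. Both price values are hypothetical placeholders, not actual Exoscale rates; substitute the figures from the pricing table and footnote for your currency.

```python
# Hypothetical prices -- substitute the real values from the pricing
# table and footnote [1] for your currency.
INSTANCE_PRICE_PER_HOUR = 1.40        # e.g. a Small instance (placeholder value)
STORAGE_PRICE_PER_GIB_HOUR = 0.00014  # local storage rate (placeholder value)

def hourly_cost(instance_price: float, storage_gib: int,
                storage_price: float = STORAGE_PRICE_PER_GIB_HOUR) -> float:
    """Total hourly cost: instance price plus local storage billed per GiB per hour."""
    return instance_price + storage_gib * storage_price

# A Small instance with 100 GiB of local storage attached:
print(round(hourly_cost(INSTANCE_PRICE_PER_HOUR, 100), 5))
```

Since local storage is billed separately, the same formula also makes it easy to compare a larger flavor with less storage against a smaller one with more.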

Discover the best cost to performance ratio

Our GPU A30 offers a competitive cost-to-performance ratio. Choose between four instance sizes, from 1 to 4 GPUs, with SSD storage from 100 GB up to 1.6 TB depending on the chosen instance type.

Level up with NGC on Exoscale GPUs

Combine the simplicity and scalability of the Exoscale Cloud with the power of NVIDIA GPUs. With our Docker-based template, you can access the full potential of the NVIDIA GPU Cloud (NGC) and significantly reduce time to solution.


NVIDIA GPU Cloud (NGC) provides a curated set of GPU-optimized software for artificial intelligence, visualization, and HPC. The NGC Catalog includes containers, pre-trained models, Helm charts for Kubernetes deployments, and AI toolkits with SDKs.


The NGC Catalog works with both Exoscale Compute and Exoscale SKS. The image template for SKS worker nodes embeds an optimized container runtime for NVIDIA cards, making it quick to start any application from the catalog without fiddling with drivers and CUDA versions.
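To sketch what running an NGC container on an SKS cluster can look like, the pod manifest below requests one GPU through the standard `nvidia.com/gpu` resource exposed by NVIDIA's Kubernetes device plugin. The NGC image tag and pod name are illustrative assumptions; pick the container you actually need from the NGC Catalog.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ngc-pytorch-smoke-test   # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: pytorch
      # Illustrative NGC image tag -- choose your container from the NGC Catalog.
      image: nvcr.io/nvidia/pytorch:24.01-py3
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # one A30 (or one MIG slice, depending on cluster setup)
```

If the scheduler places the pod on a GPU worker node, `nvidia-smi` in the container logs confirms that the card is visible from inside the NGC image.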

Learn more

GPU A30 Features

Technical Specifications GPU A30

Description Specifications
Graphics Card NVIDIA A30
CUDA Cores per card 10752
Tensor Cores per card 336
GPU memory per card 24 GB
GPU cards 1-4
CPU cores 12-48
RAM per instance 56-225 GB
SSD local storage max. 1.6 TB (SATA)
Zone CH-GVA-2
Works with Compute, SKS, NVIDIA Docker

Resources

Portal

Get started in our integrated environment with just a few clicks.

Why GPU A30 on Exoscale

  • Shared or dedicated Hypervisors
  • Large SSD Storage
  • No resource sharing
  • Cutting edge GPUs
  • NVIDIA cards
  • Pass-through access
  • Complete platform integration
  • Terraform support
  • API support
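Since Terraform support is one of the integration points above, here is a hedged sketch using the official Exoscale Terraform provider. The `exoscale_compute_instance` resource and `exoscale_template` data source exist in the provider, but the GPU A30 instance type string and template name below are assumptions; check the Exoscale documentation for the exact values in your zone.

```hcl
# Minimal sketch, assuming the type name "gpu.a30.small" for the Small flavor --
# verify the actual GPU A30 type string in the Exoscale documentation.
terraform {
  required_providers {
    exoscale = {
      source = "exoscale/exoscale"
    }
  }
}

data "exoscale_template" "ubuntu" {
  zone = "ch-gva-2"
  name = "Linux Ubuntu 22.04 LTS 64-bit"   # assumed template name
}

resource "exoscale_compute_instance" "gpu_a30" {
  zone        = "ch-gva-2"
  name        = "gpu-a30-small"
  type        = "gpu.a30.small"            # assumed type name
  template_id = data.exoscale_template.ubuntu.id
  disk_size   = 100                        # GB, the minimum local storage
}
```

The same instance can of course also be created through the Portal, the CLI, or the API directly.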
Choose from a Wide Selection of Officially Supported Templates

Trusted by engineers across Europe.

When running mission-critical production workloads in the cloud, a partner you can rely on makes all the difference. Our customer success engineers have helped hundreds of customers from all over Europe migrate, run, and scale production workloads on Exoscale.

Contact us