<h1>How to migrate a virtual Instance from VMware vSphere to Exoscale</h1>
<p><em>2024-01-29 · exoscale</em></p>
<p>VMware vSphere is a virtualization and cloud computing platform that provides a suite of virtualization products for creating and managing virtualized data centers. Developed by VMware, Inc., vSphere is widely used in enterprise environments to enable organizations to create and manage virtualized infrastructures. The platform allows multiple virtual machines (VMs) to run on a single physical server.
<br>
In this blog post, we will show how to migrate an existing VMware vSphere instance (in our example, a Debian 12 instance) to Exoscale.</p>
<p><br></p>
<h3 id="current-setupconfiguration-on-vmware">Current setup/configuration on VMware</h3>
<pre><code>root@vmwaredebian12:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 392M 516K 392M 1% /run
/dev/mapper/debian--vg-root 4.4G 1.3G 2.9G 31% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda1 455M 58M 372M 14% /boot
tmpfs 392M 0 392M 0% /run/user/0
tmpfs 392M 0 392M 0% /run/user/1000
</code></pre>
<p><br></p>
<h3 id="pre-tasks-for-the-migration">Pre-Tasks for the migration</h3>
<p>Before diving into the technical details:</p>
<p><br></p>
<ul>
<li>Comment out non-OS mount points in <code>/etc/fstab</code>; otherwise the instance may get stuck in single-user mode during the initial boot on Exoscale.</li>
<li>Identify the boot mode of the instance (legacy/BIOS or UEFI).</li>
<li>Create an <a href="https://community.exoscale.com/documentation/iam/iam-api-key-roles-policies/">Exoscale IAM Key</a> for <a href="https://community.exoscale.com/documentation/tools/exoscale-command-line-interface/">Exoscale CLI</a> and Template registration.</li>
<li>Create an <a href="https://community.exoscale.com/documentation/storage/quick-start/">Exoscale SOS Bucket</a> for Template registration.</li>
<li>Create an Ubuntu Instance with qemu-utils and the <a href="https://community.exoscale.com/documentation/tools/exoscale-command-line-interface/">Exoscale CLI</a> installed and configured.</li>
</ul>
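<p>The fstab step can be sketched as follows. This is a minimal sketch that operates on a sample file created for illustration; on the real VM you would point it at a backup copy of <code>/etc/fstab</code>, and the list of entries to keep (<code>/</code>, <code>/boot</code>, swap) is an assumption, so review the result by hand before rebooting:</p>

```shell
# Sample fstab created for illustration; on the real VM operate on a
# backup copy of /etc/fstab instead.
cat > fstab.sample <<'EOF'
/dev/mapper/debian--vg-root /         ext4 errors=remount-ro 0 1
/dev/vda1                   /boot     ext2 defaults          0 2
10.0.0.5:/export/data       /mnt/data nfs  defaults          0 0
EOF
# Keep comments, /, /boot and swap entries; comment out everything else.
awk '
  /^#/ || NF < 2                             { print; next }
  $2 == "/" || $2 == "/boot" || $3 == "swap" { print; next }
                                             { print "#" $0 }
' fstab.sample > fstab.sample.migrated
cat fstab.sample.migrated
```

<p>In this example only the NFS entry ends up commented out; the root and boot file systems stay active.</p>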
<p><br></p>
<h3 id="prerequesites-for-the-migration">Prerequisites for the migration</h3>
<ul>
<li>The root password of the VMware Instance.</li>
<li>The VMDK disk image of the VMware Instance.</li>
</ul>
<p><br></p>
<h4 id="step-1-copy-vmdk-disk-image-to-the-ubuntu-instance"><strong>Step 1</strong>: Copy VMDK disk image to the Ubuntu Instance</h4>
<p>Use a tool of your choice (e.g. scp or sftp) to upload the VMDK disk image to the Ubuntu Instance.</p>
<h4 id="step-2-vmdk-to-qcow2-migration"><strong>Step 2</strong>: VMDK to QCOW2 migration</h4>
<p>As Exoscale uses QCOW2 as the file format for disk images, we need to convert the VMDK image to QCOW2.</p>
<pre><code>qemu-img convert -O qcow2 -p debain-12-flat.vmdk debain-12-flat.qcow2
</code></pre>
<h4 id="step-3-check-virtual-disk-size-and-resize-if-needed-to-10g"><strong>Step 3</strong>: Check virtual disk size and resize if needed to 10G</h4>
<p>In our case the virtual disk size is 6 GiB, so we need to resize it to 10 GiB.</p>
<pre><code>qemu-img info debain-12-flat.qcow2
image: debain-12-flat.qcow2
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 2.42 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
qemu-img resize debain-12-flat.qcow2 10G
Image resized.
qemu-img info debain-12-flat.qcow2
image: debain-12-flat.qcow2
file format: qcow2
virtual size: 10 GiB (10737418240 bytes)
disk size: 2.42 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
</code></pre>
<h4 id="step-4-calculate-md5sum"><strong>Step 4</strong>: Calculate md5sum</h4>
<p>You will need this checksum later to register the template.</p>
<pre><code>md5sum debain-12-flat.qcow2
85ea29d7cee21bcfe22d3cca0fd350b6 debain-12-flat.qcow2
</code></pre>
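<p>If you script the migration, the checksum can also be captured in a shell variable so it can later be passed to the template registration without copy/paste mistakes. The file below is a stand-in created for illustration; substitute your real QCOW2 image:</p>

```shell
# Stand-in file; use the real qcow2 image on the migration host.
printf 'hello\n' > image.qcow2
# md5sum prints "<checksum>  <filename>"; keep only the first field.
CHECKSUM=$(md5sum image.qcow2 | awk '{print $1}')
echo "$CHECKSUM"
```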
<h4 id="step-5-upload-qcow2-disk-image-to-an-exoscale-sos-bucket"><strong>Step 5</strong>: Upload QCOW2 disk image to an Exoscale SOS Bucket</h4>
<pre><code>exo storage --acl public-read upload debain-12-flat.qcow2 sos://vmware-migration-exoscale/debain-12-flat.qcow2
debain-12-flat.… [==============================================================================] 2.42 GiB / 2.42 GiB | 1m46s
</code></pre>
<h4 id="step-6-register-instance-as-a-custom-template"><strong>Step 6</strong>: Register Instance as a custom template</h4>
<p>This Debian Instance uses the legacy (BIOS) boot mode; if your instance uses UEFI, change the <code>--boot-mode</code> parameter to <code>uefi</code> in the command below.</p>
<pre><code>exo compute instance-template register \
--boot-mode legacy \
--build "Debian 12 Vmware Migration" \
--description "Debian 12 Vmware Migration" \
--username root \
-z de-fra-1 \
debian_vmware_12 \
https://sos-at-vie-2.exo.io/vmware-migration-exoscale/debain-12-flat.qcow2 \
85ea29d7cee21bcfe22d3cca0fd350b6
✔ Registering template "debian_vmware_12"... 1m42s
┼──────────────────┼──────────────────────────────────────┼
│ TEMPLATE │ │
┼──────────────────┼──────────────────────────────────────┼
│ ID │ 3ba94ffb-94b7-4257-8ec5-5065cc11fc42 │
│ Zone │ │
│ Name │ debian_vmware_12 │
│ Description │ Debian 12 Vmware Migration │
│ Family │ other (64-bit) │
│ Creation Date │ 2024-01-29 16:47:56 +0000 UTC │
│ Visibility │ private │
│ Size │ 10 GiB │
│ Version │ │
│ Build │ Debian 12 Vmware Migration │
│ Maintainer │ │
│ Default User │ root │
│ SSH key enabled │ true │
│ Password enabled │ true │
│ Boot Mode │ legacy │
│ Checksum │ 85ea29d7cee21bcfe22d3cca0fd350b6 │
┼──────────────────┼──────────────────────────────────────┼
</code></pre>
<h4 id="step-7-change-template-back-to-private-on-sos"><strong>Step 7</strong>: Change Template back to private on SOS</h4>
<pre><code>exo storage setacl sos://vmware-migration-exoscale/debain-12-flat.qcow2 private
┼───────────────┼────────────────────────────────────────────────────────────────────────────┼
│ STORAGE │ │
┼───────────────┼────────────────────────────────────────────────────────────────────────────┼
│ Path │ debain-12-flat.qcow2 │
│ Bucket │ vmware-migration-exoscale │
│ Last Modified │ 2024-01-30 13:24:50 UTC │
│ Size │ 2.4 GiB │
│ URL │ https://sos-at-vie-2.exo.io/vmware-migration-exoscale/debain-12-flat.qcow2 │
│ ACL │ │
│ │ Read - │
│ │ Write - │
│ │ Read ACP - │
│ │ Write ACP - │
│ │ Full Control vmware-migration-org │
│ │ │
│ Metadata │ │
│ Headers │ │
│ │ Content-Type application/octet-stream │
│ │ │
┼───────────────┼────────────────────────────────────────────────────────────────────────────┼
</code></pre>
<h4 id="step-8-deploy-instance-from-custom-template"><strong>Step 8</strong>: Deploy Instance from custom template</h4>
<pre><code>exo compute instance-template list -v private -z de-fra-1
┼──────────────────────────────────────┼─────────────────────────────┼────────────────┼───────────────────────────────┼
│ ID │ NAME │ FAMILY │ CREATION DATE │
┼──────────────────────────────────────┼─────────────────────────────┼────────────────┼───────────────────────────────┼
│ 3ba94ffb-94b7-4257-8ec5-5065cc11fc42 │ debian_vmware_12 │ other (64-bit) │ 2024-01-29 16:47:56 +0000 UTC │
┼──────────────────────────────────────┼─────────────────────────────┼────────────────┼───────────────────────────────┼
exo compute instance create \
--template debian_vmware_12 \
--template-visibility private \
--security-group default \
--ssh-key myKey \
-z de-fra-1 \
vmwaredebian12
✔ Creating instance "vmwaredebian12"... 18s
┼──────────────────────┼──────────────────────────────────────┼
│ COMPUTE INSTANCE │ │
┼──────────────────────┼──────────────────────────────────────┼
│ ID │ a463836d-3d4f-40ec-b461-b226f380eb0f │
│ Name │ vmwaredebian12 │
│ Creation Date │ 2024-01-29 16:55:47 +0000 UTC │
│ Instance Type │ standard.medium │
│ Template │ debian_vmware_12 │
│ Zone │ de-fra-1 │
│ Anti-Affinity Groups │ n/a │
│ Deploy Target │ - │
│ Security Groups │ default │
│ Private Instance │ No │
│ Private Networks │ n/a │
│ Elastic IPs │ n/a │
│ IP Address │ 194.182.171.217 │
│ IPv6 Address │ - │
│ SSH Key │ myKey │
│ Disk Size │ 50 GiB │
│ State │ running │
│ Labels │ n/a │
│ Reverse DNS │ │
┼──────────────────────┼──────────────────────────────────────┼
</code></pre>
<h4 id="step-9-reconfigure-network-settings"><strong>Step 9</strong>: Reconfigure Network settings</h4>
<p>Log in to the instance using the Exoscale Console and reconfigure the network settings. This is necessary because the instance receives a new network device.</p>
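<p>On Debian with ifupdown, a minimal <code>/etc/network/interfaces</code> stanza that brings the new device up via DHCP could look like the sketch below. The interface name <code>eth0</code> is an assumption; check the actual name with <code>ip a</code> first:</p>

```
# /etc/network/interfaces (sketch; the interface name is an assumption)
auto eth0
iface eth0 inet dhcp
```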
<h4 id="step-10-copy-all-remaining-data"><strong>Step 10</strong>: Copy all remaining data</h4>
<p>Use a tool of your choice (e.g. tar, rsync, or a backup/restore solution) to copy all remaining data to your instance.</p>
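<p>As a local sketch of the tar-pipe approach (the directory names are made up for the demonstration; over the network you would typically wrap the reading side in <code>ssh</code>):</p>

```shell
# Copy the contents of src/ into dst/ with a tar pipe, preserving
# permissions and timestamps. Over the network the reading side would be
# wrapped in ssh, e.g.:
#   ssh user@old-host 'tar -C /srv/data -cf - .' | tar -C /srv/data -xf -
mkdir -p src dst
echo 'application data' > src/app.conf
tar -C src -cf - . | tar -C dst -xf -
ls dst
```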
<h4 id="optional-step-install-cloud-init"><strong>Optional Step</strong>: Install cloud-init</h4>
<p>With cloud-init you will be able to leverage the disk resize and password reset features for your instance from Exoscale.
Note that installing cloud-init might change the SSH host key of your instance.</p>
<pre><code>apt-get install cloud-init
</code></pre>
<p><br></p>
<h2 id="conclusion">Conclusion</h2>
<p>In summary, migrating a virtual instance from VMware vSphere to Exoscale involves a step-by-step process designed to ensure a seamless transition without compromising on data integrity and system functionality. By addressing prerequisites, such as adjusting the system configuration and obtaining essential credentials and disk images, users can successfully navigate the migration journey.</p>
<p>From copying the VMDK disk image to the Ubuntu instance and converting it to the required QCOW2 format, to registering the custom template on Exoscale and deploying the new instance, each step is carefully detailed to provide a comprehensive guide for users transitioning their virtualized environments. The guide emphasizes the flexibility and scalability offered by Exoscale, coupled with meticulous planning and execution, to make the migration process efficient and effective.</p>
<p>The inclusion of optional steps, such as installing cloud-init for enhanced instance management, underscores the adaptability of the migration process to meet specific user requirements. Ultimately, this guide equips users with the knowledge and tools needed to seamlessly migrate their VMware vSphere instances to Exoscale, unlocking the benefits of Exoscale’s cloud infrastructure.</p>
<p>Please be aware that a <a href="https://www.exoscale.com/pricing/#custom-templates">template fee</a> and an <a href="https://www.exoscale.com/pricing/#compute">instance fee</a> are applicable for each deployed/migrated instance.
<br></p>
<h1>Understanding Vault: A Guide to Glasskube's Managed Service with a Kubernetes Integration Demonstration</h1>
<p><em>2023-08-31 · exoscale</em></p>
<p>HashiCorp Vault is a powerful tool designed to manage secrets and protect sensitive data. From API keys and tokens to passwords, Vault encrypts and stores your confidential information, making it accessible only to authorized users. With features like secure secret storage, dynamic secrets generation, data encryption, and fine-grained access control, Vault has become an essential part of modern infrastructure management.</p>
<p><br></p>
<p><a href="/marketplace/listing/glasskube-vault/"><strong>Glasskube Vault</strong></a> takes this a step further by offering Vault as a fully managed service. By handling the installation, configuration, and maintenance of Vault, Glasskube allows you to focus on what truly matters: securing your applications and data.</p>
<h3 id="vaults-key-advantages">Vault’s Key Advantages</h3>
<p><br></p>
<ul>
<li><strong>Secure Secret Storage</strong>: Encrypts and stores secrets with strict access controls. </li>
<li><strong>Dynamic Secrets</strong>: Generates secrets on-demand, minimizing the risk of exposure. </li>
<li><strong>Data Encryption</strong>: Ensures that data in transit and at rest is encrypted.</li>
<li><strong>Access Control</strong>: Manages fine-grained access to secrets through policies. </li>
</ul>
<h2 id="how-to-access-glasskube-vault-on-exoscale">How to Access Glasskube Vault on Exoscale</h2>
<p>Getting started with Glasskube Vault is incredibly easy on the Exoscale Portal. Follow these simple steps:</p>
<p><br></p>
<ol>
<li>Click on the <a href="https://portal.exoscale.com/marketplace"><strong>Marketplace</strong> tab</a> on the Exoscale Portal.</li>
<li>Find and subscribe to Glasskube Vault.</li>
<li>An email will be sent to you containing instructions and a link to initialize your Vault.</li>
</ol>
<h3 id="initialization-process">Initialization Process</h3>
<p>The initialization process includes specifying the number of key shares the unseal key is split into and the threshold of shares required to reassemble it. This additional security measure requires multiple parties to perform critical actions, such as unsealing the Vault. Once these steps are completed, your Vault will be fully configured and ready for use!</p>
<h2 id="using-glasskube-vault-with-exoscale-sks">Using Glasskube Vault with Exoscale SKS</h2>
<p>Integrating Glasskube’s managed Vault offering with <a href="/sks/">Exoscale’s Managed Kubernetes Service (SKS)</a> streamlines the handling of secrets within your Kubernetes environment. Leveraging Vault’s robust secret management along with Exoscale’s Kubernetes service simplifies the deployment and scaling of applications.</p>
<p><br></p>
<p>The following steps are based on the official documentation on <a href="https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-external-vault">integrating a Kubernetes cluster with an external Vault</a>.</p>
<h3 id="prerequesites">Prerequisites</h3>
<p>Before diving into the technical details, ensure you have:</p>
<p><br></p>
<ul>
<li>A basic understanding of Kubernetes.</li>
<li>Access to an Exoscale SKS cluster with security groups correctly configured as per the <a href="https://community.exoscale.com/documentation/sks/quick-start/">Quick Start Guide</a>.</li>
<li>The Vault CLI installed on your local machine (e.g., Mac users can use <code>brew install vault</code>).</li>
<li>Helm CLI (Homebrew users can use <code>brew install helm</code>).</li>
</ul>
<h3 id="connecting-vault-with-kubernetes">Connecting Vault with Kubernetes</h3>
<p>Exoscale SKS (Managed Kubernetes Service) integrates seamlessly with Vault. Here’s a step-by-step guide to connecting the two.</p>
<h4 id="step-1-set-the-vault-address"><strong>Step 1</strong>: Set the Vault Address</h4>
<p>Specify the address of your managed Vault instance in your local shell:</p>
<pre><code>export VAULT_ADDR='https://vault.YOURORG.exo.gkube.eu'
</code></pre>
<h4 id="step-2-install-vault-in-the-kubernetes-cluster"><strong>Step 2</strong>: Install Vault in the Kubernetes Cluster</h4>
<p>Install Vault using Helm, while specifying the external Vault installation:</p>
<pre><code>helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault \
--set "global.externalVaultAddr=$VAULT_ADDR"
</code></pre>
<h4 id="step-3-unseal-and-login-to-vault"><strong>Step 3</strong>: Unseal and Login to Vault</h4>
<p>Use the key received during initialization to unseal the Vault, and then log in locally:</p>
<pre><code>vault operator unseal
vault login
</code></pre>
<h4 id="step-4-create-a-kubernetes-secret-for-vault-service-account-required-with-kubernetes-124"><strong>Step 4</strong>: Create a Kubernetes Secret for Vault Service Account (required with Kubernetes 1.24+)</h4>
<pre><code>cat > vault-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: vault-token-g955r
  annotations:
    kubernetes.io/service-account.name: vault
type: kubernetes.io/service-account-token
EOF
kubectl apply -f vault-secret.yaml
</code></pre>
<h4 id="step-5-enable-kubernetes-authentication"><strong>Step 5</strong>: Enable Kubernetes Authentication</h4>
<p>Save the secret name and verify it:</p>
<pre><code>VAULT_HELM_SECRET_NAME=$(kubectl get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith("vault-token-")).name')
kubectl describe secret $VAULT_HELM_SECRET_NAME
</code></pre>
<p>Enable Kubernetes authentication and set up the connection parameters:</p>
<pre><code>vault auth enable kubernetes
TOKEN_REVIEW_JWT=$(kubectl get secret $VAULT_HELM_SECRET_NAME --output='go-template={{ .data.token }}' | base64 --decode)
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
issuer="https://kubernetes.default.svc.YOURCLUSTERID.cluster.local"
</code></pre>
<p>Replace <strong>YOURCLUSTERID</strong> with the ID of your Exoscale SKS cluster.</p>
<h3 id="demonstration-storing-and-retrieving-a-secret">Demonstration: Storing and Retrieving a Secret</h3>
<p>In this section, we’ll create a secret inside Vault and demonstrate how to retrieve it using a Pod in your Kubernetes cluster.</p>
<h4 id="step-1-create-a-secret-in-vault"><strong>Step 1</strong>: Create a Secret in Vault</h4>
<p>Enable a Key-Value secret engine and store a demonstration secret:</p>
<pre><code>vault secrets enable -path=app kv
vault kv put app/config username='Antoine' password='42'
</code></pre>
<p>You will also be able to see the secret inside the Vault UI.</p>
<p><img alt="Vault: Showing Secrets" src="/static/syslog/2023-08-31-glasskube-vault/vault.png" title="Vault: Showing Secrets" /></p>
<h4 id="step-2-create-vault-policy-and-kubernetes-role"><strong>Step 2</strong>: Create Vault Policy and Kubernetes Role</h4>
<p>Create a policy allowing read access to the secret and a Kubernetes authentication role connecting everything:</p>
<pre><code>vault policy write internal-app - <<EOF
path "app/config" {
  capabilities = ["read"]
}
EOF
vault write auth/kubernetes/role/devweb-app \
bound_service_account_names=internal-app \
bound_service_account_namespaces=default \
policies=internal-app \
ttl=24h
</code></pre>
<h4 id="step-3-create-kubernetes-service-account"><strong>Step 3</strong>: Create Kubernetes Service Account</h4>
<pre><code>kubectl create sa internal-app
</code></pre>
<h4 id="step-4-deploy-the-demo-pod"><strong>Step 4</strong>: Deploy the Demo Pod</h4>
<p>Create a demo Ubuntu Pod, and mount the secrets:</p>
<pre><code>cat > pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-app
  labels:
    app: test-app
  annotations:
    vault.hashicorp.com/agent-inject: 'true'
    vault.hashicorp.com/role: 'devweb-app'
    vault.hashicorp.com/agent-inject-secret-credentials.txt: 'app/config'
spec:
  serviceAccountName: internal-app
  containers:
    - name: app
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "while true; do sleep 30; done;"]
      tty: true
EOF
</code></pre>
<p>Apply the Pod and access it:</p>
<pre><code>kubectl apply -f pod.yaml
# Wait until the app is ready
kubectl exec -it test-app -- /bin/bash
</code></pre>
<p>Inside the Pod, check if you can fetch the secret:</p>
<pre><code>root@devwebapp-with-annotations:/# cat /vault/secrets/credentials.txt
password: 42
username: Antoine
</code></pre>
<h2 id="alternative-methods-for-accessing-secrets">Alternative Methods for Accessing Secrets</h2>
<p>Apart from the Vault agent demonstrated above, other methods can be used to access secrets.</p>
<h5 id="vault-operator">Vault Operator</h5>
<p>The Vault Operator provides a Kubernetes-native experience for managing Vault clusters. By leveraging custom resources, you can define and manage Vault’s lifecycle, policies, and more. Utilizing an operator could simplify the integration between Vault and your Kubernetes workloads.</p>
<h5 id="using-vaults-api-directly">Using Vault’s API Directly</h5>
<p>Applications can directly interact with Vault’s API to fetch secrets. This method provides more control and can be suitable for complex scenarios where the predefined integrations might not be sufficient.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Managing Vault in a high-availability mode is complex, involving many moving parts that require continuous oversight. Glasskube’s managed Vault service elegantly addresses this challenge. By taking care of all the underlying complexities of hosting Vault—including replication, failover, and infrastructure management—Glasskube Vault offers a dependable and highly available solution for secret management. The seamless integration and ease of use, as demonstrated with Exoscale’s SKS, make it a go-to solution for modern security needs.</p>
<h1>Leverage the Power of Istio on Exoscale SKS for Enhanced Kubernetes Experience</h1>
<p><em>2023-08-17 · exoscale</em></p>
<p><a href="https://istio.io/">Istio</a> is an open-source service mesh platform that helps developers manage, secure, and understand the interactions between microservices in a Kubernetes environment. It creates a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes in service code.</p>
<p><br></p>
<p>Learn how to set up Istio to enable container communications across multiple Kubernetes clusters. This is an advanced topic and is best suited for readers who already have practical experience with <a href="/sks/">Exoscale SKS</a> and a strong foundational understanding of Kubernetes.</p>
<h2 id="using-istio-to-connect-two-sks-clusters">Using Istio to connect two SKS Clusters</h2>
<p>While its wide array of functionalities caters to various use cases, our focus in this blog post will be on leveraging Istio’s multi-cluster feature. We will set up Istio across two clusters (which can also be in different zones) and use a small application to demonstrate the seamless connectivity.</p>
<p><br></p>
<p>Istio’s secure inter-service communication relies on Envoy proxy sidecars, which can be automatically injected by labeling a namespace. By intercepting all network communication, these sidecars transparently implement security policies and data encryption, eliminating the need for code modifications or extra Kubernetes configurations.</p>
<h3 id="prerequisites">Prerequisites</h3>
<p>You should already have two SKS clusters up and running on Exoscale. Use at least medium instances as the flavor (the number of instances is arbitrary) and make sure that your security groups are set up correctly according to the <a href="https://community.exoscale.com/documentation/sks/quick-start/">SKS Quick Start Guide</a>.</p>
<h3 id="installing-istio-and-establishing-connectivity-between-two-sks-clusters">Installing Istio and establishing Connectivity between two SKS Clusters</h3>
<p>A few steps are required to install Istio and establish connectivity:
<br></p>
<ul>
<li>Get cluster access</li>
<li>Install Istioctl</li>
<li>Generate certificates</li>
<li>Set up Istio in multi-primary mode</li>
</ul>
<h4 id="getting-cluster-access">Getting cluster access</h4>
<p>The process of configuring multicluster Istio deployments can be done in several ways, but in this tutorial, our approach employs the multi-primary model with different networks, delivering cross-cluster load balancing.</p>
<p><br></p>
<p>Initially, let’s assign the IDs of your two Exoscale SKS clusters within your shell environment. Be sure to replace the placeholders with the actual IDs of your clusters:</p>
<pre><code>export CTX_CLUSTER1=IDOFYOURFirstCLUSTER
export CTX_CLUSTER2=IDOFYOURSecondCLUSTER
</code></pre>
<p>Next, you need to generate a unique kubeconfig for each cluster. It’s crucial to assign distinct kubeconfig usernames for each cluster to facilitate the merging process later on.</p>
<p>Here’s an example:</p>
<pre><code># Generate a kubeconfig for each cluster, replace the zone(s)
exo compute sks kubeconfig $CTX_CLUSTER1 istio1 -z de-fra-1 > ~/.kube/istio1
exo compute sks kubeconfig $CTX_CLUSTER2 istio2 -z at-vie-2 > ~/.kube/istio2
</code></pre>
<p>Now, let’s merge the configuration files from both clusters into ~/.kube/config:</p>
<pre><code>export KUBECONFIG=~/.kube/istio1:~/.kube/istio2
kubectl config view --flatten > ~/.kube/config
</code></pre>
<p>By following these steps, we can now easily access both clusters from our local environment.</p>
<h4 id="installing-istioctl-locally">Installing Istioctl locally</h4>
<p>In our next steps, we will install istioctl, a command-line tool used to control the Istio service mesh. However, we won’t be installing Istio into the cluster just yet. You can follow the steps outlined in the <a href="https://istio.io/latest/docs/setup/getting-started/#download">Istio’s Official Guide</a> for more details.</p>
<p><br></p>
<p>Choose a suitable directory where you’d like the istioctl files to be installed on your local system. Once chosen, navigate to the directory and install istioctl as follows:</p>
<pre><code># Choose any folder here
cd ~/Documents/dev
# Download the latest version of Istio
curl -L https://istio.io/downloadIstio | sh -
# Navigate to the downloaded Istio directory
cd istio*
# Temporarily add the Istio binaries to your PATH for the current shell session
export PATH=$PWD/bin:$PATH
</code></pre>
<p>Ensure you remain in this directory within your shell, as we’ll be working with the files located here for the remainder of our process.</p>
<h4 id="generating-certificates">Generating certificates</h4>
<p>Generating a Root Certificate Authority (CA) and individual certificates for each cluster is crucial as Istio leverages these components to automatically facilitate mutual TLS (mTLS) encryption. mTLS encryption ensures secure, authenticated data transfers not only within a single cluster between containers but also across multiple clusters. For accomplishing this task, we will follow the steps outlined in <a href="https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/">Istio’s plugin CA guide</a>.</p>
<p><br></p>
<p>It’s worth noting that when operating in a production environment, it’s recommended to employ a secure method for key protection, such as HashiCorp’s Vault.</p>
<p>These are the commands we use in this case:</p>
<pre><code># From the istio folder we entered above
# Create a certs folder and go into that
mkdir -p certs
pushd certs
# Generate root certificate and key
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
# For each cluster generate an intermediate certificate and key
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
# Insert them into each cluster using secrets
kubectl --context="${CTX_CLUSTER1}" create namespace istio-system
kubectl --context="${CTX_CLUSTER1}" create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem
kubectl --context="${CTX_CLUSTER2}" create namespace istio-system
kubectl --context="${CTX_CLUSTER2}" create secret generic cacerts -n istio-system \
--from-file=cluster2/ca-cert.pem \
--from-file=cluster2/ca-key.pem \
--from-file=cluster2/root-cert.pem \
--from-file=cluster2/cert-chain.pem
# Return to the Istio folder
popd
</code></pre>
<h4 id="setting-up-istio-in-multi-primary-mode">Setting Up Istio in Multi-Primary Mode</h4>
<p>Having taken care of the prerequisites, you can now proceed to install Istio by following the guide provided by Istio for a <a href="https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/">Multi-Primary setup on different networks</a>.</p>
<p>When generating the IstioOperator configuration for each cluster, you have the option to enable smart DNS proxying right away. Here’s an example configuration for cluster1:</p>
<pre><code># For example for cluster1
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation, optional
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF
</code></pre>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Follow each guide step carefully and, if interested in the DNS option, simply switch out the configuration file with the one provided above.</p>
</div>
<p>DNS Proxying becomes crucial when you have a Kubernetes service exclusively on Cluster 1, but you want it to be accessible from Cluster 2. Without DNS Proxying, you would need to specify the IP address directly. Alternatively, you can enable DNS Proxying for each individual Deployment.</p>
<p><br></p>
<p>During installation, you’ll need to set up an east-west gateway. This gateway is integral for facilitating cross-cluster communication, effectively setting up a load balancer for each cluster that acts as an incoming gateway.</p>
<p><br></p>
<p>To verify your installation, refer to <a href="https://istio.io/latest/docs/setup/install/multicluster/verify/">the Istio verification guide</a>. It’s worth noting that the verification process works even without DNS proxying, since it creates identical deployments and services in each cluster.</p>
<h2 id="deploying-a-simple-demo-application-and-accessing-it-cross-cluster">Deploying a Simple Demo Application and accessing it Cross-Cluster</h2>
<p>For our demonstration, we’ll deploy a simple “Hello World” application in Cluster 2 and then access it directly from Cluster 1, leveraging the secure service mesh.</p>
<p><br></p>
<p>To begin, let’s create a new namespace in each cluster and enable sidecar injection. The sidecar injector is used to automatically establish secure communication within the service mesh.</p>
<pre><code>kubectl --context="${CTX_CLUSTER1}" create namespace exodemo
kubectl --context="${CTX_CLUSTER2}" create namespace exodemo
kubectl label --context="${CTX_CLUSTER1}" namespace exodemo \
istio-injection=enabled
kubectl label --context="${CTX_CLUSTER2}" namespace exodemo \
istio-injection=enabled
</code></pre>
<p>Next, we’ll deploy the “Hello World” application on Cluster 2:</p>
<pre><code>kubectl --context="${CTX_CLUSTER2}" -n exodemo run exo-webtest --image=exo.container-registry.com/exoscale-images/exo-webtest:v2 --port=3000
</code></pre>
<p>We’ll also create an internal ClusterIP service on Cluster 2 which exposes the pod:</p>
<pre><code>kubectl --context="${CTX_CLUSTER2}" -n exodemo expose pod exo-webtest --port=3000
</code></pre>
<p>Moving over to Cluster 1, we’ll launch a simple Alpine container with curl to test accessing our service. In this example, the annotation to enable DNS proxying for this container is included, in case you didn’t enable it cluster-wide:</p>
<pre><code>cat <<EOF | kubectl --context="${CTX_CLUSTER1}" -n exodemo apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: alpine-curl
  annotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
spec:
  containers:
    - name: shell
      image: alpine:latest
      command:
        - sh
        - -c
        - |
          apk update && apk add curl && sleep 3600
EOF
</code></pre>
<p>Let’s now open a shell in the Alpine container:</p>
<pre><code>kubectl exec --context="${CTX_CLUSTER1}" -n exodemo -it alpine-curl -- /bin/sh
</code></pre>
<p>Lastly, let’s call the service in the other cluster:</p>
<pre><code>curl exo-webtest:3000
</code></pre>
<p>If successful, you should see something like the following output, which signifies we can now access services of the other cluster:</p>
<pre><code># curl exo-webtest:3000
&lt;html&gt;&lt;head&gt;&lt;title&gt;Hello from exo-webtest&lt;/title&gt;&lt;/head&gt;&lt;body&gt;&lt;img width=250px src=https://www.exoscale.com/static/img/exoscale-logo-full-201711.svg alt=ExoscaleLogo&gt;&lt;br&gt;&lt;p&gt;Hello World from host exo-webtest!&lt;/p&gt;&lt;p&gt;VERSION: v2&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;
</code></pre>
<p>This simple demonstration showcases the power of Istio’s service mesh in enabling secure and efficient cross-cluster communication.</p>
<h2 id="further-ideas-for-the-sample-application">Further ideas for the sample application</h2>
<p>The demonstration provided in this tutorial can be further enhanced by integrating Istio’s <a href="https://istio.io/latest/docs/reference/config/networking/gateway/">Gateway</a> and <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/">VirtualService</a> components.</p>
<p><br></p>
<p>A <em>Gateway</em> handles incoming and outgoing traffic from the internet, effectively load balancing it across the clusters. Note that this Gateway differs from the East-West Gateway deployed earlier for inter-cluster communication.</p>
<p><br></p>
<p>Meanwhile, a <em>VirtualService</em> defines the rules for routing requests within an Istio service mesh. It allows for complex and flexible traffic management strategies. For instance, you could distribute requests to different versions of a service based on certain criteria or percentages, or perform A/B testing and gradual rollouts.</p>
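<p>As an illustration (not part of this tutorial’s setup), a weighted VirtualService could split traffic between two versions of the service. The <code>v1</code>/<code>v2</code> subsets referenced here are hypothetical and would additionally require a matching DestinationRule defining them:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: exo-webtest
spec:
  hosts:
  - exo-webtest
  http:
  - route:
    # 90/10 split between versions; the subsets must be
    # defined in a separate DestinationRule.
    - destination:
        host: exo-webtest
        subset: v1
      weight: 90
    - destination:
        host: exo-webtest
        subset: v2
      weight: 10
</code></pre>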
<h2 id="wrapping-up">Wrapping Up</h2>
<p>This tutorial provided a hands-on exploration of using Istio within Exoscale SKS clusters. We showcased how Istio facilitates efficient cross-cluster communication — a critical aspect in ensuring high availability in a distributed system. We brought these concepts to life with a practical application, deploying it across two clusters and establishing secure communication between them.</p>
<p><br></p>
<p>However, it’s important to note that we have only scratched the surface of Istio’s capabilities in this tutorial. Istio is a powerful and versatile tool that offers a wide range of features beyond those covered here. From sophisticated traffic management and robust network policies to telemetry and reporting, Istio has a lot to offer when it comes to managing and securing microservices in complex, distributed systems.</p>
<p><br></p>
<p>We hope this tutorial serves as a useful starting point for leveraging the potential of service mesh and advanced setups within your Exoscale SKS clusters.</p>
2023-08-17T00:00:00+00:00
https://www.exoscale.com/syslog/announcing-vienna-2-zone/
Announcing Vienna-2 Zone General Availability
2023-06-14T00:00:00+00:00
exoscale
<blockquote>
<p><strong>AT-VIE-2</strong> is now generally available.</p>
</blockquote>
<p>Starting today, AT-VIE-2 is available to all customers. It is the first time we are offering a second zone within one city.
With Vienna at the heart of Europe, it is the perfect location for the new zone.</p>
<p><br></p>
<p><img alt="AT-VIE-2 - one city, two zones" src="/static/syslog/2023-06-14-announcing-vienna-2-zone/banner_vie-2-announcement.png" /></p>
<p><br></p>
<p>The new zone allows companies to build multi-zone applications while staying within one country, indeed even within one city. Aligned with all major <a href="/compliance/">certifications</a>, the new zone meets all requirements for GDPR-compliant use.</p>
<p><br></p>
<p>AT-VIE-2 provides all the main Exoscale services out of the box: <a href="/compute/">Compute</a>, <a href="/object-storage/">S3-compliant Object Storage</a>, <a href="/sks/">Managed Kubernetes</a>, <a href="/dbaas/">DBaaS</a>, and all additional services such as Private Networks, <a href="/virtual-private-cloud/">Private Connect</a>, Elastic IP, and more.</p>
<p><br></p>
<p>The compute capabilities of the new zone are built on latest-generation Intel Xeon CPUs, codenamed <code>Sapphire Rapids</code>, which bring new acceleration routines out of the box. Coupled with low network latency to <code>AT-VIE-1</code>, our initial <a href="/datacenters/austria/">Austrian zone</a>, this new location is a perfect candidate for primary workloads when designing primary/secondary architectures or highly available applications.</p>
<p><br></p>
<p>Vienna now hosts Exoscale’s seventh European zone, marking our commitment to further expanding our complete infrastructure offering to automation-hungry teams across Europe.</p>
2023-06-14T00:00:00+00:00
https://www.exoscale.com/syslog/exoscale-tisax-certification/
Exoscale is TISAX certified
2023-04-20T00:00:00+00:00
exoscale
<p>Exoscale takes the security of its customers’ data seriously, which is why we have been strengthening our compliance and security certifications over the past few years, including ISO 27001, ISO 27017, ISO 27018, and the CSA STAR program. Today we are pleased to announce that we have achieved TISAX Level 2 certification from <a href="https://www.dekra.de/de/tisax-assessment/">Dekra Deutschland</a>, demonstrating our commitment to the highest standards of security and data protection for our customers.</p>
<h3 id="what-is-tisax">What is TISAX?</h3>
<p>TISAX (Trusted Information Security Assessment Exchange) is an information security certification scheme developed by the <a href="https://enx.com/en-US/TISAX/">ENX association</a>. The certification is targeted at companies that are working for the automotive industry and provides a standardized mechanism for exchanging compliance information between suppliers and customers.</p>
<p><br></p>
<p>Audits are conducted according to the VDA-ISA standard which is a set of detailed requirements for information security management systems (ISMS) based on ISO 27001 and ISO 27002. Unlike ISO 27001, the maturity level of each requirement is rated individually, giving a more detailed picture of the security level of the participant.</p>
<h3 id="why-did-we-get-tisax-certified">Why did we get TISAX certified?</h3>
<p>TISAX is voluntary, but the German automotive industry has been increasingly demanding it from its supply chain. The certification was created to provide the automotive industry with a certification process focused on two specific points: the protection of prototypes and sensitive information, and the implementation of a data protection management system.</p>
<p><br></p>
<p>TISAX participants can request access to Exoscale’s assessment results through the <a href="https://enx.com/en-US/SignIn">ENX portal</a>. In addition, Exoscale’s answers to the VDA-ISA self-assessment are available for download through the Exoscale Compliance Center.</p>
2023-04-20T00:00:00+00:00