In our recent introduction to Calico we went through a very high-level overview. This time we’ll go more in depth and install Calico itself.

While there are several getting started guides on Calico’s website, we’ll attempt to combine them into a single guide to show how everything fits together.

A complete guide to installing Calico on Kubernetes or bare metal

Prerequisites to Install Calico

Regardless of how Calico will be used within the installation, there are several common prerequisites:

1. calicoctl needs to be installed and configured
2. etcd v3 may need to be installed and configured
3. kubectl is an optional dependency to interact with a Kubernetes cluster

Note that in the case of Kubernetes, it is recommended that the Kubernetes datastore be used in lieu of etcd for small clusters. Otherwise, etcd can be used.

This article will assume a single etcd instance in the examples that use an external etcd installation.

The Calico CLI

The calicoctl interface can be downloaded from Calico’s project page.

Optionally, Project Calico provides a Docker image and Kubernetes manifest which can be installed in a target environment where direct access may be difficult to obtain.
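One way the Docker image might be used in place of a local binary is to wrap the `docker run` invocation; the `calico/ctl` image name, the tag, and the etcd endpoint below are assumptions for illustration:

```shell
# Wrap the calicoctl Docker image so it can be invoked like a local
# binary; the tag and the etcd endpoint are placeholder assumptions.
CALICOCTL="docker run --rm --net=host \
  -e DATASTORE_TYPE=etcdv3 \
  -e ETCD_ENDPOINTS=http://localhost:2379 \
  calico/ctl:v3.7.2"
# Usage would then be, for example:
#   $CALICOCTL get nodes
echo "$CALICOCTL"
```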

In any case, a Calico configuration file is required for the calicoctl CLI to be usable, and the configuration file must contain enough information to allow calicoctl to connect to the etcd cluster.

A sample configuration file for etcd, assuming a single etcd host, would resemble the following:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "http://localhost:2379"


The datastoreType will default to etcdv3 if not present.

For the Kubernetes datastore, the configuration file would resemble:

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/path/to/.kube/config"

While the default location for the client configuration file is /etc/calico/calicoctl.cfg, the --config flag can be used to specify an alternate location. In addition, some calicoctl properties can be passed as environment variables to ensure private data such as credentials is not exposed on disk.
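As a minimal sketch of the environment-variable approach, the etcd configuration above could be expressed as follows (the endpoint value is a placeholder):

```shell
# Equivalent of the etcd configuration file, expressed as environment
# variables so the settings need not live in a file on disk.
export DATASTORE_TYPE=etcdv3
export ETCD_ENDPOINTS=http://localhost:2379
# calicoctl would now connect without a --config flag, e.g.:
#   calicoctl get nodes
echo "$DATASTORE_TYPE $ETCD_ENDPOINTS"
```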

Calico Datastore (etcd)

If Calico will be managing the networking stack on a bare metal system or a dedicated Docker installation, then etcd will need to be installed and configured.

For very small installations a single node can be used; however, redundancy is lost in case of failure.

Installing etcd requires downloading the system-appropriate archive and extracting it. The archive contains the etcd documentation as well as two binaries: etcd, the server component, and etcdctl, the control program.
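The download step might look like the following sketch; the version is an example, and the URL follows the usual GitHub release layout for etcd:

```shell
# Sketch of fetching an etcd release archive; ETCD_VER is an example
# version, not a recommendation.
ETCD_VER=v3.3.13
URL="https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz"
# Network steps, shown here but not run:
#   curl -L "$URL" -o etcd.tar.gz
#   tar xzf etcd.tar.gz && cd "etcd-${ETCD_VER}-linux-amd64"
echo "$URL"
```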

Assuming a simple bring-up scenario, to start etcd on a host called etcd1:

$ export ETCDCTL_API=3
$ THIS_IP=<host-ip> # This address really should be accessible to the Calico nodes only
$ etcd --data-dir=data.etcd --name etcd1 \
    --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
    --advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
    --initial-cluster etcd1=http://${THIS_IP}:2380 \
    --initial-cluster-state new --initial-cluster-token initialtoken


This will start etcd in the foreground. For a more permanent scenario, a systemd service unit would need to be created. Note also that this is an insecure configuration, suitable for testing only.
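Such a systemd unit might resemble the following minimal sketch; the unit name, paths, and flags are assumptions to adapt to your installation:

```ini
# /etc/systemd/system/etcd.service -- minimal sketch; still insecure,
# like the foreground example above.
[Unit]
Description=etcd key-value store
After=network.target

[Service]
ExecStart=/usr/local/bin/etcd --data-dir=/var/lib/etcd --name etcd1
Restart=on-failure

[Install]
WantedBy=multi-user.target
```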

Calico Node (Felix)

With the prerequisites taken care of, the last thing to do is install Felix, Calico’s runtime service. This will be broken up into two scenarios: bare metal/Docker, and Kubernetes.


Note that etcd must be reachable by Felix.

Bare Metal and Docker Installation of Calico

The bare metal installation can be done using Calico’s pre-built packages.

Felix can also be installed by hand by extracting the binary from the Docker image:

$ docker pull calico/node:v3.7.2
$ docker create --name container calico/node:v3.7.2
$ docker cp container:/bin/calico-node calico-node
$ docker rm container
$ chmod +x calico-node
$ sudo mv calico-node /usr/local/bin

Once the binary is in place, a service unit will need to be created. Assuming systemd is used:

[Unit]
Description=Calico Felix agent
After=network.target

[Service]
ExecStartPre=/bin/mkdir -p /var/run/calico
ExecStart=/usr/local/bin/calico-node -felix
Restart=on-failure

[Install]
WantedBy=multi-user.target


Lastly, before Felix can be started ipset needs to be installed on the node.

In the above cases, Felix can be configured by creating a configuration file at /etc/calico/felix.cfg.
Details on how to tune Felix can be found in the Calico configuration reference. Note that by default, Felix looks for etcd at http://localhost:2379.
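A minimal felix.cfg sketch, assuming the ini-style format with a [global] section; the endpoint and log settings below are illustrative values, not defaults to copy blindly:

```ini
# /etc/calico/felix.cfg -- illustrative values only
[global]
EtcdEndpoints = http://etcd1:2379
LogFilePath = /var/log/calico/felix.log
LogSeverityFile = info
```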

Alternatively, to run Felix inside a Docker container, run calicoctl node run --node-image=calico/node:v3.7.2.

Lastly, the node needs to be registered in etcd. While calicoctl node run handles registration automatically, the bare metal approach requires the node to be registered by hand before Felix is first started:

calicoctl create -f - <<EOF
- apiVersion: projectcalico.org/v3
  kind: Node
  metadata:
    name: <node name or hostname>
  spec:
    bgp:
      ipv4Address: <your routable ip address>/24
EOF

Note that the name should reflect the unique hostname for the node.

Install Calico on Kubernetes

Installing Calico on Kubernetes can be simple or complex, depending on whether Calico is used for both policy and network management or just policy.

The installation consists of applying a Kubernetes manifest file against the cluster. By default, the Calico manifest assumes a pod CIDR of 192.168.0.0/16 inside the Kubernetes cluster. If this is not the case, then the Calico manifest will need to be updated:

# To use the Kubernetes datastore manifest (recommended by the Calico Project)
$ curl -O <kubernetes-datastore-manifest-url>
# Otherwise, for the etcd datastore manifest
$ curl -O <etcd-datastore-manifest-url>
# This will be required for both
$ POD_CIDR="<your-pod-cidr>"
$ sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
$ kubectl apply -f ./calico.yaml
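The sed substitution can be rehearsed on a scratch file before touching the real manifest; the file name and the 10.244.0.0/16 CIDR below are just examples:

```shell
# Rehearse the pod CIDR substitution on a throwaway file rather than
# the downloaded calico.yaml; 10.244.0.0/16 is an example CIDR.
printf 'cidr: "192.168.0.0/16"\n' > /tmp/calico-snippet.yaml
POD_CIDR="10.244.0.0/16"
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" /tmp/calico-snippet.yaml
cat /tmp/calico-snippet.yaml
```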

Lastly, depending on how the Kubernetes cluster was initialized, the taint on the master node may need to be removed. First, ensure that all pods are up, and then remove the taint.

$ watch kubectl get pods --all-namespaces
# ^C to kill the watch mode once all services are up

# May be required if kubeadm was used to bring up a new Kubernetes cluster
$ kubectl taint nodes --all node-role.kubernetes.io/master-

Validating the Calico Setup

At this point, Calico should be up and running. If Calico is managing the network stack in a Kubernetes cluster, then the Kubernetes NetworkPolicy API can be used to manage Calico with kubectl.
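As a quick illustration of that integration, here is a minimal deny-all-ingress NetworkPolicy that Calico would enforce; the name and namespace are arbitrary:

```yaml
# Deny all ingress traffic to pods in the default namespace; Calico
# enforces this standard Kubernetes NetworkPolicy resource.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```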

If you are running on bare metal, you can confirm that Calico has programmed the kernel by running sudo iptables --list. A small snippet of the results can be seen below:

vagrant@ubuntu:~/calico$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
cali-INPUT  all  --  anywhere             anywhere             /* cali:Cz_u1IQiXIMmKD4c */

Chain FORWARD (policy DROP)
target     prot opt source               destination         
cali-FORWARD  all  --  anywhere             anywhere             /* cali:wUHhoiAYhphO9Mso */
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
cali-OUTPUT  all  --  anywhere             anywhere             /* cali:tVnHkvAo15HuiPy0 */

Chain DOCKER (1 references)
target     prot opt source               destination         

The calicoctl command can also be used to verify the setup, for either bare metal or Kubernetes:

./calicoctl get nodes -o wide
NAME      ASN         IPV4               IPV6   
ubuntu    (unknown)          

Next Up

We’ve seen how to install Calico on either bare metal/Docker or Kubernetes. Now that your Calico setup is up and running, we’ll soon see how to implement a multi-zone deployment blueprint on Exoscale, for a resilient and redundant high-availability architecture.

Follow Exoscale on Twitter to stay up to date!


  1. Calico configuration properties:
  2. etcd: