Serverless is without a doubt one of the most prominent trends in cloud computing.

The core promise of Serverless is that neither developers nor operators should have to care about the infrastructure: that responsibility lies with the service provider. Resource sizing should be highly dynamic, adapting automatically to the load.

Another common trait of Serverless is that code tends to be focused and very small, hence it’s also sometimes referred to as Function-as-a-Service (FaaS).

Historically, Amazon was the first Cloud provider to offer FaaS with AWS Lambda, which is why that name is so familiar. Other Cloud providers have since offered the same kind of product:

  • Google has Cloud Functions
  • Microsoft has Azure Functions

While these offerings deliver on the promise of Serverless, with autoscaling and pricing based on actual usage, they also lock the client organization into their respective proprietary products.

However, this is not necessarily the only way to go, as free Open Source Serverless alternatives exist, the Fn project covered in this article being a notable example.

Such alternatives are fully cloud agnostic, meaning they can be installed on any cloud, public or private, including your own infrastructure, and of course Exoscale!

The Exoscale command line client

How does the Fn project work

Fn is entirely based on Docker, and is mainly composed of two key components:

  1. The Fn command line interface (CLI), which allows you to control every aspect of the Fn framework and to interact with the Fn server
  2. The Fn server itself, which is a plain Docker application

Functions deployed on Fn are themselves executed in isolated containers, which enables support for a much wider range of languages, Clojure for example!

Function arguments are passed on STDIN, and their return value is written to STDOUT. If arguments and return values are not simple values (e.g. a JSON object rather than a plain int or String), they are serialized by an abstraction layer provided by Fn itself, in the form of a Function Development Kit (FDK).

For convenience Fn offers a set of built-in runtime templates, facilitating the bootstrapping of FaaS in a large choice of languages and versions (Go, multiple Java versions, multiple Python versions, etc.).

Creating a FaaS with Fn is as easy as this:

  1. Bootstrap the function using the Fn CLI: this sets up the Fn configuration file and a simple project based on the chosen application template.
  2. Deploy the function, again with the Fn CLI: this pushes the containerized function image to the Docker registry of your choice, and notifies the server of the existence and location of this latest version.
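
In shell terms, the workflow boils down to the two commands below, shown here in generic form only (we will run the real versions in the following sections):

$ fn init --runtime <runtime> --trigger http <function-name>   # 1. bootstrap the function project
$ fn deploy --app <app-name> <function-name>                   # 2. build, push and register the function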

The fn Serverless deployment

Deploy and test serverless functions locally

Let’s start by setting up Fn on our local machine. Since Fn requires Docker, let’s install it first. Assuming you are on Debian/Ubuntu:

$ sudo apt-get update
$ sudo apt-get install docker.io

Or use the package manager / Docker distribution appropriate for your system. Then we can move on to installing the Fn CLI. You can either fetch and run the install script with curl:

$ curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

Alternatively, if you're on macOS and have Homebrew installed, type:

$ brew install fn

==> Downloading https://homebrew.bintray.com/bottles/fn-0.5.8.high_sierra.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/b1/b1767fb00e2e69fd9da73427d0926b1d1d0003622f7ddc0dd3a899b2894781ff?__gda__=exp=1538038849~hmac=c702c9335e7785fcbacad1f29afa61244d02f2eebb
######################################################################## 100.0%
==> Pouring fn-0.5.8.high_sierra.bottle.tar.gz
๐Ÿบ  /usr/local/Cellar/fn/0.5.8: 5 files, 16.7MB

We are now ready to bootstrap our first function using the Fn CLI. To keep things simple we’ll use a built-in runtime to start, e.g. Node:

$ fn init --runtime node --trigger http hellonode

Creating function at: /hellonode
Function boilerplate generated.
func.yaml created.

This will create a new directory hellonode with some basic config files to start developing our Fn function. Inside the newly created directory you will create your application following the standards of your chosen language / runtime:

# Our node folder looks like this:

   hellonode
   ├── func.js
   ├── func.yaml
   └── package.json

# A freshly bootstrapped Java11 runtime would instead look like this:

   hellojava11
   ├── func.yaml
   ├── pom.xml
   └── src
       ├── main
       │   └── java
       │       └── com
       │           └── example
       │               └── fn
       │                   └── HelloFunction.java
       └── test
           └── java
               └── com
                   └── example
                       └── fn
                           └── HelloFunctionTest.java

Fn bootstraps the project structure, initializes a func.yaml file containing the necessary Fn directives, and scaffolds a code template in the language of your choice.

This means, in the case of the Node runtime:

$ cat hellonode/func.js

const fdk=require('@fnproject/fdk');

fdk.handle(function(input){
  let name = 'World';
  if (input.name) {
    name = input.name;
  }
  return {'message': 'Hello ' + name}
})

We can now rapidly test our function locally to see how things work.

To do so, we first start the Fn server. As mentioned, the Fn server itself runs as a Docker container, so the framework will first pull its image from the Docker registry.

$ fn start -d                    # start the local server in daemon mode

Unable to find image 'fnproject/fnserver:latest' locally
latest: Pulling from fnproject/fnserver
ff3a5c916c92: Pull complete
1a649ea86bca: Pull complete
ce35f4d5f86a: Pull complete

...

Status: Downloaded newer image for fnproject/fnserver:latest
668ce9ac0ed8d7cd59da49228bda62464e01bff2c0c60079542d24ac6070f8e5

To run our function, we need to “deploy” it. This requires an application name: in Fn, an application is a namespace that groups related functions.

The Fn CLI looks for a func.yaml file in the current folder to find the function configuration, so we first need to cd into our hellonode folder.

$ cd hellonode
$ fn deploy --app fnexo --local  # deploy the function within app fnexo.
                                 # local skips pushing the function image to the remote registry, and runs it directly

Deploying hellonode to app: fnexo
Bumped to version 0.0.2
Building image nfrankel/hellonode:0.0.3 .
Updating function hellonode using image nfrankel/hellonode:0.0.3...
Successfully created app:  fnexo
Successfully created function: hellonode with nfrankel/hellonode:0.0.3
Successfully created trigger: hellonode-trigger
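
As a quick sanity check, the Fn CLI can also list what was just created:

$ fn list apps                  # should list the fnexo application
$ fn list functions fnexo       # should list the hellonode function registered under fnexo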

The deploy output above shows that a new Docker image containing our function has been built. The function is now ready to be called, and we have two ways of doing so:

  1. Using the fn invoke command
  2. Calling the function directly over HTTP

Fn’s invoke simply emulates the HTTP calls for testing, and is convenient for a quick check:

$ fn invoke fnexo hellonode      # call the hellonode function within app fnexo

{"message":"Hello World"}

To call the function directly instead, one must know its full URL:

$ curl http://localhost:8080/t/fnexo/hellonode-trigger

{"message":"Hello World"}

The Fn server exposes its functions on port 8080, and as you can see the URL almost matches a /t/app/function schema, but not quite: the HTTP call doesn't reach the function directly, it goes through a so-called trigger which, as its name implies, “triggers” the function call.

Triggers are defined in the project’s func.yaml:

schema_version: 20180708
name: hellonode
version: 0.0.3
runtime: node
entrypoint: node func.js
format: json
triggers:
- name: hellonode-trigger
  type: http
  source: /hellonode-trigger    # the trigger's URL

We can change the trigger's source path to match the function's name in order to make the URL simpler:

triggers:
- name: hellonode-trigger
  type: http
  source: /hellonode    # match the function's name

Then, we can redeploy the function and call it with the new trigger:

$ fn deploy --app fnexo hellonode --local
$ curl http://localhost:8080/t/fnexo/hellonode

{"message":"Hello World"}

Everything is working! It's now time to go live and publish our FaaS on a server!

Deploy the self-hosted Serverless services on your own infrastructure

Let's quickly set up a VM using the Exoscale CLI. If you haven't configured it yet, you can get up and running by following this Quick Start. It's a great tool that will make you even more productive. Remember that you'll also have to set up a Security Group rule to expose port 8080! The following command sequence will get you up and running with a fresh VM ready to host your functions:

$ exo firewall create fn-securitygroup
$ exo firewall add fn-securitygroup ssh --my-ip
$ exo firewall add fn-securitygroup -p tcp -P 8080-8080 -c 0.0.0.0/0
$ exo vm create fn-server -s fn-securitygroup

You can then ssh to the VM to install your remote Fn server:

$ exo ssh fn-server

The authenticity of host '185.19.30.175 (185.19.30.175)' can't be established.
ECDSA key fingerprint is SHA256:uaCKRYeX4cvim+Gr8StdPvIQ7eQgPuOKdnj5WI3gI9Q.
Are you sure you want to continue connecting (yes/no)? yes 
Warning: Permanently added '185.19.30.175' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-20-generic x86_64)

Then install Docker and the Fn CLI as you did locally, and start the server:

$ sudo apt-get update
$ sudo apt-get install docker.io
$ sudo systemctl start docker
$ curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
$ sudo fn start

...

    ______
   / ____/___
  / /_  / __ \
 / __/ / / / /
/_/   /_/ /_/
    v0.3.643

Fn is ready to receive your functions! To push your functions to the remote server, we'll use the deploy command from our local machine, this time omitting the --local flag.

Moreover, Fn needs to know the location of the Fn server and of the Docker registry. These parameters can be set via the FN_API_URL and FN_REGISTRY environment variables respectively, but Fn offers a more convenient way to create and manage deployment configurations.
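
For the record, the environment variable route would look like this (using the server IP and Docker Hub user from this example):

$ export FN_API_URL=http://185.19.30.175:8080   # where the remote Fn server listens
$ export FN_REGISTRY=nfrankel                   # Docker Hub account to push function images to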

In Fn’s parlance, a deployment configuration is called a context. Use the following command to create a context:

$ fn create context exoscale --provider default --api-url http://185.19.30.175:8080 --registry nfrankel

You can list the available contexts by typing:

$ fn list contexts

CURRENT NAME      PROVIDER      API URL                      REGISTRY
    default       default       http://localhost:8080/     
    exoscale      default       http://185.19.30.175:8080    nfrankel

And switch to the newly created context with:

$ fn use context exoscale

Now using context: exoscale

At this point, deploying Fn functions will upload Docker images to the indicated Docker Hub profile (nfrankel in this case) and notify the remote server (in this example http://185.19.30.175:8080) of the location and version of the latest image containing your function.

$ fn deploy --app fnexo .   # executed in the local machine hellonode directory

Deploying function at: /.
Deploying hellonode to app: fnexo
Bumped to version 0.0.5
Building image nfrankel/hellonode:0.0.5 .

...

And finally:

$ curl http://185.19.30.175:8080/t/fnexo/hellonode

{"message":"Hello World"}

The fn serverless function lifecycle

Advantages of a self-hosted Serverless solution

Serverless is a convenient solution when you need to quickly implement small independent parts of logic that may interact with more complex applications or microservices.

However, it often comes with a hidden cost of lock-in to the specific vendor you choose, and depending on your specific use-case and volume, it can result in higher costs and reduced flexibility down the line.

Multi-cloud and hybrid-cloud architectures may also suffer from this situation, or you may simply find yourself in a position where you would love to have a serverless experience for your internal needs, but corporate policies rule such a solution out.

The Fn framework is easy to approach and can give you a very similar FaaS experience with very little overhead. With Fn you avoid any vendor lock-in: you can install it on-premises or on the cloud provider of your choice, and you are free to use whichever language suits you.

We have barely scratched the surface of Fn: creating your own runtime is fairly straightforward, and the overall architecture can be pushed much further using the Fn Load Balancer, or by placing Fn behind a web server proxy to secure your Fn server.