The beer hipsters at Terrax Micro-Brewery Inc™ are always looking for ways to expand their beer production imperium. Recently they struck a deal with cloud giant Amazon to host the Kubernetes cluster for their beer API. Details of that setup using Amazon EKS (Elastic Kubernetes Service) can be found in the previous blog post here.

This blog post is a follow-up to the previous one in the Docker/Kubernetes series. In this post we're gonna spice things up a little by introducing serverless to our Kubernetes cluster. In their search for ways to improve supply management, and thereby cut expenses, the Terrax guys turned to AWS Fargate. Fargate is a serverless extension to EKS that – we're quoting Amazon here – "provides on-demand, right-sized compute capacity for containers".

In other words, with Fargate you don’t need to define the compute capacity for the managed nodes in the Kubernetes cluster upfront. Instead, when your cluster is starting to evolve and more and more containers are added, Fargate will take care of scaling the computing power for you. This relieves you from the burden of infrastructure management and furthermore ensures that you’re only paying for the compute power you actually need.

Now, how does this work? It’s actually quite simple and I really like the way Amazon has implemented this, i.e. in the style and spirit of Kubernetes itself. The only thing you have to add to your EKS cluster is a so-called Fargate profile.

This profile consists of a namespace and optional labels that function like the selector in a Kubernetes Service. When a pod is created that matches the namespace (and optional labels) of a profile, Fargate will provision the underlying compute power to host it on. Pods that don't match any Fargate profile will stick to running on a provisioned node of a (managed) node group, i.e., as we've already seen in a previous blog post, just an EC2 instance.
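To illustrate the matching, here's a minimal sketch (the profile, namespace, and pod names here are made up for illustration and are not part of our actual setup): a Fargate profile selecting on a namespace plus a label, and a pod that would match it.

```yaml
# Hypothetical Fargate profile (eksctl config fragment):
fargateProfiles:
  - name: fp-example
    selectors:
      - namespace: shop
        labels:
          tier: frontend
---
# A pod in the "shop" namespace carrying the "tier: frontend" label
# matches the selector above, so Fargate schedules it serverlessly.
# A pod missing the label (or in another namespace) would fall back
# to a regular node group:
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  namespace: shop
  labels:
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx
```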

So much for theory – let's see it in action! Remember, in our last blog post we created a managed node group to run our pods on, so in this post we'll try to replace it with a Fargate profile.

Provisioning the cluster

Most of the information needed to get this blog's setup up and running can be found in the Amazon AWS Fargate for EKS user guide, found here. The guide also gives an overview of the specific regions that support Fargate on EKS, so make sure that you're using one of those. In this post we'll use the eu-west-1 region.

Fargate pod execution role

Make sure you follow the step "Create a Fargate pod execution role" described in the "Getting started with Fargate" section of the user guide. The Fargate pod execution role is needed so Fargate can do its job. The role created (AmazonEKSFargatePodExecutionRole as suggested by the user guide) will have the following policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage"
            ],
            "Resource": "*"
        }
    ]
}

Stateful services

You probably remember that the deployment setup we've used so far in this series consists of a stateful database layer (based on a MySQL image) and a stateless service layer (a Spring Boot API).

Let's talk a bit about stateful services first, because unfortunately, as I learned the hard way, Fargate and stateful services are not a match made in heaven. In fact, stateful services are NOT supported on Fargate. Truth be told, the current setup is probably not gonna win you any architectural prizes anyway. It doesn't make a lot of sense to run the database as a stateful pod; you're better off using one of Amazon's RDS offerings. But connecting to Amazon RDS from within an EKS cluster is not trivial either and is something to tackle in a future blog post.

Having said all this, I’m still gonna stick with the current deployments as they will show you that pods running on Fargate and pods running on provisioned node groups can still play very nicely together.

We'll create this cluster the same way as we did in our previous post, i.e. with the help of an eksctl config file. This one will contain a node group for the database layer pods and two Fargate profiles: one for the system and default pods, and one for our service layer pods:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: terrax-fargate
  region: eu-west-1
  version: "1.15"

nodeGroups:
  - name: node-group
    instanceType: t2.medium
    desiredCapacity: 1

fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-demo-service
    selectors:
      # All workloads in the "tb-demo" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: tb-demo
        labels:
          app: service

As you can see, we defined the fp-default profile for pods in the default and kube-system namespaces, and the fp-demo-service profile for the pods in our tb-demo namespace that carry the app=service label (so only our stateless pods).

Creating the cluster

Alright, let's create the cluster and see if our setup works! Fire up a bash terminal and enter the following command using the eksctl CLI:

eksctl create cluster -f eks-terrax-fargate.yaml

One good pilsner's worth of waiting later, you will have created your first Fargate-infused EKS cluster. The output in your terminal should resemble the picture below:

As you can see in the logging, a pretty impressive CloudFormation stack was again created in the process; you can check it out in the AWS console:

You can of course also open the EKS Clusters page to verify the status of your new cluster.

Let’s check if kubectl is connected and take a peek at the nodes that this setup has created:

kubectl get nodes

As you can see, the bottom node is the one we actually provisioned, while the top two are nodes Fargate created to run the system pods on. If at any time while following this blog you check out the EC2 page, you won't find any instances there for the Fargate-created nodes. They are, in a sense, truly serverless:

As you can see, there is only one EC2 instance created, i.e. the one for the node we provisioned for our stateful layer.

Deploy our services

Don’t forget to create the tb-demo namespace first, since both database and service deployment depend on it being there:

kubectl create namespace tb-demo
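If you prefer to keep everything declarative, the namespace can also be created from a manifest instead of with kubectl create:

```yaml
# Equivalent declarative alternative to "kubectl create namespace tb-demo";
# apply it with "kubectl apply -f namespace.yml" (the filename is up to you):
apiVersion: v1
kind: Namespace
metadata:
  name: tb-demo
```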

Stateful database layer

Now create the stateful database layer again by issuing the following command:

kubectl apply -f database/database.yml

On my first attempt I reused the deployment file of my previous blog post, but this time the pod as well as the persistent volume claim stayed stuck in a Pending state.

I checked the state of the persistent volume claim:

kubectl describe pvc mysql-data-disk --namespace=tb-demo

And I noticed this status line: "waiting for first consumer to be created before binding". Apparently Amazon recently changed the default Storage Class that is created along with a new EKS cluster. If you check it out – kubectl describe sc gp2 – you'll notice that the VolumeBindingMode is set to WaitForFirstConsumer. It used to be Immediate.
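For reference, this is roughly what the default gp2 StorageClass on a freshly created EKS cluster looks like (the exact fields may differ slightly on your cluster, so double-check with kubectl describe sc gp2):

```yaml
# Default StorageClass created with a new EKS cluster (illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
# This is the setting that bites us: binding waits for a consumer.
volumeBindingMode: WaitForFirstConsumer
```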

You're basically in a textbook catch-22 situation: the pod is waiting for the persistent volume to be claimed, and the persistent volume can only be claimed after its first consumer, i.e. the database pod, has been created.

So what to do? Well, the answer is surprisingly simple: don't depend on the default storage class, but create your own!

So here is the revised deployment configuration of the database layer (note that for brevity only the persistent volume claim and the new Storage Class section are shown):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tb-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
  namespace: tb-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 25Mi
  storageClassName: tb-gp2

The storage class tb-gp2 is based on the old default storage class called gp2, but with the volume binding mode set to Immediate. And the persistent volume claim now references it.

When you apply this deployment plan, you’ll notice that the database pod eventually will enter the Running state, as expected.

Issuing a kubectl describe shows a pod with a status of Running, hosted on the node that we (not Fargate) provisioned. This is expected, since its app=database label doesn't match the selector of the fp-demo-service Fargate profile, so Fargate won't pick it up.

Stateless service layer

Alright, now that our database pod is running, let's finally see if our service pods will run on Fargate-provisioned nodes and, while we're at it, check if our APIs run properly there.

Again we use the same deployment configuration as in our previous blog post. Let's apply it:

kubectl apply -f service/service.yml
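The deployment file itself isn't repeated here; the part that matters for Fargate is the pod template's namespace and labels. A minimal sketch (names assumed from the previous posts in this series, and the image name is a placeholder) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-deployment
  namespace: tb-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service
  template:
    metadata:
      labels:
        # This label, combined with the tb-demo namespace, matches the
        # fp-demo-service Fargate profile, so these pods land on
        # Fargate-provisioned nodes:
        app: service
    spec:
      containers:
        - name: springboot-service
          image: <your-api-image>
```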

This time you should experience no problems of the sort we had when deploying our database layer. In the end you will have two running service pods. Furthermore, if you check the nodes – kubectl get nodes – you'll see that two more serverless nodes have been provisioned by Fargate:

If you check one of the service pods – kubectl describe pod – you can clearly see that it is attached to one of those new Fargate nodes:

Running the app

If you want to check whether the APIs are functioning properly at this stage, first get the external IP address of the springboot-service load balancer – kubectl get service:

And then fire up the swagger-ui.html page on that IP:

At this point feel free to play around with the APIs again and you'll see that data is persisted as expected.


Let’s scale the serverless layer a bit to see more of the Fargate magic in action.

Scale up

First scale up to four service pods:

kubectl scale deployment springboot-deployment --replicas=4 --namespace=tb-demo

Checking the running service pods – kubectl get pods -, you eventually will see four of them:

And checking the number of nodes – kubectl get nodes -, you’ll see that two more serverless nodes have been created by Fargate:

Scale down

Now let’s scale down to one pod:

kubectl scale deployment springboot-deployment --replicas=1 --namespace=tb-demo

As you can see, three pods are terminated and one pod is left:

And three serverless Fargate nodes are recycled as well:


Cleanup

To clean up the stuff we created in this blog post, just follow these basic steps again and you're fine:

kubectl delete -f service/service.yml
kubectl delete -f database/database.yml
eksctl delete cluster -f eks-terrax-fargate.yaml


In this blog post we added some serverless capacity to our EKS Kubernetes cluster using Fargate. We saw how easy it is to add Fargate profiles to our cluster and learned that, with the help of Kubernetes-like selectors (based on a namespace and optional labels), you can direct which pods are created on Fargate nodes and which pods are created on regular provisioned nodes. Remember that this only applies to stateless pods: Fargate does not support stateful pods.

To conclude, Fargate is a great addition to EKS imho and, moreover, one that's really easy to use. And that's it for now. In the next post we'll take a look at ways to log and monitor your EKS cluster. Till then, keep calm and have a beer!


Code and previous blog posts

AWS Fargate user guide