@MaximeHeckel

How I got started with Kubernetes on GKE

June 19, 2018 / 10 min read

Last Updated: June 19, 2018

Disclaimer: I work at Docker, but I wasn’t asked to write or publish this post. Here I’m simply sharing how I moved my own non-work-related micro-services (e.g. my portfolio and small projects) from a pure Docker-based platform to Google Kubernetes Engine.

My personal projects needed a new place to live, so I decided to take this as an opportunity to learn more about Kubernetes while migrating them to Google Kubernetes Engine. After a few weeks of investigation, I ended up with a pretty good setup that allows me to deploy, publish, and scale my portfolio, my website, and any other project I want to host, all of it secured with SSL certificates from Let’s Encrypt. In this post, I want to share my step-by-step guide so you too can learn about Kubernetes and have an easy and efficient way to deploy your projects.

Note: This post assumes you have basic knowledge about Docker and containers, as well as Docker for Mac or Docker for Windows installed on your machine with the Kubernetes option turned on.

Setting up gcloud and GKE

For this part, we’ll focus on installing the gcloud tools and setting up your first GKE cluster. You can go through this guide to set up the gcloud tools on your local CLI. After creating an account on GKE, the first step is to create a cluster. To do so, we can simply go through the GKE GUI, hit the “Create Cluster” button, and go through the wizard. Now that we have a cluster, let’s get its credentials so we can point the Kubernetes context of our local CLI at it. To do that we can run:

gcloud command to get the credentials of an existing cluster

gcloud container clusters get-credentials CLUSTER --zone ZONE --project PROJECT

where CLUSTER is the name of the cluster, ZONE the zone we picked while going through the wizard, and PROJECT the ID of our project.
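Note: the cluster could also have been created from the CLI with the gcloud container clusters create command instead of the GUI. As an illustration, here’s what a full get-credentials invocation might look like; the cluster, zone, and project names below are hypothetical placeholders:

Example gcloud get-credentials invocation with placeholder values

gcloud container clusters get-credentials my-cluster --zone us-east1-b --project my-project-42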

After this, in our Docker for Mac menu, we should be able to see the name of our cluster in the context list under “Kubernetes”:

Kubernetes contexts list menu in Docker for Mac

If we click on it, all the Kubernetes commands we execute from then on will run against our GKE cluster. For example, if we try running kubectl get pods, we should see that we have no resources on this cluster (yet).
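If you’d rather stay in the terminal, the same context switch can be done with kubectl. Here’s a minimal sketch, assuming gcloud’s default context naming scheme (gke_PROJECT_ZONE_CLUSTER) and the same placeholder names as above:

Switching Kubernetes contexts from the CLI

# List all the contexts available on this machine
kubectl config get-contexts
# Point kubectl at the GKE cluster (the context name below is an assumption based on gcloud's naming scheme)
kubectl config use-context gke_my-project-42_us-east1-b_my-cluster
# The cluster should report no resources for now
kubectl get pods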

Deploying and exposing our first Kubernetes workloads

Next, we’ll deploy our first workloads on our GKE cluster. If you’re new to Kubernetes, this is the moment when things get a bit tricky, but I’ll do my best to get you up to speed with the required vocabulary. Here are the different types of workloads we’ll deploy on our cluster:

  • Pod: A group of running containers. It’s the smallest and simplest Kubernetes object we’ll work with.
  • Deployment: A Kubernetes object that manages replicas of Pods.
  • Service: A Kubernetes object that describes ports, load balancers, and how to access applications.
  • Ingress: A Kubernetes object that manages external access to the services in a cluster via HTTP.

If you still don’t feel confident enough, I’d recommend checking out this great tutorial to get started with the basics: https://kubernetes.io/docs/tutorials/kubernetes-basics/.

Kubernetes workloads are usually described with YAML files, which can be organized pretty much however we want. We can even define multiple types of Kubernetes workloads in a single YAML file.
As an example, here’s a YAML file containing the definition of the first workloads we’ll deploy on our Kubernetes cluster:

Kubernetes deployment

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: website
spec:
  selector:
    matchLabels:
      app: website
  replicas: 1 # For now we declare only one replica
  template: # We define pods within this field in our deployment
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: website
          image: nginx:latest
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80 # The nginx container exposes port 80

---

apiVersion: v1
kind: Service
metadata:
  name: website
  labels:
    run: website
spec:
  type: NodePort
  ports:
    - port: 8000 # The port on which we want to publish the website deployment
      targetPort: 80 # The port exposed by your container
      protocol: TCP
  selector:
    app: website

Note: The first time I deployed this workload, I was very confused by the service “type” field; then I read this amazing article which made it all clear to me: https://medium.com/@pczarkowski/kubernetes-services-exposed-86d45c994521

Let’s save the above file on our machine and deploy these workloads by running kubectl apply -f PATH/FILENAME.yml. The deployment shouldn’t take more than a few seconds, and then we can verify that all our workloads are actually deployed. Run kubectl get TYPE, where TYPE is any of the Kubernetes types we defined above, e.g. kubectl get pods, to list the Kubernetes workloads of a given type. If you want to know more about one of them, you can run kubectl describe TYPE NAME, e.g. kubectl describe service website.
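To make this concrete, here’s the full sequence we’d run, assuming the YAML above was saved as website.yml (the filename is just an example):

Deploying and inspecting the workloads with kubectl

# Deploy the Deployment and the Service defined in the YAML file
kubectl apply -f website.yml
# List the Pods created by the Deployment
kubectl get pods
# Show detailed information about the website Service
kubectl describe service website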

By listing the services we should end up with an output similar to this:

List of Kubernetes services
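In text form, the output of kubectl get services should look roughly like the following; the cluster IP and age are illustrative, and the node port will almost certainly differ on your cluster:

NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
website   NodePort   10.15.249.67   <none>        8000:31508/TCP   1m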

We can see that port 8000 of our service is mapped to port 31508 on one of the nodes of our cluster. However, GKE nodes are not externally accessible by default, so our website service is not (yet) accessible from the Internet. This is where Ingresses come into the picture.

Setting up an Ingress

Here, we’ll create an Ingress to access our website service from the Internet. An Ingress workload basically contains a set of rules to route traffic to our service.
For example, we can paste the following in a file called ingress.yml:

Ingress YAML definition

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
spec:
  backend:
    serviceName: website
    servicePort: 8000

If we run kubectl apply -f ingress.yml, we create a rule to route all external HTTP traffic hitting our Ingress external IP to our website. If we wait a few minutes, we’ll see that running kubectl get ingress will output a list containing main-ingress with an external IP:

List of Kubernetes ingresses
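In text form, the output should look roughly like this (the address and age are illustrative):

NAME           HOSTS     ADDRESS         PORTS     AGE
main-ingress   *         35.190.100.62   80        10m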

Accessing the external IP from your browser should show you the main NGINX page! We just deployed, exposed and published our first Kubernetes workload!

But wait, there’s more: we can actually use this Ingress to do load balancing by adding more specific rules. Let’s say we want only traffic for our domain myawesomedomain.com to reach our website service; we can add a set of rules:

Ingress YAML definition with load balancing in mind

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
spec:
  rules:
    - host: myawesomedomain.com
      http:
        paths:
          - backend:
              serviceName: website
              servicePort: 8000

Now, if we save the content above in our ingress.yml file, run kubectl apply -f ingress.yml, and point our domain name myawesomedomain.com to the external IP of our Ingress, we’ll be able to access our website service through this domain.
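If you want to test the rule before (or without) updating DNS, you can override the Host header with curl. A quick sketch, where INGRESS_IP stands for the external IP we got from kubectl get ingress:

Testing the host rule with curl

# Hit the Ingress IP directly while pretending the request is for myawesomedomain.com
curl -H "Host: myawesomedomain.com" http://INGRESS_IP/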

Ingresses come in very handy when you have multiple services to host on the same cluster. The ingress.yml file I’m currently using for my personal projects looks something like this:

Ingress YAML definition with multiple services

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
spec:
  rules:
    - host: myawesomedomain.com
      http:
        paths:
          - backend:
              serviceName: website
              servicePort: 8000
    - host: test.myawesomedomain.com
      http:
        paths:
          - backend:
              serviceName: testwebsite
              servicePort: 8000
    - host: hello.myawesomedomain.com
      http:
        paths:
          - backend:
              serviceName: hello
              servicePort: 9000

Thanks to our Ingress, we now have an easy way to route traffic to specific services by simply declaring rules in a YAML file and deploying it on our cluster.

Getting Let’s Encrypt SSL certificates to work

Now that we have our Kubernetes services published, the next step is to get SSL certificates working for our services, i.e. being able to reach https://myawesomedomain.com, https://test.myawesomedomain.com, etc. On my previous micro-services host, I was running a homemade containerized version of HAProxy that would query my Let’s Encrypt certificates (they’re free!) and renew them all by itself. Pretty handy, since I didn’t want to bother manually renewing them every 90 days.

I had to look around for quite a bit and try several projects, such as the now deprecated kube-lego, before ending up with a solution that worked for me: kube-cert-manager. This project does exactly what I needed: “Automatically provision and manage TLS certificates in Kubernetes”.

As a first step, we’ll need to deploy an NGINX Ingress Controller for GKE. This Ingress Controller will consume any Ingress workload and route its incoming traffic. After cloning the repository, we’ll need to do the following:

  • Edit cluster-admin.yml to add our email address in the <YOUR-GCLOUD-USER> placeholder.
  • Run cd gke-nginx-ingress-controller && ./deploy.sh

We now have a service of type LoadBalancer with an external IP address, listening for all incoming traffic on port 80 (for HTTP traffic) and port 443 (for HTTPS traffic). It will use all of the Ingresses on our cluster to route traffic, including our main-ingress.
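We can verify this by listing the services again. The output might look roughly like the following, although the exact service name and ports depend on the deployment scripts in the repository (the values below are assumptions):

NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-controller   LoadBalancer   10.15.242.10   35.227.118.80   80:30012/TCP,443:30013/TCP   2m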

Then, we’ll need to deploy kube-cert-manager. Just like we did for the Ingress Controller, we’ll have to do some edits before deploying the project:

  • Create the kube-cert-manager-google secret (for this I just followed the README in the repository).
  • Edit kube-cert-manager-deployment.yml and fill in the different fields, such as your email and the DNS provider. The documentation about DNS providers is available here. In my case, my domain was managed by DNSimple, so I had to edit the deployment file like this:

kube-cert-manager-deployment.yml with environment variables set up

containers:
  - name: kube-cert-manager
    env:
      - name: DNSIMPLE_BASE_URL
        value: https://api.dnsimple.com
      - name: DNSIMPLE_OAUTH_TOKEN
        value: myrequestedoauthtoken

Finally, running cd gke-kube-cert-manager && ./deploy.sh will set up and deploy cert-manager on your cluster.

Now here’s the fun part: all this setup allows us to create a Certificate Kubernetes workload. Any certificate created on this cluster will be picked up and requested (and renewed) by the kube-cert-manager deployment. Let’s create one for myawesomedomain.com in a file called certificates.yml:

Certificate YAML definition

apiVersion: 'stable.k8s.psg.io/v1'
kind: 'Certificate'
metadata:
  name: website
  namespace: default
  labels:
    stable.k8s.psg.io/kcm.class: 'kube-cert-manager'
spec:
  domain: 'myawesomedomain.com'

Running kubectl apply -f certificates.yml will submit the request to Let’s Encrypt and create a TLS secret for our NGINX Ingress Controller to use. We can check the logs of the kube-cert-manager Pod with kubectl logs -f nameofyourcertmanagerpod during the request, and if everything goes well, we should see logs like this:

Logs from the cert-manager Pod

After a few minutes we should have, as shown in the logs above, a secret titled myawesomedomain.com on our cluster. Let’s run kubectl get secrets to ensure it’s there before continuing.
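In text form, the output of kubectl get secrets should contain an entry roughly like this (the type and data count are what I’d expect for a TLS secret, but the exact values may differ):

NAME                  TYPE                DATA      AGE
myawesomedomain.com   kubernetes.io/tls   2         2m

With the secret in place, we can now edit our ingress.yml file as such to include our certificate: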

Updated Ingress definition with certificate for a given domain passed as a secret

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
spec:
  rules:
    - host: myawesomedomain.com
      http:
        paths:
          - backend:
              serviceName: website
              servicePort: 8000
  tls:
    - secretName: myawesomedomain.com
      hosts:
        - myawesomedomain.com

Now, let’s run kubectl apply -f ingress.yml to update our main-ingress to support the secret we created earlier. Then, we just need to make sure myawesomedomain.com points to the external IP of our NGINX Ingress Controller, and after a while our website service will be accessible through HTTPS!
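To verify the whole chain end to end, a quick curl against the domain (once DNS has propagated) should show a successful TLS handshake; the -v flag prints the handshake details, including the certificate issuer:

curl -v https://myawesomedomain.com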

At this point, we have a pretty solid and simple way to add new services to our cluster, scale them, and route traffic to them thanks to what we learned in the previous sections, as well as to add certificates to their corresponding domains, requested and renewed automatically thanks to kube-cert-manager.


Liked this article? Share it with a friend on Bluesky or Twitter or support me to take on more ambitious projects to write about. Have a question, feedback or simply wish to contact me privately? Shoot me a DM and I'll do my best to get back to you.

Have a wonderful day.

– Maxime
