Kubernetes: Part 1
Welp, I finally took the Kubernetes (K8s) dive. This post is basically going to be my “lab notes” from the experience, so don’t expect too much expository prose.
The most useful links will probably be this kubectl cheatsheet, followed by this DigitalOcean blog post for setting up an ingress. Also, this YouTube video was a great starting point for getting a sense of Kubernetes from a high level.
But why though?
I was working on another blog post, and while editing it I read over something I’d written that amounted to “I’m going to use this non-standard approach because K8s has a lot of overhead.” I decided to find out whether that was actually the case, and the best way to do that was to jump in head first.
Before I’d made this decision, I did a bit of a “build vs buy” assessment. The main reason I hadn’t done this sooner is that provisioning and maintaining your own K8s cluster is complicated and expensive. In the spirit of getting things done, I decided it would be worthwhile to learn Kubernetes if I could skip all the painful DevOps stuff and get right to the “K8s for developers” side of things, so I just spun up a managed Kubernetes cluster on DigitalOcean. It’s like $25/mo, and I’m using it to host services that would otherwise be running on Droplets, so in dollars it’s basically a wash. Plus I get all the extraneous benefits of hands-on learning. If you’re following along, make sure you have a cluster and you’ve done a few smoke tests to make sure you’re able to interact with it with kubectl.
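If you need a couple of quick smoke tests, something like this should do it (a minimal sketch assuming you manage the cluster with doctl; the cluster name is a placeholder):
# pull the cluster's kubeconfig down locally (DigitalOcean-specific)
doctl kubernetes cluster kubeconfig save your-cluster-name
# can we reach the API server and see the worker nodes?
kubectl cluster-info
kubectl get nodes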
What’s the goal?
I want to be able to deploy a new application with as few commands as possible. I’d gotten pretty close with Coolify, but it doesn’t have much carryover to a professional environment. Kubernetes is an industry-standard tool, and with it I can write a couple of config files, run a couple of kubectl commands, and have my app up and running.
The whole impetus for this exercise is that I wanted to migrate my Temporal service to K8s, so that’s what I’ll be structuring this post around. By the end of this post, my goal is to be able to deploy a Temporal Server and Workers in less than 5 minutes.
How to Deploy a New Application?
This section is going to be a recapitulation of the DigitalOcean post here.
To deploy a new application you need a Service and a Deployment. If you want it to be accessible from the outside (i.e., the Internet), you also need an Ingress Controller and an Ingress.
Here’s the YAML for deploying the Service and Deployment (often stored in the same app.yaml file):
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.10
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
It should be pretty obvious how you control things like labels, the image the container runs, how ports are mapped, and how to set environment variables in the container. You can deploy it to your cluster with:
kubectl create -f hello-kubernetes-first.yaml
You can also use apply instead of create. Verify it’s running with:
kubectl get service hello-kubernetes-first
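If you want to poke a bit further, these standard kubectl commands confirm the Deployment and its Pods came up (the names match the YAML above):
# the Deployment should report 3/3 replicas ready
kubectl get deployment hello-kubernetes-first
# the Pods carry the app=hello-kubernetes-first label from the template
kubectl get pods -l app=hello-kubernetes-first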
Now, before you can set up the Ingress, you need to set up the Ingress Controller. This part uses Helm. You can run the Nginx Ingress Controller by first adding the repo:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
…and then updating your local Helm repos:
helm repo update
…and finally deploy the Helm chart:
helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true
This will take a while because it’s going to create a DigitalOcean Load Balancer for you (which is roughly $15/mo). The upside is that this is probably the only load balancer you need for all the services you’re going to be hosting in your cluster. Once that’s up and running, you can create DNS A records to point all your (sub)domains at the load balancer’s IP.
Use this command to watch the load balancer come up:
kubectl get services -o wide -w nginx-ingress-ingress-nginx-controller
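Once the EXTERNAL-IP column fills in, you can pull just the IP for your DNS records with a jsonpath query (the service name is the one created by the Helm release above):
# prints the load balancer's public IP once DigitalOcean has provisioned it
kubectl get svc nginx-ingress-ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'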
Now you can create the Ingress itself in ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "hw1.your_domain_name"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
And deploy it with:
kubectl apply -f ingress.yaml
If you try to access your endpoints (browser, cURL, etc.), you should be served by your app.
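For example, with cURL (substitute your real domain for the placeholder):
# should return the hello-kubernetes landing page with MESSAGE interpolated
curl http://hw1.your_domain_name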
Now I’m going to cruise through adding TLS. First you need Cert-Manager:
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.10.1 --set installCRDs=true
You should know immediately if those were all successful. Now you need to create an Issuer that issues TLS certificates in issuer.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Email address used for ACME registration
    email: your_email_address
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Name of a secret used to store the ACME account private key
      name: name-to-store-secret-under
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
Apply it with:
kubectl apply -f issuer.yaml
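Before moving on, you can check that the issuer registered with Let’s Encrypt; cert-manager adds a READY column for this, which should flip to True within a few seconds:
kubectl get clusterissuer letsencrypt-prod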
Update your ingress.yaml to the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - hw1.your_domain_name
    secretName: hello-kubernetes-tls
  rules:
  - host: "hw1.your_domain_name"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
And re-apply it:
kubectl apply -f ingress.yaml
Track progress with:
kubectl describe certificate hello-kubernetes-tls
You should see something like the following:
Events:
  Type    Reason     Age    From                                        Message
  ----    ------     ----   ----                                        -------
  Normal  Issuing    2m34s  cert-manager-certificates-trigger           Issuing certificate as Secret does not exist
  Normal  Generated  2m34s  cert-manager-certificates-key-manager       Stored new private key in temporary Secret resource "hello-kubernetes-tls-hxtql"
  Normal  Requested  2m34s  cert-manager-certificates-request-manager   Created new CertificateRequest resource "hello-kubernetes-tls-jnnwx"
  Normal  Issuing    2m7s   cert-manager-certificates-issuing           The certificate has been successfully issued
Verify it yourself by accessing one of your domains with your browser.
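Or, if you’d rather check from the terminal, a verbose cURL against the HTTPS endpoint should complete the TLS handshake with the Let’s Encrypt certificate (again, substitute your domain):
# -v prints the certificate chain during the handshake; -I fetches only headers
curl -vI https://hw1.your_domain_name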
What About Envs?
Kubernetes doesn’t have the same sort of .env templating support that, say, docker compose has, but you can achieve something similar with a shell script or Makefile. I’m stealing this from the Tailscale setup scripts, but the general idea is that you template your envs in your Kubernetes resource config in deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: {{MY_ENV_1}}
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: {{MY_ENV_2}}
Then, you can add a .env file and a Makefile alongside it:
# pull in values from .env and export them to the recipe's shell
include .env
export

MY_ENV_1 ?= foo
MY_ENV_2 ?= bar

# recipe lines must be indented with a tab
myapp:
	@sed -e "s;{{MY_ENV_1}};$(MY_ENV_1);g" deployment.yaml | sed -e "s;{{MY_ENV_2}};$(MY_ENV_2);g"
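For reference, the matching .env might look something like this (the values here are just illustrative):
# .env — picked up by the include/export lines in the Makefile above
MY_ENV_1=paulbouwer/hello-kubernetes:1.10
MY_ENV_2=Hello from the templated deployment!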
Then, just do make myapp | kubectl apply -f -. This will dynamically populate your envs into the configuration and then pass that to kubectl apply.
Let’s Get Serious (Temporal Deployment)
Ok, so now what if I want to deploy Temporal? I grabbed a starter Helm chart from here. Following the instructions there for a minimal deployment (i.e., no Prometheus, Grafana, or Elasticsearch):
helm dependencies update
helm install \
--set server.replicaCount=1 \
--set cassandra.config.cluster_size=1 \
--set prometheus.enabled=false \
--set grafana.enabled=false \
--set elasticsearch.enabled=false \
temporaltest . --timeout 15m
Verify with:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
temporaltest-admintools-6cdf56b869-xdxz2 1/1 Running 0 11m
temporaltest-cassandra-0 1/1 Running 0 11m
temporaltest-frontend-5d5b6d9c59-v9g5j 1/1 Running 2 11m
temporaltest-history-64b9ddbc4b-bwk6j 1/1 Running 2 11m
temporaltest-matching-c8887ddc4-jnzg2 1/1 Running 2 11m
temporaltest-metrics-server-7fbbf65cff-rp2ks 1/1 Running 0 11m
temporaltest-web-77f68bff76-ndkzf 1/1 Running 0 11m
temporaltest-worker-7c9d68f4cf-8tzfw 1/1 Running 2 11m
To run the whole shebang (you’ll need a 3+ node cluster):
helm install temporaltest . --timeout 900s
And uhh…that’s it? We’re done?? Well, sort of, but in reality there’s still a little more to it.
You can shell into the admin tools container with:
kubectl exec -it services/temporaltest-admintools -- /bin/bash
And now you can run the Temporal CLI, tctl. If you’re following along, you’ll need to create the default namespace manually with:
tctl --namespace default namespace register
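You can sanity-check that it registered with the standard describe subcommand:
tctl --namespace default namespace describe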
You can also expose your instance’s front end port on your local machine:
kubectl port-forward services/temporaltest-frontend-headless 7233:7233
Similarly to how you accessed the Temporal front end via Kubernetes port forwarding, you can access your Temporal instance’s web user interface:
kubectl port-forward services/temporaltest-web 8080:8080
To uninstall:
helm uninstall temporaltest
Here is an example of a helm upgrade command that can be used to upgrade a cluster (showing how to set various configuration variables):
helm \
upgrade \
temporaltest \
-f values/values.cassandra.yaml \
--set elasticsearch.enabled=true \
--set server.replicaCount=8 \
--set server.config.persistence.default.cassandra.hosts='{c1.example.com,c2.example.com,c3.example.com}' \
--set server.config.persistence.default.cassandra.user=cassandra-user \
--set server.config.persistence.default.cassandra.password=cassandra-password \
--set server.config.persistence.default.cassandra.tls.caData=... \
--set server.config.persistence.default.cassandra.tls.enabled=true \
--set server.config.persistence.default.cassandra.replicationFactor=3 \
--set server.config.persistence.default.cassandra.keyspace=temporal \
--set server.config.persistence.visibility.cassandra.hosts='{c1.example.com,c2.example.com,c3.example.com}' \
--set server.config.persistence.visibility.cassandra.user=cassandra-user \
--set server.config.persistence.visibility.cassandra.password=cassandra-password \
--set server.config.persistence.visibility.cassandra.tls.caData=... \
--set server.config.persistence.visibility.cassandra.tls.enabled=true \
--set server.config.persistence.visibility.cassandra.replicationFactor=3 \
--set server.config.persistence.visibility.cassandra.keyspace=temporal_visibility \
--set server.image.tag=1.2.1 \
--set server.image.repository=temporalio/server \
--set admintools.image.tag=1.2.1 \
--set admintools.image.repository=temporalio/admin-tools \
--set web.image.tag=1.1.1 \
--set web.image.repository=temporalio/web \
. \
--wait \
--timeout 15m
You’ll want to make your Temporal Server reachable from wherever you’re running your Workers (remember, Workers can run anywhere). However, you don’t want to expose your Temporal server to the Internet. Tailscale to the rescue (from the Tailscale docs):
Tailscale can act as a proxy
If you’re running a workload on a Kubernetes cluster that needs to be shared with others in your network, you can use Tailscale to make that workload accessible and still use MagicDNS to access your workloads. Check out our repo for a sample proxy container.
This means that you can easily share and connect to a workload running in a Kubernetes cluster with other parts of your network, for example to have your production workload contact your product database over Tailscale or expose an internal-only service over Tailscale and not the public internet.
You can follow the instructions here, but here’s the gist:
Running a Tailscale proxy allows you to provide inbound connectivity to a Kubernetes Service. NB: The following commands assume you’re working from docs/k8s in the Tailscale repo (see its README.md).
- Create a Kubernetes Secret for automated authentication. Generate an auth key in the Tailscale Admin Console, then apply the following secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth
stringData:
  TS_AUTHKEY: tskey-auth...
kubectl apply -f ./secret.yaml
- Provide the ClusterIP of the service you want to reach by either:
Creating a new deployment
kubectl create deployment nginx --image nginx
kubectl expose deployment nginx --port 80
export TS_DEST_IP="$(kubectl get svc nginx -o=jsonpath='{.spec.clusterIP}')"
Using an existing service
export TS_DEST_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
- Deploy the proxy pod
make proxy | kubectl apply -f-
- Check that you can connect to the service using Tailscale MagicDNS:
curl http://proxy
Depending on the IP you pointed TS_DEST_IP at, you might get differing results. If you used an nginx server, you might see a standard welcome message. If you pointed the proxy at a different service (e.g., your Temporal web UI listening on port 8080), then you’ll want to navigate to http://proxy:8080 (you can set up your service however you want, though). Now any device in your Tailnet can access that Kubernetes service by name. Amazing.
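As a concrete example of the “existing service” path above, pointing the proxy at the Temporal web UI from earlier would look something like this (the service name comes from the temporaltest Helm release; confirm it with kubectl get services):
# grab the web UI's ClusterIP and deploy the Tailscale proxy in front of it
export TS_DEST_IP="$(kubectl get svc temporaltest-web -o=jsonpath='{.spec.clusterIP}')"
make proxy | kubectl apply -f-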
Let’s hook up the Temporal Workers to the Temporal Server. Remember, the Temporal Server itself is more of a scheduler, and the Workers run…anywhere…but they can also simply run in your cluster. If you decide to run your Workers as a Deployment in your cluster, you can skip all the Tailscale stuff above because Kubernetes makes the Temporal frontend available by name for you already (though it’s a useful pattern if you have services outside your cluster that need to connect to your Temporal Server, so I’d recommend setting up the proxy just in case).
You can create a Deployment for the Workers and adjust the number of replicas to scale it. A Service isn’t necessary since the Workers just run in the background. If your Worker image is hosted in a private registry, take a look at the Kubernetes documentation, which has explicit instructions for wiring up the Secrets for that (you can see the imagePullSecrets field below hinting at this). Your workers.yaml should look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
    component: workers
  name: myapp-workers
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      component: workers
  template:
    metadata:
      labels:
        app: myapp
        component: workers
    spec:
      containers:
      - image: {{WORKER_IMG_TAG}}
        imagePullPolicy: Always
        name: myapp-workers
        env:
        - name: TEMPORAL_HOSTPORT
          value: "{{TEMPORAL_HOSTPORT}}"
      restartPolicy: Always
      imagePullSecrets:
      - name: regcred
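And here’s what the corresponding .env values might look like if the Workers run in the same cluster and namespace as the temporaltest release (both values are illustrative: the image path is made up, and you should confirm the exact frontend service name with kubectl get services):
# hypothetical .env for the workers Makefile
WORKER_IMG_TAG=registry.example.com/myapp-workers:latest
TEMPORAL_HOSTPORT=temporaltest-frontend:7233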
You can use the same make workers | kubectl apply -f - trick I explained above for populating envs, and you should be off to the races. Use kubectl to check the state of your deployment, verify that the log messages look good, etc. The simplest thing to check is that your networking is sorted out and your Worker process successfully connects to your Temporal Server.
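A few standard kubectl commands cover most of that checking (the names match the workers.yaml above):
# wait for the rollout to finish
kubectl rollout status deployment/myapp-workers
# list the Worker pods by label
kubectl get pods -l app=myapp,component=workers
# tail recent logs to confirm the Workers connected to the Temporal frontend
kubectl logs deployment/myapp-workers --tail=50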
Now that your Workers are ready to execute Workflows, you can add your “business” deployments/services that interface with your Temporal Workflows. For now, I’ve met my goal of being able to deploy a Temporal Server and Workers in just a couple of minutes with a handful of kubectl commands. I’m happy with how this is working, so I’m going to sign off here. Stay tuned for Part 2, where I’ll talk about Temporal visibility and scaling (spoiler: it’s a review of this post).