In the realm of DevOps, canary deployments have become a popular technique for releasing new application versions gradually, minimizing the risk of full-scale rollouts. Kubernetes has native support for flexible deployments, scaling, and traffic management. In this article I will show you the easiest way to build a canary deployment with Kubernetes.
- What is Canary Deployment?
- Why Use Canary Deployments?
- Setting Up Canary Deployment in Kubernetes - Easiest way
- Create Deployment - stable version of application
- Create Service for stable version of application
- Create Deployment - canary version
- Edit Service to split traffic
- Extend this scenario
- Conclusion
What is Canary Deployment?
Canary deployment is a strategy where a new version of an application is released to a small subset of users before a full-scale rollout. By deploying to a small group (like 10-20% of users), you can monitor performance, get feedback, and catch potential issues before they affect all users. If everything works well, the rollout continues; if not, the deployment can be rolled back quickly. You can read about rolling updates and rollbacks in my previous article.
This method lets you strike a balance between availability and deployment complexity.
Why Use Canary Deployments?
- User Feedback: many of us know the saying "the user is the best tester", and it's true. Even the best team of testers and the best-written automated tests won't catch all potential bugs in our applications. Users running the canary version of the application may find issues and provide feedback before the full rollout.
- Faster Rollbacks: If issues are detected, rolling back the deployment affects only a small percentage of users.
- Risk Reduction: With a smaller number of users impacted initially, there’s less risk if the new release has issues.
- Performance Monitoring: Canary deployments allow for real-time monitoring of the new version, making it easier to identify issues with changes.
Setting Up Canary Deployment in Kubernetes - Easiest way
Canary deployments can be built with sophisticated conditionals and mechanisms using service meshes, Ingress controllers, or load balancers, but in this article I will show you the easiest way: using only a Deployment and a Service.
Prerequisites
- Access to a Kubernetes cluster: You can use a cloud provider like GKE, EKS, AKS, etc., or a local setup, e.g. Minikube.
- Kubectl: Make sure kubectl is installed and configured to interact with your cluster.
Create Deployment - stable version of application
First, let's create a Kubernetes Deployment for the stable version of the application. This version will serve most of the traffic. In this example I use the nginx image, of course.
Create a file nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-stable
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
      version: stable
  template:
    metadata:
      labels:
        app: nginx
        version: stable
    spec:
      containers:
      - image: nginx:1.27.1
        name: nginx
Run:
kubectl apply -f nginx-deployment.yaml
Check status:
kubectl get deployment nginx-stable
You should see 4 ready replicas:
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-stable   4/4     4            4           24s
Create Service for stable version of application
To direct traffic to the deployment, we will use a single Kubernetes Service with a selector that targets the stable pods.
Create the service manifest service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
    version: stable
  type: NodePort
Apply it:
kubectl apply -f service.yaml
Check service status:
kubectl get svc myapp
You should see:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp NodePort 10.103.52.103 <none> 80:32259/TCP 7m32s
You should be able to access the service using the NodeIP:NodePort address (here the NodePort is 32259).
⚠️ If you use Minikube, you must run a service tunnel:
minikube service myapp --url
In response you get the URL of the service:
http://192.168.49.2:32259
Run curl against a path that doesn't exist to read the nginx version from the 404 page:
curl http://192.168.49.2:32259/foo
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
Create Deployment - canary version
Next, create a similar Deployment for the canary version. This Deployment will have fewer replicas than the stable version (in this example only one) to ensure that only a small share of users is routed to it.
Here is the YAML file nginx-canary-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary  # Set different name
spec:
  replicas: 1  # We use only one replica for the canary
  selector:
    matchLabels:
      app: nginx
      version: canary  # Canary label
  template:
    metadata:
      labels:
        app: nginx
        version: canary  # Canary label
    spec:
      containers:
      - image: nginx:1.27.2  # Canary version of the image
        name: nginx
Run:
kubectl apply -f nginx-canary-deployment.yaml
kubectl get deployments -o wide
As you can see, you have 4 replicas of the stable application and 1 replica of the canary:
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-canary 1/1 1 1 43s nginx nginx:1.27.2 app=nginx,version=canary
nginx-stable 4/4 4 4 94s nginx nginx:1.27.1 app=nginx,version=stable
Edit Service to split traffic
It's time to configure our Service to split traffic between the two Deployments (canary and stable). Currently your Service sends traffic only to the 4 stable pods. Let's check:
kubectl describe svc myapp | grep Endpoints
Endpoints: 10.244.0.108:80,10.244.0.110:80,10.244.0.109:80 + 1 more...
Now you need to reconfigure only one thing: the selector in your Service. Comment out the version: stable line, keep only the app: nginx selector, and apply the Service changes.
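The selector section of service.yaml should then look like this:

```yaml
  selector:
    app: nginx
    # version: stable  # commented out, so the Service now matches both versions
```

With the version label gone, the selector matches every pod labeled app: nginx, regardless of whether it carries version: stable or version: canary.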
Check endpoints again to see changes:
Endpoints: 10.244.0.108:80,10.244.0.110:80,10.244.0.109:80 + 2 more...
As you can see, your Service now directs traffic to one more endpoint: the canary pod.
Let's check it using curl. Send a few requests and watch the version: from time to time you will see version 1.27.2 instead of 1.27.1. That's live proof that our canary release works correctly!
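To see the split more clearly, you can tally which nginx versions answer your requests by parsing the Server response headers. The sketch below uses simulated headers so it runs anywhere; against a real cluster, replace the printf with repeated curl calls (the URL is the hypothetical Minikube one from above):

```shell
# Against a live cluster, replace the printf with:
#   URL="http://192.168.49.2:32259"
#   for i in $(seq 1 20); do curl -sI "$URL"; done
printf 'Server: nginx/1.27.1\nServer: nginx/1.27.2\nServer: nginx/1.27.1\nServer: nginx/1.27.1\n' |
  awk -F'/' '/^Server:/ {count[$2]++} END {for (v in count) print v, count[v]}' |
  sort
```

With a real 4:1 replica split you should see roughly four answers from 1.27.1 for each one from 1.27.2, though kube-proxy's load balancing is random, so small samples will vary.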
Extend this scenario
I showed you the basics of canary deployment in Kubernetes. In this case roughly 20% of the traffic goes to the canary version. You can easily change this ratio by editing the replica counts. Additionally, using selectors you can build other deployment models, like blue-green, or completely custom solutions with more than two Deployments. As I mentioned at the beginning of this article, this is the easiest way and it comes with many limitations. In upcoming articles I will try to show more powerful solutions using, for example, the Istio service mesh.
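Because the split simply follows pod counts, the canary's share of traffic is approximately its replica count divided by the total. A quick sketch of the arithmetic (the replica counts are just the example values from this article):

```shell
stable=4                                # replicas of nginx-stable
canary=1                                # replicas of nginx-canary
total=$((stable + canary))
echo "canary share: $((100 * canary / total))%"   # prints: canary share: 20%
```

So to shift to a 10% canary, you could scale the stable Deployment to 9 replicas while keeping 1 canary replica.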
Conclusion
Implementing a canary deployment in Kubernetes offers a powerful, low-risk way to test new application versions. By following these steps, you can quickly set up a basic canary deployment using Kubernetes’ native tools. This setup provides a foundation for gradually shifting traffic, monitoring performance, and reducing the impact of potential issues. For more complex setups, a service mesh can offer enhanced traffic management capabilities, but the core concept remains the same: safely deploy and test updates before a full-scale rollout. Happy deploying!