
In the dynamic world of modern software development, continuous deployment and high availability are crucial. Argo Rollouts is a tool that facilitates these goals. In this blog, we’ll explore what Argo Rollouts is, the benefits of using it, and how to implement a canary deployment with it.

What is Argo Rollouts?

A Rollout is a type of Kubernetes workload resource that functions similarly to a Kubernetes Deployment object. It is designed to replace a Deployment in situations where more sophisticated deployment or progressive delivery capabilities are required. A Rollout offers several features that a standard Kubernetes Deployment does not provide:

  1. Canary Deployments – Gradually shifts traffic to the new version while monitoring metrics, and rolls back if issues arise.
  2. Blue-Green Deployments – Deploys the new version alongside the old version, then shifts traffic to the new version once it passes all checks (see the sketch after this list).
  3. Progressive Delivery – Combines multiple strategies like canary, blue-green, and experiment-based rollouts for comprehensive control over deployment processes.
  4. Integration with ingress controllers and service meshes for advanced traffic routing.
  5. Integration with metric providers for blue-green & canary analysis.
  6. Automated promotion or rollback based on successful or failed metrics.
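
For illustration, a minimal blue-green Rollout might look like the following sketch. The Service names (my-app-active, my-app-preview) and the demo image are assumptions for this example, not part of the sample application deployed later in this post:

# Hypothetical Rollout using the blue-green strategy; Service names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: argoproj/rollouts-demo:blue
  strategy:
    blueGreen:
      activeService: my-app-active      # Service currently receiving production traffic
      previewService: my-app-preview    # Service pointing at the new ReplicaSet for pre-promotion testing
      autoPromotionEnabled: false       # require manual promotion after verification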

Progressive Delivery

Progressive delivery is the process of releasing updates of a product in a controlled and gradual manner, thereby reducing the risk of the release, typically coupling automation and metric analysis to drive the automated promotion or rollback of the update. Progressive delivery is often described as an evolution of continuous delivery, extending the speed benefits of CI/CD to the deployment process. This is accomplished by limiting the exposure of the new version to a subset of users, observing and analyzing it for correct behavior, and then progressively increasing the exposure to a broader audience while continuously verifying correctness.

Why Use Argo Rollouts, and How Does It Help with Application Deployments?

Argo Rollouts enhances Kubernetes’ native deployment strategies by providing:
  • Granular Control – Define custom rollout strategies tailored to your needs.
  • Automated Rollbacks – Automatically revert to the previous version if issues are detected.
  • Detailed Metrics – Integrate with metrics providers to monitor the health of new deployments (see the analysis sketch after this list).
  • Traffic Shifting – Safely direct traffic to new versions incrementally, reducing risk and downtime.
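
As an example of the metric integration, analysis criteria are typically defined in an AnalysisTemplate resource that a Rollout can reference. The Prometheus address, the http_requests_total metric, and the 95% success threshold below are assumptions chosen for illustration:

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 1m                      # evaluate the query every minute
      count: 5                          # take five measurements
      successCondition: result[0] >= 0.95
      failureLimit: 1                   # more than one failed measurement aborts the rollout
      provider:
        prometheus:
          address: http://prometheus.monitoring.svc.cluster.local:9090   # assumed Prometheus endpoint
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",status!~"5.."}[5m]))
            /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))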

Canary Deployment:

Canary deployment is a technique used to gradually roll out a new version of an application to a small subset of users before making it available to the entire infrastructure. This strategy allows you to test the new version in a real-world scenario and reduce the risk of introducing bugs or performance issues. Argo Rollouts is a Kubernetes controller and set of CRDs (Custom Resource Definitions) that facilitate advanced deployment strategies, including canary deployments.
Here’s a detailed explanation of how canary deployment works in Argo Rollouts:
  • Key Concepts

    1. Rollout: The custom resource in Argo Rollouts that manages the deployment of an application.
    2. Canary Steps: These are incremental steps defined in the Rollout resource to control the traffic splitting between the stable version and the new version.
    3. Traffic Routing: Using tools like NGINX, Istio, or ALB (Application Load Balancer), traffic can be split between different versions of the application.
    4. Analysis: Automated analysis can be run at each step to ensure that the new version meets certain criteria before proceeding to the next step (a combined traffic-routing and analysis sketch appears at the end of this section).
  • Benefits of Canary Deployment with Argo Rollouts

    1. Risk Mitigation: Gradually exposing the new version reduces the risk of widespread failures.
    2. Automated Rollbacks: If the new version fails at any step, it can be automatically rolled back.
    3. Metric-based Analysis: Automated checks ensure the new version meets performance and reliability criteria before full rollout.
    4. Flexibility: Customizable steps and traffic splitting allow fine-grained control over the deployment process.
By leveraging Argo Rollouts for canary deployments, you can achieve a more controlled and reliable process for releasing new application versions, ensuring minimal disruption and higher confidence in production environments.
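
To make these concepts concrete, here is a sketch of a canary strategy that delegates traffic splitting to the NGINX ingress controller and gates one step on the success-rate analysis template shown earlier. The Service and Ingress names are illustrative, and this fragment would sit under spec: in a Rollout manifest; the simpler sample application deployed later in this post does not use traffic routing:

  # Hypothetical canary strategy with NGINX traffic routing and an analysis step.
  strategy:
    canary:
      canaryService: my-app-canary       # Service selecting only canary pods
      stableService: my-app-stable       # Service selecting only stable pods
      trafficRouting:
        nginx:
          stableIngress: my-app-ingress  # existing Ingress that routes to the stable Service
      steps:
        - setWeight: 20                  # send 20% of traffic to the canary
        - analysis:
            templates:
              - templateName: success-rate
            args:
              - name: service-name
                value: my-app-canary
        - setWeight: 50
        - pause: {duration: 1m}          # soak before full promotion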

Install Argo Rollouts with Helm Chart Using Custom Helm Values

  • Add the Argo Helm repository:

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
  • Create a values.yaml file to customize the installation:

dashboard:
  # -- Deploy dashboard server
  enabled: true
  # -- Set cluster role to readonly
  readonly: true
  # -- Value of label `app.kubernetes.io/component`
  component: rollouts-dashboard

  ingress:
    enabled: true
    annotations: {}
    labels: {}
    ingressClassName: "nginx"
    hosts:
      - ARGO-DASHBOARD-DOMAIN.COM     # Replace with your domain
  • Install Argo Rollouts:

helm install argo-rollouts argo/argo-rollouts -n argo-rollouts --create-namespace -f values.yaml
Now you can access the Argo Rollouts dashboard on the domain configured above (ARGO-DASHBOARD-DOMAIN.COM) and view deployed Rollouts, or you can use the Argo Rollouts CLI.
Installing the Argo Rollouts CLI (kubectl plugin):

curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
kubectl argo rollouts version

Deploy Application With Argo Rollouts (Canary Deployment)

We will install a sample application (Rollout, Service, Ingress) and use it to understand how canary deployment actually works.
  • rollout.yaml

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: {{ include "my-app.fullname" . }}
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "my-app.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ include "my-app.fullname" . }}
    spec:
      containers:
        - name: {{ include "my-app.fullname" . }}-container
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: {{ .Values.containerPort }}
          resources:
            requests:
              memory: 32Mi
              cpu: 5m      
  strategy:
    canary:
      steps:
      - setWeight: 20           # shift ~20% of pods to the new version
      - pause: {}               # pause indefinitely until manually promoted
      - setWeight: 40
      - pause: {duration: 10}   # wait 10 seconds before the next step
      - setWeight: 60
      - pause: {duration: 10}
      - setWeight: 80
      - pause: {duration: 10}
  revisionHistoryLimit: 2
  • service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-app.service.name" . }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    app: {{ include "my-app.fullname" . }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
  type: {{ .Values.service.type }}
  • ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "my-app.fullname" . }}-ingress
  namespace: {{ .Values.namespace }}
spec:
  ingressClassName: nginx    # spec.ingressClassName replaces the deprecated kubernetes.io/ingress.class annotation
  rules:
    - host: {{ .Values.ingress.hostname }}
      http:
        paths:
          - path: {{ .Values.ingress.path }}
            pathType: {{ .Values.ingress.pathType }}
            backend:
              service:
                name: {{ include "my-app.service.name" . }}
                port:
                  number: {{ .Values.service.port }}
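
Since the manifests above are written as Helm templates, the chart also needs a values file. A hypothetical values.yaml whose keys mirror the {{ .Values.* }} references above might look like this (the namespace, image, ports, and hostname are placeholders):

# Hypothetical values.yaml for the sample chart; adjust everything to your environment.
namespace: demo
replicaCount: 5

image:
  repository: argoproj/rollouts-demo
  tag: blue

containerPort: 8080

service:
  port: 80
  targetPort: 8080
  type: ClusterIP

ingress:
  hostname: my-app.example.com
  path: /
  pathType: Prefix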
				
			
These manifests use Helm template syntax, so render them with your chart's values first (for example with helm template, or install the chart directly with helm install). If you are working with plain manifests that already contain concrete values, apply them directly:

kubectl apply -f rollout.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
You can use the Argo Rollouts CLI or the exposed dashboard to inspect the deployed Rollout and its state.

kubectl argo rollouts get rollout my-app

Alternatively, open the dashboard in your browser to view the same information.

Now, let’s deploy the green version of the app using the Argo Rollouts CLI.

kubectl argo rollouts set image my-app my-app-container=argoproj/rollouts-demo:green
After running this command, roughly 20% of the sample application's pods will come up running the new (green) version.
After a while, you’ll notice that both the old (blue) and new (green) pods are running side by side.
At this point, both versions of the application are served by the same Service, with pods split in roughly an 80%-20% ratio between the old (blue) and new (green) versions. We can start shifting fully to the new (green) version on the my-app service by promoting it with the Argo Rollouts CLI.

kubectl argo rollouts promote my-app
After promotion, the rollout proceeds automatically through the remaining timed steps until the green version is fully rolled out. With that, we have successfully performed a canary deployment with Argo Rollouts.

What You Will Achieve After Installing Argo Rollouts

After installing and configuring Argo Rollouts, you will achieve:
  • Automated Canary Deployments: Deploy new versions with minimal downtime and seamless traffic shifting.
  • Enhanced Monitoring: Integrate with Prometheus or other metrics providers to monitor the health of your deployments.
  • Improved Reliability: Automatically roll back to a stable state if issues are detected during deployment.

Conclusion:

Argo Rollouts provides a robust solution for managing Kubernetes deployments. By leveraging advanced deployment strategies like canary deployments, you can ensure higher availability and reliability for your applications. Follow the steps outlined in this blog to implement a canary deployment with Argo Rollouts, enhancing your deployment process and minimizing downtime.

Argo Rollouts is a powerful tool that can significantly streamline your Kubernetes deployment workflows. Try it out and experience the benefits of advanced deployment strategies in your applications!
