Optimizing GKE Application Autoscaling: Leveraging KEDA Tool with SQS CloudWatch Metrics

Shubhangi Thakur
Published in The Cloudside View
4 min read · Feb 7, 2024

In this blog, we’ll explain how to use KEDA on GKE with Amazon SQS metrics from CloudWatch. This lets your Kubernetes applications adapt to changes in workload by monitoring real-time SQS queue data.

Scenario: Imagine you have a microservices-based application running on GKE. One of the components of your application processes messages from an Amazon SQS (Simple Queue Service) queue. The workload varies throughout the day, with peaks in message traffic during specific times.

Solution with KEDA: By integrating KEDA into your GKE deployment, you can automate the scaling of your application based on the incoming messages from the SQS queue.

KEDA: KEDA (Kubernetes Event-Driven Autoscaler) is a lightweight, open-source project that automates the scaling of applications running on Kubernetes (such as GKE — Google Kubernetes Engine) based on events or metrics.

SQS: Amazon SQS (Simple Queue Service) is a fully managed message queuing service provided by Amazon Web Services (AWS). It helps components talk to each other in a distributed system.

CloudWatch: Amazon CloudWatch is a monitoring and observability service provided by Amazon Web Services (AWS) that helps you collect and track metrics, collect log files, and set alarms for your AWS resources and applications.
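To see the kind of data KEDA will read, you can query the SQS metric in CloudWatch directly. A sketch, assuming the queue name (shubhi-queue) and region (us-east-1) used later in this post, and GNU date for the timestamps:

```shell
# Fetch the last 5 minutes of NumberOfMessagesSent for the queue
aws cloudwatch get-metric-statistics \
  --namespace AWS/SQS \
  --metric-name NumberOfMessagesSent \
  --dimensions Name=QueueName,Value=shubhi-queue \
  --start-time "$(date -u -d '5 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 60 \
  --statistics Sum \
  --region us-east-1
```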

Implementing KEDA on GKE:

Step 1: Prerequisites

Before we begin the KEDA setup, ensure you have completed the following prerequisites:

  1. Create a GKE cluster in GCP.
  2. Create SQS queue in AWS.
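The prerequisites can be sketched from the CLI as follows. The cluster name, zone, and node count are hypothetical; adjust them to your project:

```shell
# Create a GKE cluster (hypothetical name/zone)
gcloud container clusters create keda-demo \
  --zone us-central1-a \
  --num-nodes 2

# Fetch kubectl credentials for the new cluster
gcloud container clusters get-credentials keda-demo --zone us-central1-a

# Create the SQS queue referenced later in the ScaledObject
aws sqs create-queue --queue-name shubhi-queue --region us-east-1
```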

Step 2: Installation of KEDA

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
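After the install, it’s worth confirming that the KEDA components and CRDs are in place (pod names are the chart’s defaults at the time of writing — an assumption worth verifying against your chart version):

```shell
# Confirm the KEDA operator pods are running
kubectl get pods -n keda

# Confirm the KEDA CRDs (ScaledObject, TriggerAuthentication, etc.) exist
kubectl get crd | grep keda.sh
```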

Step 3: Deploy an nginx application on GKE

  1. Ensure the keda namespace exists (“kubectl create namespace keda”; the helm install in Step 2 already creates it via --create-namespace).
  2. Create a Deployment using the below YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: keda
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: aws-secret

To create the Deployment, run the following command:

kubectl apply -f nginx-deployment.yaml
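You can then confirm the rollout. Note that the Deployment’s envFrom references aws-secret, which is only created in Step 4, so pods may sit in CreateContainerConfigError until that Secret exists:

```shell
# Wait for the Deployment to become ready
kubectl rollout status deployment/nginx-deployment -n keda

# Inspect the nginx pods
kubectl get pods -n keda -l app=nginx
```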

Step 4: Set up a Secret, a KEDA TriggerAuthentication, and a ScaledObject to enable scaling of the pods based on the metrics collected from AWS SQS via CloudWatch.

apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: keda
data:
  AWS_ACCESS_KEY_ID:     # base64-encoded access key ID
  AWS_SECRET_ACCESS_KEY: # base64-encoded secret access key
---
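The values under data: in a Secret must be base64-encoded. A quick sketch — EXAMPLE_KEY is a hypothetical placeholder; substitute your real credential:

```shell
# base64-encode a credential for the Secret's data: field
# ('EXAMPLE_KEY' is a hypothetical placeholder)
encoded=$(printf '%s' 'EXAMPLE_KEY' | base64)
echo "$encoded"    # -> RVhBTVBMRV9LRVk=
```

Alternatively, `kubectl create secret generic aws-secret --namespace keda --from-literal=AWS_ACCESS_KEY_ID=<key> --from-literal=AWS_SECRET_ACCESS_KEY=<secret>` performs the encoding for you.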

apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-trigger-auth-aws-credentials
  namespace: keda
spec:
  secretTargetRef:
    - parameter: awsAccessKeyID
      name: aws-secret
      key: AWS_ACCESS_KEY_ID
    - parameter: awsSecretAccessKey
      name: aws-secret
      key: AWS_SECRET_ACCESS_KEY
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: aws-cloudwatch-queue-scaledobject
  namespace: keda
spec:
  scaleTargetRef:
    name: nginx-deployment
  pollingInterval: 5
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: aws-cloudwatch
      metadata:
        namespace: AWS/SQS
        dimensionName: QueueName
        dimensionValue: shubhi-queue
        expression: SELECT COUNT("NumberOfMessagesSent") FROM "AWS/SQS" WHERE QueueName = 'shubhi-queue'
        metricName: NumberOfEmptyReceives
        targetMetricValue: "2"
        minMetricValue: "1"
        awsRegion: "us-east-1"
        metricCollectionTime: "300"
        metricStatPeriod: "60"
        metricStat: "Average"
      authenticationRef:
        name: keda-trigger-auth-aws-credentials
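Apply the manifests above with kubectl apply -f. For intuition, the replica count that KEDA’s underlying HPA converges on can be sketched as ceil(currentMetric / targetMetricValue), clamped to min/maxReplicaCount — standard HPA behavior; the metric value below is hypothetical:

```shell
# Sketch of the HPA scaling arithmetic KEDA relies on
current=7   # hypothetical current value of the CloudWatch metric
target=2    # targetMetricValue from the ScaledObject
min=1       # minReplicaCount
max=5       # maxReplicaCount

desired=$(( (current + target - 1) / target ))   # ceiling division
[ "$desired" -lt "$min" ] && desired=$min        # clamp to the floor
[ "$desired" -gt "$max" ] && desired=$max        # clamp to the ceiling
echo "$desired"    # -> 4
```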

Once deployed, verify that all objects have been created.

HPA
ScaledObject
TriggerAuthentication
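A quick way to check all three at once:

```shell
# List the HPA (created by KEDA) and the KEDA custom resources
kubectl get hpa,scaledobject,triggerauthentication -n keda
```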

Step 5: By querying NumberOfMessagesSent, you can observe that the pods have been auto-scaled.
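To generate traffic and watch the scale-up, you can push test messages into the queue. The queue URL below is hypothetical — fetch yours with aws sqs get-queue-url:

```shell
# Hypothetical queue URL; substitute your account's
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/shubhi-queue"

# Send a burst of test messages
for i in $(seq 1 20); do
  aws sqs send-message --queue-url "$QUEUE_URL" --message-body "test-$i"
done

# Watch the nginx pods scale out
kubectl get pods -n keda -l app=nginx --watch
```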

If no messages are sent to the SQS queue, the pods will automatically scale back down.

Hope you find it helpful! Keep learning, until next time :)

