k8s-prom-hpa enables Kubernetes Horizontal Pod Autoscaling using Prometheus custom metrics for dynamic workload scaling.
Kubernetes Horizontal Pod Autoscaler with Prometheus custom metrics
This tool enhances the Kubernetes Horizontal Pod Autoscaler (HPA) by integrating Prometheus custom metrics, allowing pods to be scaled on application-specific metrics beyond CPU and memory. It is aimed at DevOps engineers and cloud security professionals who want fine-grained, automated scaling in Kubernetes clusters to meet SLAs efficiently.
Kubernetes 1.9 or later is required, as the HPA REST client is enabled by default from that version onward. The Metrics Server must be deployed and operational before using this tool, and Prometheus must be configured to expose custom metrics to the HPA controller. Using this tool requires familiarity with the Kubernetes custom metrics API and with Prometheus setup.
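As a sanity check before and after the setup steps below, you can list the metrics API groups the cluster exposes. This is a sketch: it assumes jq is installed and that custom metrics are served by a Prometheus adapter registered under the custom.metrics.k8s.io group, so the second command only returns data once that adapter is running.
kubectl api-versions | grep metrics
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .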
Install Go 1.8 or later
Set up GOPATH environment variable
Clone the repository into your $GOPATH: git clone https://github.com/stefanprodan/k8s-prom-hpa
Deploy the Kubernetes Metrics Server in the kube-system namespace: kubectl create -f ./metrics-server
Wait approximately one minute for the Metrics Server to start reporting metrics (a readiness check is sketched below)
Deploy the demo podinfo application to test autoscaling
kubectl create -f ./metrics-server
Deploys the Metrics Server add-on to the Kubernetes cluster
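If metrics do not appear after a minute or so, a minimal readiness check (assuming the add-on's default Deployment name metrics-server in the kube-system namespace) is:
kubectl -n kube-system get deployment metrics-server
kubectl -n kube-system logs deployment/metrics-server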
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .
Retrieves and formats CPU and memory usage metrics for nodes
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" | jq .
Retrieves and formats CPU and memory usage metrics for pods
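For a quicker, human-readable view of the same resource metrics once the Metrics Server is serving data, kubectl top can be used (a convenience check, not part of this repository's manifests):
kubectl top nodes
kubectl top pods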
kubectl create -f ./podinfo/podinfo-svc.yaml,./podinfo/podinfo-dep.yaml
Deploys the podinfo demo application to the default namespace for autoscaling tests
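To illustrate how a custom metric drives scaling, the sketch below creates an HPA for the podinfo deployment using a Pods-type metric. It is an assumption-laden example: the metric name http_requests and the 800m target are illustrative and must match a series actually exposed by your Prometheus adapter, and the API versions shown (autoscaling/v2beta1, extensions/v1beta1) correspond to the Kubernetes 1.9 era this guide targets.
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  # Pods-type metric served by the custom metrics adapter;
  # http_requests is an assumed example metric name
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 800m
EOF
Once the adapter serves the metric, kubectl get hpa podinfo shows the current value alongside the target.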