Autoscaling stateless applications

This section only applies to stateless applications. Check Elasticsearch autoscaling for more details about automatically scaling Elasticsearch.

The Horizontal Pod Autoscaler can be used to automatically scale the deployments of the following resources:

  • Kibana
  • APM Server
  • Enterprise Search
  • Elastic Maps Server
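
These workloads can also be resized manually with kubectl, because they implement the standard scale subresource described next. A quick sketch, assuming a Kibana resource named kibana-sample already exists in the current namespace:

kubectl scale kibana kibana-sample --replicas=2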

These resources expose the scale subresource, which can be used by the Horizontal Pod Autoscaler controller to automatically adjust the number of replicas according to CPU load or any other custom or external metric. The following example shows how to create a HorizontalPodAutoscaler resource that adjusts the replicas of a Kibana deployment according to CPU load:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 8.16.0
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana-sample
spec:
  version: 8.16.0
  count: 1
  elasticsearchRef:
    name: "elasticsearch-sample"
  podTemplate:
    spec:
      containers:
        - name: kibana
          resources:
            requests:
              memory: 1Gi
              cpu: 1 # CPU requests are required for the HPA to compute utilization
            limits:
              memory: 1Gi
---
apiVersion: autoscaling/v2 # the stable HPA API; v2beta2 was removed in Kubernetes 1.26
kind: HorizontalPodAutoscaler
metadata:
  name: kb
spec:
  scaleTargetRef: # the HPA drives the Kibana resource directly through its scale subresource
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    name: kibana-sample
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50 # target 50% average CPU utilization across the Kibana Pods
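
To try it out, you could save these manifests to a file and apply them, then watch the HorizontalPodAutoscaler react. A minimal sketch, assuming the file is named kibana-hpa.yaml (a hypothetical name) and that a metrics server runs in the cluster so that CPU utilization is reported:

kubectl apply -f kibana-hpa.yaml
kubectl get hpa kb --watch
kubectl get kibana kibana-sample

The hpa output shows the current and target CPU utilization along with the current replica count, and the Kibana resource's count follows the autoscaler's decisions.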