Run Elastic Agent Standalone on Kubernetes

What you need
edit- kubectl installed.
-
Elasticsearch for storing and searching your data, and Kibana for visualizing and managing it.
To get started quickly, spin up a deployment of our hosted Elasticsearch Service. The Elasticsearch Service is available on AWS, GCP, and Azure. Try it out for free.
To install and run Elasticsearch and Kibana, see Installing the Elastic Stack.
- kube-state-metrics. You need to deploy kube-state-metrics to get metrics about the state of the objects in the cluster (see the Kubernetes deployment docs). You can do that by first downloading the project:

  ```sh
  gh repo clone kubernetes/kube-state-metrics
  ```

  And then deploying it:

  ```sh
  kubectl apply -k kube-state-metrics
  ```
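  To confirm kube-state-metrics is running before deploying the agent, you can check its Pod. This assumes the default labels and `kube-system` namespace used by the upstream manifests; adjust if your deployment differs:

  ```sh
  kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-state-metrics
  ```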
On managed Kubernetes solutions, such as AKS, GKE, or EKS, Elastic Agent does not have the required permissions to collect metrics from Kubernetes control plane components, like `kube-scheduler` and `kube-controller-manager`. Audit logs are likewise available only on Kubernetes control plane nodes, and hence cannot be collected by Elastic Agent. For more information about specific cloud providers, refer to Run Elastic Agent on Azure AKS managed by Fleet, Run Elastic Agent on GKE managed by Fleet, and Run Elastic Agent on Amazon EKS managed by Fleet.
Step 1: Download the Elastic Agent manifest

You can find Elastic Agent Docker images here.

Download the manifest file:

```sh
curl -L -O https://raw.githubusercontent.com/elastic/elastic-agent/v8.16.4/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml
```
This manifest includes the Kubernetes integration to collect Kubernetes metrics and the System integration to collect system-level metrics and logs from nodes. You might need to adjust the resource limits of the Elastic Agent container in the manifest: container resource usage depends on the number of data streams and the environment size.
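For example, a minimal sketch of the container `resources` section you might tune in the DaemonSet Pod template. The values below are illustrative, not recommendations:

```yaml
containers:
  - name: elastic-agent
    resources:
      requests:
        cpu: 100m       # illustrative baseline for a small cluster
        memory: 400Mi
      limits:
        memory: 800Mi   # raise for large clusters or many data streams
```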
The Elastic Agent is deployed as a DaemonSet to ensure that there is a running instance on each node of the cluster. These instances are used to retrieve most metrics from the host, such as system metrics, Docker stats, and metrics from all the services running on top of Kubernetes. These metrics are accessed through the deployed kube-state-metrics. Note that everything is deployed under the `kube-system` namespace by default. To change the namespace, modify the manifest file.
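As a rough sketch, you could rewrite the namespace throughout the manifest with a blanket substitution. This assumes the manifest contains no `namespace: kube-system` references that must stay unchanged, so review the result before applying it:

```sh
# Replace every "namespace: kube-system" line with a custom namespace
sed -i 's/namespace: kube-system/namespace: elastic-agent/g' elastic-agent-standalone-kubernetes.yaml
```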
Moreover, one of the Pods in the DaemonSet constantly holds a leader lock, which makes it responsible for handling cluster-wide monitoring. You can find more information about the configuration options in the leader election provider documentation. The leader Pod retrieves metrics that are unique for the whole cluster, such as Kubernetes events or kube-state-metrics. We make sure that these metrics are retrieved from the leader Pod by applying the following condition in the manifest, before declaring the data streams with these metricsets:
```yaml
...
inputs:
  - id: kubernetes-cluster-metrics
    condition: ${kubernetes_leaderelection.leader} == true
    type: kubernetes/metrics
    # metricsets with the state_ prefix and the metricset event
...
```
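If you need to tune the election itself, a minimal sketch of the provider block in the standalone policy follows. The lease name shown is the documented default; treat the specifics as assumptions for your agent version:

```yaml
providers:
  kubernetes_leaderelection:
    # Disable to stop this agent from competing for the cluster-wide role
    enabled: true
    # Kubernetes Lease object used to coordinate the election
    leader_lease: elastic-agent-cluster-leader
```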
For Kubernetes Security Posture Management (KSPM) purposes, the Elastic Agent requires read access to various types of Kubernetes resources, node processes, and files. To achieve this, read permissions are granted to the Elastic Agent for the necessary resources, and volumes from the hosting node's file system are mounted so that the Elastic Agent Pods can access them.
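A hedged sketch of the kind of host access this implies in the DaemonSet spec; the paths and volume names here are illustrative, not the exact set the KSPM integration uses:

```yaml
containers:
  - name: elastic-agent
    volumeMounts:
      - name: proc
        mountPath: /hostfs/proc       # read node processes
        readOnly: true
      - name: etc-kubernetes
        mountPath: /hostfs/etc/kubernetes  # read node config files
        readOnly: true
volumes:
  - name: proc
    hostPath:
      path: /proc
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes
```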
Kubernetes clusters can have many large nodes, and in such cases the Pod that collects cluster-level metrics might require more runtime resources than you would like to dedicate to every Pod in the DaemonSet. If it is under-resourced, the leader that collects the cluster-wide metrics may face performance issues. In this case, consider avoiding a single DaemonSet with the leader election strategy and instead run a dedicated standalone Elastic Agent instance for collecting cluster-wide metrics, using a Deployment in addition to the DaemonSet that collects metrics from each node (see the sketch below). Both the Deployment and the DaemonSet can then be resourced independently and appropriately. For more information, check the Scaling Elastic Agent on Kubernetes page.
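A minimal sketch of such a dedicated Deployment. The name and resource values are assumptions; the Pod template would otherwise mirror the DaemonSet's, with the cluster-wide input enabled only here:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-agent-clusterwide   # hypothetical name
  namespace: kube-system
spec:
  replicas: 1                       # one instance is enough for cluster-wide metrics
  selector:
    matchLabels:
      app: elastic-agent-clusterwide
  template:
    metadata:
      labels:
        app: elastic-agent-clusterwide
    spec:
      serviceAccountName: elastic-agent
      containers:
        - name: elastic-agent
          image: docker.elastic.co/beats/elastic-agent:8.16.4
          resources:
            requests:
              cpu: 500m             # sized for cluster-wide collection, not per-node work
              memory: 800Mi
```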
Step 2: Connect to the Elastic Stack

Set the Elasticsearch settings before deploying the manifest:

```yaml
- name: ES_USERNAME
  value: "elastic"
- name: ES_PASSWORD
  value: "passpassMyStr0ngP@ss"
- name: ES_HOST
  value: "https://somesuperhostiduuid.europe-west1.gcp.cloud.es.io:9243"
```
- `ES_USERNAME`: The basic authentication username used to connect to Elasticsearch.
- `ES_PASSWORD`: The basic authentication password used to connect to Elasticsearch.
- `ES_HOST`: The Elasticsearch host to communicate with.
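Rather than hard-coding the password in the manifest, you might store it in a Kubernetes Secret and reference it from the container env. The Secret and key names here are hypothetical:

```sh
kubectl create secret generic elastic-agent-creds -n kube-system \
  --from-literal=ES_PASSWORD='passpassMyStr0ngP@ss'
```

```yaml
- name: ES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elastic-agent-creds   # hypothetical Secret name
      key: ES_PASSWORD
```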
Refer to Environment variables for all available options.
Step 3: Configure tolerations

Kubernetes control plane nodes can use taints to limit the workloads that can run on them. The manifest for standalone Elastic Agent defines tolerations to run on these nodes. Agents running on control plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes. To prevent Elastic Agent from running on control plane nodes, remove the following part of the DaemonSet spec:

```yaml
spec:
  # Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
  # Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```
Both tolerations have the same effect, but `node-role.kubernetes.io/master` is deprecated as of Kubernetes version v1.25.
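To see which taints your nodes actually carry before editing the tolerations, a quick check (the output columns are as defined in the command):

```sh
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```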
Step 4: Deploy the Elastic Agent

To deploy Elastic Agent to Kubernetes, run:

```sh
kubectl create -f elastic-agent-standalone-kubernetes.yaml
```
To check the status, run:

```sh
$ kubectl -n kube-system get pods -l app=elastic-agent
NAME                            READY   STATUS    RESTARTS   AGE
elastic-agent-4665d             1/1     Running   0          81m
elastic-agent-9f466c4b5-l8cm8   1/1     Running   0          81m
elastic-agent-fj2z9             1/1     Running   0          81m
elastic-agent-hs4pb             1/1     Running   0          81m
```
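If a Pod does not reach the `Running` state, inspecting its logs is a reasonable first step. The label and namespace assume the defaults from the manifest:

```sh
kubectl -n kube-system logs -l app=elastic-agent --tail=50
```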
Running Elastic Agent on a read-only file system

If you'd like to run Elastic Agent on Kubernetes on a read-only file system, you can do so by specifying the `readOnlyRootFilesystem` option.
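A minimal sketch of what that could look like in the container spec, with a writable `emptyDir` for the agent's state; the mount path is an assumption about where the agent writes:

```yaml
containers:
  - name: elastic-agent
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: agent-state
        mountPath: /usr/share/elastic-agent/state  # assumed writable state path
volumes:
  - name: agent-state
    emptyDir: {}
```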
Step 5: View your data in Kibana

- Launch Kibana:
  - Log in to your Elastic Cloud account.
  - Navigate to the Kibana endpoint in your deployment.
  Alternatively, point your browser to http://localhost:5601, replacing localhost with the name of the Kibana host.
- You can see data flowing in by going to Analytics → Discover and selecting the index `metrics-*`, or the more specific `metrics-kubernetes.*`. If you can't see these indexes, create a data view for them.
- You can see predefined dashboards by selecting Analytics → Dashboard, or by installing assets through an integration.
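You can also confirm from the command line that data is arriving before opening Kibana; the host and credentials below are placeholders for your own deployment:

```sh
curl -u "elastic:${ES_PASSWORD}" \
  "https://<elasticsearch-host>:9243/_cat/indices/metrics-*?v"
```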
Red Hat OpenShift configuration

If you are using Red Hat OpenShift, you need to specify additional settings in the manifest file and enable the container to run as privileged.
- In the manifest file, modify the `agent-node-datastreams` ConfigMap and adjust the inputs:
  - `kubernetes-cluster-metrics` input:
    - If `https` is used to access `kube-state-metrics`, add the following settings to all `kubernetes.state_*` datasets:

      ```yaml
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.certificate_authorities:
        - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
      ```

  - `kubernetes-node-metrics` input:
    - Change the `kubernetes.controllermanager` data stream condition to:

      ```yaml
      condition: ${kubernetes.labels.app} == 'kube-controller-manager'
      ```

    - Change the `kubernetes.scheduler` data stream condition to:

      ```yaml
      condition: ${kubernetes.labels.app} == 'openshift-kube-scheduler'
      ```

    - The `kubernetes.proxy` data stream configuration should look like:

      ```yaml
      - data_stream:
          dataset: kubernetes.proxy
          type: metrics
        metricsets:
          - proxy
        hosts:
          - 'localhost:29101'
        period: 10s
      ```

    - Add the following settings to all data streams that connect to `https://${env.NODE_NAME}:10250`:

      ```yaml
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.certificate_authorities:
        - /path/to/ca-bundle.crt
      ```

      `ca-bundle.crt` can be any CA bundle that contains the issuer of the certificate used in the Kubelet API. Depending on the specific OpenShift installation, this can be found either in `secrets` or in `configmaps`. In some installations it is available as part of the service account secret, at `/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt`. When using the OpenShift installer for GCP, mount the following `configmap` in the elastic-agent Pod and use `ca-bundle.crt` in `ssl.certificate_authorities`:

      ```
      Name:         kubelet-serving-ca
      Namespace:    openshift-kube-apiserver
      Labels:       <none>
      Annotations:  <none>

      Data
      ====
      ca-bundle.crt:
      ```
- Grant the `elastic-agent` service account access to the privileged SCC:

  ```sh
  oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:elastic-agent
  ```

  This command enables the container to run as privileged as an administrator for OpenShift.
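  To verify that the service account is now listed on the SCC, one hedged check follows; the output format can vary across OpenShift versions:

  ```sh
  oc get scc privileged -o jsonpath='{.users}'
  ```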
- If the namespace where elastic-agent is running has the `"openshift.io/node-selector"` annotation set, elastic-agent might not run on all nodes. In this case, consider overriding the node selector for the namespace to allow scheduling on any node:

  ```sh
  oc patch namespace kube-system -p \
    '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}'
  ```

  This command sets the node selector for the project to an empty string.
Autodiscover targeted Pods

Refer to Kubernetes autodiscovery with Elastic Agent for more information.