Kubernetes LeaderElection Provider

Provides the option to enable leader election among a set of Elastic Agents running on Kubernetes. Only one Elastic Agent at a time holds the leader lock, and configurations can be enabled on the condition that the Elastic Agent holds the leadership. This is useful when only one Elastic Agent in the set should collect cluster-wide metrics for the Kubernetes cluster, such as from the kube-state-metrics endpoint.
The provider needs a kubeconfig file to establish a connection to the Kubernetes API. It can reach the API automatically when running in an in-cluster environment (Elastic Agent runs as a Pod).
providers.kubernetes_leaderelection:
  #enabled: true
  #kube_config: /Users/elastic-agent/.kube/config
  #kube_client_options:
  #  qps: 5
  #  burst: 10
  #leader_lease: agent-k8s-leader-lock
  #leader_retryperiod: 2
  #leader_leaseduration: 15
  #leader_renewdeadline: 10
- enabled
  (Optional) Defaults to true. To explicitly disable the LeaderElection provider, set enabled: false.
- kube_config
  (Optional) Use the given config file as configuration for the Kubernetes client. If kube_config is not set, the KUBECONFIG environment variable will be checked, and the provider falls back to in-cluster configuration if it is not present.
- kube_client_options
  (Optional) Configure additional options for the Kubernetes client. Supported options are qps and burst. If not set, the Kubernetes client's default QPS and burst settings are used.
- leader_lease
  (Optional) Specify the name of the leader lease. This is set to elastic-agent-cluster-leader by default.
- leader_retryperiod
  (Optional) Default value 2 (in seconds). How long Elastic Agents wait between attempts to acquire the leader role.
- leader_leaseduration
  (Optional) Default value 15 (in seconds). How long the leader Elastic Agent holds the leader state.
- leader_renewdeadline
  (Optional) Default value 10 (in seconds). How long the acting leader retries renewing the leadership before giving it up.
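For example, a provider block that uncomments the defaults and customizes the lease name and client throttling might look like the following sketch (the lease name is a placeholder and the values are illustrative):

providers.kubernetes_leaderelection:
  enabled: true
  # Placeholder lease name; any valid Lease object name works.
  leader_lease: my-agent-leader-lock
  kube_client_options:
    qps: 5
    burst: 10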
The available key is:

Key | Type | Description
---|---|---
kubernetes_leaderelection.leader | bool | The value of the leadership flag. This is set to true when the Elastic Agent currently holds the leader lock, and false otherwise.
Understanding leader timings

As described above, the LeaderElection configuration offers the following parameters: lease duration (leader_leaseduration), renew deadline (leader_renewdeadline), and retry period (leader_retryperiod). Based on the provided configuration, each agent triggers Kubernetes API requests to check the status of the lease.

The number of leader calls to the Kubernetes control plane API is proportional to the number of Elastic Agents installed, because requests come from every Elastic Agent once per leader_retryperiod. Setting leader_retryperiod to a value greater than the default (2 seconds) means fewer requests are made towards the Kubernetes control plane API, but it also increases the period during which collection of metrics from the leader Elastic Agent might be lost.

The library applies specific checks to the timing parameters, and if they do not hold, Elastic Agent exits with a panic error. In general:
- leader_leaseduration must be greater than leader_renewdeadline
- leader_renewdeadline must be greater than leader_retryperiod * JitterFactor, where the constant JitterFactor = 1.2 is defined in the leaderelection library.
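As a sketch, the following values satisfy these checks while reducing pressure on the Kubernetes API; they are illustrative, not recommendations:

providers.kubernetes_leaderelection:
  leader_retryperiod: 10     # 10 * 1.2 = 12, which is less than the renew deadline
  leader_renewdeadline: 20   # less than the lease duration
  leader_leaseduration: 30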
Enabling configurations only when on leadership

Use conditions based on the kubernetes_leaderelection.leader key to leverage the leaderelection provider and enable specific inputs only when the Elastic Agent holds the leadership lock. The example below enables the state_container metricset only when the leadership lock is acquired:
- data_stream:
    dataset: kubernetes.state_container
    type: metrics
  metricsets:
    - state_container
  add_metadata: true
  hosts:
    - 'kube-state-metrics:8080'
  period: 10s
  condition: ${kubernetes_leaderelection.leader} == true
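For context, the following sketch shows how such a stream might sit inside a standalone kubernetes/metrics input; the input id and output name are placeholders:

inputs:
  - id: kubernetes-metrics-ksm      # placeholder id
    type: kubernetes/metrics
    use_output: default             # assumes an output named "default"
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: kubernetes.state_container
          type: metrics
        metricsets:
          - state_container
        add_metadata: true
        hosts:
          - 'kube-state-metrics:8080'
        period: 10s
        # Collected only by the Elastic Agent that currently holds the leader lock.
        condition: ${kubernetes_leaderelection.leader} == true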