Fleet and Elastic Agent 8.12.0
Review important information about Fleet Server and Elastic Agent for the 8.12.0 release.
Security updates
Elastic Agent
- Update Go version to 1.20.12. #3885
Breaking changes
Breaking changes can prevent your application from optimal operation and performance. Before you upgrade, review the breaking changes, then mitigate the impact to your application.
Possible naming collisions with Fleet custom ingest pipelines
Details
Starting in this release, Fleet ingest pipelines can be configured to process events at various levels of customization. If you already have a custom pipeline defined whose name matches one of the new Fleet custom ingest pipelines, it may unexpectedly be called for data streams in other integrations. For details and to follow the investigation, refer to #175254. A fix is planned for delivery in the next 8.12 patch release.
Affected ingest pipelines
APM
- traces-apm
- traces-apm.rum
- traces-apm.sampled
For APM, if you had previously defined an ingest pipeline of the form traces-apm@custom to customize the ingestion of documents sent to the traces-apm data stream, then by nature of the new @custom hooks introduced in issue #168019, the traces-apm@custom pipeline will be called as a pipeline processor in both the traces-apm.rum and traces-apm.sampled ingest pipelines. See the following comparison of the relevant processors blocks for each of these pipelines before and after upgrading to 8.12.0:
// traces-apm-8.x.x
{ "pipeline": { "name": "traces-apm@custom", "ignore_missing_pipeline": true } }

// traces-apm-8.12.0
{ "pipeline": { "name": "global@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces-apm@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces-apm@custom", "ignore_missing_pipeline": true } }   <--- Duplicate pipeline entry
// traces-apm.rum-8.x.x
{ "pipeline": { "name": "traces-apm.rum@custom", "ignore_missing_pipeline": true } }

// traces-apm.rum-8.12.0
{ "pipeline": { "name": "global@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces-apm@custom", "ignore_missing_pipeline": true } },   <--- Collides with `traces-apm@custom` that may be preexisting
{ "pipeline": { "name": "traces-apm.rum@custom", "ignore_missing_pipeline": true } }
// traces-apm.sampled-8.x.x
{ "pipeline": { "name": "traces-apm.rum@custom", "ignore_missing_pipeline": true } }

// traces-apm.sampled-8.12.0
{ "pipeline": { "name": "global@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces@custom", "ignore_missing_pipeline": true } },
{ "pipeline": { "name": "traces-apm@custom", "ignore_missing_pipeline": true } },   <--- Collides with `traces-apm@custom` that may be preexisting
{ "pipeline": { "name": "traces-apm.sampled@custom", "ignore_missing_pipeline": true } }
The immediate workaround to avoid this unwanted behavior is to edit both the traces-apm.rum and traces-apm.sampled ingest pipelines so that they no longer include the traces-apm@custom pipeline processor.
Please note that this is a temporary workaround, and this change will be undone if the APM integration is upgraded or reinstalled.
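If you prefer to make this edit through the Elasticsearch ingest APIs rather than the Kibana Ingest Pipelines UI, the following is a minimal sketch only. It assumes the managed pipelines installed by the APM integration are named traces-apm.rum-8.12.0 and traces-apm.sampled-8.12.0; check the exact versioned names in your deployment (for example under Stack Management > Ingest Pipelines) before applying it.

# Retrieve the current definitions and locate the processor entry
#   { "pipeline": { "name": "traces-apm@custom", "ignore_missing_pipeline": true } }
# in each "processors" array.
GET _ingest/pipeline/traces-apm.rum-8.12.0
GET _ingest/pipeline/traces-apm.sampled-8.12.0

# Then re-submit each definition with that single processor entry removed,
# using PUT _ingest/pipeline/<pipeline name> and the body returned by the GET.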
Elastic Agent
The elastic_agent integration is subject to the same type of breaking change as described for APM above. The following ingest pipelines are impacted:
- logs-elastic_agent
- logs-elastic_agent.apm_server
- logs-elastic_agent.auditbeat
- logs-elastic_agent.cloud_defend
- logs-elastic_agent.cloudbeat
- logs-elastic_agent.endpoint_security
- logs-elastic_agent.filebeat
- logs-elastic_agent.filebeat_input
- logs-elastic_agent.fleet_server
- logs-elastic_agent.heartbeat
- logs-elastic_agent.metricbeat
- logs-elastic_agent.osquerybeat
- logs-elastic_agent.packetbeat
- logs-elastic_agent.pf_elastic_collector
- logs-elastic_agent.pf_elastic_symbolizer
- logs-elastic_agent.pf_host_agent
The behavior is similar to what’s described for APM above: pipelines such as logs-elastic_agent.filebeat will include a pipeline processor that calls logs-elastic_agent@custom. If you have custom processing logic defined in a logs-elastic_agent@custom ingest pipeline, it will be called by all of the pipelines listed above.
The workaround is the same: remove the logs-elastic_agent@custom pipeline processor from all of the ingest pipelines listed above.
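To confirm which of the managed elastic_agent pipelines actually reference the custom pipeline in your deployment, you can list them with a wildcard and search the response for logs-elastic_agent@custom. This is a minimal sketch and assumes the default versioned pipeline naming used by the integration:

# List all elastic_agent ingest pipelines; any "processors" entry that
# calls "logs-elastic_agent@custom" should be removed as described above.
GET _ingest/pipeline/logs-elastic_agent*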
Known issues
For new DEB and RPM installations, the elastic-agent enroll command incorrectly reports failure
Details
When you run the elastic-agent enroll command for an RPM or DEB Elastic Agent package, a Restarting agent daemon message appears in the command output, followed by a Restart attempt failed error.
Impact
The error does not mean that the enrollment failed. The enrollment actually succeeded. You can ignore the Restart attempt failed error and continue by running the following commands, after which Elastic Agent should successfully connect to Fleet:
sudo systemctl enable elastic-agent
sudo systemctl start elastic-agent
Performance regression in AWS S3 inputs using SQS notification
Details
In 8.12 the default memory queue flush interval was raised from 1 second to 10 seconds. In many configurations this improves performance because it allows the output to batch more events per round trip, which improves efficiency. However, the SQS input has an extra bottleneck that interacts badly with the new value.
For more details see #37754.
Impact
If you are using the Elasticsearch output and your configuration uses a performance preset, switch it to preset: latency. If you use no preset or use preset: custom, then set queue.mem.flush.timeout: 1s in your output configuration.
If you are not using the Elasticsearch output, set queue.mem.flush.timeout: 1s in your output configuration.
To configure the output parameters for a Fleet-managed agent, see Advanced YAML configuration. For a standalone agent, see Outputs.
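As an illustration only, here is a minimal sketch of the workaround in a standalone agent’s elastic-agent.yml, assuming a single Elasticsearch output named default; the host is a placeholder and authentication is omitted. For a Fleet-managed agent, the same settings go into the output’s advanced YAML configuration box instead.

outputs:
  default:
    type: elasticsearch
    hosts: ["https://my-elasticsearch-host:9200"]   # placeholder host; authentication omitted
    # If you were using a performance preset, switch it to "latency",
    # which flushes the queue more frequently:
    preset: latency
    # Alternatively, with no preset or with preset: custom, set the
    # flush interval back to 1 second explicitly:
    # queue.mem.flush.timeout: 1s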
Fleet setup can fail when there are more than one thousand Elastic Agent policies
Details
When you set up Fleet with a very high volume of Elastic Agent policies, one thousand or more, you may encounter an error similar to the following:
[ERROR][plugins.fleet] Unknown error happened while checking Uninstall Tokens validity: 'ResponseError: all shards failed: search_phase_execution_exception Caused by: too_many_nested_clauses: Query contains too many nested clauses; maxClauseCount is set to 5173
The exact number of Elastic Agent policies required to cause the error depends in part on the size of the Elasticsearch cluster, but generally it can happen with volumes above approximately one thousand policies.
Impact
Currently there is no workaround for the issue, but a fix is planned for the next 8.12 patch release.
Note that according to our policy scaling recommendations, the current recommended maximum number of Elastic Agent policies supported by Fleet is 500.
Agents upgraded to 8.12.0 are stuck in a non-upgradeable state
Details
An issue discovered in Fleet Server prevents Elastic Agents that have been upgraded to version 8.12.0 from being upgraded again, using the Fleet UI, to version 8.12.1 or higher.
This issue is planned to be fixed in versions 8.12.2 and 8.13.0.
Impact
As a workaround, we recommend that you run the following API requests to update any documents in which upgrade_details is either null or not defined. Note that these steps must be run as a superuser.
POST _security/role/fleet_superuser
{
  "indices": [
    {
      "names": [".fleet*", ".kibana*"],
      "privileges": ["all"],
      "allow_restricted_indices": true
    }
  ]
}

POST _security/user/fleet_superuser
{
  "password": "password",
  "roles": ["superuser", "fleet_superuser"]
}

curl -sk -XPOST --user fleet_superuser:password -H 'content-type:application/json' \
  -H 'x-elastic-product-origin:fleet' \
  http://localhost:9200/.fleet-agents/_update_by_query \
  -d '{
    "script": {
      "source": "ctx._source.remove(\"upgrade_details\")",
      "lang": "painless"
    },
    "query": {
      "bool": {
        "must_not": {
          "exists": {
            "field": "upgrade_details"
          }
        }
      }
    }
  }'

DELETE _security/user/fleet_superuser
DELETE _security/role/fleet_superuser
After running these API requests, wait at least 10 minutes, and then the agents should be upgradeable again.
Remote Elasticsearch output does not support Elastic Defend response actions
Details
Support for a remote Elasticsearch output was introduced in this release to enable Elastic Agents to send integration or monitoring data to a remote Elasticsearch cluster. A bug has been found that causes Elastic Defend response actions to stop working when a remote Elasticsearch output is configured for an agent.
Impact
This bug is currently being investigated and is expected to be resolved in an upcoming release.
New features
The 8.12.0 release adds the following new and notable features.
Fleet
- Add Elastic Agent upgrade states and display each agent’s progress through the upgrade process. See View upgrade status for details. (#167539)
- Add support for preconfigured output secrets. (#172041)
- Add support for pipelines to process events at various levels of customization. (#170270)
- Add UI components to create and edit output secrets. (#169429)
- Add support for remote ES output. (#169252)
- Add the ability to specify secrets in outputs. (#169221)
- Add an integrations configs tab to display input templates. (#168827)
- Add a Kibana task to publish Agent metrics. (#168435)
Elastic Agent
- Add a "preset" field to Elasticsearch output configurations that applies a set of configuration overrides based on a desired performance priority. #37259 #3879 #3797
- Send the current agent upgrade details to Fleet Server as part of the check-in API’s request body. #3528 #3119
- Add new fields for retryable upgrade steps to upgrade details metadata. #3845 #3818
- Improve the upgrade watcher to no longer require root access. #3622
- Enable hints autodiscovery for Elastic Agent so that the host for a container in a Kubernetes pod no longer needs to be specified manually. #3575 #1453
- Enable hints autodiscovery for Elastic Agent so that a configuration can be defined through annotations for specific containers inside a pod. #3416 #3161
- Support flattened data_stream.* fields in an Elastic Agent input configuration. #3465 #3191
Enhancements
Fleet
Elastic Agent
- Use shorter timeouts for diagnostic requests unless CPU diagnostics are requested. #3794 #3197
- Add configuration parameters for the Kubernetes leader_election provider. #3625
- Remove duplicated tags that may be specified during an agent enrollment. #3740 #858
- Include upgrade details in an agent diagnostics bundle #3624 and in the elastic-agent status command output. #3615 #3119
- Start and stop the monitoring server based on the monitoring configuration. #3584 #2734
- Copy files concurrently to reduce the time taken to install and upgrade Elastic Agent on systems running SSDs. #3212
- Update elastic-agent-libs from version 0.7.2 to 0.7.3. #4000
Bug fixes
Fleet
- Allow agent upgrades if the agent's patch version is higher than Kibana's. (#173167)
- Fix secrets with dot-separated variable names. (#173115)
- Fix endpoint privilege management endpoints returning errors. (#171722)
- Fix expiration time for immediate bulk upgrades being too short. (#170879)
- Fix incorrect overwrite of logs-* and metrics-* data views on every integration install. (#170188)
- Create intermediate objects when using dynamic mappings. (#169981)
Elastic Agent
- Preserve build metadata in upgrade version strings. #3824 #3813
- Create a custom MarshalYAML() method to properly handle error fields in agent diagnostics. #3835 #2940
- Fix the Elastic Agent ignoring the agent.download.proxy_url setting during a policy update. #3803 #3560
- Only try to download an upgrade locally if the file:// prefix is specified for the source URI. #3682
- Fix logging calls that have missing arguments. #3679
- Update NodeJS version bundled with Heartbeat to v18.18.2. #3655
- Use a third-party library to track progress during install and uninstall operations. #3623 #3607
- Enable the Elastic Agent container to run on Azure Container Instances. #3778 #3711
- When a scheduled upgrade expires, set the upgrade state to failed. #3902 #3817
- Update elastic-agent-autodiscover to version 0.6.6 and fix default metadata configuration. #3938