Heartbeat quick start: installation and configuration
This guide describes how to get started quickly collecting uptime data about your hosts. You’ll learn how to:
- install Heartbeat
- specify the protocols to monitor
- send uptime data to Elasticsearch
- visualize the uptime data in Kibana
Before you begin
You need Elasticsearch for storing and searching your data, and Kibana for visualizing and managing it.
To get started quickly, spin up a deployment of our hosted Elasticsearch Service. The Elasticsearch Service is available on AWS, GCP, and Azure. Try it out for free.
To install and run Elasticsearch and Kibana, see Installing the Elastic Stack.
Step 1: Install Heartbeat
Unlike most Beats, which you install on edge nodes, you typically install Heartbeat as part of a monitoring service that runs on a separate machine and possibly even outside of the network where the services that you want to monitor are running.
To download and install Heartbeat, use the commands that work with your system:
Version 8.17.0 of Heartbeat has not yet been released.
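Because the 8.17.0 artifacts were not yet published when this page was generated, the commands below are only a sketch of the usual download-and-install pattern on a DEB-based system, with the version number treated as a placeholder; check the download page for the released packages:

curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-8.17.0-amd64.deb
sudo dpkg -i heartbeat-8.17.0-amd64.deb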
The commands shown are for AMD platforms, but ARM packages are also available. Refer to the download page for the full list of available packages.
Other installation options
Step 2: Connect to the Elastic Stack
Connections to Elasticsearch and Kibana are required to set up Heartbeat.
Set the connection information in heartbeat.yml. To locate this configuration file, see Directory layout.
If you’re sending data to the Elasticsearch Service, specify the cloud.id of your deployment, and set cloud.auth to a user who is authorized to set up Heartbeat. For example:
cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw==" cloud.auth: "heartbeat_setup:YOUR_PASSWORD"
This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
If you’re running a self-managed cluster instead, set the host and port where Heartbeat can find the Elasticsearch installation, and set the username and password of a user who is authorized to set up Heartbeat. For example:
output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "heartbeat_internal"
  password: "YOUR_PASSWORD"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "b9a10bbe64ee9826abeda6546fc988c8bf798b41957c33d05db736716513dc9c"
This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
This example shows a hard-coded fingerprint, but you should store sensitive values in the secrets keystore. The fingerprint is a HEX encoded SHA-256 of a CA certificate. When you start Elasticsearch for the first time, security features such as network encryption (TLS) for Elasticsearch are enabled by default. If you are using the self-signed certificate generated by Elasticsearch when it is started for the first time, add its fingerprint here. The fingerprint is printed in the Elasticsearch startup logs, or you can refer to the connect clients to Elasticsearch documentation for other options on retrieving it. If you are providing your own SSL certificate to Elasticsearch, refer to the Heartbeat documentation on how to set up SSL.
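As a sketch of the keystore workflow mentioned in these callouts, you could create the keystore and add a key (the name ES_PWD below is only an illustration):

./heartbeat keystore create
./heartbeat keystore add ES_PWD

The stored value can then be referenced from heartbeat.yml instead of a hard-coded password:

output.elasticsearch:
  password: "${ES_PWD}"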
If you plan to use our pre-built Kibana dashboards, configure the Kibana endpoint. Skip this step if Kibana is running on the same host as Elasticsearch.
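A minimal sketch of these Kibana settings, assuming Kibana is reachable at mykibanahost:5601 (the host, username, and password values are placeholders):

setup.kibana:
  host: "mykibanahost:5601"
  username: "my_kibana_user"
  password: "YOUR_PASSWORD"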
The hostname and port of the machine where Kibana is running, for example, mykibanahost:5601. If you specify a path after the port number, include the scheme and port: http://mykibanahost:5601/path.

The username and password settings for Kibana are optional. If you don’t specify credentials for Kibana, Heartbeat uses the username and password specified for the Elasticsearch output.

To use the pre-built Kibana dashboards, this user must be authorized to view dashboards or have the kibana_admin built-in role.
To learn more about required roles and privileges, see Grant users access to secured resources.
You can send data to other outputs, such as Logstash, but that requires additional configuration and setup.
Step 3: Configure Heartbeat monitors
Heartbeat provides monitors to check the status of hosts at set intervals. Heartbeat currently provides monitors for ICMP, TCP, and HTTP (see Heartbeat overview for more about these monitors).
You configure each monitor individually. In heartbeat.yml, specify the list of monitors that you want to enable. Each item in the list begins with a dash (-). The following example configures Heartbeat to use three monitors: an icmp monitor, a tcp monitor, and an http monitor.
heartbeat.monitors:
- type: icmp
  schedule: '*/5 * * * * * *'
  hosts: ["myhost"]
  id: my-icmp-service
  name: My ICMP Service
- type: tcp
  schedule: '@every 5s'
  hosts: ["myhost:12345"]
  mode: any
  id: my-tcp-service
- type: http
  schedule: '@every 5s'
  urls: ["http://example.net"]
  service.name: apm-service-name
  id: my-http-service
  name: My HTTP Service
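Monitors accept additional options beyond the ones shown above. As a sketch (the id, URL, and expected status are illustrative), an http monitor can also assert on the response it receives:

- type: http
  id: my-http-check
  name: My HTTP Check
  schedule: '@every 10s'
  urls: ["http://example.net"]
  check.request.method: GET
  check.response.status: [200]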
To test your configuration file, change to the directory where the Heartbeat binary is installed, and run Heartbeat in the foreground with the following options specified: ./heartbeat test config -e. Make sure your config files are in the path expected by Heartbeat (see Directory layout), or use the -c flag to specify the path to the config file.
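For example, from the installation directory you might validate the configuration and, separately, the connection to the configured output (the config path below is only an illustration):

./heartbeat test config -e
./heartbeat test config -c /etc/heartbeat/heartbeat.yml -e
./heartbeat test output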
For more information about configuring Heartbeat, also see:
- Configure Heartbeat
- Config file format
- heartbeat.reference.yml: This reference configuration file shows all non-deprecated options. You’ll find it in the same location as heartbeat.yml.
Step 4: Configure the Heartbeat location
Heartbeat can be deployed in multiple locations so that you can detect differences in availability and response times across those locations. Configure the Heartbeat location to allow Kibana to display location-specific information on Uptime maps and perform Uptime anomaly detection based on location.
To configure the location of a Heartbeat instance, modify the add_observer_metadata processor in heartbeat.yml. The following example specifies the geo.name of the add_observer_metadata processor as us-east-1a:
# ============================ Processors ============================
processors:
  - add_observer_metadata:
      # Optional, but recommended geo settings for the location Heartbeat is running in
      geo:
        # Token describing this location
        name: us-east-1a
        # Lat, Lon
        #location: "37.926868, -78.024902"
In the default heartbeat.yml, uncomment the geo setting, uncomment name and set it to a token that describes the location where this Heartbeat instance is running, and optionally uncomment location and set it to the latitude and longitude of that location.
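For instance, a sketch with the optional location filled in, reusing the coordinates from the commented example (a real deployment would substitute its own values):

processors:
  - add_observer_metadata:
      geo:
        name: us-east-1a
        location: "37.926868, -78.024902"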
To test your configuration file, change to the directory where the Heartbeat binary is installed, and run Heartbeat in the foreground with the following options specified: ./heartbeat test config -e. Make sure your config files are in the path expected by Heartbeat (see Directory layout), or use the -c flag to specify the path to the config file.
For more information about configuring Heartbeat, also see:
- Configure Heartbeat
- Config file format
- heartbeat.reference.yml: This reference configuration file shows all non-deprecated options. You’ll find it in the same location as heartbeat.yml.
Step 5: Set up assets
Heartbeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:
- Make sure the user specified in heartbeat.yml is authorized to set up Heartbeat.
- From the installation directory, run the setup command for your installation type (DEB and RPM, MacOS and Linux archive, or Windows, respectively):

heartbeat setup -e
./heartbeat setup -e
PS > .\heartbeat.exe setup -e
-e is optional and sends output to standard error instead of the configured log output.
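If the credentials in heartbeat.yml belong to a publishing user rather than a setup user, one sketch (the user name follows the heartbeat_setup example shown earlier and is only an illustration) is to override the output credentials just for this command with -E:

./heartbeat setup -e \
  -E output.elasticsearch.username=heartbeat_setup \
  -E output.elasticsearch.password=YOUR_PASSWORD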
This step loads the recommended index template for writing to Elasticsearch. It does not install Heartbeat dashboards. Heartbeat dashboards and installation steps are available in the uptime-contrib GitHub repository.
A connection to Elasticsearch (or Elasticsearch Service) is required to set up the initial environment. If you’re using a different output, such as Logstash, see Load the index template manually.
Step 6: Start Heartbeat
Before starting Heartbeat, modify the user credentials in heartbeat.yml and specify a user who is authorized to publish events.
To start Heartbeat, run:
sudo service heartbeat-elastic start
If you use an init.d
script to start Heartbeat, you can’t specify command
line flags (see Command reference). To specify flags, start Heartbeat in
the foreground.
Also see Heartbeat and systemd.
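If you need to pass command line flags, or you installed Heartbeat from a MacOS or Linux archive, a minimal sketch is to start Heartbeat in the foreground from the installation directory:

sudo chown root heartbeat.yml
sudo ./heartbeat -e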
You’ll be running Heartbeat as root, so you need to change ownership of the configuration file, or run Heartbeat with strict permission checks disabled (--strict.perms=false).
On Windows, start the Heartbeat service from a PowerShell prompt:

PS C:\Program Files\heartbeat> Start-Service heartbeat

By default, Windows log files are stored in C:\ProgramData\heartbeat\Logs.
Heartbeat is now ready to check the status of your services and send events to your defined output.
Step 7: View your data in Kibana
Heartbeat comes with pre-built Kibana dashboards and UIs for visualizing the status of your services. The dashboards are available in the uptime-contrib GitHub repository.
If you loaded the dashboards earlier, open them now.
To open the dashboards:
- Launch Kibana:
  - If you’re using Elastic Cloud, log in to your Elastic Cloud account and navigate to the Kibana endpoint in your deployment.
  - Otherwise, point your browser to http://localhost:5601, replacing localhost with the name of the Kibana host.
- In the side navigation, click Discover. To see Heartbeat data, make sure the predefined heartbeat-* data view is selected. If you don’t see data in Kibana, try changing the time filter to a larger range. By default, Kibana shows the last 15 minutes.
- In the side navigation, click Dashboard, then select the dashboard that you want to open.
The dashboards are provided as examples. We recommend that you customize them to meet your needs.
What’s next?
Now that you have your uptime data streaming into Elasticsearch, learn how to unify your logs, metrics, uptime, and application performance data.
- Ingest data from other sources by installing and configuring other Elastic Beats:
  - Metricbeat to capture infrastructure metrics
  - Filebeat to capture logs
  - Winlogbeat to capture Windows event logs
  - Elastic APM to capture application performance metrics
  - Auditbeat to capture audit events
- Use the Observability apps in Kibana to search across all your data:
  - Metrics app to explore metrics about systems and services across your ecosystem
  - Logs app to tail related log data in real time
  - Uptime app to monitor availability issues across your apps and services
  - APM app to monitor application performance
  - SIEM app to analyze security events