Ingest uptime data with Heartbeat

If you haven’t already, you need to install Elasticsearch for storing and searching your data, and Kibana for visualizing and managing it. For more information, see Spin up the Elastic Stack.

Install and configure Heartbeat on your servers to periodically check the status of your services. Heartbeat uses probing to monitor the availability of services and helps verify that you’re meeting your service level agreements for service uptime. You typically install Heartbeat as part of a monitoring service that runs on a separate machine and possibly even outside of the network where the services that you want to monitor are running.

Deployment considerations

There are multiple ways to deploy Uptime and Heartbeat. A guiding principle is that when an outage takes down the service being monitored, it should not take down Heartbeat.

Heartbeat is commonly run as a centralized service within a data center. While it’s possible to run it as a separate "sidecar" process paired with each process/container, we recommend against it. Running Heartbeat centrally ensures you will still be able to see monitoring data in the event of an overloaded, disconnected, or otherwise malfunctioning server.

For further redundancy, you may want to deploy multiple instances of Heartbeat across geographic and network boundaries to provide more data.

For example:

  • A site served from a content delivery network (CDN) with points of presence (POPs) around the globe.

    To check if your site is reachable via CDN POPs, deploy multiple Heartbeat instances at different data centers around the world.

  • A service within a single data center that is accessed across multiple VPNs.

    Set up one Heartbeat instance within the VPN the service operates from, and another within an additional VPN that users access the service from. In the event of an outage, having both instances helps pinpoint the network errors.

  • A single service running primarily in a US east coast data center, with a hot failover located in a US west coast data center.

    In each data center, run a Heartbeat instance that checks both the local copy of the service and its counterpart across the country. Set up two monitors in each region, one for the local service and one for the remote service, as shown in the sketch after this list. In the event of a data center failure, it will be immediately apparent whether the service has a connectivity issue to the outside world, or whether the failure is only internal.
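
A minimal sketch of such a monitor pair for the east coast instance, using the http monitor type. The service hostnames and health-check paths are hypothetical placeholders:

heartbeat.monitors:
- type: http
  name: service-local
  schedule: '@every 10s'
  hosts: ["https://service.us-east.example.com/health"]
- type: http
  name: service-remote
  schedule: '@every 10s'
  hosts: ["https://service.us-west.example.com/health"]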

Step 1: Install Heartbeat

To download and install Heartbeat, use the commands that work with your system:

curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-7.17.25-amd64.deb
sudo dpkg -i heartbeat-7.17.25-amd64.deb
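
These commands install the Debian package. On RPM-based distributions, the equivalent commands are:

curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-7.17.25-x86_64.rpm
sudo rpm -vi heartbeat-7.17.25-x86_64.rpm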
For other operating systems and installation methods, see the Heartbeat installation documentation.

Step 2: Connect to Elasticsearch and Kibana

Connections to Elasticsearch and Kibana are required to set up Heartbeat.

Set the connection information in heartbeat.yml. To locate this configuration file, see Directory layout.

Specify the cloud.id of your Elasticsearch Service, and set cloud.auth to a user who is authorized to set up Heartbeat. For example:

cloud.id: "staging:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRjZWM2ZjI2MWE3NGJmMjRjZTMzYmI4ODExYjg0Mjk0ZiRjNmMyY2E2ZDA0MjI0OWFmMGNjN2Q3YTllOTYyNTc0Mw=="
cloud.auth: "heartbeat_setup:YOUR_PASSWORD" 

This example shows a hard-coded password, but you should store sensitive values in the secrets keystore.
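
For example, you can create a keystore, add a secret to it, and then reference the secret from heartbeat.yml (the key name ES_PWD is an arbitrary example):

heartbeat keystore create
heartbeat keystore add ES_PWD

With the secret stored, the configuration becomes:

cloud.auth: "heartbeat_setup:${ES_PWD}"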

You can send data to other outputs, such as Logstash, but that requires additional configuration and setup.
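
If you are connecting to a self-managed cluster rather than Elasticsearch Service, set the Elasticsearch and Kibana connection details directly instead of cloud.id and cloud.auth; a minimal sketch with placeholder hosts and credentials:

output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "heartbeat_internal"
  password: "YOUR_PASSWORD"

setup.kibana:
  host: "https://mykibanahost:5601"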

To learn more about required roles and privileges, see Grant users access to secured resources.

Step 3: Configure Heartbeat monitors

Heartbeat provides monitors that check the status of hosts at set intervals. It currently supports ICMP, TCP, and HTTP monitors (see Heartbeat overview for more about these monitors).

You configure each monitor individually. In heartbeat.yml, specify the list of monitors that you want to enable. Each item in the list begins with a dash (-). The following example configures Heartbeat to use two monitors, an icmp monitor and a tcp monitor:

heartbeat.monitors:
- type: icmp
  schedule: '*/5 * * * * * *' 
  hosts: ["myhost"]
- type: tcp
  schedule: '@every 5s' 
  hosts: ["myhost:12345"]
  mode: any 

The icmp monitor is scheduled to run exactly every 5 seconds (10:00:00, 10:00:05, and so on). The schedule option uses a cron-like syntax based on the cronexpr package.

The tcp monitor is set to run every 5 seconds from the time when Heartbeat was started. Heartbeat adds the @every keyword to the syntax provided by the cronexpr package.

The mode specifies whether to ping one IP (any) or all resolvable IPs (all).
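
An http monitor, not shown above, follows the same pattern; a minimal sketch that checks a placeholder endpoint and expects an HTTP 200 response:

- type: http
  schedule: '@every 5s'
  hosts: ["http://myhost:8080/health"]
  check.response.status: [200]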

To test your configuration file, change to the directory where the Heartbeat binary is installed, and run Heartbeat in the foreground with the following options specified: ./heartbeat test config -e. Make sure your config files are in the path expected by Heartbeat (see Directory layout), or use the -c flag to specify the path to the config file.

For more information about configuring Heartbeat, also see the Heartbeat configuration reference and the annotated heartbeat.reference.yml file that ships with the package.

Step 4: Configure Heartbeat location

Heartbeat can be deployed in multiple locations so that you can detect differences in availability and response times across those locations. Configure the Heartbeat location to allow Kibana to display location-specific information on Uptime maps and perform Uptime anomaly detection based on location.

To configure the location of a Heartbeat instance, modify the add_observer_metadata processor in heartbeat.yml. The following example specifies the geo.name of the add_observer_metadata processor as us-east-1a:

# ============================ Processors ============================

processors:
  - add_observer_metadata:
      # Optional, but recommended geo settings for the location Heartbeat is running in
      geo: 
        # Token describing this location
        name: us-east-1a 
        # Lat, Lon
        #location: "37.926868, -78.024902" 

  • Uncomment the geo setting.

  • Uncomment name and assign the name of the location of the Heartbeat server.

  • Optionally uncomment location and assign the latitude and longitude.

As in Step 3, you can test your updated configuration file by running ./heartbeat test config -e from the directory where the Heartbeat binary is installed.

Step 5: Set up assets

Heartbeat comes with predefined assets for parsing, indexing, and visualizing your data. To load these assets:

  1. Make sure the user specified in heartbeat.yml is authorized to set up Heartbeat.
  2. From the installation directory, run:

    heartbeat setup -e

    -e is optional and sends output to standard error instead of the configured log output.

This step loads the recommended index template for writing to Elasticsearch. It does not install the Heartbeat dashboards; those dashboards and their installation steps are available in the uptime-contrib GitHub repository.

A connection to Elasticsearch (or Elasticsearch Service) is required to set up the initial environment. If you’re using a different output, such as Logstash, see Load the index template manually.
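
For example, if you ship events through Logstash, you can still load the index template by temporarily pointing setup at Elasticsearch (the host below is a placeholder):

heartbeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'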

Step 6: Start Heartbeat

Before starting Heartbeat, modify the user credentials in heartbeat.yml and specify a user who is authorized to publish events.

To start Heartbeat, run:

sudo service heartbeat-elastic start

If you use an init.d script to start Heartbeat, you can’t specify command line flags (see Command reference). To specify flags, start Heartbeat in the foreground.

Also see Heartbeat and systemd.
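
On systemd-based systems, the DEB and RPM packages install the service as heartbeat-elastic, so the equivalent commands are:

sudo systemctl enable heartbeat-elastic
sudo systemctl start heartbeat-elastic
sudo journalctl -u heartbeat-elastic -f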

Heartbeat is now ready to check the status of your services and send events to your defined output.

Step 7: View your data in Kibana

Let’s confirm your data is being correctly ingested into your cluster.

  1. Launch Kibana:

    1. Log in to your Elastic Cloud account.
    2. Navigate to the Kibana endpoint in your deployment.
  2. In the side navigation, click Observability > Uptime.
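
Before opening the app, you can also confirm that documents are arriving by querying the Heartbeat indices directly; a sketch using curl, with placeholder endpoint and credentials:

curl -u heartbeat_setup:YOUR_PASSWORD "https://YOUR_ES_ENDPOINT:9200/heartbeat-*/_count?pretty"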

Now let’s have a look at the Uptime app.