Add remote clusters using API key authentication

API key authentication enables a local cluster to authenticate itself with a remote cluster via a cross-cluster API key. The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges.

All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to my-index on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key.

On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key.

In this model, cross-cluster operations use a dedicated server port (remote cluster interface) for communication between clusters. A remote cluster must enable this port for local clusters to connect. Configure Transport Layer Security (TLS) for this port to maximize security (as explained in Establish trust with a remote cluster).

The local cluster must trust the remote cluster on the remote cluster interface. This means that the local cluster trusts the remote cluster’s certificate authority (CA) that signs the server certificate used by the remote cluster interface. When establishing a connection, all nodes from the local cluster that participate in cross-cluster communication verify certificates from nodes on the other side, based on the TLS trust configuration.

To add a remote cluster using API key authentication:

  1. Review the prerequisites.
  2. Establish trust with the remote cluster.
  3. Connect to the remote cluster.
  4. Configure roles and users.

If you run into any issues, refer to Troubleshooting.

Prerequisites

  • The Elasticsearch security features need to be enabled on both clusters, on every node. Security is enabled by default. If it’s disabled, set xpack.security.enabled to true in elasticsearch.yml. Refer to General security settings.
  • The nodes of the local and remote clusters must be on version 8.10 or later.
  • The local and remote clusters must have an appropriate license. For more information, refer to https://www.elastic.co/subscriptions.

Establish trust with a remote cluster

If a remote cluster is part of an Elasticsearch Service deployment, it has a valid certificate by default. You can therefore skip steps related to certificates in these instructions.

On the remote cluster

  1. Enable the remote cluster server on every node of the remote cluster. In elasticsearch.yml:

    1. Set remote_cluster_server.enabled to true.
    2. Configure the bind and publish address for remote cluster server traffic, for example using remote_cluster.host. Without configuring the address, remote cluster traffic may be bound to the local interface, and remote clusters running on other machines can’t connect.
    3. Optionally, configure the remote server port using remote_cluster.port (defaults to 9443).
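
      For example, a minimal sketch of these settings combined in elasticsearch.yml (the host value is illustrative and should match your environment; the port shown is the default):

      remote_cluster_server.enabled: true
      remote_cluster.host: 0.0.0.0
      remote_cluster.port: 9443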
  2. Next, generate a certificate authority (CA) and a server certificate/key pair. On one of the nodes of the remote cluster, from the directory where Elasticsearch has been installed:

    1. Create a CA, if you don’t have a CA already:

      ./bin/elasticsearch-certutil ca --pem --out=cross-cluster-ca.zip --pass CA_PASSWORD

      Replace CA_PASSWORD with the password you want to use for the CA. You can remove the --pass option and its argument if you are not deploying to a production environment.

    2. Unzip the generated cross-cluster-ca.zip file. This compressed file contains the following content:

      /ca
      |_ ca.crt
      |_ ca.key
    3. Generate a certificate and private key pair for the nodes in the remote cluster:

      ./bin/elasticsearch-certutil cert --out=cross-cluster.p12 --pass=CERT_PASSWORD --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass=CA_PASSWORD --dns=example.com --ip=127.0.0.1
      • Replace CA_PASSWORD with the CA password from the previous step.
      • Replace CERT_PASSWORD with the password you want to use for the generated private key.
      • Use the --dns option to specify the relevant DNS name for the certificate. You can specify it multiple times for multiple DNS names.
      • Use the --ip option to specify the relevant IP address for the certificate. You can specify it multiple times for multiple IP addresses.
    4. If the remote cluster has multiple nodes, you can either:

      • create a single wildcard certificate for all nodes;
      • or, create separate certificates for each node either manually or in batch with the silent mode.
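
      For example, a batch run with the silent mode might look like the following sketch, where instances.yml and its host names are illustrative:

      instances:
        - name: "node1"
          dns: ["node1.example.com"]
        - name: "node2"
          dns: ["node2.example.com"]

      ./bin/elasticsearch-certutil cert --silent --in instances.yml --out cross-cluster-certs.zip --pass=CERT_PASSWORD --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass=CA_PASSWORD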
  3. On every node of the remote cluster:

    1. Copy the cross-cluster.p12 file from the earlier step to the config directory. If you didn’t create a wildcard certificate, make sure you copy the correct node-specific p12 file.
    2. Add the following configuration to elasticsearch.yml:

      xpack.security.remote_cluster_server.ssl.enabled: true
      xpack.security.remote_cluster_server.ssl.keystore.path: cross-cluster.p12
    3. Add the SSL keystore password to the Elasticsearch keystore:

      ./bin/elasticsearch-keystore add xpack.security.remote_cluster_server.ssl.keystore.secure_password

      When prompted, enter the CERT_PASSWORD from the earlier step.

  4. Restart the remote cluster.
  5. On the remote cluster, generate a cross-cluster API key that provides access to the indices you want to use for cross-cluster search or cross-cluster replication. You can use the Create Cross-Cluster API key API or Kibana.
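
    For example, the following sketch creates a cross-cluster API key that grants search access to target-index and replication access to leader-index. The key name and index names are illustrative; adjust the access definition to the indices you plan to use:

    POST /_security/cross_cluster/api_key
    {
      "name": "cross-cluster-key",
      "access": {
        "search": [
          {
            "names": [ "target-index" ]
          }
        ],
        "replication": [
          {
            "names": [ "leader-index" ]
          }
        ]
      }
    }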
  6. Copy the encoded key (the encoded field in the response) to a safe location. You will need it to connect to the remote cluster later.

On the local cluster

  1. On every node of the local cluster:

    1. Copy the ca.crt file generated on the remote cluster earlier into the config directory, renaming the file remote-cluster-ca.crt.
    2. Add the following configuration to elasticsearch.yml:

      xpack.security.remote_cluster_client.ssl.enabled: true
      xpack.security.remote_cluster_client.ssl.certificate_authorities: [ "remote-cluster-ca.crt" ]
    3. Add the cross-cluster API key, created on the remote cluster earlier, to the keystore:

      ./bin/elasticsearch-keystore add cluster.remote.ALIAS.credentials

      Replace ALIAS with the same name that you will use to create the remote cluster entry later. When prompted, enter the encoded cross-cluster API key created on the remote cluster earlier.

  2. Restart the local cluster to load changes to the keystore and settings.

Note: If you are configuring only the cross-cluster API key, you can call the Nodes reload secure settings API instead of restarting the cluster. Configuring the remote_cluster_client settings in elasticsearch.yml still requires a restart.
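
For example, the following request reloads secure settings on every node. If the Elasticsearch keystore is password protected, include the password in the request body (the value shown is a placeholder):

POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}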

Connect to a remote cluster

You must have the manage cluster privilege to connect remote clusters.

The local cluster uses the remote cluster interface to establish communication with remote clusters. The coordinating nodes in the local cluster establish long-lived TCP connections with specific nodes in the remote cluster. Elasticsearch requires these connections to remain open, even if the connections are idle for an extended period.

To add a remote cluster from Stack Management in Kibana:

  1. Select Remote Clusters from the side navigation.
  2. Enter a name (cluster alias) for the remote cluster.
  3. Specify the Elasticsearch endpoint URL, or the IP address or host name of the remote cluster followed by the remote cluster port (defaults to 9443). For example, cluster.es.eastus2.staging.azure.foundit.no:9443 or 192.168.1.1:9443.

Alternatively, use the cluster update settings API to add a remote cluster. You can also use this API to dynamically configure remote clusters for every node in the local cluster. To configure remote clusters on individual nodes in the local cluster, define static settings in elasticsearch.yml for each node.

The following request adds a remote cluster with an alias of cluster_one. This cluster alias is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices.

Python:

resp = client.cluster.put_settings(
    persistent={
        "cluster": {
            "remote": {
                "cluster_one": {
                    "seeds": [
                        "127.0.0.1:9443"
                    ]
                }
            }
        }
    },
)
print(resp)

JavaScript:

const response = await client.cluster.putSettings({
  persistent: {
    cluster: {
      remote: {
        cluster_one: {
          seeds: ["127.0.0.1:9443"],
        },
      },
    },
  },
});
console.log(response);

Console:

PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "cluster_one" : {    
          "seeds" : [
            "127.0.0.1:9443" 
          ]
        }
      }
    }
  }
}

In this example, cluster_one is the cluster alias of the remote cluster, and the seeds value specifies the hostname and remote cluster port of a seed node in the remote cluster.

You can use the remote cluster info API to verify that the local cluster is successfully connected to the remote cluster:

Python:

resp = client.cluster.remote_info()
print(resp)

Ruby:

response = client.cluster.remote_info
puts response

JavaScript:

const response = await client.cluster.remoteInfo();
console.log(response);

Console:

GET /_remote/info

The API response indicates that the local cluster is connected to the remote cluster with the cluster alias cluster_one:

{
  "cluster_one" : {
    "seeds" : [
      "127.0.0.1:9443"
    ],
    "connected" : true,
    "num_nodes_connected" : 1,  
    "max_connections_per_cluster" : 3,
    "initial_connect_timeout" : "30s",
    "skip_unavailable" : true, 
    "cluster_credentials": "::es_redacted::", 
    "mode" : "sniff"
  }
}

In this response:

  • num_nodes_connected is the number of nodes in the remote cluster that the local cluster is connected to.
  • skip_unavailable indicates whether to skip the remote cluster if searched through cross-cluster search but no nodes are available.
  • cluster_credentials, if present, indicates that the remote cluster has connected using API key authentication.

Dynamically configure remote clusters

Use the cluster update settings API to dynamically configure remote settings on every node in the cluster. The following request adds three remote clusters: cluster_one, cluster_two, and cluster_three.

The seeds parameter specifies the hostname and remote cluster port (default 9443) of a seed node in the remote cluster.

The mode parameter determines the configured connection mode, which defaults to sniff. Because cluster_one doesn't specify a mode, it uses the default. cluster_two explicitly sets the default sniff mode, and cluster_three connects in proxy mode.

Python:

resp = client.cluster.put_settings(
    persistent={
        "cluster": {
            "remote": {
                "cluster_one": {
                    "seeds": [
                        "127.0.0.1:9443"
                    ]
                },
                "cluster_two": {
                    "mode": "sniff",
                    "seeds": [
                        "127.0.0.1:9444"
                    ],
                    "transport.compress": True,
                    "skip_unavailable": True
                },
                "cluster_three": {
                    "mode": "proxy",
                    "proxy_address": "127.0.0.1:9445"
                }
            }
        }
    },
)
print(resp)

JavaScript:

const response = await client.cluster.putSettings({
  persistent: {
    cluster: {
      remote: {
        cluster_one: {
          seeds: ["127.0.0.1:9443"],
        },
        cluster_two: {
          mode: "sniff",
          seeds: ["127.0.0.1:9444"],
          "transport.compress": true,
          skip_unavailable: true,
        },
        cluster_three: {
          mode: "proxy",
          proxy_address: "127.0.0.1:9445",
        },
      },
    },
  },
});
console.log(response);

Console:

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_one": {
          "seeds": [
            "127.0.0.1:9443"
          ]
        },
        "cluster_two": {
          "mode": "sniff",
          "seeds": [
            "127.0.0.1:9444"
          ],
          "transport.compress": true,
          "skip_unavailable": true
        },
        "cluster_three": {
          "mode": "proxy",
          "proxy_address": "127.0.0.1:9445"
        }
      }
    }
  }
}

You can dynamically update settings for a remote cluster after the initial configuration. The following request updates the compression settings for cluster_two, and the compression and ping schedule settings for cluster_three.

When the compression or ping schedule settings change, all existing node connections must close and re-open, which can cause in-flight requests to fail.

Python:

resp = client.cluster.put_settings(
    persistent={
        "cluster": {
            "remote": {
                "cluster_two": {
                    "transport.compress": False
                },
                "cluster_three": {
                    "transport.compress": True,
                    "transport.ping_schedule": "60s"
                }
            }
        }
    },
)
print(resp)

Ruby:

response = client.cluster.put_settings(
  body: {
    persistent: {
      cluster: {
        remote: {
          cluster_two: {
            'transport.compress' => false
          },
          cluster_three: {
            'transport.compress' => true,
            'transport.ping_schedule' => '60s'
          }
        }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.cluster.putSettings({
  persistent: {
    cluster: {
      remote: {
        cluster_two: {
          "transport.compress": false,
        },
        cluster_three: {
          "transport.compress": true,
          "transport.ping_schedule": "60s",
        },
      },
    },
  },
});
console.log(response);

Console:

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_two": {
          "transport.compress": false
        },
        "cluster_three": {
          "transport.compress": true,
          "transport.ping_schedule": "60s"
        }
      }
    }
  }
}

You can delete a remote cluster from the cluster settings by passing null values for each remote cluster setting. The following request removes cluster_two from the cluster settings, leaving cluster_one and cluster_three intact:

Python:

resp = client.cluster.put_settings(
    persistent={
        "cluster": {
            "remote": {
                "cluster_two": {
                    "mode": None,
                    "seeds": None,
                    "skip_unavailable": None,
                    "transport.compress": None
                }
            }
        }
    },
)
print(resp)

Ruby:

response = client.cluster.put_settings(
  body: {
    persistent: {
      cluster: {
        remote: {
          cluster_two: {
            mode: nil,
            seeds: nil,
            skip_unavailable: nil,
            'transport.compress' => nil
          }
        }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.cluster.putSettings({
  persistent: {
    cluster: {
      remote: {
        cluster_two: {
          mode: null,
          seeds: null,
          skip_unavailable: null,
          "transport.compress": null,
        },
      },
    },
  },
});
console.log(response);

Console:

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_two": {
          "mode": null,
          "seeds": null,
          "skip_unavailable": null,
          "transport.compress": null
        }
      }
    }
  }
}

Statically configure remote clusters

If you specify settings in elasticsearch.yml, only the nodes with those settings can connect to the remote cluster and serve remote cluster requests.

Remote cluster settings that are specified using the cluster update settings API take precedence over settings that you specify in elasticsearch.yml for individual nodes.

In the following example, cluster_one, cluster_two, and cluster_three are arbitrary cluster aliases representing the connection to each cluster. These names are subsequently used to distinguish between local and remote indices.

cluster:
    remote:
        cluster_one:
            seeds: 127.0.0.1:9443
        cluster_two:
            mode: sniff
            seeds: 127.0.0.1:9444
            transport.compress: true      # Compression is explicitly enabled for requests to cluster_two.
            skip_unavailable: true        # Disconnected remote clusters are optional for cluster_two.
        cluster_three:
            mode: proxy
            proxy_address: 127.0.0.1:9445 # The address of the proxy endpoint used to connect to cluster_three.

Configure roles and users

To use a remote cluster for cross-cluster replication or cross-cluster search, you need to create user roles with remote indices privileges or remote cluster privileges on the local cluster.

You can manage users and roles from Stack Management in Kibana by selecting Security > Roles from the side navigation. You can also use the role management APIs to add, update, remove, and retrieve roles dynamically.

The following examples use the Create or update roles API. You must have at least the manage_security cluster privilege to use this API.

The cross-cluster API key used by the local cluster to connect to the remote cluster must have sufficient privileges to cover all remote indices privileges required by individual users.

Configure privileges for cross-cluster replication

Assuming the remote cluster is connected under the name of my_remote_cluster, the following request creates a role called remote-replication on the local cluster that allows replicating the remote leader-index index:

Python:

resp = client.security.put_role(
    name="remote-replication",
    cluster=[
        "manage_ccr"
    ],
    remote_indices=[
        {
            "clusters": [
                "my_remote_cluster"
            ],
            "names": [
                "leader-index"
            ],
            "privileges": [
                "cross_cluster_replication"
            ]
        }
    ],
)
print(resp)

JavaScript:

const response = await client.security.putRole({
  name: "remote-replication",
  cluster: ["manage_ccr"],
  remote_indices: [
    {
      clusters: ["my_remote_cluster"],
      names: ["leader-index"],
      privileges: ["cross_cluster_replication"],
    },
  ],
});
console.log(response);

Console:

POST /_security/role/remote-replication
{
  "cluster": [
    "manage_ccr"
  ],
  "remote_indices": [
    {
      "clusters": [ "my_remote_cluster" ],
      "names": [
        "leader-index"
      ],
      "privileges": [
        "cross_cluster_replication"
      ]
    }
  ]
}

After creating the local remote-replication role, use the Create or update users API to create a user on the local cluster and assign the remote-replication role. For example, the following request assigns the remote-replication role to a user named cross-cluster-user:

Python:

resp = client.security.put_user(
    username="cross-cluster-user",
    password="l0ng-r4nd0m-p@ssw0rd",
    roles=[
        "remote-replication"
    ],
)
print(resp)

JavaScript:

const response = await client.security.putUser({
  username: "cross-cluster-user",
  password: "l0ng-r4nd0m-p@ssw0rd",
  roles: ["remote-replication"],
});
console.log(response);

Console:

POST /_security/user/cross-cluster-user
{
  "password" : "l0ng-r4nd0m-p@ssw0rd",
  "roles" : [ "remote-replication" ]
}

Note that you only need to create this user on the local cluster.
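
With the role and user in place, replication can be started from the local cluster. As a minimal sketch, the following request creates a follower index that replicates the remote leader-index (follower-index is an illustrative name for the local follower index):

PUT /follower-index/_ccr/follow
{
  "remote_cluster": "my_remote_cluster",
  "leader_index": "leader-index"
}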

Configure privileges for cross-cluster search

Assuming the remote cluster is connected under the name of my_remote_cluster, the following request creates a remote-search role on the local cluster that allows searching the remote target-index index:

Python:

resp = client.security.put_role(
    name="remote-search",
    remote_indices=[
        {
            "clusters": [
                "my_remote_cluster"
            ],
            "names": [
                "target-index"
            ],
            "privileges": [
                "read",
                "read_cross_cluster",
                "view_index_metadata"
            ]
        }
    ],
)
print(resp)

JavaScript:

const response = await client.security.putRole({
  name: "remote-search",
  remote_indices: [
    {
      clusters: ["my_remote_cluster"],
      names: ["target-index"],
      privileges: ["read", "read_cross_cluster", "view_index_metadata"],
    },
  ],
});
console.log(response);

Console:

POST /_security/role/remote-search
{
  "remote_indices": [
    {
      "clusters": [ "my_remote_cluster" ],
      "names": [
        "target-index"
      ],
      "privileges": [
        "read",
        "read_cross_cluster",
        "view_index_metadata"
      ]
    }
  ]
}

After creating the remote-search role, use the Create or update users API to create a user on the local cluster and assign the remote-search role. For example, the following request assigns the remote-search role to a user named cross-search-user:

Python:

resp = client.security.put_user(
    username="cross-search-user",
    password="l0ng-r4nd0m-p@ssw0rd",
    roles=[
        "remote-search"
    ],
)
print(resp)

JavaScript:

const response = await client.security.putUser({
  username: "cross-search-user",
  password: "l0ng-r4nd0m-p@ssw0rd",
  roles: ["remote-search"],
});
console.log(response);

Console:

POST /_security/user/cross-search-user
{
  "password" : "l0ng-r4nd0m-p@ssw0rd",
  "roles" : [ "remote-search" ]
}

Note that you only need to create this user on the local cluster.
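
Once the user exists, they can run cross-cluster searches by prefixing the remote index name with the cluster alias. For example, the following request searches target-index on my_remote_cluster:

GET /my_remote_cluster:target-index/_search
{
  "query": {
    "match_all": {}
  }
}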