Upgrade migrations

Every time Kibana is upgraded, it performs an upgrade migration to ensure that all saved objects are compatible with the new version.

Kibana 7.12.0 and later uses a new migration process and index naming scheme. Be sure to read the documentation for your version of Kibana before proceeding.

The following instructions assume Kibana is using the default index names. If the kibana.index or xpack.tasks.index configuration settings were changed, these instructions will have to be adapted accordingly.

Background

Saved objects are stored in two indices:

  • .kibana_{kibana_version}_001, or, if the kibana.index configuration setting is set, .{kibana.index}_{kibana_version}_001. For example, for Kibana v7.12.0: .kibana_7.12.0_001.
  • .kibana_task_manager_{kibana_version}_001, or, if the xpack.tasks.index configuration setting is set, .{xpack.tasks.index}_{kibana_version}_001. For example, for Kibana v7.12.0: .kibana_task_manager_7.12.0_001.

The index aliases .kibana and .kibana_task_manager will always point to the most up-to-date saved object indices.
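
To confirm which concrete index each alias currently points to, you can query the cat aliases API (the index names in the response depend on your Kibana version and configuration):

GET /_cat/aliases/.kibana,.kibana_task_manager?v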

The first time a newer Kibana version starts, it performs an upgrade migration before starting plugins or serving HTTP traffic. To prevent losing acknowledged writes, old nodes should be shut down before starting the upgrade. To reduce the likelihood of old nodes losing acknowledged writes, Kibana 7.12.0 and later adds a write block to the outdated index. Table 1 lists the saved object indices used by previous versions of Kibana.

Table 1. Saved object indices and aliases per Kibana version

Upgrading from version    Outdated index (alias)
----------------------    ------------------------------------------------------------
6.0.0 through 6.4.x       .kibana
                          .kibana_task_manager_7.12.0_001 (.kibana_task_manager alias)
6.5.0 through 7.3.x       .kibana_N (.kibana alias)
7.4.0 through 7.11.x      .kibana_N (.kibana alias)
                          .kibana_task_manager_N (.kibana_task_manager alias)

Upgrading multiple Kibana instances

When upgrading several Kibana instances connected to the same Elasticsearch cluster, ensure that all outdated instances are shut down before starting the upgrade.

Kibana does not support rolling upgrades. However, once all outdated instances are shut down, the upgraded instances can be started in parallel, in which case all of them will participate in the upgrade migration.

For large deployments with more than 10 Kibana instances and more than 10 000 saved objects, the upgrade downtime can be reduced by bringing up a single Kibana instance and waiting for it to complete the upgrade migration before bringing up the remaining instances.
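
Because the migration reindexes saved objects into a new index, one optional way to observe progress from the Elasticsearch side while that first instance migrates (this is not required by the procedure) is to list the running reindex tasks:

GET /_tasks?actions=*reindex*&detailed=true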

Preventing migration failures

This section highlights common causes of Kibana upgrade failures and how to prevent them.

timeout_exception or receive_timeout_transport_exception

There is a known issue in v7.12.0 for users who tried the Fleet beta. Upgrade migrations fail because of a large number of documents in the .kibana index.

This can cause Kibana to log errors like:

Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Error: [receive_timeout_transport_exception]: [instance-0000000002][10.32.1.112:19541][cluster:monitor/task/get] request_id [2648] timed out after [59940ms]

Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Error: [timeout_exception]: Timed out waiting for completion of [org.elasticsearch.index.reindex.BulkByScrollTask@6a74c54]

See https://github.com/elastic/kibana/issues/95321 for instructions to work around this issue.

Corrupt saved objects

We highly recommend testing your Kibana upgrade in a development cluster to discover and remedy problems caused by corrupt documents, especially when there are custom integrations creating saved objects in your environment.

Saved objects that were corrupted through manual editing or integrations will cause migration failures with a log message like Failed to transform document. Transform: index-pattern:7.0.0\n Doc: {...} or Unable to migrate the corrupt Saved Object document .... Corrupt documents will have to be fixed or deleted before an upgrade migration can succeed.

For example, given the following error message:

Unable to migrate the corrupt saved object document with _id: 'marketing_space:dashboard:e3c5fc71-ac71-4805-bcab-2bcc9cc93275'. To allow migrations to proceed, please delete this document from the [.kibana_7.12.0_001] index.

To delete the document that is causing the migration to fail, follow these steps:

  1. Remove the write block which the migration system has placed on the previous index:

    PUT .kibana_7.12.0_001/_settings
    {
      "index": {
        "blocks.write": false
      }
    }
  2. Delete the corrupt document:

    DELETE .kibana_7.12.0_001/_doc/marketing_space:dashboard:e3c5fc71-ac71-4805-bcab-2bcc9cc93275
  3. Restart Kibana.

In this example, the dashboard with ID e3c5fc71-ac71-4805-bcab-2bcc9cc93275 that belongs to the space marketing_space will no longer be available.

Be sure you have a snapshot before you delete the corrupt document. If restoring from a snapshot is not an option, it is recommended to also delete the temp and target indices the migration created before restarting Kibana and retrying.
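
For example, if the failed upgrade targeted v7.13.0, the temporary and target indices created by that migration could be removed before retrying (the index names follow the naming scheme described above; adjust them for your target version):

DELETE /.kibana_7.13.0_reindex_temp,.kibana_7.13.0_001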

User-defined index templates that cause new .kibana* indices to have incompatible settings or mappings

Matching index templates which specify settings.refresh_interval or mappings are known to interfere with Kibana upgrades.

Prevention: narrow down the index patterns of any user-defined index templates to ensure that these won’t apply to new .kibana* indices.
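
If you are on Elasticsearch 7.9 or later and your templates are composable index templates, the simulate index API shows which templates would apply to a given index name (the index name here is illustrative):

POST /_index_template/_simulate_index/.kibana_7.12.0_001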

Note: Kibana versions before 6.5 create their own index template called kibana_index_template:.kibana with the index pattern .kibana. This index template does not interfere and does not need to be changed or removed.

An unhealthy Elasticsearch cluster

Problems with your Elasticsearch cluster can prevent Kibana upgrades from succeeding. Ensure that your cluster has:

  • enough free disk space, at least twice the amount of storage taken up by the .kibana and .kibana_task_manager indices
  • sufficient heap size
  • a "green" cluster status
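
These conditions can be checked before starting the upgrade, for example:

GET /_cluster/health
GET /_cat/allocation?v
GET /_cat/nodes?v&h=name,heap.percent
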
Unsupported routing allocation Elasticsearch cluster settings

As part of upgrade migrations, Kibana creates indices with auto-expanded replicas. When routing allocation is disabled or restricted (cluster.routing.allocation.enable: none/primaries/new_primaries), the replica shards won’t be assigned, causing the upgrade to fail.

If routing allocation does not allow auto-expanded replica creation, you might see a log entry similar to:

Action failed with 'Timeout waiting for the status of the [.kibana_task_manager_7.17.1_reindex_temp] index to become 'yellow''. Retrying attempt 11 in 64 seconds.

To prevent the issue, remove the transient and persistent routing allocation settings if they are set:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": null
  },
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
Different versions of Kibana connected to the same Elasticsearch index

When different versions of Kibana attempt an upgrade migration in parallel, the migration can fail. Ensure that all Kibana instances run the same version, configuration, and plugins.

Incompatible xpack.tasks.index configuration setting

For Kibana versions prior to 7.5.1, if the task manager index is set to .tasks with the configuration setting xpack.tasks.index: ".tasks", upgrade migrations will fail. Kibana 7.5.1 and later prevents this by refusing to start with an incompatible configuration setting.
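
The incompatible configuration looks like this in kibana.yml:

xpack.tasks.index: ".tasks"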

Resolving migration failures

If Kibana terminates unexpectedly while migrating a saved object index, it will automatically attempt the migration again once the process has restarted. Do not delete any saved object indices to attempt to fix a failed migration. Unlike previous versions, Kibana 7.12.0 and later does not require deleting any indices to release a failed migration lock.

If upgrade migrations fail repeatedly, follow the advice in preventing migration failures. Once the root cause for the migration failure has been addressed, Kibana will automatically retry the migration without any further intervention. If you’re unable to resolve a failed migration following these steps, please contact support.

Rolling back to a previous version of Kibana

If you’ve followed the advice in preventing migration failures and resolving migration failures and Kibana is still unable to upgrade successfully, you might choose to roll back Kibana until you’re able to identify and fix the root cause.

Before rolling back Kibana, ensure that the version you wish to roll back to is compatible with your Elasticsearch cluster. If the version you’re rolling back to is not compatible, you will also have to roll back Elasticsearch.

Any changes made after an upgrade will be lost when rolling back to a previous version.

In order to roll back after a failed upgrade migration, the saved object indices have to be rolled back to be compatible with the previous Kibana version.

Rollback by restoring a backup snapshot:
  1. Before proceeding, take a snapshot that contains the kibana feature state or all .kibana* indices.
  2. Shut down all Kibana instances to be 100% sure that no instances are currently performing a migration.
  3. Delete all saved object indices with DELETE /.kibana*
  4. Restore the kibana feature state or all .kibana* indices and their aliases from the snapshot.
  5. Start up all Kibana instances on the older version you wish to rollback to.
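
Assuming a snapshot repository named my_repository and a snapshot named my_snapshot (both names are placeholders for your own), restoring only the kibana feature state could look like the request below; feature_states is available in Elasticsearch 7.12 and later, and the "-*" indices pattern avoids restoring any regular indices:

POST /_snapshot/my_repository/my_snapshot/_restore
{
  "indices": "-*",
  "feature_states": ["kibana"]
}
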
(Not recommended) Rollback without a backup snapshot:
  1. Shut down all Kibana instances to be 100% sure that there are no Kibana instances currently performing a migration.
  2. Take a snapshot that includes the kibana feature state or all .kibana* indices.
  3. Delete the version-specific indices created by the failed upgrade migration. For example, if you wish to roll back from a failed upgrade to v7.12.0: DELETE /.kibana_7.12.0_*,.kibana_task_manager_7.12.0_*
  4. Inspect the output of GET /_cat/aliases. If either the .kibana or .kibana_task_manager alias is missing, it will have to be created manually. Find the latest index from the output of GET /_cat/indices and create the missing alias to point to it. For example, if the .kibana alias is missing and the latest index is .kibana_3, create the alias with POST /.kibana_3/_aliases/.kibana.
  5. Remove the write block from the rollback indices: PUT /.kibana,.kibana_task_manager/_settings {"index.blocks.write": false}
  6. Start up Kibana on the older version you wish to rollback to.

Handling old .kibana_N indices

After migrations have completed, there will be multiple saved object indices in Elasticsearch (.kibana_1, .kibana_2, .kibana_7.12.0_001, etc.). Kibana only uses the indices that the .kibana and .kibana_task_manager aliases point to. The other indices can be safely deleted, but are left around as a matter of historical record, and to facilitate rolling Kibana back to a previous version.
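
For example, after confirming via GET /_cat/aliases that neither index is a current alias target, old indices could be removed with a request like the following (the index names are illustrative):

DELETE /.kibana_1,.kibana_2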