From b47b9d8419fee70dc54756924ce0529eefb34291 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Edu=20Gonz=C3=A1lez=20de=20la=20Herr=C3=A1n?= <25320357+eedugon@users.noreply.github.com> Date: Mon, 19 May 2025 05:21:00 +0200 Subject: [PATCH 1/6] updates to main migrate doc done, wip on system indices --- manage-data/migrate.md | 129 +++++++----------- manage-data/migrate/_snippets/setup-repo.md | 61 +++++++++ .../migrate/migrate-internal-indices.md | 21 ++- 3 files changed, 124 insertions(+), 87 deletions(-) create mode 100644 manage-data/migrate/_snippets/setup-repo.md diff --git a/manage-data/migrate.md b/manage-data/migrate.md index aea8daf965..3ccdbb6ee2 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -20,7 +20,7 @@ You might have switched to {{ech}} or {{ece}} for any number of reasons, and you * Reindex from a remote cluster, which rebuilds the index from scratch. * Restore from a snapshot, which copies the existing indices. -### Before you begin [ec_migrate_before_you_begin] +## Before you begin [ec_migrate_before_you_begin] Depending on which option that you choose, you might have limitations or need to do some preparation beforehand. @@ -35,33 +35,36 @@ Reindex from a remote cluster Restore from a snapshot : The new cluster must be the same size as your old one, or larger, to accommodate the data. The new cluster must also be an Elasticsearch version that is compatible with the old cluster (check [Elasticsearch snapshot version compatibility](/deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for details). If you have not already done so, you will need to [set up snapshots for your old cluster](/deploy-manage/tools/snapshot-and-restore/self-managed.md) using a repository that can be accessed from the new cluster. 
-Migrating internal {{es}} indices -: For {{ech}}, if you are migrating internal {{es}} indices from another cluster, specifically the `.kibana` index or the `.security` index, there are two options: +Migrating system {{es}} indices +: In {{es}} 8.0 and later versions, [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) are the only way to back up and restore system indices and system data streams, such as `.kibana` or `.security`. + + Check [Migrating internal indices](./migrate/migrate-internal-indices.md) to restore the internal {{es}} indices from a snapshot. - * Use the steps on this page to reindex the internal indices from a remote cluster. The steps for reindexing internal indices and regular, data indices are the same. - * Check [Migrating internal indices](migrate/migrate-internal-indices.md) to restore the internal {{es}} indices from a snapshot. - -::::{warning} -Before you migrate your {{es}} data, [define your index mappings](/manage-data/data-store/mapping.md) on the new cluster. Index mappings are unable to migrate during reindex operations. -:::: - -### Index from the source [ec-index-source] +## Index from the source [ec-index-source] If you still have access to the original data source, outside of your old {{es}} cluster, you can load the data from there. This might be the simplest option, allowing you to choose the {{es}} version and take advantage of the latest features. You have the option to use any ingestion method that you want—​Logstash, Beats, the {{es}} clients, or whatever works best for you. If the original source isn’t available or has other issues that make it non-viable, there are still two more migration options, getting the data from a remote cluster or restoring from a snapshot. 
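As a minimal sketch of this option (the index name, field names, and documents below are hypothetical), data can be loaded directly with the bulk API from the API Console or {{kib}} Dev Tools:

```sh
POST my-new-index/_bulk
{ "index": {} }
{ "message": "first re-ingested document", "@timestamp": "2025-05-19T00:00:00Z" }
{ "index": {} }
{ "message": "second re-ingested document", "@timestamp": "2025-05-19T00:01:00Z" }
```

In practice you would typically drive this through Logstash, Beats, or one of the {{es}} client libraries rather than hand-written bulk requests.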
-### Reindex from a remote cluster [ech-reindex-remote] +## Reindex from a remote cluster [ech-reindex-remote] + +Through the {{es}} [reindex API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex), you can connect your new {{es}} deployment remotely to your old {{es}} cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch and it can be more resource intensive to run than a [snapshot restore](#ec-restore-snapshots). -Through the {{es}} reindex API, you can connect your new {{es}} Service deployment remotely to your old {{es}} cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch and it can be more resource intensive to run. +::::{warning} +Reindex operations do not migrate index mappings, settings, or associated index templates from the source cluster. + +Before migrating your {{es}} data, define the necessary [mappings](/manage-data/data-store/mapping.md) and [templates](/manage-data/data-store/templates.md) on the new cluster. The easiest way to do this is to copy the relevant index templates from the old cluster to the new one in advance. +:::: + +Follow these steps to reindex data remotely: 1. Log in to {{ech}} or {{ece}}. 2. Select a deployment or create one. -3. If the old {{es}} cluster is on a remote host (any type of host accessible over the internet), you need to make sure that the host can be accessed. Access is determined by the {{es}} `reindex.remote.whitelist` user setting. +3. Ensure that the new {{es}} cluster can access the remote source cluster to perform the reindex operation. Access is controlled by the {{es}} `reindex.remote.whitelist` user setting. Domains matching the pattern `["*.io:*", "*.com:*"]` are allowed by default, so if your remote host URL matches that pattern you do not need to explicitly define `reindex.remote.whitelist`. 
- Otherwise, if your remote endpoint is not covered by the default settings, adjust the setting to add the remote {{es}} cluster as an allowed host:
+    Otherwise, if your remote endpoint is not covered by the default pattern, adjust the setting to add the remote {{es}} cluster as an allowed host:

     1. From your deployment menu, go to the **Edit** page.
     2. In the **Elasticsearch** section, select **Manage user settings and extensions**. For deployments with existing user settings, you may have to expand the **Edit elasticsearch.yml** caret for each node type instead.
@@ -75,8 +78,9 @@ Through the {{es}} reindex API, you can connect your new {{es}} Service deployme
         4. Save your changes.

-4. From the **API Console** or in the Kibana Console app, create the destination index.
-5. Copy the index from the remote cluster:
+4. Using the **API Console** or {{kib}}, either create the destination index with the appropriate settings and [mappings](/manage-data/data-store/mapping.md), or ensure that the relevant [index templates](/manage-data/data-store/templates.md) are in place.
+
+5. Using the **API Console** or [{{kib}} Dev Tools Console](/explore-analyze/query-filter/tools/console.md), reindex the data remotely from the old cluster:

     ```sh
     POST _reindex
@@ -104,86 +108,49 @@ Through the {{es}} reindex API, you can connect your new {{es}} Service deployme
       GET INDEX-NAME/_search?pretty
       ```

-7. You can remove the reindex.remote.whitelist user setting that you added previously.
+7. If you are not planning to reindex more data from the remote cluster, you can remove the `reindex.remote.whitelist` user setting that you added previously.

+## Restore from a snapshot [ec-restore-snapshots]

-### Restore from a snapshot [ec-restore-snapshots]
+Restoring from a snapshot is often the fastest and most reliable way to migrate data between {{es}} clusters. It preserves mappings, settings, and optionally parts of the cluster state such as index templates, component templates, and system indices.
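Before committing to this option, you can inspect what a given snapshot contains; the repository and snapshot names below are placeholders:

```sh
GET _snapshot/my-repository/my-snapshot
```

The response lists the indices, data streams, and feature states captured in the snapshot.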
-If you cannot connect to a remote index for whatever reason, such as if it’s in a non-working state, you can try restoring from the most recent working snapshot. +System indices can be easily restored by including their corresponding [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) in the restore operation, allowing you to retain internal configurations related to security, {{kib}}, or other stack features. -::::{note} -For {{ece}} users, while it is most common to have Amazon S3 buckets, you should be able to restore from any addressable external storage that has your {{es}} snapshots. -:::: +This method is especially useful when you want to fully replicate the source cluster or when remote reindexing is not possible, for example if the source cluster is in a degraded or unreachable state. -1. On your old {{es}} cluster, choose an option to get the name of your snapshot repository bucket: +To use this method, the new cluster must have access to the snapshot repository that contains data from the old cluster. Also ensure that both clusters use [compatible versions](/deploy-manage/tools/snapshot-and-restore.md#snapshot-compatibility). - ```sh - GET /_snapshot - GET /_snapshot/_all - ``` - -2. Get the snapshot name: - - ```sh - GET /_snapshot/NEW-REPOSITORY-NAME/_all - ``` +For more information, refer to [Restore into a different cluster](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md#restore-different-cluster) - The output for each entry provides a `"snapshot":` value which is the snapshot name. - - ```json - { - "snapshots": [ - { - "snapshot": "scheduled-1527616008-instance-0000000004", - ... - }, - ... - ] - } - ``` - - -3. Add the snapshot repository: - - ::::{tab-set} - - :::{tab-item} {{ech}} - - From the [console](https://cloud.elastic.co?page=docs&placement=docs-body) of the **new** {{es}} cluster, add the snapshot repository. 
- - For details, check our guidelines for: - * [Amazon Web Services (AWS) Storage](../deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md) - * [Google Cloud Storage (GCS)](../deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md) - * [Azure Blob Storage](../deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md). - - If you’re migrating [searchable snapshots](../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. +::::{note} +For {{ece}} users, while it is most common to have Amazon S3 buckets, you should be able to restore from any addressable external storage that has your {{es}} snapshots. +:::: - If the source cluster is still writing to the repository, you need to set the destination cluster’s repository connection to `readonly:true` to avoid data corruption. Refer to [backup a repository](../deploy-manage/tools/snapshot-and-restore/self-managed.md#snapshots-repository-backup) for details. - ::: +### Step 1: Set up the repository in the new cluster - :::{tab-item} {{ece}} +::::{include} ./migrate/_snippets/setup-repo.md +:::: - From the Cloud UI of the **new** {{es}} cluster add the snapshot repository. +### Step 2: Run the snapshot restore - For details about configuring snapshot repositories on Amazon Web Services (AWS), Google Cloud Storage (GCS), or Azure Blob Storage, check [manage Snapshot Repositories](../deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md). +Once the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. - If you’re migrating [searchable snapshots](../deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. - ::: +For extra details about the contents of a snapshot refer to [](/deploy-manage/tools/snapshot-and-restore.md#snapshot-contents). 
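If you prefer the API over the UI, a restore can also be triggered directly. This is a sketch with placeholder repository, snapshot, and index names:

```sh
POST _snapshot/my-repository/my-snapshot/_restore
{
  "indices": "my-index",
  "ignore_unavailable": true,
  "include_global_state": false
}
```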
- :::: +To start the restore process: -4. Start the Restore process. +1. Open Kibana and go to **Management** > **Snapshot and Restore**. +2. Under the **Snapshots** tab, you can find the available snapshots from your newly added snapshot repository. Select any snapshot to view its details, and from there you can choose to restore it. +3. Select **Restore**. +4. Select the index or indices you wish to restore. +5. Optionally, configure additional restore options, such as **Restore aliases**, **Restore global state**, or **Restore feature state**. - 1. Open Kibana and go to **Management** > **Snapshot and Restore**. - 2. Under the **Snapshots** tab, you can find the available snapshots from your newly added snapshot repository. Select any snapshot to view its details, and from there you can choose to restore it. - 3. Select **Restore**. - 4. Select the indices you wish to restore. - 5. Configure any additional index settings. - 6. Select **Restore snapshot** to begin the process. + Refer to [Restore a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) for more details about restore operations in {{es}}. + +6. Select **Restore snapshot** to begin the process. -5. Verify that the new index is restored in your deployment with this query: +7. Verify that the new index is restored in your deployment with this query: ```sh GET INDEX_NAME/_search?pretty - ``` - + ``` \ No newline at end of file diff --git a/manage-data/migrate/_snippets/setup-repo.md b/manage-data/migrate/_snippets/setup-repo.md new file mode 100644 index 0000000000..41b5031b59 --- /dev/null +++ b/manage-data/migrate/_snippets/setup-repo.md @@ -0,0 +1,61 @@ +In this step, you’ll configure a read-only snapshot repository in the new cluster that points to the storage location used by the old cluster. This allows the new cluster to access and restore snapshots created in the original environment. + +1. 
On your old {{es}} cluster, choose an option to get the name and details of your snapshot repository bucket: + + ```sh + GET /_snapshot + GET /_snapshot/_all + ``` + + +2. Add the snapshot repository on the new cluster: + + If the original cluster still has write access to the repository, register the repository as read-only. + + ::::{tab-set} + + :::{tab-item} {{ech}} + + From the [console](https://cloud.elastic.co?page=docs&placement=docs-body) of the **new** {{es}} cluster, add the snapshot repository. + + For details, check our guidelines for: + * [Amazon Web Services (AWS) Storage](/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md) + * [Google Cloud Storage (GCS)](/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md) + * [Azure Blob Storage](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md). + + If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. + + If the source cluster is still writing to the repository, you need to set the destination cluster’s repository connection to `readonly:true` to avoid data corruption. Refer to [backup a repository](../deploy-manage/tools/snapshot-and-restore/self-managed.md#snapshots-repository-backup) for details. + ::: + + :::{tab-item} {{ece}} + + From the Cloud UI of the **new** {{es}} cluster add the snapshot repository. + + For details about configuring snapshot repositories on Amazon Web Services (AWS), Google Cloud Storage (GCS), or Azure Blob Storage, check [manage Snapshot Repositories](../deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md). + + If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. 
+ ::: + + :::: diff --git a/manage-data/migrate/migrate-internal-indices.md b/manage-data/migrate/migrate-internal-indices.md index ba470aae5a..5259cda9c0 100644 --- a/manage-data/migrate/migrate-internal-indices.md +++ b/manage-data/migrate/migrate-internal-indices.md @@ -11,16 +11,25 @@ applies_to: serverless: unavailable --- -# Migrate internal indices + -To reindex internal indices from a remote cluster, you can follow the same steps that you use to reindex regular indices when you [migrate your {{es}} data indices](../migrate.md#ech-reindex-remote). +# Migrate system indices + +When you migrate your {{es}} data into a new infrastructure you may also want to migrate your {{es}} system internal indices, specifically the `.kibana` index and the `.security` index. + +In {{es}} 8.0 and later versions, the snapshot and restore of [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) is the only way to back up and restore system indices and system data streams. + +## Migrate system indices through snapshot and restore To restore internal indices from a snapshot, the procedure is a bit different from migrating {{es}} data indices. 
Use these steps to restore internal indices from a snapshot:

From f7c9bad0c62650c21371a58d98665184b67929a3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Edu=20Gonz=C3=A1lez=20de=20la=20Herr=C3=A1n?= <25320357+eedugon@users.noreply.github.com>
Date: Mon, 19 May 2025 21:20:55 +0200
Subject: [PATCH 2/6] ECE and ECH merged as previous instructions were invalid

---
 manage-data/migrate.md                        |  70 +++++++++--
 manage-data/migrate/_snippets/setup-repo.md   |  61 ---------
 .../migrate/migrate-internal-indices.md       | 117 +++---------------
 3 files changed, 79 insertions(+), 169 deletions(-)
 delete mode 100644 manage-data/migrate/_snippets/setup-repo.md

diff --git a/manage-data/migrate.md b/manage-data/migrate.md
index 3ccdbb6ee2..ec5b5ed6b2 100644
--- a/manage-data/migrate.md
+++ b/manage-data/migrate.md
@@ -14,12 +14,16 @@ applies_to:

 # Migrate your {{es}} data

-You might have switched to {{ech}} or {{ece}} for any number of reasons, and you’re likely wondering how to get your existing {{es}} data into your new infrastructure. Along with easily creating as many new deployments with {{es}} clusters that you need, you have several options for moving your data over. Choose the option that works best for you:
+You might have switched to {{ech}} (ECH) or {{ece}} (ECE) for any number of reasons, and you’re likely wondering how to get your existing {{es}} data into your new infrastructure. Along with easily creating as many new deployments with {{es}} clusters as you need, you have several options for moving your data over. Choose the option that works best for you:

 * Index your data from the original source, which is the simplest method and provides the greatest flexibility for the {{es}} version and ingestion method.
 * Reindex from a remote cluster, which rebuilds the index from scratch.
 * Restore from a snapshot, which copies the existing indices.

+::::{note}
+This guide focuses on migrating data from a self-managed cluster to an ECH or ECE deployment. 
Refer to [](/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md) if the clusters are in the same ECH or ECE environment.
+::::
+
 ## Before you begin [ec_migrate_before_you_begin]

 Depending on which option that you choose, you might have limitations or need to do some preparation beforehand.
@@ -36,9 +40,9 @@ Restore from a snapshot
 : The new cluster must be the same size as your old one, or larger, to accommodate the data. The new cluster must also be an Elasticsearch version that is compatible with the old cluster (check [Elasticsearch snapshot version compatibility](/deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for details). If you have not already done so, you will need to [set up snapshots for your old cluster](/deploy-manage/tools/snapshot-and-restore/self-managed.md) using a repository that can be accessed from the new cluster.

 Migrating system {{es}} indices
-: In {{es}} 8.0 and later versions, [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) are the only way to back up and restore system indices and system data streams, such as `.kibana` or `.security`.
+: In {{es}} 8.0 and later versions, snapshot and restore of [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) is the only way to back up and restore system indices and system data streams, such as `.kibana` or `.security`.

-    Check [Migrating internal indices](./migrate/migrate-internal-indices.md) to restore the internal {{es}} indices from a snapshot.
+    Check [Migrate system indices](./migrate/migrate-internal-indices.md) to restore the internal {{es}} indices from a snapshot.

 ## Index from the source [ec-index-source]

@@ -53,7 +57,7 @@ Through the {{es}} [reindex API](https://www.elastic.co/docs/api/doc/elasticsear
 ::::{warning}
 Reindex operations do not migrate index mappings, settings, or associated index templates from the source cluster.
-Before migrating your {{es}} data, define the necessary [mappings](/manage-data/data-store/mapping.md) and [templates](/manage-data/data-store/templates.md) on the new cluster. The easiest way to do this is to copy the relevant index templates from the old cluster to the new one in advance. +Before migrating your {{es}} data, define the necessary [mappings](/manage-data/data-store/mapping.md) and [templates](/manage-data/data-store/templates.md) on the new cluster. The easiest way to do this is to copy the relevant index templates from the old cluster to the new one before starting reindex operations. :::: Follow these steps to reindex data remotely: @@ -118,7 +122,7 @@ System indices can be easily restored by including their corresponding [feature This method is especially useful when you want to fully replicate the source cluster or when remote reindexing is not possible, for example if the source cluster is in a degraded or unreachable state. -To use this method, the new cluster must have access to the snapshot repository that contains data from the old cluster. Also ensure that both clusters use [compatible versions](/deploy-manage/tools/snapshot-and-restore.md#snapshot-compatibility). +To use this method, the new cluster must have access to the snapshot repository that contains the data from the old cluster. Also ensure that both clusters use [compatible versions](/deploy-manage/tools/snapshot-and-restore.md#snapshot-compatibility). For more information, refer to [Restore into a different cluster](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md#restore-different-cluster) @@ -126,14 +130,60 @@ For more information, refer to [Restore into a different cluster](/deploy-manage For {{ece}} users, while it is most common to have Amazon S3 buckets, you should be able to restore from any addressable external storage that has your {{es}} snapshots. 
:::: -### Step 1: Set up the repository in the new cluster +The following steps assume you already have a snapshot repository configured in the old cluster with at least one valid snapshot. + +### Step 1: Set up the repository in the new cluster [migrate-repo-setup] + +In this step, you’ll configure a snapshot repository in the new cluster that points to the storage location used by the old cluster. This allows the new cluster to access and restore snapshots created in the original environment. + +::::{tip} +If your new {{ech}} or {{ece}} deployment cannot connect to the same repository used by your self-managed cluster, for example if it's a private NFS share, consider one of these alternatives: -::::{include} ./migrate/_snippets/setup-repo.md +* [Back up your repository](/deploy-manage/tools/snapshot-and-restore/self-managed.md#snapshots-repository-backup) to a supported storage system such as AWS S3, Google Cloud Storage, or Azure Blob Storage, and then configure your new cluster to use that location for the data migration. +* Expose the repository contents over `ftp`, `http`, or `https`, and use a [read-only URL repository](/deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md) type in your new deployment to access the snapshots. :::: -### Step 2: Run the snapshot restore +1. On your old {{es}} cluster, retrieve the snapshot repository configuration: + + ```sh + GET /_snapshot/_all + ``` + + Take note of the repository name and type (for example, `s3`, `gcs`, or `azure`), its base path, and any additional settings. Authentication credentials are often stored in the [secure settings](/deploy-manage/security/secure-settings.md) on each node. You’ll need to replicate all this configuration when registering the repository in the new ECH or ECE deployment. + + If your old cluster has multiple repositories configured, identify the repository with the snapshots containing the data that you want to migrate. + +2. 
Add the snapshot repository on the new cluster: + + The new cluster must register a snapshot repository that points to the same physical storage location used by the old cluster. This ensures the new cluster can access the existing snapshots. + + Considerations: + + * If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. + * If the old cluster still has write access to the repository, register the repository as read-only, using the `readonly: true` option. + + To configure a custom snapshot repository for your {{ech}} or {{ece}} deployment, follow the steps for the storage provider used by your existing repository: + + * **Amazon Web Services (AWS) Storage** + * [Store credentials in the keystore](/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md#ec-snapshot-secrets-keystore) + * [Create the repository](/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md#ec-create-aws-repository) + * **Google Cloud Storage (GCS)** + * [Store credentials in the keystore](/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md#ec-configure-gcs-keystore) + * [Create the repository](/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md#ec-create-gcs-repository) + * **Azure Blob Storage** + * [Store credentials in the keystore](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md#ec-configure-azure-keystore). + * [Create the repository](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md#ec-create-azure-repository). + + ::::{important} + Although the previous instructions are focused on {{ech}}, you should follow the same steps for {{ece}} by configuring the repository directly **at the deployment level**. 
+ + **Do not** configure the repository as an [ECE-managed repository](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md), which is intended for automatic snapshots of deployments. In this case, you need to add a custom repository that already contains snapshots from another cluster. + :::: + + +### Step 2: Run the snapshot restore [migrate-restore] -Once the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. +Once the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. You can do this using {{kib}} management UI, or directly with the {{es}} API. For extra details about the contents of a snapshot refer to [](/deploy-manage/tools/snapshot-and-restore.md#snapshot-contents). @@ -145,7 +195,7 @@ To start the restore process: 4. Select the index or indices you wish to restore. 5. Optionally, configure additional restore options, such as **Restore aliases**, **Restore global state**, or **Restore feature state**. - Refer to [Restore a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) for more details about restore operations in {{es}}. + Refer to [Restore a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) for more details about restore operations in {{es}}, including API based examples. 6. Select **Restore snapshot** to begin the process. diff --git a/manage-data/migrate/_snippets/setup-repo.md b/manage-data/migrate/_snippets/setup-repo.md deleted file mode 100644 index 41b5031b59..0000000000 --- a/manage-data/migrate/_snippets/setup-repo.md +++ /dev/null @@ -1,61 +0,0 @@ -In this step, you’ll configure a read-only snapshot repository in the new cluster that points to the storage location used by the old cluster. This allows the new cluster to access and restore snapshots created in the original environment. - -1. 
On your old {{es}} cluster, choose an option to get the name and details of your snapshot repository bucket: - - ```sh - GET /_snapshot - GET /_snapshot/_all - ``` - - -2. Add the snapshot repository on the new cluster: - - If the original cluster still has write access to the repository, register the repository as read-only. - - ::::{tab-set} - - :::{tab-item} {{ech}} - - From the [console](https://cloud.elastic.co?page=docs&placement=docs-body) of the **new** {{es}} cluster, add the snapshot repository. - - For details, check our guidelines for: - * [Amazon Web Services (AWS) Storage](/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md) - * [Google Cloud Storage (GCS)](/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md) - * [Azure Blob Storage](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md). - - If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. - - If the source cluster is still writing to the repository, you need to set the destination cluster’s repository connection to `readonly:true` to avoid data corruption. Refer to [backup a repository](../deploy-manage/tools/snapshot-and-restore/self-managed.md#snapshots-repository-backup) for details. - ::: - - :::{tab-item} {{ece}} - - From the Cloud UI of the **new** {{es}} cluster add the snapshot repository. - - For details about configuring snapshot repositories on Amazon Web Services (AWS), Google Cloud Storage (GCS), or Azure Blob Storage, check [manage Snapshot Repositories](../deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md). - - If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. 
- ::: - - :::: diff --git a/manage-data/migrate/migrate-internal-indices.md b/manage-data/migrate/migrate-internal-indices.md index 5259cda9c0..5f19b3bbfc 100644 --- a/manage-data/migrate/migrate-internal-indices.md +++ b/manage-data/migrate/migrate-internal-indices.md @@ -11,112 +11,33 @@ applies_to: serverless: unavailable --- - - # Migrate system indices When you migrate your {{es}} data into a new infrastructure you may also want to migrate your {{es}} system internal indices, specifically the `.kibana` index and the `.security` index. In {{es}} 8.0 and later versions, the snapshot and restore of [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) is the only way to back up and restore system indices and system data streams. -## Migrate system indices through snapshot and restore - -To restore internal indices from a snapshot, the procedure is a bit different from migrating {{es}} data indices. Use these steps to restore internal indices from a snapshot: - -1. On your old {{es}} cluster, choose an option to get the name of your snapshot repository bucket: - - ```sh - GET /_snapshot - GET /_snapshot/_all - ``` - -2. Get the snapshot name: - - ```sh - GET /_snapshot/NEW-REPOSITORY-NAME/_all - ``` - - The output for each entry provides a `"snapshot":` value which is the snapshot name. - - ``` - { - "snapshots": [ - { - "snapshot": "scheduled-1527616008-instance-0000000004", - ``` - - - -3. To restore internal {{es}} indices, you need to register the snapshot repository in `read-only` mode. 
- - First, add the authentication information for the repository to the {{ech}} keystore, following the steps for your cloud provider: - * [AWS S3](../../deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md#ec-snapshot-secrets-keystore) - * [Google Cloud Storage](../../deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md#ec-configure-gcs-keystore) - * [Azure Blog storage](../../deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md#ec-configure-azure-keystore) - - Next, register a read-only repository. Open an {{es}} [API console](../../explore-analyze/query-filter/tools/console.md) and run the [Read-only URL repository](../../deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md) API call. - -4. Once the repository has been registered and verified, you are ready to restore the internal indices to your new cluster, either all at once or individually. - - * **Restore all internal indices** - - Run the following API call to restore all internal indices from a snapshot to the cluster: - - ```sh - POST /_snapshot/repo/snapshot/_restore - { - "indices": ".*", - "ignore_unavailable": true, - "include_global_state": false, - "include_aliases": false, - "rename_pattern": ".(.+)", - "rename_replacement": "restored_security_$1" - } - ``` - - * **Restore an individual internal index** - - ::::{warning} - When restoring internal indices, ensure that the `include_aliases` parameter is set to `false`. Not doing so will make Kibana inaccessible. If you do run the restore without `include_aliases`, the restored index can be deleted or the alias reference to it can be removed. This will have to be done from either the API console or a curl command as Kibana will not be accessible. 
- :::: - - Run the following API call to restore one internal index from a snapshot to the cluster: +## Migrate system indices using snapshot and restore - ```sh - POST /_snapshot/repo/snapshot/_restore - { - "indices": ".kibana", - "ignore_unavailable": true, - "include_global_state": false, - "include_aliases": false, - "rename_pattern": ".(.+)", - "rename_replacement": "restored_security_$1" - } - ``` +To restore system indices from a snapshot, follow the same procedure described in [](../migrate.md#ec-restore-snapshots) and select the appropriate **feature states** when preparing the restore operation, such as `kibana` or `security`. - Next, the restored index needs to be reindexed into the internal index, as shown: +For more details about restoring feature states, or the entire cluster state, refer to [](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md#restore-feature-state). - ```sh - POST _reindex - { - "source": { - "index": "restored_kibana" - }, - "dest": { - "index": ".kibana" - } - } - ``` +The following example describes how to restore the `security` feature using the [restore snapshot API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-restore): +```sh +POST _snapshot/REPOSITORY/SNAPSHOT_NAME/_restore +{ + "indices": "-*", + "ignore_unavailable": true, + "include_global_state": false, + "include_aliases": false, + "feature_states": [ + "security" + ] +} +``` -Your internal {{es}} index or indices should now be available in your new {{es}} cluster. Once verified, the `restored_*` indices are safe to delete. +Tips: +* Get the list of available snapshots in the repository with `GET _snapshot/REPOSITORY/_all`, or with `GET _cat/snapshots/REPOSITORY`. +* Each snapshot shows its available feature states in the xxxx section of the details. 
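As an illustration of the tip above: in {{es}} 7.12 and later, the snapshot details returned by the get snapshot API include a `feature_states` array, which is where you can check which feature states a snapshot contains. `REPOSITORY` is a placeholder for your repository name:

```sh
GET _snapshot/REPOSITORY/_all
```

An abridged, illustrative response (the real output contains many more fields, and the system index names vary by version):

```
{
  "snapshots": [
    {
      "snapshot": "snapshot-1",
      "feature_states": [
        { "feature_name": "security", "indices": [ ".security-7" ] },
        { "feature_name": "kibana", "indices": [ ".kibana_1" ] }
      ]
    }
  ]
}
```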
\ No newline at end of file From 90fb74fcc4c1f558fe5bd465015b75914c354aea Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Edu=20Gonz=C3=A1lez=20de=20la=20Herr=C3=A1n?= <25320357+eedugon@users.noreply.github.com> Date: Mon, 19 May 2025 21:41:44 +0200 Subject: [PATCH 3/6] final refinement --- manage-data/migrate.md | 4 ++-- manage-data/migrate/migrate-internal-indices.md | 6 +----- 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/manage-data/migrate.md b/manage-data/migrate.md index ec5b5ed6b2..b222d6d6ae 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -130,7 +130,7 @@ For more information, refer to [Restore into a different cluster](/deploy-manage For {{ece}} users, while it is most common to have Amazon S3 buckets, you should be able to restore from any addressable external storage that has your {{es}} snapshots. :::: -The following steps assume you already have a snapshot repository configured in the old cluster with at least one valid snapshot. +The following steps assume you already have a snapshot repository configured in the old cluster with at least one valid snapshot containing the data you want to migrate. ### Step 1: Set up the repository in the new cluster [migrate-repo-setup] @@ -160,7 +160,7 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository Considerations: * If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. - * If the old cluster still has write access to the repository, register the repository as read-only, using the `readonly: true` option. + * If the old cluster still has write access to the repository, register the repository as read-only to avoid data corruption. This can be done using the `readonly: true` option. 
To configure a custom snapshot repository for your {{ech}} or {{ece}} deployment, follow the steps for the storage provider used by your existing repository: diff --git a/manage-data/migrate/migrate-internal-indices.md b/manage-data/migrate/migrate-internal-indices.md index 5f19b3bbfc..6c29af1a5a 100644 --- a/manage-data/migrate/migrate-internal-indices.md +++ b/manage-data/migrate/migrate-internal-indices.md @@ -7,7 +7,7 @@ applies_to: deployment: eck: unavailable ess: ga - ece: unavailable + ece: ga serverless: unavailable --- @@ -37,7 +37,3 @@ POST _snapshot/REPOSITORY/SNAPSHOT_NAME/_restore ] } ``` - -Tips: -* Get the list of available snapshots in the repository with `GET _snapshot/REPOSITORY/_all`, or with `GET _cat/snapshots/REPOSITORY`. -* Each snapshot shows its available feature states in the xxxx section of the details. \ No newline at end of file From f00ad8c6dd7375fdfc3c72821385f881ba89425a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Edu=20Gonz=C3=A1lez=20de=20la=20Herr=C3=A1n?= <25320357+eedugon@users.noreply.github.com> Date: Fri, 23 May 2025 11:13:43 +0200 Subject: [PATCH 4/6] Apply suggestions from code review Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com> --- manage-data/migrate.md | 20 +++++++++---------- .../migrate/migrate-internal-indices.md | 4 ++-- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/manage-data/migrate.md b/manage-data/migrate.md index f2d088cfef..c80fa86b59 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -69,9 +69,9 @@ Follow these steps to reindex data remotely: 2. Select a deployment or create one. 3. Ensure that the new {{es}} cluster can access the remote source cluster to perform the reindex operation. Access is controlled by the {{es}} `reindex.remote.whitelist` user setting. 
- Domains matching the pattern `["*.io:*", "*.com:*"]` are allowed by default, so if your remote host URL matches that pattern you do not need to explicitly define `reindex.remote.whitelist`. + Domains matching the patterns `["*.io:*", "*.com:*"]` are allowed by default, so if your remote host URL matches one of these patterns you do not need to explicitly define `reindex.remote.whitelist`. - Otherwise, if your remote endpoint is not covered by the default pattern, adjust the setting to add the remote {{es}} cluster as an allowed host: + Otherwise, if your remote endpoint is not covered by the default patterns, adjust the setting to add the remote {{es}} cluster as an allowed host: 1. From your deployment menu, go to the **Edit** page. 2. In the **Elasticsearch** section, select **Manage user settings and extensions**. For deployments with existing user settings, you may have to expand the **Edit elasticsearch.yml** caret for each node type instead. @@ -121,7 +121,7 @@ Follow these steps to reindex data remotely: Restoring from a snapshot is often the fastest and most reliable way to migrate data between {{es}} clusters. It preserves mappings, settings, and optionally parts of the cluster state such as index templates, component templates, and system indices. -System indices can be easily restored by including their corresponding [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) in the restore operation, allowing you to retain internal configurations related to security, {{kib}}, or other stack features. +System indices can be restored by including their corresponding [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) in the restore operation, allowing you to retain internal configurations related to security, {{kib}}, or other stack features. 
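A single restore request can combine regular indices with feature states. A minimal sketch using the [restore snapshot API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-snapshot-restore) — `REPOSITORY`, `SNAPSHOT_NAME`, and the `my-data-*` index pattern are placeholders:

```sh
POST _snapshot/REPOSITORY/SNAPSHOT_NAME/_restore
{
  "indices": "my-data-*",
  "include_global_state": false,
  "feature_states": [ "kibana", "security" ]
}
```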
This method is especially useful when you want to fully replicate the source cluster or when remote reindexing is not possible, for example if the source cluster is in a degraded or unreachable state. @@ -133,14 +133,14 @@ For more information, refer to [Restore into a different cluster](/deploy-manage For {{ece}} users, while it is most common to have Amazon S3 buckets, you should be able to restore from any addressable external storage that has your {{es}} snapshots. :::: -The following steps assume you already have a snapshot repository configured in the old cluster with at least one valid snapshot containing the data you want to migrate. +The following steps assume you already have a snapshot repository configured in the old cluster, with at least one valid snapshot containing the data you want to migrate. ### Step 1: Set up the repository in the new cluster [migrate-repo-setup] In this step, you’ll configure a snapshot repository in the new cluster that points to the storage location used by the old cluster. This allows the new cluster to access and restore snapshots created in the original environment. ::::{tip} -If your new {{ech}} or {{ece}} deployment cannot connect to the same repository used by your self-managed cluster, for example if it's a private NFS share, consider one of these alternatives: +If your new {{ech}} or {{ece}} deployment cannot connect to the same repository used by your self-managed cluster, for example if it's a private NFS share, consider one of the following alternatives: * [Back up your repository](/deploy-manage/tools/snapshot-and-restore/self-managed.md#snapshots-repository-backup) to a supported storage system such as AWS S3, Google Cloud Storage, or Azure Blob Storage, and then configure your new cluster to use that location for the data migration. 
* Expose the repository contents over `ftp`, `http`, or `https`, and use a [read-only URL repository](/deploy-manage/tools/snapshot-and-restore/read-only-url-repository.md) type in your new deployment to access the snapshots. @@ -156,7 +156,7 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository If your old cluster has multiple repositories configured, identify the repository with the snapshots containing the data that you want to migrate. -2. Add the snapshot repository on the new cluster: +2. Add the snapshot repository on the new cluster. The new cluster must register a snapshot repository that points to the same physical storage location used by the old cluster. This ensures the new cluster can access the existing snapshots. @@ -165,7 +165,7 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository * If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. * If the old cluster still has write access to the repository, register the repository as read-only to avoid data corruption. This can be done using the `readonly: true` option. - To configure a custom snapshot repository for your {{ech}} or {{ece}} deployment, follow the steps for the storage provider used by your existing repository: + To connect the existing snapshot repository to your new deployment, follow the steps for the storage provider where the repository is hosted: * **Amazon Web Services (AWS) Storage** * [Store credentials in the keystore](/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md#ec-snapshot-secrets-keystore) @@ -178,7 +178,7 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository * [Create the repository](/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md#ec-create-azure-repository). 
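If you prefer to register the repository through the API rather than the UI, a minimal sketch for an S3-backed repository looks like the following — the repository and bucket names are placeholders, and `readonly: true` reflects the consideration above about clusters that still have write access:

```sh
PUT _snapshot/MIGRATION_REPO
{
  "type": "s3",
  "settings": {
    "bucket": "EXISTING_SNAPSHOT_BUCKET",
    "readonly": true
  }
}
```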
::::{important} - Although the previous instructions are focused on {{ech}}, you should follow the same steps for {{ece}} by configuring the repository directly **at the deployment level**. + Although these instructions are focused on {{ech}}, you should follow the same steps for {{ece}} by configuring the repository directly **at the deployment level**. **Do not** configure the repository as an [ECE-managed repository](/deploy-manage/tools/snapshot-and-restore/cloud-enterprise.md), which is intended for automatic snapshots of deployments. In this case, you need to add a custom repository that already contains snapshots from another cluster. :::: @@ -186,9 +186,9 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository ### Step 2: Run the snapshot restore [migrate-restore] -Once the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. You can do this using {{kib}} management UI, or directly with the {{es}} API. +After the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. You can do this using {{kib}} management UI, or using the {{es}} API. -For extra details about the contents of a snapshot refer to [](/deploy-manage/tools/snapshot-and-restore.md#snapshot-contents). +For details about the contents of a snapshot, refer to [](/deploy-manage/tools/snapshot-and-restore.md#snapshot-contents). 
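To check what a specific snapshot contains before restoring it, you can query its details — repository and snapshot names are placeholders; the response lists the snapshot's `indices`, `data_streams`, and `feature_states`:

```sh
GET _snapshot/REPOSITORY/SNAPSHOT_NAME
```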
To start the restore process: diff --git a/manage-data/migrate/migrate-internal-indices.md b/manage-data/migrate/migrate-internal-indices.md index cc00296d44..82ffaf6e9b 100644 --- a/manage-data/migrate/migrate-internal-indices.md +++ b/manage-data/migrate/migrate-internal-indices.md @@ -15,13 +15,13 @@ products: # Migrate system indices -When you migrate your {{es}} data into a new infrastructure you may also want to migrate your {{es}} system internal indices, specifically the `.kibana` index and the `.security` index. +When you migrate your {{es}} data into a new infrastructure, you might also want to migrate your {{es}} system internal indices, specifically the `.kibana` index and the `.security` index. In {{es}} 8.0 and later versions, the snapshot and restore of [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) is the only way to back up and restore system indices and system data streams. ## Migrate system indices using snapshot and restore -To restore system indices from a snapshot, follow the same procedure described in [](../migrate.md#ec-restore-snapshots) and select the appropriate **feature states** when preparing the restore operation, such as `kibana` or `security`. +To restore system indices from a snapshot, follow the same procedure described in [](../migrate.md#ec-restore-snapshots) and select the appropriate feature states when preparing the restore operation, such as `kibana` or `security`. For more details about restoring feature states, or the entire cluster state, refer to [](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md#restore-feature-state). 
From 6bc562ad4c18647fbcb1eb8657f590b5b4eca3de Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Edu=20Gonz=C3=A1lez=20de=20la=20Herr=C3=A1n?= <25320357+eedugon@users.noreply.github.com> Date: Sat, 24 May 2025 10:01:44 +0200 Subject: [PATCH 5/6] changes from code review --- manage-data/migrate.md | 26 +++++++++---------- .../migrate/migrate-internal-indices.md | 6 ++--- 2 files changed, 15 insertions(+), 17 deletions(-) diff --git a/manage-data/migrate.md b/manage-data/migrate.md index c80fa86b59..a54d94450e 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -6,10 +6,8 @@ mapped_pages: applies_to: stack: ga deployment: - eck: unavailable ess: ga ece: ga - serverless: unavailable products: - id: cloud-hosted - id: cloud-enterprise @@ -42,10 +40,11 @@ Reindex from a remote cluster Restore from a snapshot : The new cluster must be the same size as your old one, or larger, to accommodate the data. The new cluster must also be an Elasticsearch version that is compatible with the old cluster (check [Elasticsearch snapshot version compatibility](/deploy-manage/tools/snapshot-and-restore.md#snapshot-restore-version-compatibility) for details). If you have not already done so, you will need to [set up snapshots for your old cluster](/deploy-manage/tools/snapshot-and-restore/self-managed.md) using a repository that can be accessed from the new cluster. -Migrating system {{es}} indices -: In {{es}} 8.0 and later versions, snapshot and restore of [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) are the only way to back up and restore system indices and system data streams, such as `.kibana` or `.security`. +:::{admonition} Migrating system {{es}} indices +In {{es}} 8.0 and later versions, to back up and restore system indices and system data streams such as `.kibana` or `.security`, you must snapshot and restore the related feature's [feature state](/deploy-manage/tools/snapshot-and-restore.md#feature-state). 
- Check [Migrate system indices](./migrate/migrate-internal-indices.md) to restore the internal {{es}} indices from a snapshot. +Refer to [Migrate system indices](./migrate/migrate-internal-indices.md) to learn how to restore the internal {{es}} system indices from a snapshot. +::: ## Index from the source [ec-index-source] @@ -109,6 +108,8 @@ Follow these steps to reindex data remotely: } ``` + For additional options and details, refer to the [reindex API documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-reindex). + 6. Verify that the new index is present: ```sh @@ -158,8 +159,6 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository 2. Add the snapshot repository on the new cluster. - The new cluster must register a snapshot repository that points to the same physical storage location used by the old cluster. This ensures the new cluster can access the existing snapshots. - Considerations: * If you’re migrating [searchable snapshots](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md), the repository name must be identical in the source and destination clusters. @@ -186,7 +185,9 @@ If your new {{ech}} or {{ece}} deployment cannot connect to the same repository ### Step 2: Run the snapshot restore [migrate-restore] -After the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. You can do this using {{kib}} management UI, or using the {{es}} API. +After the repository has been registered and verified, you are ready to restore any data from any of its snapshots to your new cluster. + +You can run a restore operation using the {{kib}} Management UI, or using the {{es}} API. Refer to [Restore a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) for more details, including API-based examples. 
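As one API-based sketch, the following restores selected indices under a `restored_` prefix so they cannot collide with indices that already exist in the new cluster (all names are placeholders):

```sh
POST _snapshot/REPOSITORY/SNAPSHOT_NAME/_restore
{
  "indices": "my-index-*",
  "include_global_state": false,
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}
```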
For details about the contents of a snapshot, refer to [](/deploy-manage/tools/snapshot-and-restore.md#snapshot-contents). @@ -197,13 +198,12 @@ To start the restore process: 3. Select **Restore**. 4. Select the index or indices you wish to restore. 5. Optionally, configure additional restore options, such as **Restore aliases**, **Restore global state**, or **Restore feature state**. - - Refer to [Restore a snapshot](/deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) for more details about restore operations in {{es}}, including API based examples. - 6. Select **Restore snapshot** to begin the process. -7. Verify that the new index is restored in your deployment with this query: +7. Verify that each restored index is available in your deployment. You can do this using the {{kib}} **Index Management** UI, or by running this query: ```sh GET INDEX_NAME/_search?pretty - ``` \ No newline at end of file + ``` + + If you have restored many indices, you can also run `GET _cat/indices?s=index` to list all indices for verification. diff --git a/manage-data/migrate/migrate-internal-indices.md b/manage-data/migrate/migrate-internal-indices.md index 82ffaf6e9b..93a3cc6403 100644 --- a/manage-data/migrate/migrate-internal-indices.md +++ b/manage-data/migrate/migrate-internal-indices.md @@ -5,19 +5,17 @@ mapped_pages: applies_to: stack: ga deployment: - eck: unavailable ess: ga ece: ga - serverless: unavailable products: - id: cloud-hosted --- # Migrate system indices -When you migrate your {{es}} data into a new infrastructure, you might also want to migrate your {{es}} system internal indices, specifically the `.kibana` index and the `.security` index. +When you migrate your {{es}} data into a new infrastructure, you might also want to migrate system-level indices and data streams, such as those used by {{kib}} or security features (for example, `.kibana` and `.security`). 
-In {{es}} 8.0 and later versions, the snapshot and restore of [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) is the only way to back up and restore system indices and system data streams. +Starting in {{es}} 8.0, you can use [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) to back up and restore all system indices and system data streams. This is the only available method for migrating this data. ## Migrate system indices using snapshot and restore From 01fcd44ad079e2d38c0d894a52eb93b3b65bc46b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Edu=20Gonz=C3=A1lez=20de=20la=20Herr=C3=A1n?= <25320357+eedugon@users.noreply.github.com> Date: Mon, 26 May 2025 21:46:01 +0200 Subject: [PATCH 6/6] final suggestions implemented --- manage-data/migrate.md | 6 ++++-- manage-data/migrate/migrate-internal-indices.md | 4 +++- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/manage-data/migrate.md b/manage-data/migrate.md index a54d94450e..6b9d4bef3d 100644 --- a/manage-data/migrate.md +++ b/manage-data/migrate.md @@ -22,12 +22,14 @@ You might have switched to {{ech}} (ECH) or {{ece}} (ECE) for any number of reas * Restore from a snapshot, which copies the existing indices. ::::{note} -This guide focuses on migrating data from a self-managed cluster to an ECH or ECE deployment. Refer to [](/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md) if the clusters are in the same ECH or ECE environments. +Although this guide focuses on migrating data from a self-managed cluster to an {{ech}} or {{ece}} deployment, the steps can also be adapted for other scenarios, such as when the source cluster is managed by {{eck}}, or when migrating from {{ece}} to {{ech}}. + +If both the source and destination clusters belong to the same {{ech}} or {{ece}} environment, refer to [](/deploy-manage/tools/snapshot-and-restore/ece-restore-across-clusters.md). 
:::: ## Before you begin [ec_migrate_before_you_begin] -Depending on which option that you choose, you might have limitations or need to do some preparation beforehand. +Depending on which option you choose, you might have limitations or need to do some preparation beforehand. Indexing from the source : The new cluster must be the same size as your old one, or larger, to accommodate the data. diff --git a/manage-data/migrate/migrate-internal-indices.md b/manage-data/migrate/migrate-internal-indices.md index 93a3cc6403..2416438891 100644 --- a/manage-data/migrate/migrate-internal-indices.md +++ b/manage-data/migrate/migrate-internal-indices.md @@ -15,7 +15,9 @@ products: When you migrate your {{es}} data into a new infrastructure, you might also want to migrate system-level indices and data streams, such as those used by {{kib}} or security features (for example, `.kibana` and `.security`). -Starting in {{es}} 8.0, you can use [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) to back up and restore all system indices and system data streams. This is the only available method for migrating this data. +Starting in {{es}} 8.0, you can use [feature states](/deploy-manage/tools/snapshot-and-restore.md#feature-state) to back up and restore all system indices and system data streams. This is the only available method for migrating this type of data. + +However, using snapshot and restore for system indices does not mean you must use it for everything. You can still migrate other data by indexing from the original source or reindexing from a remote cluster. ## Migrate system indices using snapshot and restore