diff --git a/.changelog/3439.txt b/.changelog/3439.txt
new file mode 100644
index 0000000000..df907a396f
--- /dev/null
+++ b/.changelog/3439.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+data-source/mongodbatlas_project: Adds `users` attribute
+```
diff --git a/.changelog/3451.txt b/.changelog/3451.txt
new file mode 100644
index 0000000000..06fe941f85
--- /dev/null
+++ b/.changelog/3451.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+data-source/mongodbatlas_projects: Adds `users` attribute
+```
\ No newline at end of file
diff --git a/.changelog/3468.txt b/.changelog/3468.txt
new file mode 100644
index 0000000000..c1808f531d
--- /dev/null
+++ b/.changelog/3468.txt
@@ -0,0 +1,7 @@
+```release-note:enhancement
+data-source/mongodbatlas_organization: Adds `users` attribute
+```
+
+```release-note:enhancement
+data-source/mongodbatlas_organizations: Adds `users` attribute
+```
\ No newline at end of file
diff --git a/.changelog/3483.txt b/.changelog/3483.txt
new file mode 100644
index 0000000000..580e6a21f1
--- /dev/null
+++ b/.changelog/3483.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+data-source/mongodbatlas_team: Adds `users` attribute
+```
\ No newline at end of file
diff --git a/.changelog/3486.txt b/.changelog/3486.txt
new file mode 100644
index 0000000000..8b7fc08259
--- /dev/null
+++ b/.changelog/3486.txt
@@ -0,0 +1,3 @@
+```release-note:new-resource
+resource/mongodbatlas_cloud_user_org_assignment
+```
diff --git a/.changelog/3491.txt b/.changelog/3491.txt
new file mode 100644
index 0000000000..edef84f991
--- /dev/null
+++ b/.changelog/3491.txt
@@ -0,0 +1,3 @@
+```release-note:new-datasource
+data-source/mongodbatlas_cloud_user_org_assignment
+```
diff --git a/.changelog/3494.txt b/.changelog/3494.txt
new file mode 100644
index 0000000000..f8b71d5430
--- /dev/null
+++ b/.changelog/3494.txt
@@ -0,0 +1,7 @@
+```release-note:note
+resource/mongodbatlas_team: Deprecates `usernames` attribute and makes it Optional and Computed in favor of the `mongodbatlas_cloud_user_team_assignment` resource
+```
+
+```release-note:note
+data-source/mongodbatlas_team: Deprecates `usernames` attribute in favor of the `data.mongodbatlas_team.users` attribute
+```
diff --git a/.changelog/3499.txt b/.changelog/3499.txt
new file mode 100644
index 0000000000..4ec8aa44e8
--- /dev/null
+++ b/.changelog/3499.txt
@@ -0,0 +1,7 @@
+```release-note:breaking-change
+resource/mongodbatlas_maintenance_window: Changes `hour_of_day` to Required
+```
+
+```release-note:breaking-change
+resource/mongodbatlas_maintenance_window: Changes `start_asap` to Computed only
+```
diff --git a/.changelog/3500.txt b/.changelog/3500.txt
new file mode 100644
index 0000000000..c36eb278e4
--- /dev/null
+++ b/.changelog/3500.txt
@@ -0,0 +1,3 @@
+```release-note:breaking-change
+resource/mongodbatlas_cloud_backup_schedule: Changes `export` and `auto_export_enabled` to Optional only
+```
diff --git a/.changelog/3502.txt b/.changelog/3502.txt
new file mode 100644
index 0000000000..7fecf382e9
--- /dev/null
+++ b/.changelog/3502.txt
@@ -0,0 +1,3 @@
+```release-note:new-resource
+resource/mongodbatlas_cloud_user_team_assignment
+```
\ No newline at end of file
diff --git a/.changelog/3508.txt b/.changelog/3508.txt
new file mode 100644
index 0000000000..c5e7c3f214
--- /dev/null
+++ b/.changelog/3508.txt
@@ -0,0 +1,3 @@
+```release-note:breaking-change
+resource/mongodbatlas_custom_db_role: Changes `actions` attribute to no longer be sensitive to element order
+```
diff --git a/.changelog/3515.txt b/.changelog/3515.txt
new file mode 100644
index 0000000000..ff27aed941
--- /dev/null
+++ b/.changelog/3515.txt
@@ -0,0 +1,7 @@
+```release-note:enhancement
+resource/mongodbatlas_network_peering: Adds `timeouts` attribute for create, update and delete operations
+```
+
+```release-note:enhancement
+resource/mongodbatlas_network_peering: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3517.txt b/.changelog/3517.txt
new file mode 100644
index 0000000000..7d0d1c6744
--- /dev/null
+++ b/.changelog/3517.txt
@@ -0,0 +1,3 @@
+```release-note:new-datasource
+data-source/mongodbatlas_cloud_user_team_assignment
+```
\ No newline at end of file
diff --git a/.changelog/3525.txt b/.changelog/3525.txt
new file mode 100644
index 0000000000..a4b2539fac
--- /dev/null
+++ b/.changelog/3525.txt
@@ -0,0 +1,7 @@
+```release-note:enhancement
+resource/mongodbatlas_flex_cluster: Adds `timeouts` attribute for create, update and delete operations
+```
+
+```release-note:enhancement
+resource/mongodbatlas_flex_cluster: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3526.txt b/.changelog/3526.txt
new file mode 100644
index 0000000000..377affd39f
--- /dev/null
+++ b/.changelog/3526.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_online_archive: Adds `timeouts` attribute for create operation
+```
diff --git a/.changelog/3536.txt b/.changelog/3536.txt
new file mode 100644
index 0000000000..349cbb60b3
--- /dev/null
+++ b/.changelog/3536.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_cloud_backup_snapshot: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3539.txt b/.changelog/3539.txt
new file mode 100644
index 0000000000..1a15422b42
--- /dev/null
+++ b/.changelog/3539.txt
@@ -0,0 +1,3 @@
+```release-note:new-resource
+resource/mongodbatlas_team_project_assignment
+```
\ No newline at end of file
diff --git a/.changelog/3541.txt b/.changelog/3541.txt
new file mode 100644
index 0000000000..63f71583b7
--- /dev/null
+++ b/.changelog/3541.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_cluster_outage_simulation: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3542.txt b/.changelog/3542.txt
new file mode 100644
index 0000000000..2694cc4493
--- /dev/null
+++ b/.changelog/3542.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_online_archive: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3543.txt b/.changelog/3543.txt
new file mode 100644
index 0000000000..11b28096a2
--- /dev/null
+++ b/.changelog/3543.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_privatelink_endpoint: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3544.txt b/.changelog/3544.txt
new file mode 100644
index 0000000000..cc67a220b8
--- /dev/null
+++ b/.changelog/3544.txt
@@ -0,0 +1,3 @@
+```release-note:new-datasource
+data-source/mongodbatlas_team_project_assignment
+```
diff --git a/.changelog/3545.txt b/.changelog/3545.txt
new file mode 100644
index 0000000000..0968d0af8c
--- /dev/null
+++ b/.changelog/3545.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_privatelink_endpoint_service: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3547.txt b/.changelog/3547.txt
new file mode 100644
index 0000000000..8259e2ae00
--- /dev/null
+++ b/.changelog/3547.txt
@@ -0,0 +1,11 @@
+```release-note:breaking-change
+resource/mongodbatlas_advanced_cluster: Disables legacy SDKv2 implementation of this resource and removes the MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER feature flag
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_advanced_cluster: Disables legacy SDKv2 implementation of this data source and removes the MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER feature flag
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_advanced_clusters: Disables legacy SDKv2 implementation of this data source and removes the MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER feature flag
+```
diff --git a/.changelog/3560.txt b/.changelog/3560.txt
new file mode 100644
index 0000000000..e46acd147e
--- /dev/null
+++ b/.changelog/3560.txt
@@ -0,0 +1,7 @@
+```release-note:breaking-change
+resource/mongodbatlas_cloud_backup_schedule: Removes the deprecated `copy_settings.#.replication_spec_id` attribute in favor of `zone_id`
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_cloud_backup_schedule: Removes the deprecated `copy_settings.#.replication_spec_id` and `use_zone_id_for_copy_settings` attributes in favor of `zone_id`
+```
diff --git a/.changelog/3561.txt b/.changelog/3561.txt
new file mode 100644
index 0000000000..a9827c16f5
--- /dev/null
+++ b/.changelog/3561.txt
@@ -0,0 +1,7 @@
+```release-note:enhancement
+resource/mongodbatlas_encryption_at_rest_private_endpoint: Adds `timeouts` attribute for create and delete operations
+```
+
+```release-note:enhancement
+resource/mongodbatlas_encryption_at_rest_private_endpoint: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3562.txt b/.changelog/3562.txt
new file mode 100644
index 0000000000..28a2116fa2
--- /dev/null
+++ b/.changelog/3562.txt
@@ -0,0 +1,7 @@
+```release-note:breaking-change
+resource/mongodbatlas_global_cluster_config: Removes the deprecated `custom_zone_mapping` attribute in favor of `custom_zone_mapping_zone_id`
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_global_cluster_config: Removes the deprecated `custom_zone_mapping` attribute in favor of `custom_zone_mapping_zone_id`
+```
diff --git a/.changelog/3568.txt b/.changelog/3568.txt
new file mode 100644
index 0000000000..7cd775d0b6
--- /dev/null
+++ b/.changelog/3568.txt
@@ -0,0 +1,3 @@
+```release-note:new-resource
+resource/mongodbatlas_cloud_user_project_assignment
+```
diff --git a/.changelog/3569.txt b/.changelog/3569.txt
new file mode 100644
index 0000000000..d3e1358468
--- /dev/null
+++ b/.changelog/3569.txt
@@ -0,0 +1,3 @@
+```release-note:new-datasource
+data-source/mongodbatlas_cloud_user_project_assignment
+```
diff --git a/.changelog/3570.txt b/.changelog/3570.txt
new file mode 100644
index 0000000000..6467e23384
--- /dev/null
+++ b/.changelog/3570.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+resource/mongodbatlas_push_based_log_export: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.changelog/3571.txt b/.changelog/3571.txt
new file mode 100644
index 0000000000..0652fdf4e8
--- /dev/null
+++ b/.changelog/3571.txt
@@ -0,0 +1,7 @@
+```release-note:enhancement
+resource/mongodbatlas_stream_processor: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
+
+```release-note:enhancement
+resource/mongodbatlas_stream_processor: Adds `timeouts` attribute for create operation
+```
diff --git a/.changelog/3593.txt b/.changelog/3593.txt
new file mode 100644
index 0000000000..02345a93a2
--- /dev/null
+++ b/.changelog/3593.txt
@@ -0,0 +1,7 @@
+```release-note:breaking-change
+resource/mongodbatlas_teams: Removes resource
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_teams: Removes data source
+```
diff --git a/.changelog/3594.txt b/.changelog/3594.txt
new file mode 100644
index 0000000000..c81ccbf7a3
--- /dev/null
+++ b/.changelog/3594.txt
@@ -0,0 +1,11 @@
+```release-note:note
+resource/mongodbatlas_cluster: Deprecates this resource in favor of `mongodbatlas_advanced_cluster`
+```
+
+```release-note:note
+data-source/mongodbatlas_cluster: Deprecates this data source in favor of `data.mongodbatlas_advanced_cluster`
+```
+
+```release-note:note
+data-source/mongodbatlas_clusters: Deprecates this data source in favor of `data.mongodbatlas_advanced_clusters`
+```
diff --git a/.changelog/3595.txt b/.changelog/3595.txt
new file mode 100644
index 0000000000..3db7d8709a
--- /dev/null
+++ b/.changelog/3595.txt
@@ -0,0 +1,7 @@
+```release-note:breaking-change
+resource/mongodbatlas_advanced_cluster: Changes default value of `delete_on_create_timeout` to `true`
+```
+
+```release-note:breaking-change
+resource/mongodbatlas_search_deployment: Changes default value of `delete_on_create_timeout` to `true`
+```
diff --git a/.changelog/3613.txt b/.changelog/3613.txt
new file mode 100644
index 0000000000..f395e96fea
--- /dev/null
+++ b/.changelog/3613.txt
@@ -0,0 +1,43 @@
+```release-note:note
+data-source/mongodbatlas_atlas_user: Deprecates the data source
+```
+
+```release-note:note
+data-source/mongodbatlas_atlas_users: Deprecates the data source
+```
+
+```release-note:note
+data-source/mongodbatlas_atlas_user: Deprecates `email_address` attribute
+```
+
+```release-note:note
+data-source/mongodbatlas_atlas_users: Deprecates `results.email_address` attribute
+```
+
+```release-note:note
+resource/mongodbatlas_org_invitation: Deprecates the resource
+```
+
+```release-note:note
+data-source/mongodbatlas_org_invitation: Deprecates the data source
+```
+
+```release-note:note
+resource/mongodbatlas_project_invitation: Deprecates the resource
+```
+
+```release-note:note
+data-source/mongodbatlas_project_invitation: Deprecates the data source
+```
+
+```release-note:note
+resource/mongodbatlas_project: Deprecates `teams` attribute
+```
+
+```release-note:note
+data-source/mongodbatlas_project: Deprecates `teams` attribute
+```
+
+```release-note:note
+data-source/mongodbatlas_projects: Deprecates `results.teams` attribute
+```
diff --git a/.changelog/3615.txt b/.changelog/3615.txt
new file mode 100644
index 0000000000..7a93e634d0
--- /dev/null
+++ b/.changelog/3615.txt
@@ -0,0 +1,15 @@
+```release-note:breaking-change
+resource/mongodbatlas_privatelink_endpoint_serverless: Removes resource
+```
+
+```release-note:breaking-change
+resource/mongodbatlas_privatelink_endpoint_service_serverless: Removes resource
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_privatelink_endpoint_serverless: Removes data source
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_privatelink_endpoint_service_serverless: Removes data source
+```
diff --git a/.changelog/3635.txt b/.changelog/3635.txt
new file mode 100644
index 0000000000..3fd147a6f7
--- /dev/null
+++ b/.changelog/3635.txt
@@ -0,0 +1,23 @@
+```release-note:breaking-change
+resource/mongodbatlas_advanced_cluster: Removes deprecated attributes `id`, `disk_size_gb`, `replication_specs.#.id`, `replication_specs.#.num_shards`
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_advanced_cluster: Removes deprecated attributes `use_replication_spec_per_shard`, `id`, `disk_size_gb`, `replication_specs.#.id`, `replication_specs.#.num_shards`
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_advanced_clusters: Removes deprecated attributes `use_replication_spec_per_shard`, `results.#.id`, `results.#.disk_size_gb`, `results.#.replication_specs.#.id`, `results.#.replication_specs.#.num_shards`
+```
+
+```release-note:breaking-change
+resource/mongodbatlas_advanced_cluster: Removes deprecated attributes `advanced_configuration.default_read_concern` and `advanced_configuration.fail_index_key_too_long`
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_advanced_cluster: Removes deprecated attributes `advanced_configuration.default_read_concern` and `advanced_configuration.fail_index_key_too_long`
+```
+
+```release-note:breaking-change
+data-source/mongodbatlas_advanced_clusters: Removes deprecated attributes `results.#.advanced_configuration.default_read_concern` and `results.#.advanced_configuration.fail_index_key_too_long`
+```
diff --git a/.changelog/3660.txt b/.changelog/3660.txt
new file mode 100644
index 0000000000..2fccddcce4
--- /dev/null
+++ b/.changelog/3660.txt
@@ -0,0 +1,7 @@
+```release-note:enhancement
+resource/mongodbatlas_cloud_provider_access_setup: Adds `timeouts` attribute for create operation
+```
+
+```release-note:enhancement
+resource/mongodbatlas_cloud_provider_access_setup: Adds `delete_on_create_timeout` attribute to indicate whether to delete the resource if its creation times out
+```
diff --git a/.github/workflows/acceptance-tests-runner.yml b/.github/workflows/acceptance-tests-runner.yml
index f484d63b0d..beb79621c1 100644
--- a/.github/workflows/acceptance-tests-runner.yml
+++ b/.github/workflows/acceptance-tests-runner.yml
@@ -118,6 +118,9 @@ on:
mongodb_atlas_asp_project_aws_role_arn:
type: string
required: true
+ mongodb_atlas_last_1x_version:
+ type: string
+ required: true
secrets: # all secrets are passed explicitly in this workflow
mongodb_atlas_public_key:
required: true
@@ -239,11 +242,11 @@ jobs:
mustTrigger: ${{ github.event_name == 'schedule' || (github.event_name == 'workflow_dispatch' && inputs.test_group == '' ) }}
outputs: # ensure resources are sorted alphabetically
advanced_cluster: ${{ steps.filter.outputs.advanced_cluster == 'true' || env.mustTrigger == 'true' }}
- advanced_cluster_tpf: ${{ steps.filter.outputs.advanced_cluster_tpf == 'true' || env.mustTrigger == 'true' }}
assume_role: ${{ steps.filter.outputs.assume_role == 'true' || env.mustTrigger == 'true' }}
autogen: ${{ steps.filter.outputs.autogen == 'true' || env.mustTrigger == 'true' }}
backup: ${{ steps.filter.outputs.backup == 'true' || env.mustTrigger == 'true' }}
control_plane_ip_addresses: ${{ steps.filter.outputs.control_plane_ip_addresses == 'true' || env.mustTrigger == 'true' }}
+ cloud_user: ${{ steps.filter.outputs.cloud_user == 'true' || env.mustTrigger == 'true' }}
cluster: ${{ steps.filter.outputs.cluster == 'true' || env.mustTrigger == 'true' }}
cluster_outage_simulation: ${{ steps.filter.outputs.cluster_outage_simulation == 'true' || env.mustTrigger == 'true' }}
config: ${{ steps.filter.outputs.config == 'true' || env.mustTrigger == 'true' }}
@@ -271,11 +274,7 @@ jobs:
with:
filters: |
advanced_cluster:
- - 'internal/service/advancedcluster/!(*_test).go' # matches any adv_cluster file change except test files
- - 'internal/service/advancedclustertpf/common*.go'
- advanced_cluster_tpf:
- 'internal/service/advancedclustertpf/*.go'
- - 'internal/service/advancedcluster/*_test.go'
assume_role:
- 'internal/provider/*.go'
autogen:
@@ -299,6 +298,10 @@ jobs:
- 'internal/service/onlinearchive/*.go'
control_plane_ip_addresses:
- 'internal/service/controlplaneipaddresses/*.go'
+ cloud_user:
+ - 'internal/service/clouduserorgassignment/*.go'
+ - 'internal/service/clouduserprojectassignment/*.go'
+ - 'internal/service/clouduserteamassignment/*.go'
cluster:
- 'internal/service/cluster/*.go'
cluster_outage_simulation:
@@ -319,6 +322,7 @@ jobs:
- 'internal/service/projectapikey/*.go'
- 'internal/service/rolesorgid/*.go'
- 'internal/service/team/*.go'
+ - 'internal/service/teamprojectassignment/*.go'
- 'internal/service/thirdpartyintegration/*.go'
encryption:
- 'internal/service/encryptionatrest/*.go'
@@ -366,8 +370,6 @@ jobs:
search_index:
- 'internal/service/searchindex/*.go'
serverless:
- - 'internal/service/privatelinkendpointserverless/*.go'
- - 'internal/service/privatelinkendpointserviceserverless/*.go'
- 'internal/service/serverlessinstance/*.go'
stream:
- 'internal/service/streamaccountdetails/*.go'
@@ -376,6 +378,7 @@ jobs:
- 'internal/service/streamprocessor/*.go'
- 'internal/service/streamprivatelinkendpoint/*.go'
+
advanced_cluster:
needs: [ change-detection, get-provider-version ]
if: ${{ needs.change-detection.outputs.advanced_cluster == 'true' || inputs.test_group == 'advanced_cluster' }}
@@ -395,15 +398,15 @@ jobs:
- name: Acceptance Tests
env:
MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
+ HTTP_MOCKER_CAPTURE: 'true'
ACCTEST_REGEX_RUN: ${{ inputs.reduced_tests && '^TestAccMockable' || env.ACCTEST_REGEX_RUN }}
- ACCTEST_PACKAGES: ./internal/service/advancedcluster
+ ACCTEST_PACKAGES: |
+ ./internal/service/advancedclustertpf
run: make testacc
-
- advanced_cluster_tpf:
+
+ advanced_cluster_tpf_mig_from_sdkv2:
needs: [ change-detection, get-provider-version ]
- if: ${{ needs.change-detection.outputs.advanced_cluster_tpf == 'true' || inputs.test_group == 'advanced_cluster_tpf' }}
- env:
- MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: 'true'
+ if: ${{ inputs.reduced_tests == false && (needs.change-detection.outputs.advanced_cluster == 'true' || inputs.test_group == 'advanced_cluster') }}
runs-on: ubuntu-latest
permissions: {}
steps:
@@ -419,20 +422,17 @@ jobs:
terraform_wrapper: false
- name: Acceptance Tests
env:
- MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
- HTTP_MOCKER_CAPTURE: 'true'
- ACCTEST_REGEX_RUN: ${{ inputs.reduced_tests && '^TestAccMockable' || env.ACCTEST_REGEX_RUN }}
+ MONGODB_ATLAS_LAST_VERSION: ${{ inputs.mongodb_atlas_last_1x_version }}
+ MONGODB_ATLAS_LAST_1X_VERSION: ${{ inputs.mongodb_atlas_last_1x_version }}
+ MONGODB_ATLAS_TEST_SDKV2_TO_TPF: 'true'
+ ACCTEST_REGEX_RUN: '^TestV1xMig'
ACCTEST_PACKAGES: |
- ./internal/service/advancedcluster
./internal/service/advancedclustertpf
run: make testacc
-
- advanced_cluster_tpf_mig_from_sdkv2:
+
+ advanced_cluster_tpf_mig_from_tpf_preview:
needs: [ change-detection, get-provider-version ]
- if: ${{ inputs.reduced_tests == false && (needs.change-detection.outputs.advanced_cluster_tpf == 'true' || inputs.test_group == 'advanced_cluster_tpf') }}
- env:
- MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: 'true'
- MONGODB_ATLAS_TEST_SDKV2_TO_TPF: 'true'
+ if: ${{ inputs.reduced_tests == false && (needs.change-detection.outputs.advanced_cluster == 'true' || inputs.test_group == 'advanced_cluster') }}
runs-on: ubuntu-latest
permissions: {}
steps:
@@ -448,9 +448,13 @@ jobs:
terraform_wrapper: false
- name: Acceptance Tests
env:
- MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
- ACCTEST_REGEX_RUN: '^TestMig'
- ACCTEST_PACKAGES: ./internal/service/advancedcluster
+ MONGODB_ATLAS_LAST_VERSION: ${{ inputs.mongodb_atlas_last_1x_version }}
+ MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: 'true' # required for migration tests that use provider version < 2.0.0
+ MONGODB_ATLAS_LAST_1X_VERSION: ${{ inputs.mongodb_atlas_last_1x_version }}
+ MONGODB_ATLAS_TEST_SDKV2_TO_TPF: 'false'
+ ACCTEST_REGEX_RUN: '^TestV1xMig'
+ ACCTEST_PACKAGES: |
+ ./internal/service/advancedclustertpf
run: make testacc
assume_role:
@@ -550,6 +554,7 @@ jobs:
env:
MONGODB_ATLAS_PROJECT_OWNER_ID: ${{ inputs.mongodb_atlas_project_owner_id }}
MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
+ MONGODB_ATLAS_LAST_1X_VERSION: ${{ inputs.mongodb_atlas_last_1x_version }}
AWS_REGION: ${{ vars.AWS_REGION_LOWERCASE }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
@@ -559,6 +564,7 @@ jobs:
AZURE_ATLAS_APP_ID: ${{ inputs.azure_atlas_app_id }}
AZURE_SERVICE_PRINCIPAL_ID: ${{ inputs.azure_service_principal_id }}
AZURE_TENANT_ID: ${{ inputs.azure_tenant_id }}
+ MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: 'true' # required for migration tests that use provider version < 2.0.0
ACCTEST_PACKAGES: |
./internal/service/cloudbackupschedule
./internal/service/cloudbackupsnapshot
@@ -589,7 +595,37 @@ jobs:
MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
ACCTEST_PACKAGES: ./internal/service/controlplaneipaddresses
run: make testacc
-
+
+ cloud_user:
+ needs: [ change-detection, get-provider-version ]
+ if: ${{ needs.change-detection.outputs.cloud_user == 'true' || inputs.test_group == 'cloud_user' }}
+ runs-on: ubuntu-latest
+ permissions: {}
+ steps:
+ - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+ with:
+ ref: ${{ inputs.ref || github.ref }}
+ - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5
+ with:
+ go-version-file: 'go.mod'
+ - uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd
+ with:
+ terraform_version: ${{ inputs.terraform_version }}
+ terraform_wrapper: false
+ - name: Acceptance Tests
+ env:
+ MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
+ MONGODB_ATLAS_TEAMS_IDS: ${{ inputs.mongodb_atlas_teams_ids }}
+ MONGODB_ATLAS_PROJECT_OWNER_ID: ${{ inputs.mongodb_atlas_project_owner_id }}
+ MONGODB_ATLAS_ORG_ID: ${{ inputs.mongodb_atlas_org_id }}
+ MONGODB_ATLAS_USERNAME: ${{ vars.MONGODB_ATLAS_USERNAME }}
+ MONGODB_ATLAS_USERNAME_2: ${{ vars.MONGODB_ATLAS_USERNAME_2 }}
+ ACCTEST_PACKAGES: |
+ ./internal/service/clouduserorgassignment
+ ./internal/service/clouduserprojectassignment
+ ./internal/service/clouduserteamassignment
+ run: make testacc
+
cluster:
needs: [ change-detection, get-provider-version ]
if: ${{ needs.change-detection.outputs.cluster == 'true' || inputs.test_group == 'cluster' }}
@@ -679,6 +715,7 @@ jobs:
./internal/service/apikey
./internal/service/rolesorgid
./internal/service/team
+ ./internal/service/teamprojectassignment
./internal/service/thirdpartyintegration
run: make testacc
@@ -1064,6 +1101,7 @@ jobs:
terraform_wrapper: false
- name: Acceptance Tests
env:
+ MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: 'true' # required for migration tests that use provider version < 2.0.0
MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
ACCTEST_PACKAGES: ./internal/service/searchdeployment
run: make testacc
@@ -1112,8 +1150,6 @@ jobs:
AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
ACCTEST_PACKAGES: |
- ./internal/service/privatelinkendpointserverless
- ./internal/service/privatelinkendpointserviceserverless
./internal/service/serverlessinstance
run: make testacc
stream:
diff --git a/.github/workflows/acceptance-tests.yml b/.github/workflows/acceptance-tests.yml
index 826b2890ac..bce7474316 100644
--- a/.github/workflows/acceptance-tests.yml
+++ b/.github/workflows/acceptance-tests.yml
@@ -132,3 +132,4 @@ jobs:
confluent_cloud_privatelink_access_id: ${{ vars.CONFLUENT_CLOUD_PRIVATELINK_ACCESS_ID }}
mongodb_atlas_asp_project_ear_pe_id: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_ASP_PROJECT_EAR_PE_ID_QA || vars.MONGODB_ATLAS_ASP_PROJECT_EAR_PE_ID_DEV }}
mongodb_atlas_asp_project_aws_role_arn: ${{ inputs.atlas_cloud_env == 'qa' && vars.MONGODB_ATLAS_ASP_PROJECT_AWS_ROLE_ARN_QA || vars.MONGODB_ATLAS_ASP_PROJECT_AWS_ROLE_ARN_DEV }}
+ mongodb_atlas_last_1x_version: ${{ vars.MONGODB_ATLAS_LAST_1X_VERSION }}
diff --git a/.github/workflows/code-health.yml b/.github/workflows/code-health.yml
index 613817b463..01040740c7 100644
--- a/.github/workflows/code-health.yml
+++ b/.github/workflows/code-health.yml
@@ -41,8 +41,6 @@ jobs:
go-version-file: 'go.mod'
- name: Unit Test
run: make test
- env:
- MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: "true"
lint:
runs-on: ubuntu-latest
permissions: {}
diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml
index 2c72070f4e..956cf60bed 100644
--- a/.github/workflows/release.yml
+++ b/.github/workflows/release.yml
@@ -61,8 +61,8 @@ jobs:
- uses: ./.github/templates/run-script-and-commit
with:
script_call: './scripts/update-examples-reference-in-docs.sh ${{inputs.version_number}}'
- file_to_commit: 'docs/index.md'
- commit_message: 'chore: Updates examples link in index.md for ${{ github.event.inputs.version_number }} release'
+ file_to_commit: 'docs/* templates/*' # only docs files are updated
+ commit_message: 'chore: Update example links in registry docs for ${{ github.event.inputs.version_number }} release'
apix_bot_pat: ${{ secrets.APIX_BOT_PAT }}
remote: https://svc-apix-bot:${{ secrets.APIX_BOT_PAT }}@github.com/${{ github.repository }}
gpg_private_key: ${{ secrets.APIX_BOT_GPG_PRIVATE_KEY }}
diff --git a/.github/workflows/update-dev-branches.yml b/.github/workflows/update-dev-branches.yml
index 7d30645358..c318880794 100644
--- a/.github/workflows/update-dev-branches.yml
+++ b/.github/workflows/update-dev-branches.yml
@@ -86,8 +86,6 @@ jobs:
- name: Project check
if: steps.merge-check.outputs.has-changes == 'true'
id: project-check
- env:
- MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER: "true"
run: |
if make tools build lint test; then
echo "slack-text=✅ Dev branch \`${{ matrix.branch }}\` merged and pushed with latest changes from master. ${{ secrets.SLACK_ONCALL_TAG }} <${{github.server_url}}/${{github.repository}}/actions/runs/${{github.run_id}}|View Action>" >> "${GITHUB_OUTPUT}"
diff --git a/Makefile b/Makefile
index 60a609023b..a475f38f88 100644
--- a/Makefile
+++ b/Makefile
@@ -53,7 +53,6 @@ testmact: ## Run MacT tests (mocked acc tests)
@$(eval export MONGODB_ATLAS_ORG_ID?=111111111111111111111111)
@$(eval export MONGODB_ATLAS_PROJECT_ID?=111111111111111111111111)
@$(eval export MONGODB_ATLAS_CLUSTER_NAME?=mocked-cluster)
- @$(eval export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER?=true)
@if [ "$(ACCTEST_PACKAGES)" = "./..." ]; then \
echo "Error: ACCTEST_PACKAGES must be explicitly set for testmact target, './...' is not allowed"; \
exit 1; \
diff --git a/RELEASING.md b/RELEASING.md
index 07e191d1bf..5d73798b3d 100644
--- a/RELEASING.md
+++ b/RELEASING.md
@@ -17,13 +17,13 @@ Before triggering a release, view the corresponding [unreleased jira page](https
While QA acceptance tests are run in the release process automatically, we check [workflows/test-suite.yml](https://github.com/mongodb/terraform-provider-mongodbatlas/actions/workflows/test-suite.yml) and see if the latest run of the Test Suite action is successful (it runs every day at midnight UTC time). This can help detect failures before proceeding with the next steps.
-### Verify upgrade guide is defined
+### Verify upgrade guide is defined (if required)
-**Note**: Only applies if the right most version digit is 0 (considered a major or minor version in [semantic versioning](https://semver.org/)).
+- A document (./docs/guides/X.0.0-upgrade-guide.md) must be provided for each major version, summarizing the most significant features, breaking changes, and other helpful information. For minor version releases, this can be created if there are notable changes that warrant it.
-- A doc ./docs/guides/X.Y.0-upgrade-guide.md must be defined containing a summary of the most significant features, breaking changes, and additional information that can be helpful. If not defined the release process will be stopped automatically. The expectation is that this file is created during relevant pull requests (breaking changes, significant features), and not before the release process.
+- The expectation is that this file is created during relevant pull requests (breaking changes, significant features), and not before the release process.
-- We keep [Guides](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/docs/guides) only for 12 months. Add header `subcategory: "Older Guides"` to previous versions.
+- We keep [Guides](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/docs/guides) only for 12 months. Add header `subcategory: "Older Guides - Version X"` to previous versions.
### Trigger release workflow
diff --git a/contributing/testing-best-practices.md b/contributing/testing-best-practices.md
index 16e16be937..e5d37ec0c0 100644
--- a/contributing/testing-best-practices.md
+++ b/contributing/testing-best-practices.md
@@ -72,7 +72,6 @@ Acceptance and migration tests can reuse projects and clusters in order to be mo
**Experimental** framework for hooking into the HTTP Client used by the Terraform provider and capture/replay traffic.
- Works by mutating a `terraform-plugin-testing/helper/resource.TestCase`
- Limited to `TestAccMockable*` tests in [`resource_advanced_cluster_test.go`](../internal/service/advancedcluster/resource_advanced_cluster_test.go):
- - Remember to run `export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true` for the TPF implementation to be used and the tests to work.
- Enabled test cases should always be named with `TestAccMockable` prefix, e.g.: `TestAccMockableAdvancedCluster_tenantUpgrade`
- To create a new `TestAccMockable` you would need to (see [example commit](https://github.com/mongodb/terraform-provider-mongodbatlas/commit/939244fcab95eca9c4c93993fc1b5118ab8bfddb#diff-f9c590f9ffc351d041a26ff474f91404ff394cbfb83f1e135b415998476ca62aR128))
- (1) Write the normal acceptance test
@@ -151,5 +150,3 @@ For a full example of generation see [`http_mocker_plan_checks_test.go`](../inte
### Maintenance and tips
- `plan_step_name` is meant to be created manually (usually by copy-pasting `main.tf` and making changes)
- Use `testCases := map[string][]plancheck.PlanCheck{}` to test many different plan configs for the same import
-
-
diff --git a/docs/data-sources/access_list_api_key.md b/docs/data-sources/access_list_api_key.md
index 10f046ea94..674823029d 100644
--- a/docs/data-sources/access_list_api_key.md
+++ b/docs/data-sources/access_list_api_key.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_access_list_api_key
`mongodbatlas_access_list_api_key` describes an Access List API Key entry resource. The access list grants access from IPs and CIDRs to clusters within the Project.
diff --git a/docs/data-sources/access_list_api_keys.md b/docs/data-sources/access_list_api_keys.md
index bf2637d73e..de479a3f14 100644
--- a/docs/data-sources/access_list_api_keys.md
+++ b/docs/data-sources/access_list_api_keys.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_access_list_api_keys
`mongodbatlas_access_list_api_keys` describes Access List API Key entry resources. The access list grants access from IPs and CIDRs to clusters within the Project.
diff --git a/docs/data-sources/advanced_cluster (preview provider 2.0.0).md b/docs/data-sources/advanced_cluster (preview provider 2.0.0).md
deleted file mode 100644
index 03f1643e7b..0000000000
--- a/docs/data-sources/advanced_cluster (preview provider 2.0.0).md
+++ /dev/null
@@ -1,293 +0,0 @@
-# Data Source: mongodbatlas_advanced_cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-`mongodbatlas_advanced_cluster` describes an Advanced Cluster. The data source requires your Project ID.
-
-This page describes the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`, the page for the current version can be found [here](./advanced_cluster). In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
-
-~> **IMPORTANT:**
-
-• Changes to cluster configurations can affect costs. Before making changes, please see [Billing](https://docs.atlas.mongodb.com/billing/).
-
-• If your Atlas project contains a custom role that uses actions introduced in a specific MongoDB version, you cannot create a cluster with a MongoDB version less than that version unless you delete the custom role.
-
--> **NOTE:** To delete an Atlas cluster that has an associated `mongodbatlas_cloud_backup_schedule` resource and an enabled Backup Compliance Policy, first instruct Terraform to remove the `mongodbatlas_cloud_backup_schedule` resource from the state and then use Terraform to delete the cluster. To learn more, see [Delete a Cluster with a Backup Compliance Policy](../guides/delete-cluster-with-backup-compliance-policy.md).
-
-**NOTE:** This data source also includes Flex clusters.
-
-## Example Usage
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example" {
- project_id = ""
- name = "cluster-test"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M0"
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-
-data "mongodbatlas_advanced_cluster" "example" {
- project_id = mongodbatlas_advanced_cluster.example.project_id
- name = mongodbatlas_advanced_cluster.example.name
-}
-```
-
-## Example using latest sharding configurations with independent shard scaling in the cluster
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example" {
- project_id = ""
- name = "cluster-test"
- backup_enabled = false
- cluster_type = "SHARDED"
-
- replication_specs = [
- { # Sharded cluster with 2 asymmetric shards (M30 and M40)
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- },
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- }
- ]
-}
-
-data "mongodbatlas_advanced_cluster" "example" {
- project_id = mongodbatlas_advanced_cluster.example.project_id
- name = mongodbatlas_advanced_cluster.example.name
- use_replication_spec_per_shard = true
-}
-```
-
-## Example using Flex cluster
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example-flex" {
- project_id = ""
- name = "flex-cluster"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-
-data "mongodbatlas_advanced_cluster" "example" {
- project_id = mongodbatlas_advanced_cluster.example-flex.project_id
- name = mongodbatlas_advanced_cluster.example-flex.name
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique ID for the project to create the cluster.
-* `name` - (Required) Name of the cluster as it appears in Atlas. Once the cluster is created, its name cannot be changed.
-* `use_replication_spec_per_shard` - (Optional) Set this field to true to allow the data source to use the latest schema representing each shard with an individual `replication_specs` object. This enables representing clusters with independent shard scaling.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `id` - The cluster ID.
-* `bi_connector_config` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector_config). In prior versions of the MongoDB Atlas Terraform Provider, this parameter was named `bi_connector`.
-* `cluster_type` - Type of the cluster that you want to create.
-* `disk_size_gb` - Capacity, in gigabytes, of the host's root volume. **(DEPRECATED)** Use `replication_specs[#].region_configs[#].(analytics_specs|electable_specs|read_only_specs).disk_size_gb` instead. To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide).
-* `encryption_at_rest_provider` - Possible values are AWS, GCP, AZURE or NONE.
-* `tags` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
-* `labels` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **(DEPRECATED)** Use `tags` instead.
-* `mongo_db_major_version` - Version of the cluster to deploy.
-* `pinned_fcv` - The pinned Feature Compatibility Version (FCV) with its associated expiration date. See [below](#pinned_fcv).
-* `pit_enabled` - Flag that indicates if the cluster uses Continuous Cloud Backup.
-* `replication_specs` - List of settings that configure your cluster regions. If `use_replication_spec_per_shard = true`, this array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. See [below](#replication_specs).
-* `root_cert_type` - Certificate Authority that MongoDB Atlas clusters use.
-* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
-* `version_release_system` - Release cadence that Atlas uses for this cluster.
-* `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details.
-* `global_cluster_self_managed_sharding` - Flag that indicates if cluster uses Atlas-Managed Sharding (false) or Self-Managed Sharding (true).
-* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster.
-* `redact_client_log_data` - (Optional) Flag that enables or disables log redaction, see the [manual](https://www.mongodb.com/docs/manual/administration/monitoring/#log-redaction) for more information.
-* `config_server_management_mode` - Config Server Management Mode for creating or updating a sharded cluster. Valid values are `ATLAS_MANAGED` (default) and `FIXED_TO_DEDICATED`. When configured as `ATLAS_MANAGED`, Atlas may automatically switch the cluster's config server type for optimal performance and savings. When configured as `FIXED_TO_DEDICATED`, the cluster will always use a dedicated config server. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-* `config_server_type` Describes a sharded cluster's config server type. Valid values are `DEDICATED` and `EMBEDDED`. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-
-### bi_connector_config
-
-Specifies BI Connector for Atlas configuration.
-
-* `enabled` - Specifies whether or not BI Connector for Atlas is enabled on the cluster.
-* `read_preference` - Specifies the read preference to be used by BI Connector for Atlas on the cluster. Each BI Connector for Atlas read preference contains a distinct combination of [readPreference](https://docs.mongodb.com/manual/core/read-preference/) and [readPreferenceTags](https://docs.mongodb.com/manual/core/read-preference/#tag-sets) options. For details on BI Connector for Atlas read preferences, refer to the [BI Connector Read Preferences Table](https://docs.atlas.mongodb.com/tutorial/create-global-writes-cluster/#bic-read-preferences).
-
-### tags
-
- Key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster.
-
-* `key` - Constant that defines the set of the tag.
-* `value` - Variable that belongs to the set of the tag.
-
-To learn more, see [Resource Tags](https://dochub.mongodb.org/core/add-cluster-tag-atlas).
-
-### labels
-
-Key-value pairs that categorize the cluster. Each key and value has a maximum length of 255 characters. You cannot set the key `Infrastructure Tool`, it is used for internal purposes to track aggregate usage.
-
-* `key` - The key that you want to write.
-* `value` - The value that you want to write.
-
--> **NOTE:** MongoDB Atlas doesn't display your labels.
-
-
-### replication_specs
-
-* `id` - **(DEPRECATED)** Unique identifer of the replication document for a zone in a Global Cluster. This value corresponds to the legacy sharding schema (no independent shard scaling) and is different from the Shard ID you may see in the Atlas UI. This value is not populated (empty string) when a sharded cluster has independently scaled shards.
-* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI. When using old sharding configuration (replication spec with `num_shards` greater than 1) this value is not populated.
-* `num_shards` - Provide this value if you set a `cluster_type` of `SHARDED` or `GEOSHARDED`. **(DEPRECATED)** To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide).
-* `region_configs` - Configuration for the hardware specifications for nodes set for a given region. Each `region_configs` object describes the region's priority in elections and the number and type of MongoDB nodes that Atlas deploys to the region. Each `region_configs` object must have either an `analytics_specs` object, `electable_specs` object, or `read_only_specs` object. See [below](#region_configs).
-* `container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the id of the container either created programmatically by the user before any clusters existed in a project or when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example `AWS:US_EAST_1" = "61e0797dde08fb498ca11a71`.
-* `zone_name` - Name for the zone in a Global Cluster.
-* `zone_id` - Unique 24-hexadecimal digit string that identifies the zone in a Global Cluster. If clusterType is GEOSHARDED, this value indicates the zone that the given shard belongs to and can be used to configure Global Cluster backup policies.
-
-
-### region_configs
-
-* `analytics_specs` - Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. See [below](#specs).
-* `auto_scaling` - Configuration for the Collection of settings that configures auto-scaling information for the cluster. See [below](#auto_scaling).
-* `analytics_auto_scaling` - Configuration for the Collection of settings that configures analytics-auto-scaling information for the cluster. See [below](#analytics_auto_scaling).
-* `backing_provider_name` - Cloud service provider on which you provision the host for a multi-tenant cluster.
-* `electable_specs` - Hardware specifications for electable nodes in the region.
-* `priority` - Election priority of the region.
-* `provider_name` - Cloud service provider on which the servers are provisioned.
-* `read_only_specs` - Hardware specifications for read-only nodes in the region. See [below](#specs).
-* `region_name` - Physical location of your MongoDB cluster.
-
-### specs
-
-* `disk_iops` - Target IOPS (Input/Output Operations Per Second) desired for storage attached to this hardware. This parameter defaults to the cluster tier's standard IOPS value.
-* `ebs_volume_type` - Type of storage you want to attach to your AWS-provisioned cluster.
- * `STANDARD` volume types can't exceed the default IOPS rate for the selected volume size.
- * `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
-* `instance_size` - Hardware specification for the instance sizes in this region.
-* `node_count` - Number of nodes of the given type for MongoDB Atlas to deploy to the region.
-* `disk_size_gb` - Storage capacity that the host's root volume possesses expressed in gigabytes. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier.
-
-### auto_scaling
-
-* `disk_gb_enabled` - Flag that indicates whether this cluster enables disk auto-scaling.
-* `compute_enabled` - Flag that indicates whether instance size auto-scaling is enabled.
-* `compute_scale_down_enabled` - Flag that indicates whether the instance size may scale down.
-* `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10).
-* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40).
-
-### analytics_auto_scaling
-
-* `disk_gb_enabled` - Flag that indicates whether this cluster enables disk auto-scaling.
-* `compute_enabled` - Flag that indicates whether instance size auto-scaling is enabled.
-* `compute_scale_down_enabled` - Flag that indicates whether the instance size may scale down.
-* `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10).
-* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40).
-#### Advanced Configuration
-
-* `default_read_concern` - [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. **(DEPRECATED)** MongoDB 6.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
-* `default_write_concern` - [Default level of acknowledgment requested from MongoDB for write operations](https://docs.mongodb.com/manual/reference/write-concern/) set for this cluster. MongoDB 6.0 clusters default to [majority](https://docs.mongodb.com/manual/reference/write-concern/).
-* `fail_index_key_too_long` - **(DEPRECATED)** When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
-* `javascript_enabled` - When true, the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
-* `minimum_enabled_tls_protocol` - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are:
- - TLS1_0
- - TLS1_1
- - TLS1_2
-* `no_table_scan` - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations.
-* `oplog_size_mb` - The custom oplog size of the cluster. Without a value that indicates that the cluster uses the default oplog size calculated by Atlas.
-* `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates.
-* `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
-* `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
-* `transaction_lifetime_limit_seconds` - Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds.
-* `default_max_time_ms` - Default time limit in milliseconds for individual read operations to complete. This option corresponds to the [defaultMaxTimeMS](https://www.mongodb.com/docs/upcoming/reference/cluster-parameters/defaultMaxTimeMS/) cluster parameter. This parameter is supported only for MongoDB version 8.0 and above.
-* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds This parameter is only supported for MongoDB version 6.0 and above. Defaults to `-1`(off).
-* `tls_cipher_config_mode` - The TLS cipher suite configuration mode. Valid values include `CUSTOM` or `DEFAULT`. The `DEFAULT` mode uses the default cipher suites. The `CUSTOM` mode allows you to specify custom cipher suites for both TLS 1.2 and TLS 1.3.
-* `custom_openssl_cipher_config_tls12` - The custom OpenSSL cipher suite list for TLS 1.2. This field is only valid when `tls_cipher_config_mode` is set to `CUSTOM`.
-
-### pinned_fcv
-
-* `expiration_date` - Expiration date of the fixed FCV. This value is in the ISO 8601 timestamp format (e.g. "2024-12-04T16:25:00Z").
-* `version` - Feature compatibility version of the cluster.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `cluster_id` - The cluster ID.
-* `mongo_db_version` - Version of MongoDB the cluster runs, in `major-version`.`minor-version` format.
-* `id` - The Terraform's unique identifier used internally for state management.
-* `connection_strings` - Set of connection strings that your applications use to connect to this cluster. More information in [Connection-strings](https://docs.mongodb.com/manual/reference/connection-string/). Use the parameters in this object to connect your applications to this cluster. To learn more about the formats of connection strings, see [Connection String Options](https://docs.atlas.mongodb.com/reference/faq/connection-changes/). NOTE: Atlas returns the contents of this object after the cluster is operational, not while it builds the cluster.
-
- **NOTE** Connection strings must be returned as a list, therefore to refer to a specific attribute value add index notation. Example: mongodbatlas_advanced_cluster.cluster-test.connection_strings.0.standard_srv
-
- Private connection strings may not be available immediately as the reciprocal connections may not have finalized by end of the Terraform run. If the expected connection string(s) do not contain a value a terraform refresh may need to be performed to obtain the value. One can also view the status of the peered connection in the [Atlas UI](https://docs.atlas.mongodb.com/security-vpc-peering/).
-
- Ensure `connection_strings` are available following `terraform apply` by adding a `depends_on` relationship to the `advanced_cluster`, ex:
- ```
- data "mongodbatlas_advanced_cluster" "example_cluster" {
- project_id = var.project_id
- name = var.cluster_name
- depends_on = [mongodbatlas_privatelink_endpoint_service.example_endpoint]
- }
- ```
-
- - `connection_strings.standard` - Public mongodb:// connection string for this cluster.
- - `connection_strings.standard_srv` - Public mongodb+srv:// connection string for this cluster. The mongodb+srv protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don’t need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn’t , use connectionStrings.standard.
- - `connection_strings.private` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- - `connection_strings.private_srv` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb+srv://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- - `connection_strings.private_endpoint` - Private endpoint connection strings. Each object describes the connection strings you can use to connect to this cluster through a private endpoint. Atlas returns this parameter only if you deployed a private endpoint to all regions to which you deployed this cluster's nodes.
- - `connection_strings.private_endpoint[#].connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
- - `connection_strings.private_endpoint[#].srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`
- - `connection_strings.private_endpoint[#].srv_shard_optimized_connection_string` - Private endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the Uniform Resource Identifier (URI) if the nodes change. Use this Uniform Resource Identifier (URI) format if your application and Atlas cluster support it. If it doesn't, use and consult the documentation for connectionStrings.privateEndpoint[#].srvConnectionString.
- - `connection_strings.private_endpoint[#].type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
- - `connection_strings.private_endpoint[#].endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
- - `connection_strings.private_endpoint[#].endpoints[#].endpoint_id` - Unique identifier of the private endpoint.
- - `connection_strings.private_endpoint[#].endpoints[#].provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
- - `connection_strings.private_endpoint[#].endpoints[#].region` - Region to which you deployed the private endpoint.
-* `paused` - Flag that indicates whether the cluster is paused or not.
-* `state_name` - Current state of the cluster. The possible states are:
-
-See detailed information for arguments and attributes: [MongoDB API Advanced Cluster](https://docs.atlas.mongodb.com/reference/api/cluster-advanced/get-one-cluster-advanced/)
diff --git a/docs/data-sources/advanced_cluster.md b/docs/data-sources/advanced_cluster.md
index de42aa08ef..eaa606bda1 100644
--- a/docs/data-sources/advanced_cluster.md
+++ b/docs/data-sources/advanced_cluster.md
@@ -1,8 +1,13 @@
+---
+subcategory: "Clusters"
+---
+
# Data Source: mongodbatlas_advanced_cluster
`mongodbatlas_advanced_cluster` describes an Advanced Cluster. The data source requires your Project ID.
-This page describes the current version of `mongodbatlas_advanced_cluster`, the page for the **Preview for MongoDB Atlas Provider 2.0.0** can be found [here](./advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+
+-> **NOTE:** Groups and projects are synonymous terms. You might find group_id in the official documentation.
~> **IMPORTANT:**
• Changes to cluster configurations can affect costs. Before making changes, please see [Billing](https://docs.atlas.mongodb.com/billing/).
@@ -12,8 +17,6 @@ This page describes the current version of `mongodbatlas_advanced_cluster`, the
-> **NOTE:** This data source also includes Flex clusters.
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
-
## Example Usage
```terraform
@@ -22,17 +25,21 @@ resource "mongodbatlas_advanced_cluster" "example" {
name = "cluster-test"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M0"
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M0"
+ }
+ provider_name = "TENANT"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
data "mongodbatlas_advanced_cluster" "example" {
@@ -50,37 +57,41 @@ resource "mongodbatlas_advanced_cluster" "example" {
backup_enabled = false
cluster_type = "SHARDED"
- replication_specs { # Sharded cluster with 2 asymmetric shards (M30 and M40)
- region_configs {
- electable_specs {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
+ replication_specs = [
+ { # Sharded cluster with 2 asymmetric shards (M30 and M40)
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ },
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M40"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
}
- }
-
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
+ ]
}
data "mongodbatlas_advanced_cluster" "example" {
project_id = mongodbatlas_advanced_cluster.example.project_id
name = mongodbatlas_advanced_cluster.example.name
- use_replication_spec_per_shard = true
}
```
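Building on the asymmetric-shard example above, a sketch of how the per-shard specs returned by the data source can be inspected (the output name is illustrative):

```terraform
output "shard_instance_sizes" {
  # One entry per shard, e.g. the M30 and M40 sizes from the example above.
  value = [
    for spec in data.mongodbatlas_advanced_cluster.example.replication_specs :
    spec.region_configs[0].electable_specs.instance_size
  ]
}
```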
@@ -92,14 +103,18 @@ resource "mongodbatlas_advanced_cluster" "example-flex" {
name = "flex-cluster"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ provider_name = "FLEX"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
data "mongodbatlas_advanced_cluster" "example" {
@@ -112,23 +127,20 @@ data "mongodbatlas_advanced_cluster" "example" {
* `project_id` - (Required) The unique ID for the project to create the cluster.
* `name` - (Required) Name of the cluster as it appears in Atlas. Once the cluster is created, its name cannot be changed.
-* `use_replication_spec_per_shard` - (Optional) Set this field to true to allow the data source to use the latest schema representing each shard with an individual `replication_specs` object. This enables representing clusters with independent shard scaling.
## Attributes Reference
In addition to all arguments above, the following attributes are exported:
-* `id` - The cluster ID.
* `bi_connector_config` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector_config). In prior versions of the MongoDB Atlas Terraform Provider, this parameter was named `bi_connector`.
* `cluster_type` - Type of the cluster that you want to create.
-* `disk_size_gb` - Capacity, in gigabytes, of the host's root volume. **(DEPRECATED)** Use `replication_specs.#.region_configs.#.(analytics_specs|electable_specs|read_only_specs).disk_size_gb` instead. To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide).
* `encryption_at_rest_provider` - Possible values are AWS, GCP, AZURE or NONE.
* `tags` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
* `labels` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **(DEPRECATED)** Use `tags` instead.
* `mongo_db_major_version` - Version of the cluster to deploy.
* `pinned_fcv` - The pinned Feature Compatibility Version (FCV) with its associated expiration date. See [below](#pinned_fcv).
* `pit_enabled` - Flag that indicates if the cluster uses Continuous Cloud Backup.
-* `replication_specs` - List of settings that configure your cluster regions. If `use_replication_spec_per_shard = true`, this array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. See [below](#replication_specs).
+* `replication_specs` - List of settings that configure your cluster regions. This array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. See [below](#replication_specs).
* `root_cert_type` - Certificate Authority that MongoDB Atlas clusters use.
* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
* `version_release_system` - Release cadence that Atlas uses for this cluster.
@@ -167,9 +179,7 @@ Key-value pairs that categorize the cluster. Each key and value has a maximum le
### replication_specs
-* `id` - **(DEPRECATED)** Unique identifer of the replication document for a zone in a Global Cluster. This value corresponds to the legacy sharding schema (no independent shard scaling) and is different from the Shard ID you may see in the Atlas UI. This value is not populated (empty string) when a sharded cluster has independently scaled shards.
-* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI. When using old sharding configuration (replication spec with `num_shards` greater than 1) this value is not populated.
-* `num_shards` - Provide this value if you set a `cluster_type` of `SHARDED` or `GEOSHARDED`. **(DEPRECATED)** To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide).
+* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to the Shard ID displayed in the UI.
* `region_configs` - Configuration for the hardware specifications for nodes set for a given region. Each `region_configs` object describes the region's priority in elections and the number and type of MongoDB nodes that Atlas deploys to the region. Each `region_configs` object must have either an `analytics_specs` object, `electable_specs` object, or `read_only_specs` object. See [below](#region_configs).
* `container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the id of the container either created programmatically by the user before any clusters existed in a project or when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example `AWS:US_EAST_1" = "61e0797dde08fb498ca11a71`.
* `zone_name` - Name for the zone in a Global Cluster.
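The `container_id` map described above can be read with its `"providerName:regionName"` key. A minimal sketch under the new list-based schema (the data source label and region key here are illustrative, not taken from this page's examples):

```terraform
data "mongodbatlas_advanced_cluster" "this" {
  project_id = var.project_id
  name       = "cluster-test"
}

# Hypothetical lookup: the key must match a region actually configured on the cluster.
output "peering_container_id" {
  value = data.mongodbatlas_advanced_cluster.this.replication_specs[0].container_id["AWS:US_EAST_1"]
}
```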
@@ -215,9 +225,7 @@ Key-value pairs that categorize the cluster. Each key and value has a maximum le
* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40).
#### Advanced Configuration
-* `default_read_concern` - [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. **(DEPRECATED)** MongoDB 6.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
* `default_write_concern` - [Default level of acknowledgment requested from MongoDB for write operations](https://docs.mongodb.com/manual/reference/write-concern/) set for this cluster. MongoDB 6.0 clusters default to [majority](https://docs.mongodb.com/manual/reference/write-concern/).
-* `fail_index_key_too_long` - **(DEPRECATED)** When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
* `javascript_enabled` - When true, the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
* `minimum_enabled_tls_protocol` - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are:
- TLS1_0
@@ -245,7 +253,6 @@ In addition to all arguments above, the following attributes are exported:
* `cluster_id` - The cluster ID.
* `mongo_db_version` - Version of MongoDB the cluster runs, in `major-version`.`minor-version` format.
-* `id` - The Terraform's unique identifier used internally for state management.
* `connection_strings` - Set of connection strings that your applications use to connect to this cluster. More information in [Connection-strings](https://docs.mongodb.com/manual/reference/connection-string/). Use the parameters in this object to connect your applications to this cluster. To learn more about the formats of connection strings, see [Connection String Options](https://docs.atlas.mongodb.com/reference/faq/connection-changes/). NOTE: Atlas returns the contents of this object after the cluster is operational, not while it builds the cluster.
**NOTE** Connection strings must be returned as a list, therefore to refer to a specific attribute value add index notation. Example: mongodbatlas_advanced_cluster.cluster-test.connection_strings.0.standard_srv
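The index notation mentioned in the note above can be sketched as an output block (assuming the `cluster-test` resource used elsewhere on this page):

```terraform
# connection_strings is a list, so index element 0 before reading an attribute.
output "standard_srv" {
  value = mongodbatlas_advanced_cluster.cluster-test.connection_strings[0].standard_srv
}
```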
@@ -266,14 +273,14 @@ In addition to all arguments above, the following attributes are exported:
- `connection_strings.private` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- `connection_strings.private_srv` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb+srv://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- `connection_strings.private_endpoint` - Private endpoint connection strings. Each object describes the connection strings you can use to connect to this cluster through a private endpoint. Atlas returns this parameter only if you deployed a private endpoint to all regions to which you deployed this cluster's nodes.
- - `connection_strings.private_endpoint.#.connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
- - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`
- - `connection_strings.private_endpoint.#.srv_shard_optimized_connection_string` - Private endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the Uniform Resource Identifier (URI) if the nodes change. Use this Uniform Resource Identifier (URI) format if your application and Atlas cluster support it. If it doesn't, use and consult the documentation for connectionStrings.privateEndpoint[#].srvConnectionString.
- - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
- - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
- - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint.
- - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
- - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint.
+ - `connection_strings.private_endpoint[#].connection_string` - Private-endpoint-aware `mongodb://` connection string for this private endpoint.
+ - `connection_strings.private_endpoint[#].srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`.
+ - `connection_strings.private_endpoint[#].srv_shard_optimized_connection_string` - Private endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the URI if the nodes change. Use this URI format if your application and Atlas cluster support it. If they don't, use `connection_strings.private_endpoint[#].srv_connection_string` and consult the documentation for connectionStrings.privateEndpoint[#].srvConnectionString.
+ - `connection_strings.private_endpoint[#].type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
+ - `connection_strings.private_endpoint[#].endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`.
+ - `connection_strings.private_endpoint[#].endpoints[#].endpoint_id` - Unique identifier of the private endpoint.
+ - `connection_strings.private_endpoint[#].endpoints[#].provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
+ - `connection_strings.private_endpoint[#].endpoints[#].region` - Region to which you deployed the private endpoint.
* `paused` - Flag that indicates whether the cluster is paused or not.
* `state_name` - Current state of the cluster. The possible states are:
diff --git a/docs/data-sources/advanced_clusters (preview provider 2.0.0).md b/docs/data-sources/advanced_clusters (preview provider 2.0.0).md
deleted file mode 100644
index 874f1e67a3..0000000000
--- a/docs/data-sources/advanced_clusters (preview provider 2.0.0).md
+++ /dev/null
@@ -1,282 +0,0 @@
-# Data Source: mongodbatlas_advanced_clusters (Preview for MongoDB Atlas Provider 2.0.0)
-
-`mongodbatlas_advanced_clusters` returns all Advanced Clusters for a project_id.
-
-This page describes the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_clusters`, the page for the current version can be found [here](./advanced_clusters). In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
-
-~> **IMPORTANT:**
-
-* Changes to cluster configurations can affect costs. Before making changes, please see [Billing](https://docs.atlas.mongodb.com/billing/).
-
-* If your Atlas project contains a custom role that uses actions introduced in a specific MongoDB version, you cannot create a cluster with a MongoDB version less than that version unless you delete the custom role.
-
--> **NOTE:** This data source also includes Flex clusters.
-
-## Example Usage
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example" {
- project_id = ""
- name = "cluster-test"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M0"
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-
-data "mongodbatlas_advanced_clusters" "example" {
- project_id = mongodbatlas_advanced_cluster.example.project_id
-}
-```
-
-## Example using latest sharding configurations with independent shard scaling in the cluster
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example" {
- project_id = ""
- name = "cluster-test"
- backup_enabled = false
- cluster_type = "SHARDED"
-
- replication_specs = [
- { # Sharded cluster with 2 asymmetric shards (M30 and M40)
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- },
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- }
- ]
-}
-
-data "mongodbatlas_advanced_cluster" "example-asym" {
- project_id = mongodbatlas_advanced_cluster.example.project_id
- name = mongodbatlas_advanced_cluster.example.name
- use_replication_spec_per_shard = true
-}
-```
-
-## Example using Flex cluster
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example-flex" {
- project_id = ""
- name = "flex-cluster"
- cluster_type = "REPLICASET"
-
- replication_specs {
- region_configs {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- }
-}
-
-data "mongodbatlas_advanced_clusters" "example" {
- project_id = mongodbatlas_advanced_cluster.example-flex.project_id
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique ID for the project to get the clusters.
-* `use_replication_spec_per_shard` - (Optional) Set this field to true to allow the data source to use the latest schema representing each shard with an individual `replication_specs` object. This enables representing clusters with independent shard scaling. **Note:** If not set to true, this data source return all clusters except clusters with asymmetric shards.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `id` - The cluster ID.
-* `results` - A list where each represents a Cluster. See below for more details.
-
-### Advanced Cluster
-
-* `bi_connector_config` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector_config). In prior versions of the MongoDB Atlas Terraform Provider, this parameter was named `bi_connector`.
-* `cluster_type` - Type of the cluster that you want to create.
-* `disk_size_gb` - Capacity, in gigabytes, of the host's root volume. **(DEPRECATED)** Use `replication_specs[#].region_configs[#].(analytics_specs|electable_specs|read_only_specs).disk_size_gb` instead. To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide) for more details.
-* `encryption_at_rest_provider` - Possible values are AWS, GCP, AZURE or NONE.
-* `tags` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
-* `labels` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#labels).
-* `mongo_db_major_version` - Version of the cluster to deploy.
-* `pinned_fcv` - The pinned Feature Compatibility Version (FCV) with its associated expiration date. See [below](#pinned_fcv).
-* `pit_enabled` - Flag that indicates if the cluster uses Continuous Cloud Backup.
-* `replication_specs` - List of settings that configure your cluster regions. If `use_replication_spec_per_shard = true`, this array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. See [below](#replication_specs)
-* `root_cert_type` - Certificate Authority that MongoDB Atlas clusters use.
-* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
-* `version_release_system` - Release cadence that Atlas uses for this cluster.
-* `advanced_configuration` - Get the advanced configuration options. See [Advanced Configuration](#advanced-configuration) below for more details.
-* `global_cluster_self_managed_sharding` - Flag that indicates if cluster uses Atlas-Managed Sharding (false) or Self-Managed Sharding (true).
-* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster.
-* `redact_client_log_data` - (Optional) Flag that enables or disables log redaction, see the [manual](https://www.mongodb.com/docs/manual/administration/monitoring/#log-redaction) for more information.
-* `config_server_management_mode` - Config Server Management Mode for creating or updating a sharded cluster. Valid values are `ATLAS_MANAGED` (default) and `FIXED_TO_DEDICATED`. When configured as `ATLAS_MANAGED`, Atlas may automatically switch the cluster's config server type for optimal performance and savings. When configured as `FIXED_TO_DEDICATED`, the cluster will always use a dedicated config server. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-* `config_server_type` Describes a sharded cluster's config server type. Valid values are `DEDICATED` and `EMBEDDED`. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-
-### bi_connector_config
-
-Specifies BI Connector for Atlas configuration.
-
-* `enabled` - Specifies whether or not BI Connector for Atlas is enabled on the cluster.
-* `read_preference` - Specifies the read preference to be used by BI Connector for Atlas on the cluster. Each BI Connector for Atlas read preference contains a distinct combination of [readPreference](https://docs.mongodb.com/manual/core/read-preference/) and [readPreferenceTags](https://docs.mongodb.com/manual/core/read-preference/#tag-sets) options. For details on BI Connector for Atlas read preferences, refer to the [BI Connector Read Preferences Table](https://docs.atlas.mongodb.com/tutorial/create-global-writes-cluster/#bic-read-preferences).
-
-### tags
-
-Key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster.
-
-* `key` - Constant that defines the set of the tag.
-* `value` - Variable that belongs to the set of the tag.
-
-To learn more, see [Resource Tags](https://dochub.mongodb.org/core/add-cluster-tag-atlas).
-
-### labels
-
-Key-value pairs that categorize the cluster. Each key and value has a maximum length of 255 characters. You cannot set the key `Infrastructure Tool`, it is used for internal purposes to track aggregate usage.
-
-* `key` - The key that you want to write.
-* `value` - The value that you want to write.
-
--> **NOTE:** MongoDB Atlas doesn't display your labels.
-
-
-### replication_specs
-
-* `id` - **(DEPRECATED)** Unique identifer of the replication document for a zone in a Global Cluster. This value corresponds to the legacy sharding schema (no independent shard scaling) and is different from the Shard ID you may see in the Atlas UI. This value is not populated (empty string) when a sharded cluster has independently scaled shards.
-* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI. When using old sharding configuration (replication spec with `num_shards` greater than 1) this value is not populated.
-* `num_shards` - Provide this value if you set a `cluster_type` of SHARDED or GEOSHARDED. **(DEPRECATED)** To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide) for more details.
-* `region_configs` - Configuration for the hardware specifications for nodes set for a given region. Each `region_configs` object describes the region's priority in elections and the number and type of MongoDB nodes that Atlas deploys to the region. Each `region_configs` object must have either an `analytics_specs` object, `electable_specs` object, or `read_only_specs` object. See [below](#region_configs).
-* `container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the id of the container either created programmatically by the user before any clusters existed in a project or when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example `AWS:US_EAST_1" = "61e0797dde08fb498ca11a71`.
-* `zone_name` - Name for the zone in a Global Cluster.
-* `zone_id` - Unique 24-hexadecimal digit string that identifies the zone in a Global Cluster. If clusterType is GEOSHARDED, this value indicates the zone that the given shard belongs to and can be used to configure Global Cluster backup policies.
-
-
-### region_configs
-
-* `analytics_specs` - Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. See [below](#specs).
-* `auto_scaling` - Configuration for the Collection of settings that configures auto-scaling information for the cluster. See [below](#auto_scaling).
-* `analytics_auto_scaling` - Configuration for the Collection of settings that configures analytis-auto-scaling information for the cluster. See [below](#analytics_auto_scaling).
-* `backing_provider_name` - Cloud service provider on which you provision the host for a multi-tenant cluster.
-* `electable_specs` - Hardware specifications for electable nodes in the region.
-* `priority` - Election priority of the region.
-* `provider_name` - Cloud service provider on which the servers are provisioned.
-* `read_only_specs` - Hardware specifications for read-only nodes in the region. See [below](#specs).
-* `region_name` - Physical location of your MongoDB cluster.
-
-### specs
-
-* `disk_iops` - Target IOPS (Input/Output Operations Per Second) desired for storage attached to this hardware. This parameter defaults to the cluster tier's standard IOPS value.
-* `ebs_volume_type` - Type of storage you want to attach to your AWS-provisioned cluster.
- * `STANDARD` volume types can't exceed the default IOPS rate for the selected volume size.
- * `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
-* `instance_size` - Hardware specification for the instance sizes in this region.
-* `node_count` - Number of nodes of the given type for MongoDB Atlas to deploy to the region.
-* `disk_size_gb` - Storage capacity that the host's root volume possesses expressed in gigabytes. If disk size specified is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier.
-
-### auto_scaling
-
-* `disk_gb_enabled` - Flag that indicates whether this cluster enables disk auto-scaling.
-* `compute_enabled` - Flag that indicates whether instance size auto-scaling is enabled.
-* `compute_scale_down_enabled` - Flag that indicates whether the instance size may scale down.
-* `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10).
-* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40).
-
-### analytics_auto_scaling
-
-* `disk_gb_enabled` - Flag that indicates whether this cluster enables disk auto-scaling.
-* `compute_enabled` - Flag that indicates whether instance size auto-scaling is enabled.
-* `compute_scale_down_enabled` - Flag that indicates whether the instance size may scale down.
-* `compute_min_instance_size` - Minimum instance size to which your cluster can automatically scale (such as M10).
-* `compute_max_instance_size` - Maximum instance size to which your cluster can automatically scale (such as M40).
-
-#### Advanced Configuration
-
-* `default_read_concern` - [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. **(DEPRECATED)** MongoDB 6.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
-* `default_write_concern` - [Default level of acknowledgment requested from MongoDB for write operations](https://docs.mongodb.com/manual/reference/write-concern/) set for this cluster. MongoDB 6.0 clusters default to [majority](https://docs.mongodb.com/manual/reference/write-concern/).
-* `fail_index_key_too_long` - **(DEPRECATED)** When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
-* `javascript_enabled` - When true, the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
-* `minimum_enabled_tls_protocol` - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are:
- - TLS1_0
- - TLS1_1
- - TLS1_2
-* `no_table_scan` - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations.
-* `oplog_size_mb` - The custom oplog size of the cluster. Without a value that indicates that the cluster uses the default oplog size calculated by Atlas.
-* `oplog_min_retention_hours` - Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates.
-* `sample_size_bi_connector` - Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
-* `sample_refresh_interval_bi_connector` - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
-* `default_max_time_ms` - Default time limit in milliseconds for individual read operations to complete. This option corresponds to the [defaultMaxTimeMS](https://www.mongodb.com/docs/upcoming/reference/cluster-parameters/defaultMaxTimeMS/) cluster parameter. This parameter is supported only for MongoDB version 8.0 and above.
-* `transaction_lifetime_limit_seconds` - (Optional) Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds.
-* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. This parameter is only supported for MongoDB version 6.0 and above. Defaults to `-1`(off).
-* `tls_cipher_config_mode` - The TLS cipher suite configuration mode. Valid values include `CUSTOM` or `DEFAULT`. The `DEFAULT` mode uses the default cipher suites. The `CUSTOM` mode allows you to specify custom cipher suites for both TLS 1.2 and TLS 1.3.
-* `custom_openssl_cipher_config_tls12` - The custom OpenSSL cipher suite list for TLS 1.2. This field is only valid when `tls_cipher_config_mode` is set to `CUSTOM`.
-
-### pinned_fcv
-
-* `expiration_date` - Expiration date of the fixed FCV. This value is in the ISO 8601 timestamp format (e.g. "2024-12-04T16:25:00Z").
-* `version` - Feature compatibility version of the cluster.
-
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `cluster_id` - The cluster ID.
-* `mongo_db_version` - Version of MongoDB the cluster runs, in `major-version`.`minor-version` format.
-* `id` - The Terraform's unique identifier used internally for state management.
-* `connection_strings` - Set of connection strings that your applications use to connect to this cluster. More information in [Connection-strings](https://docs.mongodb.com/manual/reference/connection-string/). Use the parameters in this object to connect your applications to this cluster. To learn more about the formats of connection strings, see [Connection String Options](https://docs.atlas.mongodb.com/reference/faq/connection-changes/). NOTE: Atlas returns the contents of this object after the cluster is operational, not while it builds the cluster.
-
- **NOTE** Connection strings must be returned as a list, therefore to refer to a specific attribute value add index notation. Example: mongodbatlas_advanced_cluster.cluster-test.connection_strings.0.standard_srv
-
- Private connection strings may not be available immediately as the reciprocal connections may not have finalized by end of the Terraform run. If the expected connection string(s) do not contain a value a terraform refresh may need to be performed to obtain the value. One can also view the status of the peered connection in the [Atlas UI](https://docs.atlas.mongodb.com/security-vpc-peering/).
-
- - `connection_strings.standard` - Public mongodb:// connection string for this cluster.
- - `connection_strings.standard_srv` - Public mongodb+srv:// connection string for this cluster. The mongodb+srv protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don’t need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn’t , use connectionStrings.standard.
- - `connection_strings.private` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- - `connection_strings.private_srv` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb+srv://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- - `connection_strings.private_endpoint` - Private endpoint connection strings. Each object describes the connection strings you can use to connect to this cluster through a private endpoint. Atlas returns this parameter only if you deployed a private endpoint to all regions to which you deployed this cluster's nodes.
- - `connection_strings.private_endpoint[#].connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
- - `connection_strings.private_endpoint[#].srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`
- - `connection_strings.private_endpoint[#].srv_shard_optimized_connection_string` - Private endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the Uniform Resource Identifier (URI) if the nodes change. Use this Uniform Resource Identifier (URI) format if your application and Atlas cluster support it. If it doesn't, use and consult the documentation for connectionStrings.privateEndpoint[#].srvConnectionString.
- - `connection_strings.private_endpoint[#].type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
- - `connection_strings.private_endpoint[#].endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
- - `connection_strings.private_endpoint[#].endpoints[#].endpoint_id` - Unique identifier of the private endpoint.
- - `connection_strings.private_endpoint[#].endpoints[#].provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
- - `connection_strings.private_endpoint[#].endpoints[#].region` - Region to which you deployed the private endpoint.
-* `paused` - Flag that indicates whether the cluster is paused or not.
-* `state_name` - Current state of the cluster. The possible states are:
-
-See detailed information for arguments and attributes: [MongoDB API Advanced Clusters](https://docs.atlas.mongodb.com/reference/api/cluster-advanced/get-all-cluster-advanced/)
diff --git a/docs/data-sources/advanced_clusters.md b/docs/data-sources/advanced_clusters.md
index 8ffb914c18..5237d2e2a6 100644
--- a/docs/data-sources/advanced_clusters.md
+++ b/docs/data-sources/advanced_clusters.md
@@ -1,8 +1,11 @@
+---
+subcategory: "Clusters"
+---
+
# Data Source: mongodbatlas_advanced_clusters
`mongodbatlas_advanced_clusters` returns all Advanced Clusters for a project_id.
-This page describes the current version of `mongodbatlas_advanced_clusters`, the page for the **Preview for MongoDB Atlas Provider 2.0.0** can be found [here](./advanced_clusters%2520%2528preview%2520provider%25202.0.0%2529).
-> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
@@ -20,17 +23,21 @@ resource "mongodbatlas_advanced_cluster" "example" {
name = "cluster-test"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M0"
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M0"
+ }
+ provider_name = "TENANT"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
data "mongodbatlas_advanced_clusters" "example" {
@@ -47,37 +54,41 @@ resource "mongodbatlas_advanced_cluster" "example" {
backup_enabled = false
cluster_type = "SHARDED"
- replication_specs { # Sharded cluster with 2 asymmetric shards (M30 and M40)
- region_configs {
- electable_specs {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
+ replication_specs = [
+ { # Sharded cluster with 2 asymmetric shards (M30 and M40)
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ },
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M40"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
}
- }
-
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
+ ]
}
data "mongodbatlas_advanced_cluster" "example-asym" {
project_id = mongodbatlas_advanced_cluster.example.project_id
name = mongodbatlas_advanced_cluster.example.name
- use_replication_spec_per_shard = true
}
```
@@ -89,14 +100,14 @@ resource "mongodbatlas_advanced_cluster" "example-flex" {
name = "flex-cluster"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
provider_name = "FLEX"
backing_provider_name = "AWS"
region_name = "US_EAST_1"
priority = 7
- }
- }
+ }]
+ }]
}
data "mongodbatlas_advanced_clusters" "example" {
@@ -107,27 +118,24 @@ data "mongodbatlas_advanced_clusters" "example" {
## Argument Reference
* `project_id` - (Required) The unique ID for the project to get the clusters.
-* `use_replication_spec_per_shard` - (Optional) Set this field to true to allow the data source to use the latest schema representing each shard with an individual `replication_specs` object. This enables representing clusters with independent shard scaling. **Note:** If not set to true, this data source return all clusters except clusters with asymmetric shards.
## Attributes Reference
In addition to all arguments above, the following attributes are exported:
-* `id` - The cluster ID.
* `results` - A list where each represents a Cluster. See below for more details.
### Advanced Cluster
* `bi_connector_config` - Configuration settings applied to BI Connector for Atlas on this cluster. See [below](#bi_connector_config). In prior versions of the MongoDB Atlas Terraform Provider, this parameter was named `bi_connector`.
* `cluster_type` - Type of the cluster that you want to create.
-* `disk_size_gb` - Capacity, in gigabytes, of the host's root volume. **(DEPRECATED)** Use `replication_specs.#.region_configs.#.(analytics_specs|electable_specs|read_only_specs).disk_size_gb` instead. To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide) for more details.
* `encryption_at_rest_provider` - Possible values are AWS, GCP, AZURE or NONE.
* `tags` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
* `labels` - Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#labels).
* `mongo_db_major_version` - Version of the cluster to deploy.
* `pinned_fcv` - The pinned Feature Compatibility Version (FCV) with its associated expiration date. See [below](#pinned_fcv).
* `pit_enabled` - Flag that indicates if the cluster uses Continuous Cloud Backup.
-* `replication_specs` - List of settings that configure your cluster regions. If `use_replication_spec_per_shard = true`, this array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. See [below](#replication_specs)
+* `replication_specs` - List of settings that configure your cluster regions. This array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. See [below](#replication_specs).
* `root_cert_type` - Certificate Authority that MongoDB Atlas clusters use.
* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
* `version_release_system` - Release cadence that Atlas uses for this cluster.
@@ -166,9 +174,7 @@ Key-value pairs that categorize the cluster. Each key and value has a maximum le
### replication_specs
-* `id` - **(DEPRECATED)** Unique identifer of the replication document for a zone in a Global Cluster. This value corresponds to the legacy sharding schema (no independent shard scaling) and is different from the Shard ID you may see in the Atlas UI. This value is not populated (empty string) when a sharded cluster has independently scaled shards.
-* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI. When using old sharding configuration (replication spec with `num_shards` greater than 1) this value is not populated.
-* `num_shards` - Provide this value if you set a `cluster_type` of SHARDED or GEOSHARDED. **(DEPRECATED)** To learn more, see the [Migration Guide](../guides/1.18.0-upgrade-guide) for more details.
+* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to the Shard ID displayed in the UI.
* `region_configs` - Configuration for the hardware specifications for nodes set for a given region. Each `region_configs` object describes the region's priority in elections and the number and type of MongoDB nodes that Atlas deploys to the region. Each `region_configs` object must have either an `analytics_specs` object, `electable_specs` object, or `read_only_specs` object. See [below](#region_configs).
* `container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the id of the container either created programmatically by the user before any clusters existed in a project or when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example `AWS:US_EAST_1" = "61e0797dde08fb498ca11a71`.
* `zone_name` - Name for the zone in a Global Cluster.
@@ -215,9 +221,7 @@ Key-value pairs that categorize the cluster. Each key and value has a maximum le
#### Advanced Configuration
-* `default_read_concern` - [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. **(DEPRECATED)** MongoDB 6.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
* `default_write_concern` - [Default level of acknowledgment requested from MongoDB for write operations](https://docs.mongodb.com/manual/reference/write-concern/) set for this cluster. MongoDB 6.0 clusters default to [majority](https://docs.mongodb.com/manual/reference/write-concern/).
-* `fail_index_key_too_long` - **(DEPRECATED)** When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
* `javascript_enabled` - When true, the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
* `minimum_enabled_tls_protocol` - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are:
- TLS1_0
@@ -246,7 +250,6 @@ In addition to all arguments above, the following attributes are exported:
* `cluster_id` - The cluster ID.
* `mongo_db_version` - Version of MongoDB the cluster runs, in `major-version`.`minor-version` format.
-* `id` - The Terraform's unique identifier used internally for state management.
* `connection_strings` - Set of connection strings that your applications use to connect to this cluster. More information in [Connection-strings](https://docs.mongodb.com/manual/reference/connection-string/). Use the parameters in this object to connect your applications to this cluster. To learn more about the formats of connection strings, see [Connection String Options](https://docs.atlas.mongodb.com/reference/faq/connection-changes/). NOTE: Atlas returns the contents of this object after the cluster is operational, not while it builds the cluster.
**NOTE** Connection strings must be returned as a list, therefore to refer to a specific attribute value add index notation. Example: mongodbatlas_advanced_cluster.cluster-test.connection_strings.0.standard_srv
@@ -258,14 +261,14 @@ In addition to all arguments above, the following attributes are exported:
- `connection_strings.private` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- `connection_strings.private_srv` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb+srv://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- `connection_strings.private_endpoint` - Private endpoint connection strings. Each object describes the connection strings you can use to connect to this cluster through a private endpoint. Atlas returns this parameter only if you deployed a private endpoint to all regions to which you deployed this cluster's nodes.
- - `connection_strings.private_endpoint.#.connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
- - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`
- - `connection_strings.private_endpoint.#.srv_shard_optimized_connection_string` - Private endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the Uniform Resource Identifier (URI) if the nodes change. Use this Uniform Resource Identifier (URI) format if your application and Atlas cluster support it. If it doesn't, use and consult the documentation for connectionStrings.privateEndpoint[#].srvConnectionString.
- - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
- - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
- - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint.
- - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
- - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint.
+ - `connection_strings.private_endpoint[#].connection_string` - Private-endpoint-aware `mongodb://` connection string for this private endpoint.
+ - `connection_strings.private_endpoint[#].srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`.
+ - `connection_strings.private_endpoint[#].srv_shard_optimized_connection_string` - Private-endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the URI if the nodes change. Use this URI format if your application and Atlas cluster support it. If they don't, use `connection_strings.private_endpoint[#].srv_connection_string` and consult its documentation.
+ - `connection_strings.private_endpoint[#].type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
+ - `connection_strings.private_endpoint[#].endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`.
+ - `connection_strings.private_endpoint[#].endpoints[#].endpoint_id` - Unique identifier of the private endpoint.
+ - `connection_strings.private_endpoint[#].endpoints[#].provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
+ - `connection_strings.private_endpoint[#].endpoints[#].region` - Region to which you deployed the private endpoint.
* `paused` - Flag that indicates whether the cluster is paused or not.
* `state_name` - Current state of the cluster. The possible states are:
diff --git a/docs/data-sources/alert_configuration.md b/docs/data-sources/alert_configuration.md
index ecf3fab32c..f19fbe3523 100644
--- a/docs/data-sources/alert_configuration.md
+++ b/docs/data-sources/alert_configuration.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Alert Configurations"
+---
+
# Data Source: mongodbatlas_alert_configuration
`mongodbatlas_alert_configuration` describes an Alert Configuration.
diff --git a/docs/data-sources/alert_configurations.md b/docs/data-sources/alert_configurations.md
index 1c45301485..fe260916cd 100644
--- a/docs/data-sources/alert_configurations.md
+++ b/docs/data-sources/alert_configurations.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Alert Configurations"
+---
+
# Data Source: mongodbatlas_alert_configurations
`mongodbatlas_alert_configurations` describes all Alert Configurations by the provided project_id. The data source requires your Project ID.
diff --git a/docs/data-sources/api_key.md b/docs/data-sources/api_key.md
index b7fbfcdefe..97af8844f5 100644
--- a/docs/data-sources/api_key.md
+++ b/docs/data-sources/api_key.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_api_key
`mongodbatlas_api_key` describes a MongoDB Atlas API Key. This represents a API Key that has been created.
diff --git a/docs/data-sources/api_key_project_assignment.md b/docs/data-sources/api_key_project_assignment.md
index 2b7bf97d0d..38a8e6fd84 100644
--- a/docs/data-sources/api_key_project_assignment.md
+++ b/docs/data-sources/api_key_project_assignment.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_api_key_project_assignment
`mongodbatlas_api_key_project_assignment` describes an API Key Project Assignment.
diff --git a/docs/data-sources/api_key_project_assignments.md b/docs/data-sources/api_key_project_assignments.md
index da1ccdff5c..6154fc1196 100644
--- a/docs/data-sources/api_key_project_assignments.md
+++ b/docs/data-sources/api_key_project_assignments.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_api_key_project_assignments
`mongodbatlas_api_key_project_assignments` provides an API Key Project Assignments data source. The data source lets you list all API key project assignments for an organization.
diff --git a/docs/data-sources/api_keys.md b/docs/data-sources/api_keys.md
index a1d6c28f84..d36690ea9d 100644
--- a/docs/data-sources/api_keys.md
+++ b/docs/data-sources/api_keys.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_api_keys
`mongodbatlas_api_keys` describe all API Keys. This represents API Keys that have been created.
diff --git a/docs/data-sources/atlas_user.md b/docs/data-sources/atlas_user.md
index 8216f65ca6..6a2f3bd8e2 100644
--- a/docs/data-sources/atlas_user.md
+++ b/docs/data-sources/atlas_user.md
@@ -1,7 +1,13 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
# Data Source: mongodbatlas_atlas_user
`mongodbatlas_atlas_user` Provides a MongoDB Atlas User.
+~> **DEPRECATION:** This data source is deprecated. Use `mongodbatlas_cloud_user_org_assignment` to read organization user assignments. See the [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management).
+
-> **NOTE:** If you are the owner of a MongoDB Atlas organization or project, you can also retrieve the user profile for any user with membership in that organization or project.
## Example Usage
@@ -25,14 +31,14 @@ data "mongodbatlas_atlas_user" "test" {
* `user_id` - (Optional) Unique 24-hexadecimal digit string that identifies this user.
* `username` - (Optional) Email address that belongs to the MongoDB Atlas user account. You can't modify this address after creating the user.
-~> **IMPORTANT:** Either `user_id` or `username` must be configurated.
+~> **IMPORTANT:** Either `user_id` or `username` must be configured.
## Attributes Reference
In addition to all arguments above, the following attributes are exported:
* `country` - Two alphabet characters that identifies MongoDB Atlas user's geographic location. This parameter uses the ISO 3166-1a2 code format.
* `created_at` - Date and time when the current account is created. This value is in the ISO 8601 timestamp format in UTC.
-* `email_address` - Email address that belongs to the MongoDB Atlas user.
+* `email_address` - **(DEPRECATED)** Email address that belongs to the MongoDB Atlas user. This attribute is deprecated and will be removed in the next major release. Please transition to the `data.mongodbatlas_organization.users.username`, `data.mongodbatlas_team.users.username`, or `data.mongodbatlas_project.users.username` attributes. For more details, see [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management).
* `first_name` - First or given name that belongs to the MongoDB Atlas user.
* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
* `last_name` - Last name, family name, or surname that belongs to the MongoDB Atlas user.
diff --git a/docs/data-sources/atlas_users.md b/docs/data-sources/atlas_users.md
index 2ad0490c18..3aea2e467e 100644
--- a/docs/data-sources/atlas_users.md
+++ b/docs/data-sources/atlas_users.md
@@ -1,7 +1,13 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
# Data Source: atlas_users
`atlas_users` provides Atlas Users associated with a specified Organization, Project, or Team.
+~> **DEPRECATION:** This data source is deprecated. Replace it with the `users` attribute on `mongodbatlas_organization`, `mongodbatlas_project`, or `mongodbatlas_team` data sources, depending on scope. See the [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management).
+
-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
## Example Usage
@@ -54,7 +60,7 @@ In addition to all arguments above, the following attributes are exported:
* `username` - Email address that belongs to the MongoDB Atlas user account. You cannot modify this address after creating the user.
* `country` - Two alphabet characters that identifies MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
* `created_at` - Date and time when the current account is created. This value is in the ISO 8601 timestamp format in UTC.
-* `email_address` - Email address that belongs to the MongoDB Atlas user.
+* `email_address` - **(DEPRECATED)** Email address that belongs to the MongoDB Atlas user. This attribute is deprecated and will be removed in the next major release. Please transition to the `data.mongodbatlas_organization.users.username`, `data.mongodbatlas_team.users.username`, or `data.mongodbatlas_project.users.username` attributes. For more details, see [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management).
* `first_name` - First or given name that belongs to the MongoDB Atlas user.
* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
* `last_name` - Last name, family name, or surname that belongs to the MongoDB Atlas user.
diff --git a/docs/data-sources/auditing.md b/docs/data-sources/auditing.md
index bbf0b2b742..a970ceb30c 100644
--- a/docs/data-sources/auditing.md
+++ b/docs/data-sources/auditing.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Auditing"
+---
+
# Data Source: mongodbatlas_auditing
`mongodbatlas_auditing` describes a Auditing.
diff --git a/docs/data-sources/backup_compliance_policy.md b/docs/data-sources/backup_compliance_policy.md
index d06be55d0e..b925e427e8 100644
--- a/docs/data-sources/backup_compliance_policy.md
+++ b/docs/data-sources/backup_compliance_policy.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_backup_compliance_policy
`mongodbatlas_backup_compliance_policy` provides an Atlas Backup Compliance Policy. An Atlas Backup Compliance Policy contains the current protection policy settings for a project. A compliance policy prevents any user, regardless of role, from modifying or deleting specific cluster configurations and backups. To disable a Backup Compliance Policy, you must contact MongoDB support. Backup Compliance Policies are only supported for clusters M10 and higher and are applied as the minimum policy for all clusters.
@@ -17,17 +21,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_CENTRAL_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
diff --git a/docs/data-sources/cloud_backup_schedule.md b/docs/data-sources/cloud_backup_schedule.md
index bd32504ee0..e65c67d419 100644
--- a/docs/data-sources/cloud_backup_schedule.md
+++ b/docs/data-sources/cloud_backup_schedule.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_schedule
`mongodbatlas_cloud_backup_schedule` provides a Cloud Backup Schedule datasource. An Atlas Cloud Backup Schedule provides the current cloud backup schedule for the cluster.
@@ -15,17 +19,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_CENTRAL_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
@@ -59,7 +63,6 @@ resource "mongodbatlas_cloud_backup_schedule" "test" {
data "mongodbatlas_cloud_backup_schedule" "test" {
project_id = mongodbatlas_cloud_backup_schedule.test.project_id
cluster_name = mongodbatlas_cloud_backup_schedule.test.cluster_name
- use_zone_id_for_copy_settings = true
}
```
@@ -67,7 +70,6 @@ data "mongodbatlas_cloud_backup_schedule" "test" {
* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
* `cluster_name` - (Required) The name of the Atlas cluster that contains the snapshots backup policy you want to retrieve.
-* `use_zone_id_for_copy_settings` - Set this field to `true` to allow the data source to use the latest schema that populates `copy_settings.#.zone_id` instead of the deprecated `copy_settings.#.replication_spec_id`. These fields also enable you to reference cluster zones using independent shard scaling, which no longer supports `replication_spec.*.id`. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones).
## Attributes Reference
@@ -139,7 +141,6 @@ In addition to all arguments above, the following attributes are exported:
* `frequencies` - List that describes which types of snapshots to copy. i.e. "HOURLY" "DAILY" "WEEKLY" "MONTHLY" "YEARLY" "ON_DEMAND"
* `region_name` - Target region to copy snapshots belonging to replicationSpecId to. Please supply the 'Atlas Region' which can be found under https://www.mongodb.com/docs/atlas/reference/cloud-providers/ 'regions' link
* `zone_id` - Unique 24-hexadecimal digit string that identifies the zone in a cluster. For global clusters, there can be multiple zones to choose from. For sharded clusters and replica set clusters, there is only one zone in the cluster.
-* `replication_spec_id` - Unique 24-hexadecimal digit string that identifies the replication object for a zone in a cluster. For global clusters, there can be multiple zones to choose from. For sharded clusters and replica set clusters, there is only one zone in the cluster. To find the Replication Spec Id, consult the replicationSpecs array returned from [Return One Multi-Cloud Cluster in One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getcluster). **(DEPRECATED)** Use `zone_id` instead. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones).
* `should_copy_oplogs` - Flag that indicates whether to copy the oplogs to the target region. You can use the oplogs to perform point-in-time restores.
**Note** The parameter deleteCopiedBackups is not supported in terraform please leverage Atlas Admin API or AtlasCLI instead to manage the lifecycle of backup snaphot copies.
diff --git a/docs/data-sources/cloud_backup_snapshot.md b/docs/data-sources/cloud_backup_snapshot.md
index 80297fafc7..930897987a 100644
--- a/docs/data-sources/cloud_backup_snapshot.md
+++ b/docs/data-sources/cloud_backup_snapshot.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot
`mongodbatlas_cloud_backup_snapshot` provides an Cloud Backup Snapshot datasource. Atlas Cloud Backup Snapshots provide localized backup storage using the native snapshot functionality of the cluster’s cloud service.
diff --git a/docs/data-sources/cloud_backup_snapshot_export_bucket.md b/docs/data-sources/cloud_backup_snapshot_export_bucket.md
index 35dbee7d08..4d4dcc0b13 100644
--- a/docs/data-sources/cloud_backup_snapshot_export_bucket.md
+++ b/docs/data-sources/cloud_backup_snapshot_export_bucket.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot_export_bucket
`mongodbatlas_cloud_backup_snapshot_export_bucket` datasource allows you to retrieve all the buckets for the specified project.
diff --git a/docs/data-sources/cloud_backup_snapshot_export_buckets.md b/docs/data-sources/cloud_backup_snapshot_export_buckets.md
index 2a8684474f..d59d587bd1 100644
--- a/docs/data-sources/cloud_backup_snapshot_export_buckets.md
+++ b/docs/data-sources/cloud_backup_snapshot_export_buckets.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot_export_buckets
`mongodbatlas_cloud_backup_snapshot_export_buckets` datasource allows you to retrieve all the buckets for the specified project.
diff --git a/docs/data-sources/cloud_backup_snapshot_export_job.md b/docs/data-sources/cloud_backup_snapshot_export_job.md
index 46e05f9ce2..090011b24a 100644
--- a/docs/data-sources/cloud_backup_snapshot_export_job.md
+++ b/docs/data-sources/cloud_backup_snapshot_export_job.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot_export_job
`mongodbatlas_cloud_backup_snapshot_export_job` datasource allows you to retrieve a snapshot export job for the specified project and cluster.
diff --git a/docs/data-sources/cloud_backup_snapshot_export_jobs.md b/docs/data-sources/cloud_backup_snapshot_export_jobs.md
index b66f15507a..0d4b3b266c 100644
--- a/docs/data-sources/cloud_backup_snapshot_export_jobs.md
+++ b/docs/data-sources/cloud_backup_snapshot_export_jobs.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot_export_jobs
`mongodbatlas_cloud_backup_snapshot_export_jobs` datasource allows you to retrieve all the snapshot export jobs for the specified project and cluster.
diff --git a/docs/data-sources/cloud_backup_snapshot_restore_job.md b/docs/data-sources/cloud_backup_snapshot_restore_job.md
index afb7c4768e..10141ed5eb 100644
--- a/docs/data-sources/cloud_backup_snapshot_restore_job.md
+++ b/docs/data-sources/cloud_backup_snapshot_restore_job.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot_restore_job
`mongodbatlas_cloud_backup_snapshot_restore_job` provides a Cloud Backup Snapshot Restore Job datasource. Gets a cloud backup snapshot restore job for the specified cluster.
diff --git a/docs/data-sources/cloud_backup_snapshot_restore_jobs.md b/docs/data-sources/cloud_backup_snapshot_restore_jobs.md
index 58cdcbd592..2338417334 100644
--- a/docs/data-sources/cloud_backup_snapshot_restore_jobs.md
+++ b/docs/data-sources/cloud_backup_snapshot_restore_jobs.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshot_restore_jobs
`mongodbatlas_cloud_backup_snapshot_restore_jobs` provides a Cloud Backup Snapshot Restore Jobs datasource. Gets all the cloud backup snapshot restore jobs for the specified cluster.
diff --git a/docs/data-sources/cloud_backup_snapshots.md b/docs/data-sources/cloud_backup_snapshots.md
index fd3e2e0e21..ab6c199cf3 100644
--- a/docs/data-sources/cloud_backup_snapshots.md
+++ b/docs/data-sources/cloud_backup_snapshots.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Data Source: mongodbatlas_cloud_backup_snapshots
`mongodbatlas_cloud_backup_snapshots` provides a Cloud Backup Snapshots datasource. Atlas Cloud Backup Snapshots provide localized backup storage using the native snapshot functionality of the cluster’s cloud service.
diff --git a/docs/data-sources/cloud_provider_access_setup.md b/docs/data-sources/cloud_provider_access_setup.md
index 983fb177a2..7a621e007e 100644
--- a/docs/data-sources/cloud_provider_access_setup.md
+++ b/docs/data-sources/cloud_provider_access_setup.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Provider Access"
+---
+
# Data Source: mongodbatlas_cloud_provider_access_setup
`mongodbatlas_cloud_provider_access_setup` allows you to get a single role for a provider access role setup. Supported providers: AWS, AZURE and GCP.
diff --git a/docs/data-sources/cloud_provider_shared_tier_restore_job.md b/docs/data-sources/cloud_provider_shared_tier_restore_job.md
index 8791e838ef..80ea18e6a1 100644
--- a/docs/data-sources/cloud_provider_shared_tier_restore_job.md
+++ b/docs/data-sources/cloud_provider_shared_tier_restore_job.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Shared-Tier Restore Jobs"
---
-**WARNING:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Data Source: mongodbatlas_shared_tier_restore_job
@@ -41,4 +41,4 @@ In addition to all arguments above, the following attributes are exported:
* `delivery_type` - Means by which this resource returns the snapshot to the requesting MongoDB Cloud user. Values: `RESTORE`, `DOWNLOAD`.
* `expiration_date` - Date and time when the download link no longer works. This parameter expresses its value in the ISO 8601 timestamp format in UTC.
-For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Shared-Tier-Restore-Jobs/operation/getSharedClusterBackupRestoreJob)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Shared-Tier-Restore-Jobs/operation/getSharedClusterBackupRestoreJob)
diff --git a/docs/data-sources/cloud_provider_shared_tier_restore_jobs.md b/docs/data-sources/cloud_provider_shared_tier_restore_jobs.md
index 432109a8eb..ea40e73b6a 100644
--- a/docs/data-sources/cloud_provider_shared_tier_restore_jobs.md
+++ b/docs/data-sources/cloud_provider_shared_tier_restore_jobs.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Shared-Tier Restore Jobs"
---
-**WARNING:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Data Source: mongodbatlas_shared_tier_restore_jobs
@@ -48,4 +48,4 @@ In addition to all arguments above, the following attributes are exported:
* `delivery_type` - Means by which this resource returns the snapshot to the requesting MongoDB Cloud user. Values: `RESTORE`, `DOWNLOAD`.
* `expiration_date` - Date and time when the download link no longer works. This parameter expresses its value in the ISO 8601 timestamp format in UTC.
-For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Shared-Tier-Restore-Jobs/operation/getSharedClusterBackupRestoreJob)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Shared-Tier-Restore-Jobs/operation/getSharedClusterBackupRestoreJob)
diff --git a/docs/data-sources/cloud_provider_shared_tier_snapshot.md b/docs/data-sources/cloud_provider_shared_tier_snapshot.md
index eb168becd7..bfb0cef142 100644
--- a/docs/data-sources/cloud_provider_shared_tier_snapshot.md
+++ b/docs/data-sources/cloud_provider_shared_tier_snapshot.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Shared-Tier Snapshots"
---
-**WARNING:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Data Source: mongodbatlas_shared_tier_snapshot
@@ -39,4 +39,4 @@ In addition to all arguments above, the following attributes are exported:
* `finish_time` - Date and time when MongoDB Cloud completed writing this snapshot. This parameter expresses its value in the ISO 8601 timestamp format in UTC.
* `scheduled_time` - Date and time when MongoDB Cloud will take the snapshot. This parameter expresses its value in the ISO 8601 timestamp format in UTC.
-For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Shared-Tier-Snapshots/operation/getSharedClusterBackup)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Shared-Tier-Snapshots/operation/getSharedClusterBackup)
diff --git a/docs/data-sources/cloud_provider_shared_tier_snapshots.md b/docs/data-sources/cloud_provider_shared_tier_snapshots.md
index fb772f06d4..48a7d00941 100644
--- a/docs/data-sources/cloud_provider_shared_tier_snapshots.md
+++ b/docs/data-sources/cloud_provider_shared_tier_snapshots.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Shared-Tier Snapshots"
---
-**WARNING:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Data Source: mongodbatlas_shared_tier_snapshots
@@ -43,4 +43,4 @@ In addition to all arguments above, the following attributes are exported:
* `finish_time` - Date and time when MongoDB Cloud completed writing this snapshot. This parameter expresses its value in the ISO 8601 timestamp format in UTC.
* `scheduled_time` - Date and time when MongoDB Cloud will take the snapshot. This parameter expresses its value in the ISO 8601 timestamp format in UTC.
-For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Cloud-Backups/operation/listShardedClusterBackups)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Cloud-Backups/operation/listShardedClusterBackups)
diff --git a/docs/data-sources/cloud_provider_snapshot.md b/docs/data-sources/cloud_provider_snapshot.md
deleted file mode 100644
index c164eb95bc..0000000000
--- a/docs/data-sources/cloud_provider_snapshot.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This datasource is deprecated, use `mongodbatlas_cloud_backup_snapshot`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Data Source: mongodbatlas_cloud_provider_snapshot
-
-`mongodbatlas_cloud_provider_snapshot` provides an Cloud Backup Snapshot datasource. Atlas Cloud Backup Snapshots provide localized backup storage using the native snapshot functionality of the cluster’s cloud service.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-
-```terraform
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- group_id = "5d0f1f73cf09a29120e173cf"
- cluster_name = "MyClusterTest"
- description = "SomeDescription"
- retention_in_days = 1
-}
-
-data "mongodbatlas_cloud_provider_snapshot" "test" {
- snapshot_id = "5d1285acd5ec13b6c2d1726a"
- group_id = mongodbatlas_cloud_provider_snapshot.test.group_id
- cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name
-}
-```
-
-## Argument Reference
-
-* `snapshot_id` - (Required) The unique identifier of the snapshot you want to retrieve.
-* `cluster_name` - (Required) The name of the Atlas cluster that contains the snapshot you want to retrieve.
-* `group_id` - (Required) The unique identifier of the project for the Atlas cluster.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `id` - Unique identifier of the snapshot.
-* `created_at` - UTC ISO 8601 formatted point in time when Atlas took the snapshot.
-* `expires_at` - UTC ISO 8601 formatted point in time when Atlas will delete the snapshot.
-* `description` - UDescription of the snapshot. Only present for on-demand snapshots.
-* `master_key_uuid` - Unique ID of the AWS KMS Customer Master Key used to encrypt the snapshot. Only visible for clusters using Encryption at Rest via Customer KMS.
-* `mongod_version` - Version of the MongoDB server.
-* `snapshot_type` - Specified the type of snapshot. Valid values are onDemand and scheduled.
-* `status` - Current status of the snapshot. One of the following values: queued, inProgress, completed, failed.
-* `storage_size_bytes` - Specifies the size of the snapshot in bytes.
-* `type` - Specifies the type of cluster: replicaSet or shardedCluster.
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/backup/get-one-backup/)
\ No newline at end of file
diff --git a/docs/data-sources/cloud_provider_snapshot_backup_policy.md b/docs/data-sources/cloud_provider_snapshot_backup_policy.md
deleted file mode 100644
index 09724c9aa8..0000000000
--- a/docs/data-sources/cloud_provider_snapshot_backup_policy.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-subcategory: "Deprecated"
----
-
- **WARNING:** This data source is deprecated, use `mongodbatlas_cloud_backup_schedule`
- **Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Data Source: mongodbatlas_cloud_provider_snapshot_backup_policy
-
-`mongodbatlas_cloud_provider_snapshot_backup_policy` provides a Cloud Backup Snapshot Backup Policy datasource. An Atlas Cloud Backup Snapshot Policy provides the current snapshot schedule and retention settings for the cluster.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "clusterTest"
- cluster_type = "REPLICASET"
- backup_enabled = true # enable cloud backup snapshots
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_CENTRAL_1"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
-
- reference_hour_of_day = 3
- reference_minute_of_hour = 45
- restore_window_days = 4
-
-
- policies {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id
- frequency_interval = 1
- frequency_type = "hourly"
- retention_unit = "days"
- retention_value = 1
- }
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id
- frequency_interval = 1
- frequency_type = "daily"
- retention_unit = "days"
- retention_value = 2
- }
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id
- frequency_interval = 4
- frequency_type = "weekly"
- retention_unit = "weeks"
- retention_value = 3
- }
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id
- frequency_interval = 5
- frequency_type = "monthly"
- retention_unit = "months"
- retention_value = 4
- }
- }
-}
-
-data "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot_backup_policy.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot_backup_policy.test.cluster_name
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
-* `cluster_name` - (Required) The name of the Atlas cluster that contains the snapshots backup policy you want to retrieve.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `cluster_id` - Unique identifier of the Atlas cluster.
-* `next_snapshot` - UTC ISO 8601 formatted point in time when Atlas will take the next snapshot.
-* `reference_hour_of_day` - UTC Hour of day between 0 and 23 representing which hour of the day that Atlas takes a snapshot.
-* `reference_minute_of_hour` - UTC Minute of day between 0 and 59 representing which minute of the referenceHourOfDay that Atlas takes the snapshot.
-* `restore_window_days` - Specifies a restore window in days for cloud backup to maintain.
-
-### Policies
-* `policies` - A list of policy definitions for the cluster.
-* `policies.#.id` - Unique identifier of the backup policy.
-
-#### Policy Item
-* `policies.#.policy_item` - A list of specifications for a policy.
-* `policies.#.policy_item.#.id` - Unique identifier for this policy item.
-* `policies.#.policy_item.#.frequency_interval` - The frequency interval for a set of snapshots.
-* `policies.#.policy_item.#.frequency_type` - A type of frequency (hourly, daily, weekly, monthly).
-* `policies.#.policy_item.#.retention_unit` - The unit of time in which snapshot retention is measured (days, weeks, months).
-* `policies.#.policy_item.#.retention_value` - The number of days, weeks, or months the snapshot is retained.
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/schedule/get-all-schedules/)
\ No newline at end of file
diff --git a/docs/data-sources/cloud_provider_snapshot_restore_job.md b/docs/data-sources/cloud_provider_snapshot_restore_job.md
deleted file mode 100644
index 8b58423e69..0000000000
--- a/docs/data-sources/cloud_provider_snapshot_restore_job.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This datasource is deprecated, use `mongodbatlas_cloud_backup_snapshot_restore_job`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Data Source: mongodbatlas_cloud_provider_snapshot_restore_job
-
-`mongodbatlas_cloud_provider_snapshot_restore_job` provides a Cloud Backup Snapshot Restore Job datasource. Gets all the cloud backup snapshot restore jobs for the specified cluster.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-First create a snapshot of the desired cluster. Then request that snapshot be restored in an automated fashion to the designated cluster and project.
-
-```terraform
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- project_id = "5cf5a45a9ccf6400e60981b6"
- cluster_name = "MyCluster"
- description = "MyDescription"
- retention_in_days = 1
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" {
- project_id = "5cf5a45a9ccf6400e60981b6"
- cluster_name = "MyCluster"
- snapshot_id = "${mongodbatlas_cloud_provider_snapshot.test.id}"
- delivery_type {
- automated = true
- target_cluster_name = "MyCluster"
- target_project_id = "5cf5a45a9ccf6400e60981b6"
- }
-}
-
-data "mongodbatlas_cloud_provider_snapshot_restore_job" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot_restore_job.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot_restore_job.test.cluster_name
- job_id = mongodbatlas_cloud_provider_snapshot_restore_job.test.id
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
-* `cluster_name` - (Required) The name of the Atlas cluster for which you want to retrieve the restore job.
-* `job_id` - (Required) The unique identifier of the restore job to retrieve.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `cancelled` - Indicates whether the restore job was canceled.
-* `created_at` - UTC ISO 8601 formatted point in time when Atlas created the restore job.
-* `delivery_type` - Type of restore job to create. Possible values are: automated and download.
-* `delivery_url` - One or more URLs for the compressed snapshot files for manual download. Only visible if deliveryType is download.
-* `expired` - Indicates whether the restore job expired.
-* `expires_at` - UTC ISO 8601 formatted point in time when the restore job expires.
-* `finished_at` - UTC ISO 8601 formatted point in time when the restore job completed.
-* `id` - The unique identifier of the restore job.
-* `snapshot_id` - Unique identifier of the source snapshot ID of the restore job.
-* `target_project_id` - Name of the target Atlas project of the restore job. Only visible if deliveryType is automated.
-* `target_cluster_name` - Name of the target Atlas cluster to which the restore job restores the snapshot. Only visible if deliveryType is automated.
-* `timestamp` - Timestamp in ISO 8601 date and time format in UTC when the snapshot associated to snapshotId was taken.
-* `oplogTs` - Timestamp in the number of seconds that have elapsed since the UNIX epoch.
-* `oplogInc` - Oplog operation number from which to you want to restore this snapshot.
-* `pointInTimeUTCSeconds` - Timestamp in the number of seconds that have elapsed since the UNIX epoch.
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/restore/get-one-restore-job/)
\ No newline at end of file
diff --git a/docs/data-sources/cloud_provider_snapshot_restore_jobs.md b/docs/data-sources/cloud_provider_snapshot_restore_jobs.md
deleted file mode 100644
index d6d46c82eb..0000000000
--- a/docs/data-sources/cloud_provider_snapshot_restore_jobs.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This datasource is deprecated, use `mongodbatlas_cloud_backup_snapshots_restore_jobs`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Data Source: mongodbatlas_cloud_provider_snapshot_restore_jobs
-
-`mongodbatlas_cloud_provider_snapshot_restore_jobs` provides a Cloud Backup Snapshot Restore Jobs datasource. Gets all the cloud backup snapshot restore jobs for the specified cluster.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-First create a snapshot of the desired cluster. Then request that snapshot be restored in an automated fashion to the designated cluster and project.
-
-```terraform
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- project_id = "5cf5a45a9ccf6400e60981b6"
- cluster_name = "MyCluster"
- description = "MyDescription"
- retention_in_days = 1
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" {
- project_id = "5cf5a45a9ccf6400e60981b6"
- cluster_name = "MyCluster"
- snapshot_id = mongodbatlas_cloud_provider_snapshot.test.id
- delivery_type_config {
- automated = true
- target_cluster_name = "MyCluster"
- target_project_id = "5cf5a45a9ccf6400e60981b6"
- }
-}
-
-data "mongodbatlas_cloud_provider_snapshot_restore_jobs" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot_restore_job.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot_restore_job.test.cluster_name
- page_num = 1
- items_per_page = 5
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
-* `cluster_name` - (Required) The name of the Atlas cluster for which you want to retrieve restore jobs.
-* `page_num` - (Optional) The page to return. Defaults to `1`.
-* `items_per_page` - (Optional) Number of items to return per page, up to a maximum of 500. Defaults to `100`.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `results` - Includes cloudProviderSnapshotRestoreJob object for each item detailed in the results array section.
-* `totalCount` - Count of the total number of items in the result set. It may be greater than the number of objects in the results array if the entire result set is paginated.
-
-### CloudProviderSnapshotRestoreJob
-
-* `cancelled` - Indicates whether the restore job was canceled.
-* `created_at` - UTC ISO 8601 formatted point in time when Atlas created the restore job.
-* `delivery_type` - Type of restore job to create. Possible values are: automated and download.
-* `delivery_url` - One or more URLs for the compressed snapshot files for manual download. Only visible if deliveryType is download.
-* `expired` - Indicates whether the restore job expired.
-* `expires_at` - UTC ISO 8601 formatted point in time when the restore job expires.
-* `finished_at` - UTC ISO 8601 formatted point in time when the restore job completed.
-* `id` - The unique identifier of the restore job.
-* `snapshot_id` - Unique identifier of the source snapshot ID of the restore job.
-* `target_project_id` - Name of the target Atlas project of the restore job. Only visible if deliveryType is automated.
-* `target_cluster_name` - Name of the target Atlas cluster to which the restore job restores the snapshot. Only visible if deliveryType is automated.
-* `timestamp` - Timestamp in ISO 8601 date and time format in UTC when the snapshot associated to snapshotId was taken.
-* `oplogTs` - Timestamp in the number of seconds that have elapsed since the UNIX epoch.
-* `oplogInc` - Oplog operation number from which to you want to restore this snapshot.
-* `pointInTimeUTCSeconds` - Timestamp in the number of seconds that have elapsed since the UNIX epoch.
-
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/restore/get-all-restore-jobs/)
\ No newline at end of file
diff --git a/docs/data-sources/cloud_provider_snapshots.md b/docs/data-sources/cloud_provider_snapshots.md
deleted file mode 100644
index 7c8a63773d..0000000000
--- a/docs/data-sources/cloud_provider_snapshots.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This datasource is deprecated, use `mongodbatlas_cloud_backup_snapshots`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Data Source: mongodbatlas_cloud_provider_snapshots
-
-`mongodbatlas_cloud_provider_snapshots` provides an Cloud Backup Snapshot datasource. Atlas Cloud Backup Snapshots provide localized backup storage using the native snapshot functionality of the cluster’s cloud service.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-
-```terraform
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- group_id = "5d0f1f73cf09a29120e173cf"
- cluster_name = "MyClusterTest"
- description = "SomeDescription"
- retention_in_days = 1
-}
-
-data "mongodbatlas_cloud_provider_snapshots" "test" {
- group_id = mongodbatlas_cloud_provider_snapshots.test.group_id
- cluster_name = mongodbatlas_cloud_provider_snapshots.test.cluster_name
- page_num = 1
- items_per_page = 5
-}
-```
-
-## Argument Reference
-
-* `cluster_name` - (Required) The name of the Atlas cluster that contains the snapshot you want to retrieve.
-* `group_id` - (Required) The unique identifier of the project for the Atlas cluster.
-* `page_num` - (Optional) The page to return. Defaults to `1`.
-* `items_per_page` - (Optional) Number of items to return per page, up to a maximum of 500. Defaults to `100`.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `results` - Includes cloudProviderSnapshot object for each item detailed in the results array section.
-* `totalCount` - Count of the total number of items in the result set. It may be greater than the number of objects in the results array if the entire result set is paginated.
-
-### CloudProviderSnapshot
-
-* `id` - Unique identifier of the snapshot.
-* `created_at` - UTC ISO 8601 formatted point in time when Atlas took the snapshot.
-* `expires_at` - UTC ISO 8601 formatted point in time when Atlas will delete the snapshot.
-* `description` - UDescription of the snapshot. Only present for on-demand snapshots.
-* `master_key_uuid` - Unique ID of the AWS KMS Customer Master Key used to encrypt the snapshot. Only visible for clusters using Encryption at Rest via Customer KMS.
-* `mongod_version` - Version of the MongoDB server.
-* `snapshot_type` - Specified the type of snapshot. Valid values are onDemand and scheduled.
-* `status` - Current status of the snapshot. One of the following values: queued, inProgress, completed, failed.
-* `storage_size_bytes` - Specifies the size of the snapshot in bytes.
-* `type` - Specifies the type of cluster: replicaSet or shardedCluster.
-
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/backup/get-all-backups/)
diff --git a/docs/data-sources/cloud_user_org_assignment.md b/docs/data-sources/cloud_user_org_assignment.md
new file mode 100644
index 0000000000..752adda72e
--- /dev/null
+++ b/docs/data-sources/cloud_user_org_assignment.md
@@ -0,0 +1,78 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# Data Source: mongodbatlas_cloud_user_org_assignment
+
+`mongodbatlas_cloud_user_org_assignment` provides a Cloud User Organization Assignment data source. The data source lets you retrieve a user assigned to an organization.
+
+**NOTE**: Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint are not returned by this data source. See the [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser) documentation for details.
+To manage such users, refer to our [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_cloud_user_org_assignment" "example" {
+ org_id = var.org_id
+ username = var.user_email
+ roles = {
+ org_roles = ["ORG_MEMBER"]
+ }
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "example_username" {
+ org_id = var.org_id
+ username = mongodbatlas_cloud_user_org_assignment.example.username
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "example_user_id" {
+ org_id = var.org_id
+ user_id = mongodbatlas_cloud_user_org_assignment.example.user_id
+}
+```
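+
+The attributes returned by these lookups can be referenced like any other data source attribute, for example in outputs or in other resources. A minimal sketch, assuming the data source blocks above (the output names are illustrative):
+
+```terraform
+# Expose the user's organization roles and membership status
+# from the lookups defined above.
+output "user_org_roles" {
+  value = data.mongodbatlas_cloud_user_org_assignment.example_username.roles.org_roles
+}
+
+output "user_membership_status" {
+  value = data.mongodbatlas_cloud_user_org_assignment.example_user_id.org_membership_status
+}
+```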
+
+
+## Schema
+
+### Required
+
+- `org_id` (String) Unique 24-hexadecimal digit string that identifies the organization that contains your projects. Use the [/orgs](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-organizations) endpoint to retrieve all organizations to which the authenticated user has access.
+
+### Optional
+
+- `user_id` (String) Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+- `username` (String) Email address that represents the username of the MongoDB Cloud user.
+
+### Read-Only
+
+- `country` (String) Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+- `created_at` (String) Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+- `first_name` (String) First or given name that belongs to the MongoDB Cloud user.
+- `invitation_created_at` (String) Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `invitation_expires_at` (String) Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `inviter_username` (String) Username of the MongoDB Cloud user who sent the invitation to join the organization.
+- `last_auth` (String) Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+- `last_name` (String) Last name, family name, or surname that belongs to the MongoDB Cloud user.
+- `mobile_number` (String) Mobile phone number that belongs to the MongoDB Cloud user.
+- `org_membership_status` (String) String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+- `roles` (Attributes) Organization and project level roles to assign the MongoDB Cloud user within one organization. (see [below for nested schema](#nestedatt--roles))
+- `team_ids` (Set of String) List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+
+
+### Nested Schema for `roles`
+
+Read-Only:
+
+- `org_roles` (Set of String) One or more organization level roles to assign the MongoDB Cloud user.
+- `project_role_assignments` (Attributes List) List of project-level role assignments for the MongoDB Cloud user. (see [below for nested schema](#nestedatt--roles--project_role_assignments))
+
+
+### Nested Schema for `roles.project_role_assignments`
+
+Read-Only:
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies the project to which these roles belong.
+- `project_roles` (Set of String) One or more project-level roles assigned to the MongoDB Cloud user.
+
+For more information, see the [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser) documentation.
diff --git a/docs/data-sources/cloud_user_project_assignment.md b/docs/data-sources/cloud_user_project_assignment.md
new file mode 100644
index 0000000000..4c7fb760e6
--- /dev/null
+++ b/docs/data-sources/cloud_user_project_assignment.md
@@ -0,0 +1,61 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# Data Source: mongodbatlas_cloud_user_project_assignment
+
+`mongodbatlas_cloud_user_project_assignment` provides a Cloud User Project Assignment data source. The data source lets you retrieve a user assigned to a project.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint are not returned by this data source. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectteam) for details.
+To manage such users, refer to our [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_cloud_user_project_assignment" "example" {
+ project_id = var.project_id
+ username = var.user_email
+ roles = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_cloud_user_project_assignment" "example_username" {
+ project_id = var.project_id
+ username = mongodbatlas_cloud_user_project_assignment.example.username
+}
+
+data "mongodbatlas_cloud_user_project_assignment" "example_user_id" {
+ project_id = var.project_id
+ user_id = mongodbatlas_cloud_user_project_assignment.example.user_id
+}
+```
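+
+The assigned project roles can then be consumed elsewhere in your configuration, for example as an output (an illustrative sketch; the `user_project_roles` output name is not part of the documented example):
+
+```terraform
+output "user_project_roles" {
+  value = data.mongodbatlas_cloud_user_project_assignment.example_user_id.roles
+}
+```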
+
+
+## Schema
+
+### Required
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listprojects) endpoint to retrieve all projects to which the authenticated user has access.
+
+**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.
+
+### Optional
+
+- `user_id` (String) Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+- `username` (String) Email address that represents the username of the MongoDB Cloud user.
+
+### Read-Only
+
+- `country` (String) Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+- `created_at` (String) Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+- `first_name` (String) First or given name that belongs to the MongoDB Cloud user.
+- `invitation_created_at` (String) Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `invitation_expires_at` (String) Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `inviter_username` (String) Username of the MongoDB Cloud user who sent the invitation to join the organization.
+- `last_auth` (String) Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+- `last_name` (String) Last name, family name, or surname that belongs to the MongoDB Cloud user.
+- `mobile_number` (String) Mobile phone number that belongs to the MongoDB Cloud user.
+- `org_membership_status` (String) String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+- `roles` (Set of String) One or more project-level roles to assign the MongoDB Cloud user.
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectuser) Documentation.
diff --git a/docs/data-sources/cloud_user_team_assignment.md b/docs/data-sources/cloud_user_team_assignment.md
new file mode 100644
index 0000000000..d5a8983f96
--- /dev/null
+++ b/docs/data-sources/cloud_user_team_assignment.md
@@ -0,0 +1,79 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# Data Source: mongodbatlas_cloud_user_team_assignment
+
+`mongodbatlas_cloud_user_team_assignment` provides a Cloud User Team Assignment data source. The data source lets you retrieve a user assigned to a team.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint are not returned by this data source. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listteamusers) for details.
+To manage such users, refer to our [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](../guides/atlas-user-management).
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_cloud_user_team_assignment" "example" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = var.user_id
+}
+
+data "mongodbatlas_cloud_user_team_assignment" "example_user_id" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = mongodbatlas_cloud_user_team_assignment.example.user_id
+}
+
+data "mongodbatlas_cloud_user_team_assignment" "example_username" {
+ org_id = var.org_id
+ team_id = var.team_id
+ username = mongodbatlas_cloud_user_team_assignment.example.username
+}
+```
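+
+Attributes from the data source can be referenced like any other, for example to check whether the user's invitation is still pending (an illustrative sketch; the `team_user_membership_status` output name is not part of the documented example):
+
+```terraform
+output "team_user_membership_status" {
+  value = data.mongodbatlas_cloud_user_team_assignment.example_user_id.org_membership_status
+}
+```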
+
+
+## Schema
+
+### Required
+
+- `org_id` (String) Unique 24-hexadecimal digit string that identifies the organization that contains your projects. Use the [/orgs](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-organizations) endpoint to retrieve all organizations to which the authenticated user has access.
+- `team_id` (String) Unique 24-hexadecimal digit string that identifies the team to which you want to assign the MongoDB Cloud user. Use the [/teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-teams) endpoint to retrieve all teams to which the authenticated user has access.
+
+### Optional
+
+- `user_id` (String) Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+- `username` (String) Email address that represents the username of the MongoDB Cloud user.
+
+### Read-Only
+
+- `country` (String) Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+- `created_at` (String) Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+- `first_name` (String) First or given name that belongs to the MongoDB Cloud user.
+- `invitation_created_at` (String) Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `invitation_expires_at` (String) Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `inviter_username` (String) Username of the MongoDB Cloud user who sent the invitation to join the organization.
+- `last_auth` (String) Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+- `last_name` (String) Last name, family name, or surname that belongs to the MongoDB Cloud user.
+- `mobile_number` (String) Mobile phone number that belongs to the MongoDB Cloud user.
+- `org_membership_status` (String) String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+- `roles` (Attributes) Organization and project level roles to assign the MongoDB Cloud user within one organization. (see [below for nested schema](#nestedatt--roles))
+- `team_ids` (Set of String) List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+
+
+### Nested Schema for `roles`
+
+Read-Only:
+
+- `org_roles` (Set of String) One or more organization level roles to assign the MongoDB Cloud user.
+- `project_role_assignments` (Attributes Set) Set of project-level role assignments for the MongoDB Cloud user. (see [below for nested schema](#nestedatt--roles--project_role_assignments))
+
+
+### Nested Schema for `roles.project_role_assignments`
+
+Read-Only:
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies the project to which these roles belong.
+- `project_roles` (Set of String) One or more project-level roles assigned to the MongoDB Cloud user.
+
+For more information, see the [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listteamusers) documentation.
diff --git a/docs/data-sources/cluster.md b/docs/data-sources/cluster.md
index a45b9c058c..1f39b8bf7a 100644
--- a/docs/data-sources/cluster.md
+++ b/docs/data-sources/cluster.md
@@ -1,7 +1,13 @@
+---
+subcategory: "Clusters"
+---
+
# Data Source: mongodbatlas_cluster
`mongodbatlas_cluster` describes a Cluster. The data source requires your Project ID.
+~> **DEPRECATION:** This data source is deprecated and will be removed in the next major release. Please use `mongodbatlas_advanced_cluster`. For more details, see [our migration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
+
~> **IMPORTANT:**
• Multi Region Cluster: The `mongodbatlas_cluster` data source doesn't return the `container_id` for each region utilized by the cluster. For retrieving the `container_id`, we recommend the [`mongodbatlas_advanced_cluster`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/data-sources/advanced_cluster) data source instead.
• Changes to cluster configurations can affect costs. Before making changes, please see [Billing](https://docs.atlas.mongodb.com/billing/).
diff --git a/docs/data-sources/cluster_outage_simulation.md b/docs/data-sources/cluster_outage_simulation.md
index 090e5cb891..cd17aeec3b 100644
--- a/docs/data-sources/cluster_outage_simulation.md
+++ b/docs/data-sources/cluster_outage_simulation.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cluster Outage Simulation"
+---
+
# Data Source: mongodbatlas_cluster_outage_simulation
`mongodbatlas_cluster_outage_simulation` provides a Cluster Outage Simulation resource. For more details see https://www.mongodb.com/docs/atlas/tutorial/test-resilience/simulate-regional-outage/
diff --git a/docs/data-sources/clusters.md b/docs/data-sources/clusters.md
index 9fbb6e6aa9..65990d4811 100644
--- a/docs/data-sources/clusters.md
+++ b/docs/data-sources/clusters.md
@@ -1,7 +1,13 @@
+---
+subcategory: "Clusters"
+---
+
# Data Source: mongodbatlas_clusters
`mongodbatlas_cluster` describes all Clusters by the provided project_id. The data source requires your Project ID.
+~> **DEPRECATION:** This data source is deprecated and will be removed in the next major release. Please use `mongodbatlas_advanced_clusters`. For more details, see [our migration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
+
~> **IMPORTANT:**
• Multi Region Cluster: The `mongodbatlas_cluster` data source doesn't return the `container_id` for each region utilized by the cluster. For retrieving the `container_id`, we recommend the [`mongodbatlas_advanced_cluster`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/data-sources/advanced_clusters) data source instead.
• Changes to cluster configurations can affect costs. Before making changes, please see [Billing](https://docs.atlas.mongodb.com/billing/).
diff --git a/docs/data-sources/control_plane_ip_addresses.md b/docs/data-sources/control_plane_ip_addresses.md
index 7276dd8e3b..c68e0e76bd 100644
--- a/docs/data-sources/control_plane_ip_addresses.md
+++ b/docs/data-sources/control_plane_ip_addresses.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Root"
+---
+
# Data Source: mongodbatlas_control_plane_ip_addresses
`mongodbatlas_control_plane_ip_addresses` returns all control plane IP addresses.
diff --git a/docs/data-sources/custom_db_role.md b/docs/data-sources/custom_db_role.md
index f5a90abb69..04476e6ef5 100644
--- a/docs/data-sources/custom_db_role.md
+++ b/docs/data-sources/custom_db_role.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Custom Database Roles"
+---
+
# Data Source: mongodbatlas_custom_db_role
`mongodbatlas_custom_db_role` describes a Custom DB Role. This represents a custom db role.
diff --git a/docs/data-sources/custom_db_roles.md b/docs/data-sources/custom_db_roles.md
index 61fd372685..e3588500ec 100644
--- a/docs/data-sources/custom_db_roles.md
+++ b/docs/data-sources/custom_db_roles.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Custom Database Roles"
+---
+
# Data Source: mongodbatlas_custom_db_roles
`mongodbatlas_custom_db_roles` describes all Custom DB Roles. This represents a custom db roles.
diff --git a/docs/data-sources/custom_dns_configuration_cluster_aws.md b/docs/data-sources/custom_dns_configuration_cluster_aws.md
index f1b127a2af..89424710bd 100644
--- a/docs/data-sources/custom_dns_configuration_cluster_aws.md
+++ b/docs/data-sources/custom_dns_configuration_cluster_aws.md
@@ -1,3 +1,7 @@
+---
+subcategory: "AWS Clusters DNS"
+---
+
# Data Source: mongodbatlas_custom_dns_configuration_cluster_aws
`mongodbatlas_custom_dns_configuration_cluster_aws` describes a Custom DNS Configuration for Atlas Clusters on AWS.
diff --git a/docs/data-sources/data_lake_pipeline.md b/docs/data-sources/data_lake_pipeline.md
index 41e4247fe4..e5c9ff6490 100644
--- a/docs/data-sources/data_lake_pipeline.md
+++ b/docs/data-sources/data_lake_pipeline.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Data Lake Pipelines"
---
-**WARNING:** Data Lake is deprecated. To learn more, see
+~> **DEPRECATION:** Data Lake is deprecated. To learn more, see
# Data Source: mongodbatlas_data_lake_pipeline
@@ -25,17 +25,17 @@ resource "mongodbatlas_advanced_cluster" "automated_backup_test" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "GCP"
region_name = "US_EAST_4"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_data_lake_pipeline" "pipeline" {
diff --git a/docs/data-sources/data_lake_pipeline_run.md b/docs/data-sources/data_lake_pipeline_run.md
index 2a9876878f..8196e678b3 100644
--- a/docs/data-sources/data_lake_pipeline_run.md
+++ b/docs/data-sources/data_lake_pipeline_run.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Data Lake Pipelines"
---
-**WARNING:** Data Lake is deprecated. To learn more, see
+~> **DEPRECATION:** Data Lake is deprecated. To learn more, see
# Data Source: mongodbatlas_data_lake_pipeline_run
diff --git a/docs/data-sources/data_lake_pipeline_runs.md b/docs/data-sources/data_lake_pipeline_runs.md
index 3864100f97..f89f916d4d 100644
--- a/docs/data-sources/data_lake_pipeline_runs.md
+++ b/docs/data-sources/data_lake_pipeline_runs.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Data Lake Pipelines"
---
-**WARNING:** Data Lake is deprecated. To learn more, see
+~> **DEPRECATION:** Data Lake is deprecated. To learn more, see
# Data Source: mongodbatlas_data_lake_pipeline_runs
diff --git a/docs/data-sources/data_lake_pipelines.md b/docs/data-sources/data_lake_pipelines.md
index 7477bf1dfe..f9f47d7928 100644
--- a/docs/data-sources/data_lake_pipelines.md
+++ b/docs/data-sources/data_lake_pipelines.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Data Lake Pipelines"
---
-**WARNING:** Data Lake is deprecated. To learn more, see
+~> **DEPRECATION:** Data Lake is deprecated. To learn more, see
# Data Source: mongodbatlas_data_lake_pipelines
diff --git a/docs/data-sources/database_user.md b/docs/data-sources/database_user.md
index f3689a049c..f484b7b594 100644
--- a/docs/data-sources/database_user.md
+++ b/docs/data-sources/database_user.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Database Users"
+---
+
# Data Source: mongodbatlas_database_user
`mongodbatlas_database_user` describes a Database User. This represents a database user which will be applied to all clusters within the project.
diff --git a/docs/data-sources/database_users.md b/docs/data-sources/database_users.md
index a3e5db9ed2..4d0d8b489b 100644
--- a/docs/data-sources/database_users.md
+++ b/docs/data-sources/database_users.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Database Users"
+---
+
# Data Source: mongodbatlas_database_users
`mongodbatlas_database_users` describes all Database Users. This represents a database user which will be applied to all clusters within the project.
diff --git a/docs/data-sources/encryption_at_rest.md b/docs/data-sources/encryption_at_rest.md
index d9d06ca514..4e3e76c09c 100644
--- a/docs/data-sources/encryption_at_rest.md
+++ b/docs/data-sources/encryption_at_rest.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# Data Source: mongodbatlas_encryption_at_rest
`mongodbatlas_encryption_at_rest` describes encryption at rest configuration for an Atlas project with one of the following providers:
@@ -52,17 +56,17 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
backup_enabled = true
encryption_at_rest_provider = "AWS"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
data "mongodbatlas_encryption_at_rest" "test" {
diff --git a/docs/data-sources/encryption_at_rest_private_endpoint.md b/docs/data-sources/encryption_at_rest_private_endpoint.md
index 4a9233a1bd..30e10ce0b6 100644
--- a/docs/data-sources/encryption_at_rest_private_endpoint.md
+++ b/docs/data-sources/encryption_at_rest_private_endpoint.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# Data Source: mongodbatlas_encryption_at_rest_private_endpoint
`mongodbatlas_encryption_at_rest_private_endpoint` describes a private endpoint used for encryption at rest using customer-managed keys.
diff --git a/docs/data-sources/encryption_at_rest_private_endpoints.md b/docs/data-sources/encryption_at_rest_private_endpoints.md
index a09d9b2c67..eebc763035 100644
--- a/docs/data-sources/encryption_at_rest_private_endpoints.md
+++ b/docs/data-sources/encryption_at_rest_private_endpoints.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# Data Source: mongodbatlas_encryption_at_rest_private_endpoints
`mongodbatlas_encryption_at_rest_private_endpoints` describes private endpoints of a particular cloud provider used for encryption at rest using customer-managed keys.
diff --git a/docs/data-sources/event_trigger.md b/docs/data-sources/event_trigger.md
index 8eb31c95b6..a38a2872c0 100644
--- a/docs/data-sources/event_trigger.md
+++ b/docs/data-sources/event_trigger.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Event Trigger"
+---
+
# Data Source: mongodbatlas_event_trigger
`mongodbatlas_event_trigger` describes an Event Trigger.
diff --git a/docs/data-sources/event_triggers.md b/docs/data-sources/event_triggers.md
index ebd172f91d..3dee685a58 100644
--- a/docs/data-sources/event_triggers.md
+++ b/docs/data-sources/event_triggers.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Event Trigger"
+---
+
# Data Source: mongodbatlas_event_triggers
`mongodbatlas_event_triggers` describes all Event Triggers.
diff --git a/docs/data-sources/federated_database_instance.md b/docs/data-sources/federated_database_instance.md
index f376eb8dec..e156086c1e 100644
--- a/docs/data-sources/federated_database_instance.md
+++ b/docs/data-sources/federated_database_instance.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_federated_database_instance
`mongodbatlas_federated_database_instance` provides a Federated Database Instance data source.
diff --git a/docs/data-sources/federated_database_instances.md b/docs/data-sources/federated_database_instances.md
index 62fc283b58..67eee16d35 100644
--- a/docs/data-sources/federated_database_instances.md
+++ b/docs/data-sources/federated_database_instances.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_federated_database_instances
`mongodbatlas_federated_database_instances` provides a Federated Database Instance data source.
diff --git a/docs/data-sources/federated_query_limit.md b/docs/data-sources/federated_query_limit.md
index 5ce6911f78..d474871fe1 100644
--- a/docs/data-sources/federated_query_limit.md
+++ b/docs/data-sources/federated_query_limit.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_federated_query_limit
`mongodbatlas_federated_query_limit` provides a Federated Database Instance Query Limit data source. To learn more about Atlas Data Federation see https://www.mongodb.com/docs/atlas/data-federation/overview/.
diff --git a/docs/data-sources/federated_query_limits.md b/docs/data-sources/federated_query_limits.md
index 27c2682cb8..84a42f0808 100644
--- a/docs/data-sources/federated_query_limits.md
+++ b/docs/data-sources/federated_query_limits.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_federated_query_limits
`mongodbatlas_federated_query_limits` provides a Federated Database Instance Query Limits data source. To learn more about Atlas Data Federation see https://www.mongodb.com/docs/atlas/data-federation/overview/.
diff --git a/docs/data-sources/federated_settings.md b/docs/data-sources/federated_settings.md
index e99f339e45..895757a280 100644
--- a/docs/data-sources/federated_settings.md
+++ b/docs/data-sources/federated_settings.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Data Source: mongodbatlas_federated_settings
`mongodbatlas_federated_settings` provides a federated settings data source. Atlas Cloud federated settings provides federated settings outputs.
diff --git a/docs/data-sources/federated_settings_identity_provider.md b/docs/data-sources/federated_settings_identity_provider.md
index 43eb965768..22b1e65535 100644
--- a/docs/data-sources/federated_settings_identity_provider.md
+++ b/docs/data-sources/federated_settings_identity_provider.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Data Source: mongodbatlas_federated_settings_identity_provider
`mongodbatlas_federated_settings_identity_provider` provides a federated settings identity provider data source. Atlas federated settings identity provider provides federated settings outputs for the configured identity provider.
diff --git a/docs/data-sources/federated_settings_identity_providers.md b/docs/data-sources/federated_settings_identity_providers.md
index 1aa52dc8f2..9b25fb2167 100644
--- a/docs/data-sources/federated_settings_identity_providers.md
+++ b/docs/data-sources/federated_settings_identity_providers.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_federated_settings_identity_providers
`mongodbatlas_federated_settings_identity_providers` provides an Federated Settings Identity Providers datasource. Atlas Cloud Federated Settings Identity Providers provides federated settings outputs for the configured Identity Providers.
diff --git a/docs/data-sources/federated_settings_org_config.md b/docs/data-sources/federated_settings_org_config.md
index c1faba4446..0b67bd7bb2 100644
--- a/docs/data-sources/federated_settings_org_config.md
+++ b/docs/data-sources/federated_settings_org_config.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Data Source: mongodbatlas_federated_settings_org_config
`mongodbatlas_federated_settings_org_config` provides an Federated Settings Identity Providers datasource. Atlas Cloud Federated Settings Organizational configuration provides federated settings outputs for the configured Organizational configuration.
diff --git a/docs/data-sources/federated_settings_org_configs.md b/docs/data-sources/federated_settings_org_configs.md
index 8c208fac58..f064232ffd 100644
--- a/docs/data-sources/federated_settings_org_configs.md
+++ b/docs/data-sources/federated_settings_org_configs.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Data Source: mongodbatlas_federated_settings_org_configs
`mongodbatlas_federated_settings_org_configs` provides an Federated Settings Identity Providers datasource. Atlas Cloud Federated Settings Identity Providers provides federated settings outputs for the configured Identity Providers.
diff --git a/docs/data-sources/federated_settings_org_role_mapping.md b/docs/data-sources/federated_settings_org_role_mapping.md
index 8041188284..a7631cf6c3 100644
--- a/docs/data-sources/federated_settings_org_role_mapping.md
+++ b/docs/data-sources/federated_settings_org_role_mapping.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Data Source: mongodbatlas_federated_settings_org_role_mapping
`mongodbatlas_federated_settings_org_role_mapping` provides an Federated Settings Org Role Mapping datasource. Atlas Cloud Federated Settings Org Role Mapping provides federated settings outputs for the configured Org Role Mapping.
diff --git a/docs/data-sources/federated_settings_org_role_mappings.md b/docs/data-sources/federated_settings_org_role_mappings.md
index bd5ba98182..b5c60d538d 100644
--- a/docs/data-sources/federated_settings_org_role_mappings.md
+++ b/docs/data-sources/federated_settings_org_role_mappings.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Data Source: mongodbatlas_federated_settings_org_role_mappings
`mongodbatlas_federated_settings_org_role_mappings` provides an Federated Settings Org Role Mapping datasource. Atlas Cloud Federated Settings Org Role Mapping provides federated settings outputs for the configured Org Role Mapping.
diff --git a/docs/data-sources/flex_cluster.md b/docs/data-sources/flex_cluster.md
index 9b11807ff2..baa0cd4505 100644
--- a/docs/data-sources/flex_cluster.md
+++ b/docs/data-sources/flex_cluster.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Clusters"
+---
+
# Data Source: mongodbatlas_flex_cluster
`mongodbatlas_flex_cluster` describes a flex cluster.
diff --git a/docs/data-sources/flex_clusters.md b/docs/data-sources/flex_clusters.md
index 7b5d51a115..6d21ff7c49 100644
--- a/docs/data-sources/flex_clusters.md
+++ b/docs/data-sources/flex_clusters.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Clusters"
+---
+
# Data Source: mongodbatlas_flex_clusters
`mongodbatlas_flex_clusters` returns all flex clusters in a project.
diff --git a/docs/data-sources/flex_restore_job.md b/docs/data-sources/flex_restore_job.md
index 398d84115c..a32a7b187f 100644
--- a/docs/data-sources/flex_restore_job.md
+++ b/docs/data-sources/flex_restore_job.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Restore Jobs"
+---
+
# Data Source: mongodbatlas_flex_restore_job
`mongodbatlas_flex_restore_job` describes a flex restore job.
diff --git a/docs/data-sources/flex_restore_jobs.md b/docs/data-sources/flex_restore_jobs.md
index 2f9049ec20..7d689ee21d 100644
--- a/docs/data-sources/flex_restore_jobs.md
+++ b/docs/data-sources/flex_restore_jobs.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Restore Jobs"
+---
+
# Data Source: mongodbatlas_flex_restore_jobs
`mongodbatlas_flex_restore_jobs` returns all flex restore job of a flex cluster.
diff --git a/docs/data-sources/flex_snapshot.md b/docs/data-sources/flex_snapshot.md
index 9f968fa448..57c894db0b 100644
--- a/docs/data-sources/flex_snapshot.md
+++ b/docs/data-sources/flex_snapshot.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Snapshots"
+---
+
# Data Source: mongodbatlas_flex_snapshot
`mongodbatlas_flex_snapshot` describes a flex snapshot.
diff --git a/docs/data-sources/flex_snapshots.md b/docs/data-sources/flex_snapshots.md
index f0bc182224..0652a7be0c 100644
--- a/docs/data-sources/flex_snapshots.md
+++ b/docs/data-sources/flex_snapshots.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Snapshots"
+---
+
# Data Source: mongodbatlas_flex_snapshots
`mongodbatlas_flex_snapshots` returns all snapshots of a flex cluster.
diff --git a/docs/data-sources/global_cluster_config.md b/docs/data-sources/global_cluster_config.md
index 3bb6a7423b..d82811c974 100644
--- a/docs/data-sources/global_cluster_config.md
+++ b/docs/data-sources/global_cluster_config.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Global Clusters"
+---
+
# Data Source: mongodbatlas_global_cluster_config
`mongodbatlas_global_cluster_config` describes all managed namespaces and custom zone mappings associated with the specified Global Cluster.
@@ -15,61 +19,59 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "GEOSHARDED"
backup_enabled = true
- replication_specs { # Zone 1, shard 1
+ replication_specs = [
+ { # Zone 1, shard 1
zone_name = "Zone 1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_CENTRAL_1"
- }
- }
-
- replication_specs { # Zone 1, shard 2
+ }]
+ },
+ { # Zone 1, shard 2
zone_name = "Zone 1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_CENTRAL_1"
- }
- }
-
- replication_specs { # Zone 2, shard 1
+ }]
+ },
+ { # Zone 2, shard 1
zone_name = "Zone 2"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # Zone 2, shard 2
+ }]
+ },
+ { # Zone 2, shard 2
zone_name = "Zone 2"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_2"
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_global_cluster_config" "config" {
@@ -105,7 +107,6 @@ In addition to all arguments above, the following attributes are exported:
* `id` - The Terraform's unique identifier used internally for state management.
* `custom_zone_mapping_zone_id` - A map of all custom zone mappings defined for the Global Cluster to `replication_specs.*.zone_id`. Atlas automatically maps each location code to the closest geographical zone. Custom zone mappings allow administrators to override these automatic mappings. If your Global Cluster does not have any custom zone mappings, this document is empty.
-* `custom_zone_mapping` - (Deprecated) A map of all custom zone mappings defined for the Global Cluster to `replication_specs.*.id`. This attribute is deprecated, use `custom_zone_mapping_zone_id` instead. This attribute is not set when a cluster uses independent shard scaling. To learn more, see the [Sharding Configuration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
* `managed_namespaces` - Add a managed namespaces to a Global Cluster. For more information about managed namespaces, see [Global Clusters](https://docs.atlas.mongodb.com/reference/api/global-clusters/). See [Managed Namespace](#managed-namespace) below for more details.
### Managed Namespace
diff --git a/docs/data-sources/ldap_configuration.md b/docs/data-sources/ldap_configuration.md
index 3187cb6610..648ff59fa3 100644
--- a/docs/data-sources/ldap_configuration.md
+++ b/docs/data-sources/ldap_configuration.md
@@ -1,3 +1,7 @@
+---
+subcategory: "LDAP Configuration"
+---
+
# Data Source: mongodbatlas_ldap_configuration
`mongodbatlas_ldap_configuration` describes a LDAP Configuration.
diff --git a/docs/data-sources/ldap_verify.md b/docs/data-sources/ldap_verify.md
index 3a4ec5c3f3..abcc359476 100644
--- a/docs/data-sources/ldap_verify.md
+++ b/docs/data-sources/ldap_verify.md
@@ -1,3 +1,7 @@
+---
+subcategory: "LDAP Configuration"
+---
+
# Data Source: mongodbatlas_ldap_verify
`mongodbatlas_ldap_verify` describes a LDAP Verify.
@@ -19,17 +23,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud provider snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_ldap_verify" "test" {
@@ -68,4 +72,4 @@ In addition to all arguments above, the following attributes are exported:
* `validations` - Array of validation messages related to the verification of the provided LDAP over TLS/SSL configuration details.
-See detailed information for arguments and attributes: [MongoDB API LDAP Verify](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-verification-status)
\ No newline at end of file
+See detailed information for arguments and attributes: [MongoDB API LDAP Verify](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-verification-status)
diff --git a/docs/data-sources/maintenance_window.md b/docs/data-sources/maintenance_window.md
index 541d1d1db1..f4f13c2204 100644
--- a/docs/data-sources/maintenance_window.md
+++ b/docs/data-sources/maintenance_window.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Maintenance Windows"
+---
+
# Data Source: mongodbatlas_maintenance_window
`mongodbatlas_maintenance_window` provides a Maintenance Window entry datasource. Gets information regarding the configured maintenance window for a MongoDB Atlas project.
@@ -41,7 +45,7 @@ In addition to all arguments above, the following attributes are exported:
* `day_of_week` - Day of the week when you would like the maintenance window to start as a 1-based integer: Su=1, M=2, T=3, W=4, T=5, F=6, Sa=7.
* `hour_of_day` - Hour of the day when you would like the maintenance window to start. This parameter uses the 24-hour clock, where midnight is 0, noon is 12 (Time zone is UTC).
-* `start_asap` - Flag indicating whether project maintenance has been directed to start immediately. If you request that maintenance begin immediately, this field returns true from the time the request was made until the time the maintenance event completes.
+* `start_asap` - Flag indicating whether project maintenance has been directed to start immediately. If requested, this field returns true from the time the request was made until the time the maintenance event completes.
* `number_of_deferrals` - Number of times the current maintenance event for this project has been deferred, there can be a maximum of 2 deferrals.
* `auto_defer_once_enabled` - Flag that indicates whether you want to defer all maintenance windows one week they would be triggered.
* `protected_hours` - (Optional) Defines the time period during which there will be no standard updates to the clusters. See [Protected Hours](#protected-hours).
@@ -52,4 +56,4 @@ In addition to all arguments above, the following attributes are exported:
* `end_hour_of_day` - Zero-based integer that represents the end hour of the day for the protected hours window.
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/maintenance-windows/)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/maintenance-windows/)
diff --git a/docs/data-sources/mongodb_employee_access_grant.md b/docs/data-sources/mongodb_employee_access_grant.md
index 19d127f19a..e3369ed84e 100644
--- a/docs/data-sources/mongodb_employee_access_grant.md
+++ b/docs/data-sources/mongodb_employee_access_grant.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Clusters"
+---
+
# Data Source: mongodbatlas_mongodb_employee_access_grant
`mongodbatlas_mongodb_employee_access_grant` describes a MongoDB employee access grant.
diff --git a/docs/data-sources/network_container.md b/docs/data-sources/network_container.md
index 5335ff4fc9..d0930ea274 100644
--- a/docs/data-sources/network_container.md
+++ b/docs/data-sources/network_container.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Network Peering"
+---
+
# Data Source: mongodbatlas_network_container
`mongodbatlas_network_container` describes a Network Peering Container. The resource requires your Project ID and container ID.
diff --git a/docs/data-sources/network_containers.md b/docs/data-sources/network_containers.md
index 5096f156cf..204a6dfec6 100644
--- a/docs/data-sources/network_containers.md
+++ b/docs/data-sources/network_containers.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Network Peering"
+---
+
# Data Source: mongodbatlas_network_containers
`mongodbatlas_network_containers` describes all Network Peering Containers. The data source requires your Project ID.
diff --git a/docs/data-sources/network_peering.md b/docs/data-sources/network_peering.md
index a9babbfdd3..36ef93b6c1 100644
--- a/docs/data-sources/network_peering.md
+++ b/docs/data-sources/network_peering.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Network Peering"
+---
+
# Data Source: mongodbatlas_network_peering
`mongodbatlas_network_peering` describes a Network Peering Connection.
diff --git a/docs/data-sources/network_peerings.md b/docs/data-sources/network_peerings.md
index d5fb102ae4..2ac0fe2b7c 100644
--- a/docs/data-sources/network_peerings.md
+++ b/docs/data-sources/network_peerings.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Network Peering"
+---
+
# Data Source: mongodbatlas_network_peerings
`mongodbatlas_network_peerings` describes all Network Peering Connections.
diff --git a/docs/data-sources/online_archive.md b/docs/data-sources/online_archive.md
index 606987eebe..2ffae1e419 100644
--- a/docs/data-sources/online_archive.md
+++ b/docs/data-sources/online_archive.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Online Archive"
+---
+
# Data Source: mongodbatlas_online_archive
`mongodbatlas_online_archive` describes an Online Archive
diff --git a/docs/data-sources/online_archives.md b/docs/data-sources/online_archives.md
index 0d81847cbd..8296a6f538 100644
--- a/docs/data-sources/online_archives.md
+++ b/docs/data-sources/online_archives.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Online Archive"
+---
+
# Data Source: mongodbatlas_online_archive
`mongodbatlas_online_archive` Describes the list of all the online archives for a cluster
diff --git a/docs/data-sources/org_invitation.md b/docs/data-sources/org_invitation.md
index adef9bb943..e079393607 100644
--- a/docs/data-sources/org_invitation.md
+++ b/docs/data-sources/org_invitation.md
@@ -1,7 +1,13 @@
+---
+subcategory: "Organizations"
+---
+
# Data Source: mongodbatlas_org_invitation
`mongodbatlas_org_invitation` describes an invitation for a user to join an Atlas organization.
+~> **DEPRECATION:** This data source is deprecated. Use `mongodbatlas_cloud_user_org_assignment` to read organization user assignments. See the [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
+
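+As a minimal migration sketch (the `org_id` and `username` argument names are assumptions, not confirmed by this page — check the data source's own documentation), the replacement data source can be read with the organization ID and the user's email:
+
+```terraform
+# Hypothetical sketch: reading an org user assignment with the
+# replacement data source instead of this deprecated one.
+data "mongodbatlas_cloud_user_org_assignment" "this" {
+  org_id   = var.org_id
+  username = "user@example.com"
+}
+```
+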
## Example Usage
```terraform
@@ -34,4 +40,4 @@ In addition to the arguments, this data source exports the following attributes:
* `teams_ids` - An array of unique 24-hexadecimal digit strings that identify the teams that the user was invited to join.
* `roles` - Atlas roles to assign to the invited user. If the user accepts the invitation, Atlas assigns these roles to them. The [MongoDB Documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/#organization-roles) describes the roles a user can have.
-See the [MongoDB Atlas Administration API](https://docs.atlas.mongodb.com/reference/api/organization-get-one-invitation/) documentation for more information.
\ No newline at end of file
+See the [MongoDB Atlas Administration API](https://docs.atlas.mongodb.com/reference/api/organization-get-one-invitation/) documentation for more information.
diff --git a/docs/data-sources/organization.md b/docs/data-sources/organization.md
index 2a6a065030..735e48a42c 100644
--- a/docs/data-sources/organization.md
+++ b/docs/data-sources/organization.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Organizations"
+---
+
# Data Source: mongodbatlas_organization
`mongodbatlas_organization` describes all MongoDB Atlas Organizations. This represents organizations that have been created.
@@ -23,6 +27,7 @@ In addition to all arguments above, the following attributes are exported:
* `name` - Human-readable label that identifies the organization.
* `id` - Unique 24-hexadecimal digit string that identifies the organization.
* `is_deleted` - Flag that indicates whether this organization has been deleted.
+* `users` - Returns a list of all pending and active MongoDB Cloud users associated with the specified organization.
* `api_access_list_required` - (Optional) Flag that indicates whether to require API operations to originate from an IP Address added to the API access list for the specified organization.
* `multi_factor_auth_required` - (Optional) Flag that indicates whether to require users to set up Multi-Factor Authentication (MFA) before accessing the specified organization. To learn more, see: https://www.mongodb.com/docs/atlas/security-multi-factor-authentication/.
* `restrict_employee_access` - (Optional) Flag that indicates whether to block MongoDB Support from accessing Atlas infrastructure for any deployment in the specified organization without explicit permission. Once this setting is turned on, you can grant MongoDB Support a 24-hour bypass access to the Atlas deployment to resolve support issues. To learn more, see: https://www.mongodb.com/docs/atlas/security-restrict-support-access/.
@@ -31,6 +36,28 @@ In addition to all arguments above, the following attributes are exported:
* `skip_default_alerts_settings` - (Optional) Flag that indicates whether to prevent Atlas from automatically creating organization-level alerts not explicitly managed through Terraform. Defaults to `true`.
+### Users
+* `id` - Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+* `org_membership_status` - String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+* `roles` - Organization- and project-level roles assigned to one MongoDB Cloud user within one organization.
+* `team_ids` - List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+* `username` - Email address that represents the username of the MongoDB Cloud user.
+* `country` - Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+* `invitation_created_at` - Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+* `invitation_expires_at` - Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+* `inviter_username` - Username of the MongoDB Cloud user who sent the invitation to join the organization.
+* `created_at` - Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+* `first_name` - First or given name that belongs to the MongoDB Cloud user.
+* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+* `last_name` - Last name, family name, or surname that belongs to the MongoDB Cloud user.
+* `mobile_number` - Mobile phone number that belongs to the MongoDB Cloud user.
+
+~> **NOTE:** Users with pending invitations created using the [`mongodbatlas_project_invitation`](../resources/project_invitation.md) resource or via the deprecated [Invite One MongoDB Cloud User to Join One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createprojectinvitation) endpoint are excluded from (and cannot be managed with) this data source. See [MongoDB Atlas API - MongoDB Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-mongodb-cloud-users) for details.
+To manage these users with this data source, refer to our [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
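+
+The `users` attribute can be consumed directly in configuration, for example to list active members. A minimal sketch (the `ACTIVE` enum value is an assumption based on the membership status described above):
+
+```terraform
+data "mongodbatlas_organization" "this" {
+  org_id = var.org_id
+}
+
+# Usernames of users that are already active in the organization
+output "active_org_users" {
+  value = [
+    for u in data.mongodbatlas_organization.this.users : u.username
+    if u.org_membership_status == "ACTIVE"
+  ]
+}
+```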
+
~> **NOTE:** - If you create an organization with our Terraform provider version >=1.30.0, this field is set to `true` by default.
- If you have an existing organization created with our Terraform provider version <1.30.0, this field might be `false`, which is the [API default value](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createorganization). To prevent the creation of future default alerts, set this explicitly to `true` using the [`mongodbatlas_organization`](../resources/organization.md) resource.
diff --git a/docs/data-sources/organizations.md b/docs/data-sources/organizations.md
index c182bc26aa..1d0f60ec46 100644
--- a/docs/data-sources/organizations.md
+++ b/docs/data-sources/organizations.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Organizations"
+---
+
# Data Source: mongodbatlas_organizations
`mongodbatlas_organizations` describes all MongoDB Atlas Organizations. This represents organizations that have been created.
@@ -28,6 +32,7 @@ data "mongodbatlas_organizations" "test" {
* `name` - Human-readable label that identifies the organization.
* `id` - Unique 24-hexadecimal digit string that identifies the organization.
* `is_deleted` - Flag that indicates whether this organization has been deleted.
+* `users` - Returns a list of all pending and active MongoDB Cloud users associated with the specified organization.
* `api_access_list_required` - (Optional) Flag that indicates whether to require API operations to originate from an IP Address added to the API access list for the specified organization.
* `multi_factor_auth_required` - (Optional) Flag that indicates whether to require users to set up Multi-Factor Authentication (MFA) before accessing the specified organization. To learn more, see: https://www.mongodb.com/docs/atlas/security-multi-factor-authentication/.
* `restrict_employee_access` - (Optional) Flag that indicates whether to block MongoDB Support from accessing Atlas infrastructure for any deployment in the specified organization without explicit permission. Once this setting is turned on, you can grant MongoDB Support a 24-hour bypass access to the Atlas deployment to resolve support issues. To learn more, see: https://www.mongodb.com/docs/atlas/security-restrict-support-access/.
@@ -35,6 +40,28 @@ data "mongodbatlas_organizations" "test" {
* `security_contact` - (Optional) String that specifies a single email address for the specified organization to receive security-related notifications. Specifying a security contact does not grant them authorization or access to Atlas for security decisions or approvals.
* `skip_default_alerts_settings` - (Optional) Flag that indicates whether to prevent Atlas from automatically creating organization-level alerts not explicitly managed through Terraform. Defaults to `true`.
+
+### Users
+* `id` - Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+* `org_membership_status` - String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+* `roles` - Organization- and project-level roles assigned to one MongoDB Cloud user within one organization.
+* `team_ids` - List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+* `username` - Email address that represents the username of the MongoDB Cloud user.
+* `country` - Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+* `invitation_created_at` - Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+* `invitation_expires_at` - Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+* `inviter_username` - Username of the MongoDB Cloud user who sent the invitation to join the organization.
+* `created_at` - Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+* `first_name` - First or given name that belongs to the MongoDB Cloud user.
+* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+* `last_name` - Last name, family name, or surname that belongs to the MongoDB Cloud user.
+* `mobile_number` - Mobile phone number that belongs to the MongoDB Cloud user.
+
+~> **NOTE:** Users with pending invitations created using the [`mongodbatlas_project_invitation`](../resources/project_invitation.md) resource or via the deprecated [Invite One MongoDB Cloud User to Join One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createprojectinvitation) endpoint are excluded from (and cannot be managed with) this data source. See [MongoDB Atlas API - MongoDB Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-mongodb-cloud-users) for details.
+To manage these users with this data source, refer to our [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
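+
+Because this data source returns several organizations, each `users` list is nested under one result. A minimal sketch (assuming the plural data source exposes a `results` list, as in the example above):
+
+```terraform
+# Map of organization name to the usernames associated with it
+output "users_by_org" {
+  value = {
+    for org in data.mongodbatlas_organizations.test.results :
+    org.name => [for u in org.users : u.username]
+  }
+}
+```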
+
~> **NOTE:** - If you create an organization with our Terraform provider version >=1.30.0, this field is set to `true` by default.
- If you have an existing organization created with our Terraform provider version <1.30.0, this field might be `false`, which is the [API default value](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createorganization). To prevent the creation of future default alerts, set this explicitly to `true` using the [`mongodbatlas_organization`](../resources/organization.md) resource.
diff --git a/docs/data-sources/private_endpoint_regional_mode.md b/docs/data-sources/private_endpoint_regional_mode.md
index 8972932587..b69774ea9a 100644
--- a/docs/data-sources/private_endpoint_regional_mode.md
+++ b/docs/data-sources/private_endpoint_regional_mode.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Data Source: private_endpoint_regional_mode
`private_endpoint_regional_mode` describes a Private Endpoint Regional Mode. This represents a Private Endpoint Regional Mode Connection that wants to retrieve settings of an Atlas project.
diff --git a/docs/data-sources/privatelink_endpoint.md b/docs/data-sources/privatelink_endpoint.md
index ed741991d8..ab41f1b9fc 100644
--- a/docs/data-sources/privatelink_endpoint.md
+++ b/docs/data-sources/privatelink_endpoint.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Data Source: mongodbatlas_privatelink_endpoint
`mongodbatlas_privatelink_endpoint` describes a Private Endpoint. This represents a Private Endpoint Connection to retrieve details regarding a private endpoint by id in an Atlas project
diff --git a/docs/data-sources/privatelink_endpoint_service.md b/docs/data-sources/privatelink_endpoint_service.md
index 3ef1bbaa19..67ba29dc70 100644
--- a/docs/data-sources/privatelink_endpoint_service.md
+++ b/docs/data-sources/privatelink_endpoint_service.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Data Source: mongodbatlas_privatelink_endpoint_service
`mongodbatlas_privatelink_endpoint_service` describes a Private Endpoint Link. This represents a Private Endpoint Link Connection that wants to retrieve details in an Atlas project.
diff --git a/docs/data-sources/privatelink_endpoint_service_data_federation_online_archive.md b/docs/data-sources/privatelink_endpoint_service_data_federation_online_archive.md
index 953d7b4087..f52be0db83 100644
--- a/docs/data-sources/privatelink_endpoint_service_data_federation_online_archive.md
+++ b/docs/data-sources/privatelink_endpoint_service_data_federation_online_archive.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_privatelink_endpoint_service_data_federation_online_archive
`mongodbatlas_privatelink_endpoint_service_data_federation_online_archive` describes a Private Endpoint Service resource for Data Federation and Online Archive.
diff --git a/docs/data-sources/privatelink_endpoint_service_data_federation_online_archives.md b/docs/data-sources/privatelink_endpoint_service_data_federation_online_archives.md
index 75aa36e8fd..9129df2db6 100644
--- a/docs/data-sources/privatelink_endpoint_service_data_federation_online_archives.md
+++ b/docs/data-sources/privatelink_endpoint_service_data_federation_online_archives.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Data Source: mongodbatlas_privatelink_endpoint_service_data_federation_online_archives
`mongodbatlas_privatelink_endpoint_service_data_federation_online_archives` describes Private Endpoint Service resources for Data Federation and Online Archive.
diff --git a/docs/data-sources/privatelink_endpoint_service_serverless.md b/docs/data-sources/privatelink_endpoint_service_serverless.md
deleted file mode 100644
index bc054fa0fc..0000000000
--- a/docs/data-sources/privatelink_endpoint_service_serverless.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This data source is deprecated and will be removed in March 2025. For more datails see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide)
-
-# Data Source: privatelink_endpoint_service_serverless
-
-`privatelink_endpoint_service_serverless` provides a Serverless PrivateLink Endpoint Service resource.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
-
-## Example Usage
-
-## Example with AWS
-```terraform
-
-data "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
-}
-
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AWS"
-}
-
-
-resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = "test-db"
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- provider_name = "AWS"
- comment = "New serverless endpoint"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AWS"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST_1"
- continuous_backup_enabled = true
-}
-```
-
-## Example with AZURE
-```terraform
-
-data "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
-}
-
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AZURE"
-}
-
-
-resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = "test-db"
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- provider_name = "AZURE"
- comment = "New serverless endpoint"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AZURE"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST"
- continuous_backup_enabled = true
-}
-```
-
-### Available complete examples
-- [Setup private connection to a MongoDB Atlas Serverless Instance with AWS VPC](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/aws-privatelink-endpoint/serverless-instance)
-
-## Argument Reference
-
-* `project_id` - (Required) Unique 24-digit hexadecimal string that identifies the project.
-* `instance_name` - (Required) Human-readable label that identifies the serverless instance
-* `endpoint_id` - (Required) Unique 22-character alphanumeric string that identifies the private endpoint. Atlas supports AWS private endpoints using the [AWS PrivateLink](https://aws.amazon.com/privatelink/) feature.
-* `cloud_provider_endpoint_id` - Unique string that identifies the private endpoint's network interface.
-* `comment` - Human-readable string to associate with this private endpoint.
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `endpoint_service_name` - Unique string that identifies the PrivateLink endpoint service. MongoDB Cloud returns null while it creates the endpoint service.
-* `private_link_service_resource_id` - Root-relative path that identifies the Azure Private Link Service that MongoDB Cloud manages.
-* `private_endpoint_ip_address` - IPv4 address of the private endpoint in your Azure VNet that someone added to this private endpoint service.
-* `status` - Human-readable label that indicates the current operating status of the private endpoint. Values include: RESERVATION_REQUESTED, RESERVED, INITIATING, AVAILABLE, FAILED, DELETING.
-
-For more information see: [MongoDB Atlas API - Serverless Private Endpoints](https://www.mongodb.com/docs/atlas/reference/api/serverless-private-endpoints-get-one/).
diff --git a/docs/data-sources/privatelink_endpoints_service_adl.md b/docs/data-sources/privatelink_endpoints_service_adl.md
index fcddf7e800..5838b80c52 100644
--- a/docs/data-sources/privatelink_endpoints_service_adl.md
+++ b/docs/data-sources/privatelink_endpoints_service_adl.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Data Source: privatelink_endpoints_service_adl
`privatelink_endpoints_service_adl` describes the list of all Atlas Data Lake (ADL) and Online Archive PrivateLink endpoints resource.
diff --git a/docs/data-sources/privatelink_endpoints_service_serverless.md b/docs/data-sources/privatelink_endpoints_service_serverless.md
deleted file mode 100644
index 997b84a29f..0000000000
--- a/docs/data-sources/privatelink_endpoints_service_serverless.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This data source is deprecated and will be removed in March 2025. For more datails see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide)
-
-# Data Source: privatelink_endpoints_service_serverless
-
-`privatelink_endpoints_service_serverless` describes the list of all Serverless PrivateLink Endpoint Service resource.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
-
-## Example Usage
-
-## Example with AWS
-```terraform
-
-data "mongodbatlas_privatelink_endpoints_service_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
-}
-
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AWS"
-}
-
-resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = "test-db"
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- provider_name = "AWS"
- comment = "New serverless endpoint"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AWS"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST_1"
- continuous_backup_enabled = true
-}
-```
-
-## Example with AZURE
-```terraform
-
-data "mongodbatlas_privatelink_endpoints_service_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
-}
-
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AZURE"
-}
-
-resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = "test-db"
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- provider_name = "AZURE"
- comment = "New serverless endpoint"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AZURE"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST"
- continuous_backup_enabled = true
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) Unique 24-digit hexadecimal string that identifies the project.
-* `instance_name` - Human-readable label that identifies the serverless instance
-
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-* `results` - Each element in the `result` array is one private serverless endpoint.
-
-### results
-
-Each object in the `results` array represents an online archive with the following attributes:
-* `cloud_provider_endpoint_id` - Unique string that identifies the private endpoint's network interface.
-* `comment` - Human-readable string to associate with this private endpoint.
-* `endpoint_id` - (Required) Unique 22-character alphanumeric string that identifies the private endpoint. Atlas supports AWS private endpoints using the [AWS PrivateLink](https://aws.amazon.com/privatelink/) feature.
-* `endpoint_service_name` - Unique string that identifies the PrivateLink endpoint service. MongoDB Cloud returns null while it creates the endpoint service.
-* `private_link_service_resource_id` - Root-relative path that identifies the Azure Private Link Service that MongoDB Cloud manages.
-* `private_endpoint_ip_address` - IPv4 address of the private endpoint in your Azure VNet that someone added to this private endpoint service.
-* `status` - Human-readable label that indicates the current operating status of the private endpoint. Values include: RESERVATION_REQUESTED, RESERVED, INITIATING, AVAILABLE, FAILED, DELETING.
-
-For more information see: [MongoDB Atlas API - Serverless Private Endpoints](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Serverless-Private-Endpoints/operation/createServerlessPrivateEndpoint).
diff --git a/docs/data-sources/project.md b/docs/data-sources/project.md
index 01c30628c2..268dcbb021 100644
--- a/docs/data-sources/project.md
+++ b/docs/data-sources/project.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Projects"
+---
+
# Data Source: mongodbatlas_project
`mongodbatlas_project` describes a MongoDB Atlas Project. This represents a project that has been created.
@@ -79,10 +83,10 @@ In addition to all arguments above, the following attributes are exported:
* `cluster_count` - The number of Atlas clusters deployed in the project.
* `created` - The ISO-8601-formatted timestamp of when Atlas created the project.
* `tags` - Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the project. To learn more, see [Resource Tags](https://www.mongodb.com/docs/atlas/tags/)
-* `teams` - Returns all teams to which the authenticated user has access in the project. See [Teams](#teams).
+* `teams` - **(DEPRECATED)** Returns all teams to which the authenticated user has access in the project. See [Teams](#teams).
* `limits` - The limits for the specified project. See [Limits](#limits).
* `ip_addresses` - IP addresses in a project categorized by services. See [IP Addresses](#ip-addresses). **WARNING:** This attribute is deprecated, use the `mongodbatlas_project_ip_addresses` data source instead.
-
+* `users` - Returns a list of all pending and active MongoDB Cloud users associated with the specified project. See [Users](#users).
* `is_collect_database_specifics_statistics_enabled` - Flag that indicates whether to enable statistics in [cluster metrics](https://www.mongodb.com/docs/atlas/monitor-cluster-metrics/) collection for the project.
* `is_data_explorer_enabled` - Flag that indicates whether to enable Data Explorer for the project. If enabled, you can query your database with an easy to use interface.
* `is_extended_storage_sizes_enabled` - Flag that indicates whether to enable extended storage sizes for the specified project.
@@ -94,6 +98,8 @@ In addition to all arguments above, the following attributes are exported:
### Teams
+~> **DEPRECATION:** This attribute is deprecated and will be removed in the next major release. Please transition to `mongodbatlas_team_project_assignment`. For more details, see [Migration Guide: Project Teams Attribute to Team Project Assignment Resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management).
+
* `team_id` - The unique identifier of the team you want to associate with the project. The team and project must share the same parent organization.
* `role_names` - Each string in the array represents a project role assigned to the team. Every user associated with the team inherits these roles. The [MongoDB Documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/#organization-roles) describes the roles a user can have.
@@ -112,6 +118,19 @@ In addition to all arguments above, the following attributes are exported:
* `services.clusters.#.inbound` - List of inbound IP addresses associated with the cluster. If your network allows outbound HTTP requests only to specific IP addresses, you must allow access to the following IP addresses so that your application can connect to your Atlas cluster.
* `services.clusters.#.outbound` - List of outbound IP addresses associated with the cluster. If your network allows inbound HTTP requests only from specific IP addresses, you must allow access from the following IP addresses so that your Atlas cluster can communicate with your webhooks and KMS.
+### Users
+* `id` - Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+* `org_membership_status` - String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+* `roles` - One or more project-level roles assigned to the MongoDB Cloud user.
+* `username` - Email address that represents the username of the MongoDB Cloud user.
+* `country` - Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+* `created_at` - Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+* `first_name` - First or given name that belongs to the MongoDB Cloud user.
+* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+* `last_name` - Last name, family name, or surname that belongs to the MongoDB Cloud user.
+* `mobile_number` - Mobile phone number that belongs to the MongoDB Cloud user.
+
+~> **NOTE:** Does not return pending users invited via the deprecated [Invite One MongoDB Cloud User to Join One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createprojectinvitation) endpoint or pending invitations created using the [`mongodbatlas_project_invitation`](../resources/project_invitation.md) resource.
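+
+As an illustrative sketch (the variable and output names below are placeholders, not part of the provider), the new `users` attribute can be consumed from the data source like any other list attribute:
+
+```terraform
+data "mongodbatlas_project" "this" {
+  project_id = var.project_id
+}
+
+# Email addresses of every pending and active user in the project.
+output "project_user_emails" {
+  value = [for u in data.mongodbatlas_project.this.users : u.username]
+}
+```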
See [MongoDB Atlas API - Project](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/project-get-teams/) Documentation for more information.
diff --git a/docs/data-sources/project_api_key.md b/docs/data-sources/project_api_key.md
index 31d291034c..91de0965fe 100644
--- a/docs/data-sources/project_api_key.md
+++ b/docs/data-sources/project_api_key.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_project_api_key
`mongodbatlas_project_api_key` describes a MongoDB Atlas Project API Key. This represents a Project API Key that has been created.
diff --git a/docs/data-sources/project_api_keys.md b/docs/data-sources/project_api_keys.md
index d931be06e9..3672d2e768 100644
--- a/docs/data-sources/project_api_keys.md
+++ b/docs/data-sources/project_api_keys.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Data Source: mongodbatlas_project_api_keys
`mongodbatlas_project_api_keys` describes all API Keys. This represents API Keys that have been created.
diff --git a/docs/data-sources/project_invitation.md b/docs/data-sources/project_invitation.md
index 856498fa11..f7f8f1ca96 100644
--- a/docs/data-sources/project_invitation.md
+++ b/docs/data-sources/project_invitation.md
@@ -1,7 +1,13 @@
+---
+subcategory: "Projects"
+---
+
# Data Source: mongodbatlas_project_invitation
`mongodbatlas_project_invitation` describes an invitation to a user to join an Atlas project.
+~> **DEPRECATION:** This data source is deprecated. Use `mongodbatlas_cloud_user_project_assignment` to read project user assignments. See the [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management).
+
-> **NOTE:** Groups and projects are synonymous terms. You may find GROUP-ID in the official documentation.
## Example Usages
@@ -35,4 +41,4 @@ In addition to the arguments, this data source exports the following attributes:
* `inviter_username` - Atlas user who invited `username` to the project.
* `roles` - Atlas roles to assign to the invited user. If the user accepts the invitation, Atlas assigns these roles to them. Refer to the [MongoDB Documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/#project-roles) for information on valid roles.
-See the [MongoDB Atlas Administration API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects/operation/createProjectInvitation) documentation for more information.
\ No newline at end of file
+See the [MongoDB Atlas Administration API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects/operation/createProjectInvitation) documentation for more information.
diff --git a/docs/data-sources/project_ip_access_list.md b/docs/data-sources/project_ip_access_list.md
index 132d969968..97b39530ab 100644
--- a/docs/data-sources/project_ip_access_list.md
+++ b/docs/data-sources/project_ip_access_list.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Project IP Access List"
+---
+
# Data Source: mongodbatlas_project_ip_access_list
`mongodbatlas_project_ip_access_list` describes an IP Access List entry resource. The access list grants access from IPs, CIDRs or AWS Security Groups (if VPC Peering is enabled) to clusters within the Project.
diff --git a/docs/data-sources/project_ip_addresses.md b/docs/data-sources/project_ip_addresses.md
index 8302da82e3..810bf6e474 100644
--- a/docs/data-sources/project_ip_addresses.md
+++ b/docs/data-sources/project_ip_addresses.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Projects"
+---
+
# Data Source: mongodbatlas_project_ip_addresses
`mongodbatlas_project_ip_addresses` returns the IP addresses in a project categorized by services.
diff --git a/docs/data-sources/projects.md b/docs/data-sources/projects.md
index 7df7f44c3c..f35e0a6b10 100644
--- a/docs/data-sources/projects.md
+++ b/docs/data-sources/projects.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Projects"
+---
+
# Data Source: mongodbatlas_projects
`mongodbatlas_projects` describes all Projects. This represents projects that have been created.
@@ -51,13 +55,14 @@ data "mongodbatlas_projects" "test" {
* `name` - The name of the project you want to create.
* `org_id` - The ID of the organization you want to create the project within.
+* `project_id` - Unique 24-hexadecimal digit string that identifies the MongoDB Cloud project.
* `cluster_count` - The number of Atlas clusters deployed in the project.
* `created` - The ISO-8601-formatted timestamp of when Atlas created the project.
* `tags` - Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the project. To learn more, see [Resource Tags](https://www.mongodb.com/docs/atlas/tags/)
-* `teams` - Returns all teams to which the authenticated user has access in the project. See [Teams](#teams).
+* `teams` - **(DEPRECATED)** Returns all teams to which the authenticated user has access in the project. See [Teams](#teams).
* `limits` - The limits for the specified project. See [Limits](#limits).
* `ip_addresses` - IP addresses in a project categorized by services. See [IP Addresses](#ip-addresses). **WARNING:** This attribute is deprecated, use the `mongodbatlas_project_ip_addresses` data source instead.
-
+* `users` - Returns a list of all pending and active MongoDB Cloud users associated with the project. See [Users](#users).
* `is_collect_database_specifics_statistics_enabled` - Flag that indicates whether to enable statistics in [cluster metrics](https://www.mongodb.com/docs/atlas/monitor-cluster-metrics/) collection for the project.
* `is_data_explorer_enabled` - Flag that indicates whether to enable Data Explorer for the project. If enabled, you can query your database with an easy to use interface.
* `is_extended_storage_sizes_enabled` - Flag that indicates whether to enable extended storage sizes for the specified project.
@@ -69,6 +74,8 @@ data "mongodbatlas_projects" "test" {
### Teams
+~> **DEPRECATION:** This attribute is deprecated and will be removed in the next major release. Please transition to `mongodbatlas_team_project_assignment`. For more details, see [Migration Guide: Project Teams Attribute to Team Project Assignment Resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management).
+
* `team_id` - The unique identifier of the team you want to associate with the project. The team and project must share the same parent organization.
* `role_names` - Each string in the array represents a project role assigned to the team. Every user associated with the team inherits these roles. The [MongoDB Documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/#organization-roles) describes the roles a user can have.
@@ -87,5 +94,18 @@ data "mongodbatlas_projects" "test" {
* `services.clusters.#.inbound` - List of inbound IP addresses associated with the cluster. If your network allows outbound HTTP requests only to specific IP addresses, you must allow access to the following IP addresses so that your application can connect to your Atlas cluster.
* `services.clusters.#.outbound` - List of outbound IP addresses associated with the cluster. If your network allows inbound HTTP requests only from specific IP addresses, you must allow access from the following IP addresses so that your Atlas cluster can communicate with your webhooks and KMS.
+### Users
+* `id` - Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+* `org_membership_status` - String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+* `roles` - One or more project-level roles assigned to the MongoDB Cloud user.
+* `username` - Email address that represents the username of the MongoDB Cloud user.
+* `country` - Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+* `created_at` - Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+* `first_name` - First or given name that belongs to the MongoDB Cloud user.
+* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+* `last_name` - Last name, family name, or surname that belongs to the MongoDB Cloud user.
+* `mobile_number` - Mobile phone number that belongs to the MongoDB Cloud user.
+
+~> **NOTE:** Does not return pending users invited via the deprecated [Invite One MongoDB Cloud User to Join One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createprojectinvitation) endpoint or pending invitations created using the [`mongodbatlas_project_invitation`](../resources/project_invitation.md) resource.
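+
+As an illustrative sketch (output name is a placeholder), the per-project `users` lists can be aggregated across all projects returned by the plural data source, which exposes projects under its `results` attribute:
+
+```terraform
+data "mongodbatlas_projects" "all" {}
+
+# Map of project name to the usernames of its pending and active users.
+output "users_by_project" {
+  value = {
+    for p in data.mongodbatlas_projects.all.results :
+    p.name => [for u in p.users : u.username]
+  }
+}
+```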
See [MongoDB Atlas API - Projects](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Projects) - [and MongoDB Atlas API - Teams](https://docs.atlas.mongodb.com/reference/api/project-get-teams/) Documentation for more information.
diff --git a/docs/data-sources/push_based_log_export.md b/docs/data-sources/push_based_log_export.md
index 1ab223181e..ed59735c47 100644
--- a/docs/data-sources/push_based_log_export.md
+++ b/docs/data-sources/push_based_log_export.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Push-Based Log Export"
+---
+
# Data Source: mongodbatlas_push_based_log_export
`mongodbatlas_push_based_log_export` describes the configured project level settings for the push-based log export feature.
diff --git a/docs/data-sources/resource_policies.md b/docs/data-sources/resource_policies.md
index 08b4254301..c550c99482 100644
--- a/docs/data-sources/resource_policies.md
+++ b/docs/data-sources/resource_policies.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Resource Policies"
+---
+
# Data Source: mongodbatlas_resource_policies
`mongodbatlas_resource_policies` returns all resource policies in an organization.
diff --git a/docs/data-sources/resource_policy.md b/docs/data-sources/resource_policy.md
index cc88238a8f..d2caf550f3 100644
--- a/docs/data-sources/resource_policy.md
+++ b/docs/data-sources/resource_policy.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Resource Policies"
+---
+
# Data Source: mongodbatlas_resource_policy
`mongodbatlas_resource_policy` describes a resource policy in an organization.
diff --git a/docs/data-sources/roles_org_id.md b/docs/data-sources/roles_org_id.md
index be4e87b7de..5ef0bb2e01 100644
--- a/docs/data-sources/roles_org_id.md
+++ b/docs/data-sources/roles_org_id.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Organizations"
+---
+
# Data Source: mongodbatlas_roles_org_id
`mongodbatlas_roles_org_id` describes a MongoDB Atlas Roles Org ID. This represents a Roles Org ID.
diff --git a/docs/data-sources/search_deployment.md b/docs/data-sources/search_deployment.md
index e478107479..de5566aefa 100644
--- a/docs/data-sources/search_deployment.md
+++ b/docs/data-sources/search_deployment.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# Data Source: mongodbatlas_search_deployment
`mongodbatlas_search_deployment` describes a search node deployment.
@@ -14,17 +18,17 @@ resource "mongodbatlas_advanced_cluster" "example" {
name = "ClusterExample"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_search_deployment" "example" {
diff --git a/docs/data-sources/search_index.md b/docs/data-sources/search_index.md
index cd3bf0255f..8e5f9c1b58 100644
--- a/docs/data-sources/search_index.md
+++ b/docs/data-sources/search_index.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# Data Source: mongodbatlas_search_index
`mongodbatlas_search_index` describes a single search indexes. This represents a single search index that have been created.
diff --git a/docs/data-sources/search_indexes.md b/docs/data-sources/search_indexes.md
index abc56a6e0d..b78eb3c201 100644
--- a/docs/data-sources/search_indexes.md
+++ b/docs/data-sources/search_indexes.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# Data Source: mongodbatlas_search_indexes
`mongodbatlas_search_indexes` describes all search indexes. This represents search indexes that have been created.
diff --git a/docs/data-sources/serverless_instance.md b/docs/data-sources/serverless_instance.md
index 9462bd26a6..57b6f275f2 100644
--- a/docs/data-sources/serverless_instance.md
+++ b/docs/data-sources/serverless_instance.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Serverless Instances"
---
-**WARNING:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Data Source: mongodbatlas_serverless_instance
@@ -23,16 +23,6 @@ data "mongodbatlas_serverless_instance" "test_two" {
}
```
-**NOTE:** `mongodbatlas_serverless_instance` and `mongodbatlas_privatelink_endpoint_service_serverless` resources have a circular dependency in some respects.\
-That is, the `serverless_instance` must exist before the `privatelink_endpoint_service` can be created,\
-and the `privatelink_endpoint_service` must exist before the `serverless_instance` gets its respective `connection_strings_private_endpoint_srv` values.
-
-Because of this, the `serverless_instance` data source has particular value as a source of the `connection_strings_private_endpoint_srv`.\
-When using the data_source in-tandem with the afforementioned resources, we can create and retrieve the `connection_strings_private_endpoint_srv` in a single `terraform apply`.
-
-Follow this example to [setup private connection to a serverless instance using aws vpc](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/aws-privatelink-endpoint/serverless-instance) and get the connection strings in a single `terraform apply`
-
-
## Argument Reference
* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies the project that contains your serverless instance.
diff --git a/docs/data-sources/serverless_instances.md b/docs/data-sources/serverless_instances.md
index dd5d0c308f..5e5cdd8e6d 100644
--- a/docs/data-sources/serverless_instances.md
+++ b/docs/data-sources/serverless_instances.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Serverless Instances"
---
-**WARNING:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This data source is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Data Source: mongodbatlas_serverless_instances
diff --git a/docs/data-sources/stream_account_details.md b/docs/data-sources/stream_account_details.md
index cd0be7d2da..4efcb2d520 100644
--- a/docs/data-sources/stream_account_details.md
+++ b/docs/data-sources/stream_account_details.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_account_details
`mongodbatlas_stream_account_details` returns the AWS Account ID/Azure Subscription ID, and the AWS VPC ID/Azure Virtual Network Name for the group, cloud provider, and region that you specify.
diff --git a/docs/data-sources/stream_connection.md b/docs/data-sources/stream_connection.md
index 9a2044be4e..56ebede13a 100644
--- a/docs/data-sources/stream_connection.md
+++ b/docs/data-sources/stream_connection.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_connection
`mongodbatlas_stream_connection` describes a stream connection.
diff --git a/docs/data-sources/stream_connections.md b/docs/data-sources/stream_connections.md
index 03f02408e6..fe2020590f 100644
--- a/docs/data-sources/stream_connections.md
+++ b/docs/data-sources/stream_connections.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_connections
`mongodbatlas_stream_connections` describes all connections of a stream instance for the specified project.
diff --git a/docs/data-sources/stream_instance.md b/docs/data-sources/stream_instance.md
index 8da78e5110..715cb6ecb5 100644
--- a/docs/data-sources/stream_instance.md
+++ b/docs/data-sources/stream_instance.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_instance
`mongodbatlas_stream_instance` describes a stream instance.
diff --git a/docs/data-sources/stream_instances.md b/docs/data-sources/stream_instances.md
index f02a878763..df30fff15d 100644
--- a/docs/data-sources/stream_instances.md
+++ b/docs/data-sources/stream_instances.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_instances
`mongodbatlas_stream_instances` describes the stream instances defined in a project.
diff --git a/docs/data-sources/stream_privatelink_endpoint.md b/docs/data-sources/stream_privatelink_endpoint.md
index b6814560eb..b9e95589c0 100644
--- a/docs/data-sources/stream_privatelink_endpoint.md
+++ b/docs/data-sources/stream_privatelink_endpoint.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_privatelink_endpoint
`mongodbatlas_stream_privatelink_endpoint` describes a Privatelink Endpoint for Streams.
diff --git a/docs/data-sources/stream_privatelink_endpoints.md b/docs/data-sources/stream_privatelink_endpoints.md
index 2e67e5d6df..62ad0f29f7 100644
--- a/docs/data-sources/stream_privatelink_endpoints.md
+++ b/docs/data-sources/stream_privatelink_endpoints.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_privatelink_endpoints
`mongodbatlas_stream_privatelink_endpoints` describes a Privatelink Endpoint for Streams.
diff --git a/docs/data-sources/stream_processor.md b/docs/data-sources/stream_processor.md
index 25f5c6bb7c..ca4b417516 100644
--- a/docs/data-sources/stream_processor.md
+++ b/docs/data-sources/stream_processor.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_processor
`mongodbatlas_stream_processor` describes a stream processor.
diff --git a/docs/data-sources/stream_processors.md b/docs/data-sources/stream_processors.md
index 9352847774..429c57e55a 100644
--- a/docs/data-sources/stream_processors.md
+++ b/docs/data-sources/stream_processors.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Data Source: mongodbatlas_stream_processors
`mongodbatlas_stream_processors` returns all stream processors in a stream instance.
diff --git a/docs/data-sources/team.md b/docs/data-sources/team.md
index a15e880541..0493c27742 100644
--- a/docs/data-sources/team.md
+++ b/docs/data-sources/team.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Teams"
+---
+
# Data Source: mongodbatlas_team
`mongodbatlas_team` describes a Team. The resource requires your Organization ID, Project ID and Team ID.
@@ -50,6 +54,27 @@ In addition to all arguments above, the following attributes are exported:
* `id` - Terraform's unique identifier used internally for state management.
* `team_id` - The unique identifier for the team.
* `name` - The name of the team you want to create.
-* `usernames` - The users who are part of the organization.
+* `usernames` - **(DEPRECATED)** The users who are part of the team. This attribute is deprecated and will be removed in the next major release. Please transition to the `users` attribute. For more details, see [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management).
+* `users` - Returns a list of all pending and active MongoDB Cloud users associated with the specified team. See [Users](#users).
+
+### Users
+* `id` - Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+* `org_membership_status` - String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.
+* `roles` - Organization and project-level roles assigned to one MongoDB Cloud user within one organization.
+* `team_ids` - List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+* `username` - Email address that represents the username of the MongoDB Cloud user.
+* `country` - Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+* `invitation_created_at` - Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+* `invitation_expires_at` - Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+* `inviter_username` - Username of the MongoDB Cloud user who sent the invitation to join the organization.
+* `created_at` - Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+* `first_name` - First or given name that belongs to the MongoDB Cloud user.
+* `last_auth` - Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+* `last_name` - Last name, family name, or surname that belongs to the MongoDB Cloud user.
+* `mobile_number` - Mobile phone number that belongs to the MongoDB Cloud user.
+
+~> **NOTE:** Users with pending invitations created using the [`mongodbatlas_project_invitation`](../resources/project_invitation.md) resource or via the deprecated [Invite One MongoDB Cloud User to Join One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createprojectinvitation) endpoint are not returned by this data source. See the [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-mongodb-cloud-users) for details. To manage these users, refer to the [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](../guides/atlas-user-management).
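+
+A minimal sketch of reading team members through the new `users` attribute (variable and output names are illustrative):
+
+```terraform
+data "mongodbatlas_team" "this" {
+  org_id  = var.org_id
+  team_id = var.team_id
+}
+
+# Usernames of pending and active team members, replacing the deprecated `usernames` attribute.
+output "team_user_emails" {
+  value = [for u in data.mongodbatlas_team.this.users : u.username]
+}
+```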
See detailed information for arguments and attributes: [MongoDB API Teams](https://docs.atlas.mongodb.com/reference/api/teams-create-one/)
diff --git a/docs/data-sources/team_project_assignment.md b/docs/data-sources/team_project_assignment.md
new file mode 100644
index 0000000000..403493de7a
--- /dev/null
+++ b/docs/data-sources/team_project_assignment.md
@@ -0,0 +1,38 @@
+---
+subcategory: "Teams"
+---
+
+# Data Source: mongodbatlas_team_project_assignment
+
+`mongodbatlas_team_project_assignment` provides a Team Project Assignment data source. The data source lets you retrieve a team assigned to a project.
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_team_project_assignment" "this" {
+ project_id = var.project_id
+ team_id = var.team_id
+ role_names = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_team_project_assignment" "this" {
+ project_id = mongodbatlas_team_project_assignment.this.project_id
+ team_id = mongodbatlas_team_project_assignment.this.team_id
+}
+```
+
+
+## Schema
+
+### Required
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listprojects) endpoint to retrieve all projects to which the authenticated user has access.
+
+**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.
+- `team_id` (String) Unique 24-hexadecimal character string that identifies the team.
+
+### Read-Only
+
+- `role_names` (Set of String) One or more project-level roles assigned to the team.
+
+For more information, see: [MongoDB Atlas API - Teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectteam) Documentation.
diff --git a/docs/data-sources/teams.md b/docs/data-sources/teams.md
deleted file mode 100644
index 139c3ff5f0..0000000000
--- a/docs/data-sources/teams.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This datasource is deprecated, use `mongodbatlas_team`
-
-# Data Source: mongodbatlas_teams
-
-This data source is deprecated. Please transition to using `mongodbatlas_team` which defines the same underlying implementation, aligning the name of the data source with the implementation which fetches a single team.
-
-In the future this data source will define a new implementation capable of fetching all teams in one organization.
-
diff --git a/docs/data-sources/third_party_integration.md b/docs/data-sources/third_party_integration.md
index c7b7f236c9..2adb4f2b56 100644
--- a/docs/data-sources/third_party_integration.md
+++ b/docs/data-sources/third_party_integration.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Third-Party Integrations"
+---
+
# Data Source: mongodbatlas_third_party_integration
`mongodbatlas_third_party_integration` describes a Third-Party Integration Settings for the given type.
diff --git a/docs/data-sources/third_party_integrations.md b/docs/data-sources/third_party_integrations.md
index 93b115a333..b9cd7a646d 100644
--- a/docs/data-sources/third_party_integrations.md
+++ b/docs/data-sources/third_party_integrations.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Third-Party Integrations"
+---
+
# Data Source: mongodbatlas_third_party_integrations
`mongodbatlas_third_party_integrations` describes all Third-Party Integration Settings. This represents two Third-Party services `PAGER_DUTY` and `DATADOG`
diff --git a/docs/data-sources/x509_authentication_database_user.md b/docs/data-sources/x509_authentication_database_user.md
index e3a9509289..f98e7896f2 100644
--- a/docs/data-sources/x509_authentication_database_user.md
+++ b/docs/data-sources/x509_authentication_database_user.md
@@ -1,3 +1,7 @@
+---
+subcategory: "X.509 Authentication"
+---
+
# Data Source: mongodbatlas_x509_authentication_database_user
`mongodbatlas_x509_authentication_database_user` describes a X509 Authentication Database User. This represents a X509 Authentication Database User.
diff --git a/docs/guides/0.6.0-upgrade-guide.md b/docs/guides/0.6.0-upgrade-guide.md
index f8afc7ba5b..7297af1a1d 100644
--- a/docs/guides/0.6.0-upgrade-guide.md
+++ b/docs/guides/0.6.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 0.6.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 0"
---
# MongoDB Atlas Provider 0.6.0: Upgrade Guide
diff --git a/docs/guides/0.8.0-upgrade-guide.md b/docs/guides/0.8.0-upgrade-guide.md
index 00eee45257..e27963d2a9 100644
--- a/docs/guides/0.8.0-upgrade-guide.md
+++ b/docs/guides/0.8.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 0.8.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 0"
---
# MongoDB Atlas Provider v0.8.0: Upgrade and Information Guide
diff --git a/docs/guides/0.8.2-upgrade-guide.md b/docs/guides/0.8.2-upgrade-guide.md
index 3b152e9d4f..a1a0dbc71f 100644
--- a/docs/guides/0.8.2-upgrade-guide.md
+++ b/docs/guides/0.8.2-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 0.8.2"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 0"
---
# MongoDB Atlas Provider v0.8.2: Upgrade and Information Guide
@@ -64,4 +64,3 @@ configuration and real physical resources that exist. As a result, no
actions need to be performed.
```
-
diff --git a/docs/guides/0.9.0-upgrade-guide.md b/docs/guides/0.9.0-upgrade-guide.md
index 172865819f..62c7938067 100644
--- a/docs/guides/0.9.0-upgrade-guide.md
+++ b/docs/guides/0.9.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 0.9.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 0"
---
# MongoDB Atlas Provider v0.9.0: Upgrade and Information Guide
diff --git a/docs/guides/0.9.1-upgrade-guide.md b/docs/guides/0.9.1-upgrade-guide.md
index 68f89a77f9..36ef8e118e 100644
--- a/docs/guides/0.9.1-upgrade-guide.md
+++ b/docs/guides/0.9.1-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 0.9.1"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 0"
---
# MongoDB Atlas Provider v0.9.1: Upgrade and Information Guide
diff --git a/docs/guides/1.0.0-upgrade-guide.md b/docs/guides/1.0.0-upgrade-guide.md
index e87b53d822..26c57607aa 100644
--- a/docs/guides/1.0.0-upgrade-guide.md
+++ b/docs/guides/1.0.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.0.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.0.0: Upgrade and Information Guide
@@ -378,4 +378,4 @@ so no changes are needed.
* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
-* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
\ No newline at end of file
+* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
diff --git a/docs/guides/1.0.1-upgrade-guide.md b/docs/guides/1.0.1-upgrade-guide.md
index 9f50f47b53..77238e1581 100644
--- a/docs/guides/1.0.1-upgrade-guide.md
+++ b/docs/guides/1.0.1-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.0.1"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider v1.0.1: Upgrade and Information Guide
@@ -86,4 +86,4 @@ resource "mongodbatlas_search_index" "test" {
* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
-* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
\ No newline at end of file
+* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
diff --git a/docs/guides/1.1.0-upgrade-guide.md b/docs/guides/1.1.0-upgrade-guide.md
index fdc673dc0d..51cf3a9d7d 100644
--- a/docs/guides/1.1.0-upgrade-guide.md
+++ b/docs/guides/1.1.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.1.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.1.0/1.1.1: Upgrade and Information Guide
@@ -95,4 +95,4 @@ so no changes are needed.
* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
-* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
\ No newline at end of file
+* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
diff --git a/docs/guides/1.10.0-upgrade-guide.md b/docs/guides/1.10.0-upgrade-guide.md
index b123c6e366..e823688bb2 100644
--- a/docs/guides/1.10.0-upgrade-guide.md
+++ b/docs/guides/1.10.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.10.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.10.0: Upgrade and Information Guide
diff --git a/docs/guides/1.11.0-upgrade-guide.md b/docs/guides/1.11.0-upgrade-guide.md
index be6663d113..ad75550e4b 100644
--- a/docs/guides/1.11.0-upgrade-guide.md
+++ b/docs/guides/1.11.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.11.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.11.0: Upgrade and Information Guide
diff --git a/docs/guides/1.12.0-upgrade-guide.md b/docs/guides/1.12.0-upgrade-guide.md
index 3bb80e8ffe..774490dad1 100644
--- a/docs/guides/1.12.0-upgrade-guide.md
+++ b/docs/guides/1.12.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.12.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.12.0: Upgrade and Information Guide
diff --git a/docs/guides/1.13.0-upgrade-guide.md b/docs/guides/1.13.0-upgrade-guide.md
index 9be9c98dbe..3e0b7eee32 100644
--- a/docs/guides/1.13.0-upgrade-guide.md
+++ b/docs/guides/1.13.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.13.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.13.0: Upgrade and Information Guide
diff --git a/docs/guides/1.14.0-upgrade-guide.md b/docs/guides/1.14.0-upgrade-guide.md
index 34f70bd8c7..4a29bc8c99 100644
--- a/docs/guides/1.14.0-upgrade-guide.md
+++ b/docs/guides/1.14.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.14.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.14.0: Upgrade and Information Guide
diff --git a/docs/guides/1.15.0-upgrade-guide.md b/docs/guides/1.15.0-upgrade-guide.md
index 023aba9030..4e4ad0a457 100644
--- a/docs/guides/1.15.0-upgrade-guide.md
+++ b/docs/guides/1.15.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.15.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.15.0: Upgrade and Information Guide
diff --git a/docs/guides/1.16.0-upgrade-guide.md b/docs/guides/1.16.0-upgrade-guide.md
index b0778a656c..8c73cce109 100644
--- a/docs/guides/1.16.0-upgrade-guide.md
+++ b/docs/guides/1.16.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.16.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.16.0: Upgrade and Information Guide
diff --git a/docs/guides/1.17.0-upgrade-guide.md b/docs/guides/1.17.0-upgrade-guide.md
index 5ba8b8f0cf..495bfe823e 100644
--- a/docs/guides/1.17.0-upgrade-guide.md
+++ b/docs/guides/1.17.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.17.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.17.0: Upgrade and Information Guide
diff --git a/docs/guides/1.18.0-upgrade-guide.md b/docs/guides/1.18.0-upgrade-guide.md
index 85953a4aa2..040e6e499d 100644
--- a/docs/guides/1.18.0-upgrade-guide.md
+++ b/docs/guides/1.18.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.18.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.18.0: Upgrade and Information Guide
diff --git a/docs/guides/1.19.0-upgrade-guide.md b/docs/guides/1.19.0-upgrade-guide.md
index 0da8b5ba58..b086a8fcf7 100644
--- a/docs/guides/1.19.0-upgrade-guide.md
+++ b/docs/guides/1.19.0-upgrade-guide.md
@@ -1,5 +1,6 @@
---
page_title: "Upgrade Guide 1.19.0"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.19.0: Upgrade and Information Guide
diff --git a/docs/guides/1.2.0-upgrade-guide.md b/docs/guides/1.2.0-upgrade-guide.md
index f46f18af2d..03aafb9b15 100644
--- a/docs/guides/1.2.0-upgrade-guide.md
+++ b/docs/guides/1.2.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.2.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.2.0: Upgrade and Information Guide
@@ -24,4 +24,4 @@ Changes:
* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
-* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
\ No newline at end of file
+* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
diff --git a/docs/guides/1.20.0-upgrade-guide.md b/docs/guides/1.20.0-upgrade-guide.md
index cb01ee599c..286aabb9d4 100644
--- a/docs/guides/1.20.0-upgrade-guide.md
+++ b/docs/guides/1.20.0-upgrade-guide.md
@@ -1,5 +1,6 @@
---
page_title: "Upgrade Guide 1.20.0"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.20.0: Upgrade and Information Guide
diff --git a/docs/guides/1.3.0-upgrade-guide.md b/docs/guides/1.3.0-upgrade-guide.md
index 34e4d6b692..3cf76253e0 100644
--- a/docs/guides/1.3.0-upgrade-guide.md
+++ b/docs/guides/1.3.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.3.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.3.0: Upgrade and Information Guide
diff --git a/docs/guides/1.4.0-upgrade-guide.md b/docs/guides/1.4.0-upgrade-guide.md
index ac38a87f62..77a94f5641 100644
--- a/docs/guides/1.4.0-upgrade-guide.md
+++ b/docs/guides/1.4.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.4.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.4.0: Upgrade and Information Guide
diff --git a/docs/guides/1.5.0-upgrade-guide.md b/docs/guides/1.5.0-upgrade-guide.md
index 2e305bd48f..d820111826 100644
--- a/docs/guides/1.5.0-upgrade-guide.md
+++ b/docs/guides/1.5.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.5.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.5.0: Upgrade and Information Guide
diff --git a/docs/guides/1.6.0-upgrade-guide.md b/docs/guides/1.6.0-upgrade-guide.md
index 57dd04b2c2..ef45178d26 100644
--- a/docs/guides/1.6.0-upgrade-guide.md
+++ b/docs/guides/1.6.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.6.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.6.0: Upgrade and Information Guide
diff --git a/docs/guides/1.7.0-upgrade-guide.md b/docs/guides/1.7.0-upgrade-guide.md
index ee9988f593..63d3a3fc75 100644
--- a/docs/guides/1.7.0-upgrade-guide.md
+++ b/docs/guides/1.7.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.7.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.7.0: Upgrade and Information Guide
diff --git a/docs/guides/1.8.0-upgrade-guide.md b/docs/guides/1.8.0-upgrade-guide.md
index a10c0ac787..a1113f48e6 100644
--- a/docs/guides/1.8.0-upgrade-guide.md
+++ b/docs/guides/1.8.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.8.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.8.0: Upgrade and Information Guide
diff --git a/docs/guides/1.9.0-upgrade-guide.md b/docs/guides/1.9.0-upgrade-guide.md
index cd5133a922..db207549e2 100644
--- a/docs/guides/1.9.0-upgrade-guide.md
+++ b/docs/guides/1.9.0-upgrade-guide.md
@@ -1,6 +1,6 @@
---
page_title: "Upgrade Guide 1.9.0"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# MongoDB Atlas Provider 1.9.0: Upgrade and Information Guide
diff --git a/docs/guides/2.0.0-upgrade-guide.md b/docs/guides/2.0.0-upgrade-guide.md
new file mode 100644
index 0000000000..e24599d370
--- /dev/null
+++ b/docs/guides/2.0.0-upgrade-guide.md
@@ -0,0 +1,134 @@
+---
+layout: "mongodbatlas"
+page_title: "Upgrade Guide 2.0.0"
+description: |-
+  MongoDB Atlas Provider 2.0.0: Upgrade and Information Guide
+---
+
+# MongoDB Atlas Provider 2.0.0: Upgrade and Information Guide
+
+The Terraform MongoDB Atlas Provider version 2.0.0 introduces new features, breaking changes, and resource deprecations/removals.
+
+Use this guide to understand what’s new, what requires migration, and how to update your configurations.
+
+## New Resources, Data Sources, and Features
+
+### New `delete_on_create_timeout` attribute
+
+Multiple resources now support a `delete_on_create_timeout` boolean attribute that controls cleanup behavior when resource creation times out. When set to `true` (default), the provider will delete the underlying resource if the create operation times out, helping to avoid orphaned resources. This attribute is available in the following resources:
+ - `mongodbatlas_advanced_cluster`
+ - `mongodbatlas_cloud_provider_access_setup`
+ - `mongodbatlas_cloud_backup_snapshot`
+ - `mongodbatlas_cluster_outage_simulation`
+ - `mongodbatlas_encryption_at_rest_private_endpoint`
+ - `mongodbatlas_flex_cluster`
+ - `mongodbatlas_network_peering`
+ - `mongodbatlas_online_archive`
+ - `mongodbatlas_privatelink_endpoint`
+ - `mongodbatlas_privatelink_endpoint_service`
+ - `mongodbatlas_push_based_log_export`
+ - `mongodbatlas_search_deployment`
+ - `mongodbatlas_stream_processor`
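+
+As a minimal sketch on a Flex cluster (all attributes other than `delete_on_create_timeout` are illustrative values; check the resource documentation for the full schema):
+
+```terraform
+resource "mongodbatlas_flex_cluster" "this" {
+  project_id = var.project_id
+  name       = "flex-cluster"
+  provider_settings = {
+    backing_provider_name = "AWS"
+    region_name           = "US_EAST_1"
+  }
+
+  # If creation exceeds the create timeout, delete the partially created
+  # cluster instead of leaving it orphaned (true is the default).
+  delete_on_create_timeout = true
+}
+```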
+
+### New Cloud User Assignment Resources
+
+New user and team management resources replace the deprecated invitation-based workflows:
+ - `mongodbatlas_cloud_user_org_assignment` resource and data source: Manages user membership in organizations with structured roles and support for both pending and active memberships.
+ - `mongodbatlas_cloud_user_project_assignment` resource and data source: Manages user membership in projects with automatic invitation handling based on organization membership status.
+ - `mongodbatlas_cloud_user_team_assignment` resource and data source: Manages user membership in teams with support for both username and user ID.
+ - `mongodbatlas_team_project_assignment` resource and data source: Manages team assignments to projects.
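+
+As an illustrative sketch of the org-level assignment (the `roles.org_roles` shape and the username value are assumptions; verify against the resource reference):
+
+```terraform
+resource "mongodbatlas_cloud_user_org_assignment" "this" {
+  org_id   = var.org_id
+  username = "jane.doe@example.com" # hypothetical Atlas user
+  roles = {
+    org_roles = ["ORG_MEMBER"]
+  }
+}
+```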
+
+## Breaking Changes
+
+### `mongodbatlas_advanced_cluster`
+Resource:
+ - The following attributes have been removed: `id`, `disk_size_gb`, `replication_specs.#.num_shards`, `replication_specs.#.id`, `advanced_configuration.default_read_concern`, `advanced_configuration.fail_index_key_too_long`.
+ - Several blocks, such as `replication_specs`, `region_configs`, and `advanced_configuration`, are now attributes, which requires syntax changes in existing configurations.
+ - Only the new sharding configuration (one `replication_spec` for each shard) that allows scaling shards independently is supported.
+ - For details, see [Migration Guide: Advanced Cluster (v1.x → v2.0.0)](migrate-to-advanced-cluster-2.0.md).
+
+Data source:
+ - The following attributes have been removed: `id`, `disk_size_gb`, `replication_specs.#.num_shards`, `replication_specs.#.id`, `advanced_configuration.default_read_concern`, `advanced_configuration.fail_index_key_too_long`, `use_replication_spec_per_shard`.
+ - The data sources now return only the new sharding configuration (one `replication_spec` for each shard), which allows scaling shards independently.
+ - For details, see [Migration Guide: Advanced Cluster (v1.x → v2.0.0)](migrate-to-advanced-cluster-2.0.md).
+
+### `mongodbatlas_cloud_backup_schedule`
+Resource:
+ - `export` and `auto_export_enabled` are now optional-only arguments. This makes changes to their configuration easier to track.
+ - `copy_settings.#.replication_spec_id` attribute has been removed. Use `copy_settings.#.zone_id` instead. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones).
+
+Data source:
+ - `copy_settings.#.replication_spec_id` and `use_zone_id_for_copy_settings` attributes have been removed. Remove any usage of `use_zone_id_for_copy_settings` and replace any references to `copy_settings.#.replication_spec_id` in your configurations with `copy_settings.#.zone_id`. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones).
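+
+A hedged before/after sketch of the `copy_settings` change (the surrounding schedule attributes are illustrative; the `zone_id` swap is the only point being shown):
+
+```terraform
+resource "mongodbatlas_cloud_backup_schedule" "this" {
+  project_id   = var.project_id
+  cluster_name = var.cluster_name
+
+  copy_settings {
+    # replication_spec_id = var.replication_spec_id  # removed in 2.0.0
+    zone_id            = var.zone_id # reference the zone ID instead
+    cloud_provider     = "AWS"
+    region_name        = "US_EAST_1"
+    frequencies        = ["DAILY"]
+    should_copy_oplogs = false
+  }
+}
+```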
+
+
+### `mongodbatlas_maintenance_window`
+Resource:
+ - `hour_of_day` is now required. This change prevents errors when creating new resources.
+
+### `mongodbatlas_custom_db_role`
+Resource:
+ - `actions` is now a `TypeSet`. This prevents plan diffs when the `actions` elements are reordered.
+
+### `mongodbatlas_global_cluster_config`
+Resource:
+ - `custom_zone_mapping` attribute has been removed. Use `custom_zone_mapping_zone_id` instead. To learn more, see the [Sharding Configuration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema).
+
+Data source:
+ - `custom_zone_mapping` attribute has been removed. Use `custom_zone_mapping_zone_id` instead. To learn more, see the [Sharding Configuration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema).
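+
+For example, references to the removed map can typically be swapped for the zone-ID map, assuming the zone name keys stay the same (a sketch, not verified against your configuration):
+
+```terraform
+output "zone1_id" {
+  # custom_zone_mapping["Zone 1"] was removed; custom_zone_mapping_zone_id
+  # is keyed by the same zone names and returns zone IDs.
+  value = mongodbatlas_global_cluster_config.this.custom_zone_mapping_zone_id["Zone 1"]
+}
+```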
+
+
+
+## Removed Resources and Data Sources
+
+ - `mongodbatlas_cloud_provider_snapshot`
+ This deprecated resource and its corresponding data sources have been removed. Use `mongodbatlas_cloud_backup_snapshot` instead.
+
+ - `mongodbatlas_cloud_provider_snapshot_backup_policy`
+ This deprecated resource and its corresponding data sources have been removed. Use `mongodbatlas_cloud_backup_schedule` instead.
+
+ - `mongodbatlas_cloud_provider_snapshot_restore_job`
+ This deprecated resource and its corresponding data sources have been removed. Use `mongodbatlas_cloud_backup_snapshot_restore_job` instead.
+
+ - `mongodbatlas_privatelink_endpoint_serverless`
+ This deprecated resource and its corresponding data sources have been removed. For more details, see the [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+
+ - `mongodbatlas_privatelink_endpoint_service_serverless`
+ This deprecated resource and its corresponding data sources have been removed. For more details, see the [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+
+ - `mongodbatlas_teams`
+ This deprecated resource and its corresponding data sources have been removed. Use `mongodbatlas_team` instead.
+
+## Deprecations
+
+Version 2.0.0 introduces several deprecations as part of modernizing the provider's user and invitation management. These deprecated resources and attributes **will be removed in a future major version release**. Migration guides are available to help transition to the recommended replacements:
+
+### Resources and Data sources
+
+- **`mongodbatlas_cluster` resource and data sources**: Deprecated. Use `mongodbatlas_advanced_cluster` resource and data sources instead. See [the Cluster to Advanced Cluster Migration Guide](../guides/cluster-to-advanced-cluster-migration-guide).
+
+- **`mongodbatlas_org_invitation` resource and data source**: Deprecated. Use `mongodbatlas_cloud_user_org_assignment` resource instead. See the [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management.md) for migration steps.
+
+- **`mongodbatlas_project_invitation` resource and data source**: Deprecated. Use `mongodbatlas_cloud_user_project_assignment` resource instead. See the [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management.md) for migration steps.
+
+- **`mongodbatlas_atlas_user` data source**: Deprecated. Use `mongodbatlas_cloud_user_org_assignment` data source instead. See the [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management.md) for migration steps.
+
+- **`mongodbatlas_atlas_users` data source**: Deprecated. Use the `users` attribute on the `mongodbatlas_organization`, `mongodbatlas_project`, or `mongodbatlas_team` data source instead. See the [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management.md) for migration steps.
+
+### Attributes
+
+- **`teams` attribute in `mongodbatlas_project` resource and data source**: Deprecated. Use the `mongodbatlas_team_project_assignment` resource and data source to manage team membership to projects. See the [Project Teams Attribute to Team Project Assignment Resource Migration Guide](../guides/atlas-user-management.md) for migration steps.
+
+- **`email_address` attribute in `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users` data sources**: Deprecated. See the [Migration Guide: Migrate off deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users`](../guides/atlas-user-management.md) for migration steps.
+
+- **`usernames` attribute in `mongodbatlas_team` resource and data source**: Deprecated. Use `mongodbatlas_cloud_user_team_assignment` resource to manage team membership per user and team. See the [Team Usernames Attribute to Cloud User Team Assignment Migration guide](../guides/atlas-user-management.md) for migration steps.
+
+Version 2.0.0 also includes general improvements, bug fixes, and several key documentation updates. See the [CHANGELOG](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/CHANGELOG.md) for more specific information.
+
+
+### Helpful Links
+
+* [Report bugs](https://github.com/mongodb/terraform-provider-mongodbatlas/issues)
+
+* [Request Features](https://feedback.mongodb.com/forums/924145-atlas?category_id=370723)
+
+* [Contact Support](https://docs.atlas.mongodb.com/support/) covered by MongoDB Atlas support plans, Developer and above.
diff --git a/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md b/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md
index d48746e2dc..bf1c486023 100644
--- a/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md
+++ b/docs/guides/Programmatic-API-Key-upgrade-guide-1.10.0.md
@@ -1,6 +1,6 @@
---
page_title: "Migration Guide: Programmatic API Key (v1.10.0)"
-subcategory: "Older Guides"
+subcategory: "Older Guides - Version 1"
---
# Migration Guide: Programmatic API Key (v1.10.0)
diff --git a/docs/guides/advanced-cluster-new-sharding-schema.md b/docs/guides/advanced-cluster-new-sharding-schema.md
index 5e0a42e96d..3d2d5e3f03 100644
--- a/docs/guides/advanced-cluster-new-sharding-schema.md
+++ b/docs/guides/advanced-cluster-new-sharding-schema.md
@@ -4,10 +4,11 @@ page_title: "Migration Guide: Advanced Cluster New Sharding Configurations"
# Migration Guide: Advanced Cluster New Sharding Configurations
-**Objective**: Use this guide to migrate your existing `advanced_cluster` resources to support new sharding configurations introduced in version 1.18.0. The new sharding configurations allow you to scale shards independently. Additionally, as of version 1.23.0, compute auto-scaling supports scaling instance sizes independently for each shard when using the new sharding configuration. Existing sharding configurations continue to work, but you will receive deprecation messages if you continue to use them.
+**Objective**: Use this guide to migrate your existing `mongodbatlas_advanced_cluster` resources that may still use the legacy sharding schema _(i.e., `num_shards`, which was deprecated in v1.18.0 and removed in v2.0.0)_ to the new sharding configurations. The new sharding configurations allow you to scale shards independently. Additionally, compute auto-scaling supports scaling instance sizes independently for each shard when using the new sharding configuration.
-Note: Once applied, the `advanced_cluster` resource making use of the new sharding configuration will not be able to transition back to the old sharding configuration.
+Note: Once applied, the `mongodbatlas_advanced_cluster` resource making use of the new sharding configuration will not be able to transition back to the old sharding configuration.
+- [Prerequisites](#prerequisites)
- [Migration Guide: Advanced Cluster New Sharding Configurations](#migration-guide-advanced-cluster-new-sharding-schema)
- [Changes Overview](#changes-overview)
- [Migrate advanced\_cluster type `SHARDED`](#migrate-advanced_cluster-type-sharded)
@@ -18,10 +19,15 @@ Note: Once applied, the `advanced_cluster` resource making use of the new shardi
- [Resources and Data Sources Impacted by Independent Shard Scaling](#resources-and-data-sources-impacted-by-independent-shard-scaling)
- [Data Source Transition for Asymmetric Clusters](#data-source-transition-for-asymmetric-clusters)
+## Prerequisites
+- Upgrade to MongoDB Atlas Terraform Provider 2.0.0 or later
+- Ensure your `mongodbatlas_advanced_cluster` resource configuration uses the latest syntax, as described in **Steps 1 & 2** of the [Migration Guide: Advanced Cluster (v1.x → v2.0.0)](migrate-to-advanced-cluster-2.0.md#how-to-migrate). **Note:** The syntax changes in that guide and the changes in this guide must be applied together in a single step **once the plan is empty**; do not make these updates separately.
+
+
## Changes Overview
`replication_specs` attribute now represents each individual cluster's shard with a unique replication spec element.
-When you use the new sharding configurations, it will no longer use the existing attribute `num_shards`, and instead the number of shards are defined by the number of `replication_specs` elements.
+When you use the new sharding configurations, the deprecated `num_shards` attribute _(removed in v2.0.0)_ is no longer used; instead, the number of shards is defined by the number of `replication_specs` elements.
### Migrate advanced_cluster type `SHARDED`
@@ -32,11 +38,10 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "SymmetricShardedCluster"
cluster_type = "SHARDED"
- replication_specs {
- # deprecation warning will be encoutered for using num_shards
- num_shards = 2
- region_configs {
- electable_specs {
+ replication_specs = [{
+ num_shards = 2 # this attribute has been removed in v2.0.0
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
disk_iops = 3000
node_count = 3
@@ -44,12 +49,12 @@ resource "mongodbatlas_advanced_cluster" "test" {
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
```
-In order to use our new sharding configurations, we will remove the use of `num_shards` and add a new identical `replication_specs` element for each shard. Note that these 2 changes must be done at the same time.
+To use the new sharding configurations, remove `num_shards` and add an identical `replication_specs` element for each shard. Note that all changes must be made at the same time.
```
resource "mongodbatlas_advanced_cluster" "test" {
@@ -57,9 +62,9 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "SymmetricShardedCluster"
cluster_type = "SHARDED"
- replication_specs { # first shard
- region_configs {
- electable_specs {
+ replication_specs = [{ # first shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
disk_iops = 3000
node_count = 3
@@ -67,12 +72,11 @@ resource "mongodbatlas_advanced_cluster" "test" {
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
-
- replication_specs { # second shard
- region_configs {
- electable_specs {
+ }]
+ },
+ { # second shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
disk_iops = 3000
node_count = 3
@@ -80,16 +84,14 @@ resource "mongodbatlas_advanced_cluster" "test" {
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
```
-This updated configuration will trigger a Terraform update plan. However, the underlying cluster will not face any changes after the `apply` command, as both configurations represent a sharded cluster composed of two shards.
-
### Migrate advanced_cluster type `GEOSHARDED`
-Consider the following configuration of a `GEOSHARDED` cluster using the deprecated `num_shards`:
+Consider the following configuration of a `GEOSHARDED` cluster using the `num_shards` attribute (deprecated and removed in v2.0.0):
```
resource "mongodbatlas_advanced_cluster" "test" {
@@ -97,34 +99,33 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "GeoShardedCluster"
cluster_type = "GEOSHARDED"
- replication_specs {
+ replication_specs = [{
zone_name = "zone n1"
- num_shards = 2
- region_configs {
- electable_specs {
+ num_shards = 2 # this attribute has been removed in v2.0.0
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
-
- replication_specs {
+ }]
+ },
+ {
zone_name = "zone n2"
- num_shards = 2
+ num_shards = 2 # this attribute has been removed in v2.0.0
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
```
@@ -136,62 +137,58 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "GeoShardedCluster"
cluster_type = "GEOSHARDED"
- replication_specs { # first shard for zone n1
- zone_name = "zone n1"
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
- }
-
- replication_specs { # second shard for zone n1
- zone_name = "zone n1"
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
- }
-
- replication_specs { # first shard for zone n2
- zone_name = "zone n2"
- region_configs {
- electable_specs {
+ replication_specs = [
+ { # first shard for zone n1
+ zone_name = "zone n1"
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
-
- replication_specs { # second shard for zone n2
- zone_name = "zone n2"
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ }]
+ },
+ { # second shard for zone n1
+ zone_name = "zone n1"
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ }]
+ },
+ { # first shard for zone n2
+ zone_name = "zone n2"
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ },
+ { # second shard for zone n2
+ zone_name = "zone n2"
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ }]
}
```
-This updated configuration triggers a Terraform update plan. However, the underlying cluster will not face any changes after the `apply` command, as both configurations represent a geo sharded cluster with two zones and two shards in each one.
-
### Migrate advanced_cluster type `REPLICASET`
To learn more, see the documentation on [transitioning from a replica set to a sharded cluster](https://www.mongodb.com/docs/atlas/scale-cluster/#scale-your-replica-set-to-a-sharded-cluster).
@@ -203,17 +200,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "ReplicaSetTransition"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AZURE"
priority = 7
region_name = "US_EAST"
- }
- }
+ }]
+ }]
}
```
@@ -227,17 +224,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "ReplicaSetTransition"
cluster_type = "SHARDED"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AZURE"
priority = 7
region_name = "US_EAST"
- }
- }
+ }]
+ }]
}
```
@@ -251,35 +248,34 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "ReplicaSetTransition"
cluster_type = "SHARDED"
- replication_specs { # first shard
- region_configs {
- electable_specs {
+ replication_specs = [{ # first shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AZURE"
priority = 7
region_name = "US_EAST"
- }
- }
-
- replication_specs { # second shard
- region_configs {
- electable_specs {
+ }]
+ },
+ { # second shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AZURE"
priority = 7
region_name = "US_EAST"
- }
- }
+ }]
+ }]
}
```
## Use Independent Shard Scaling
-Use the new sharding configurations. Each shard must be represented with a unique `replication_specs` element and `num_shards` must not be used, as illustrated in the following example.
+Use the new sharding configurations. Each shard must be represented with a unique `replication_specs` element and `num_shards` must be removed, as illustrated in the following example.
```
resource "mongodbatlas_advanced_cluster" "test" {
@@ -287,29 +283,29 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "ShardedCluster"
cluster_type = "SHARDED"
- replication_specs { # first shard
- region_configs {
- electable_specs {
+ replication_specs = [
+ { # first shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
-
- replication_specs { # second shard
- region_configs {
- electable_specs {
+ }]
+ },
+ { # second shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
```
@@ -323,35 +319,35 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "ShardedCluster"
cluster_type = "SHARDED"
- replication_specs { # first shard upgraded to M40
- region_configs {
- electable_specs {
+ replication_specs = [
+ { # first shard upgraded to M40
+ region_configs = [{
+ electable_specs = {
instance_size = "M40"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
-
- replication_specs { # second shard preserves M30
- region_configs {
- electable_specs {
+ }]
+ },
+ { # second shard preserves M30
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
```
## Use Auto-Scaling Per Shard
-As of version 1.23.0, enabled `compute` auto-scaling (either `auto_scaling` or `analytics_auto_scaling`) will scale the `instance_size` of each shard independently. Each shard must be represented with a unique `replication_specs` element and `num_shards` must not be used. On the contrary, if using deprecated `num_shards` or a lower version, enabled compute auto-scaling will scale uniformily across all shards in the cluster.
+As of version 1.23.0, enabled `compute` auto-scaling (either `auto_scaling` or `analytics_auto_scaling`) will scale the `instance_size` of each shard independently. Each shard must be represented with a unique `replication_specs` element and `num_shards` must not be used.
The following example illustrates a configuration that has compute auto-scaling per shard for electable and analytic nodes.
@@ -360,60 +356,61 @@ resource "mongodbatlas_advanced_cluster" "test" {
project_id = var.project_id
name = "AutoScalingCluster"
cluster_type = "SHARDED"
- replication_specs { # first shard
- region_configs {
- electable_specs {
+
+ replication_specs = [{ # first shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M40"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M40"
node_count = 1
}
- auto_scaling {
+ auto_scaling = {
compute_enabled = true
compute_max_instance_size = "M60"
}
- analytics_auto_scaling {
+ analytics_auto_scaling = {
compute_enabled = true
compute_max_instance_size = "M60"
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
+ }]
zone_name = "Zone 1"
- }
- replication_specs { # second shard
- region_configs {
- electable_specs {
+ },
+ { # second shard
+ region_configs = [{
+ electable_specs = {
instance_size = "M40"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M40"
node_count = 1
}
- auto_scaling {
+ auto_scaling = {
compute_enabled = true
compute_max_instance_size = "M60"
}
- analytics_auto_scaling {
+ analytics_auto_scaling = {
compute_enabled = true
compute_max_instance_size = "M60"
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
+ }]
zone_name = "Zone 1"
- }
+ }]
lifecycle { # avoids future non-empty plans as instance size start to scale from initial values
ignore_changes = [
- replication_specs[0].region_configs[0].electable_specs[0].instance_size,
- replication_specs[0].region_configs[0].analytics_specs[0].instance_size,
- replication_specs[1].region_configs[0].electable_specs[0].instance_size,
- replication_specs[1].region_configs[0].analytics_specs[0].instance_size
+ replication_specs[0].region_configs[0].electable_specs.instance_size,
+ replication_specs[0].region_configs[0].analytics_specs.instance_size,
+ replication_specs[1].region_configs[0].electable_specs.instance_size,
+ replication_specs[1].region_configs[0].analytics_specs.instance_size
]
}
}
@@ -421,10 +418,9 @@ resource "mongodbatlas_advanced_cluster" "test" {
While the example initially defines 2 symmetric shards, auto-scaling of `electable_specs` or `analytic_specs` can lead to asymmetric shards due to changes in `instance_size`.
--> **NOTE:** In the following scenarios, a `mongodbatlas_advanced_cluster` using the new sharding configuration (single `replication_specs` per shard) might not have shard-level auto-scaling enabled:
-1. Configuration was defined prior to version 1.23.0 when auto-scaling per shard feature was released.
-2. Cluster was imported from a legacy schema (For example, `mongodbatlas_cluster` or `mongodbatlas_advanced_cluster` using `num_shards` > 1).
-In these cases, you must update the cluster configuration to activate the auto-scaling per shard feature. This can be done by temporarily modifying a value like `compute_min_instance_size`.
+-> **NOTE:** In the following scenarios, a `mongodbatlas_advanced_cluster` using the new sharding configuration (single `replication_specs` per shard) might not have shard-level auto-scaling enabled:
1. Configuration was defined prior to version 1.23.0 when auto-scaling per shard feature was released.
2. Cluster was imported from a legacy schema (For example, `mongodbatlas_cluster` or `mongodbatlas_advanced_cluster` using `num_shards` > 1).
+
3. Configuration is updated directly from a v1.x version of the provider to v2.0.0+, as no update is triggered.
+
In these cases, you must update the cluster configuration to activate the auto-scaling per shard feature. This can be done by temporarily modifying a value like `compute_min_instance_size`.
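
For example, temporarily changing a value such as `compute_min_instance_size` inside the `auto_scaling` block and applying is enough to trigger the update. The following is a sketch using the new v2.0.0 attribute syntax (the surrounding `region_configs` element is omitted); revert or keep the value afterwards as desired:

```
auto_scaling = {
  compute_enabled           = true
  compute_min_instance_size = "M30" # temporarily set or changed to trigger an update
  compute_max_instance_size = "M60"
}
```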
-> **NOTE:** See the table [below](#resources-and-data-sources-impacted-by-independent-shard-scaling) for other impacted resources when a cluster transitions to independently scaled shards.
@@ -432,11 +428,9 @@ In these cases, you must update the cluster configuration to activate the auto-s
Name | Changes | Transition Guide
--- | --- | ---
-`mongodbatlas_advanced_cluster` | Data source must use the `use_replication_spec_per_shard` attribute. | -
`mongodbatlas_advanced_cluster` | Use `replication_specs.#.zone_id` instead of `replication_specs.#.id`. | -
`mongodbatlas_cluster` | Resource and data source will not work. API error code `ASYMMETRIC_SHARD_UNSUPPORTED`. | [cluster-to-advanced-cluster-migration-guide](cluster-to-advanced-cluster-migration-guide.md)
-`mongodbatlas_cloud_backup_schedule` | Use `copy_settings.#.zone_id` instead of `copy_settings.#.replication_spec_id` | [1.18.0 Migration Guide](1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones)
-`mongodbatlas_global_cluster_config` | `custom_zone_mapping` is no longer populated, `custom_zone_mapping_zone_id` must be used instead. | -
+`mongodbatlas_cloud_backup_schedule` | Use `copy_settings.#.zone_id` instead of `copy_settings.#.replication_spec_id` | [1.18.0 Migration Guide](1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones)
### Data Source Transition for Asymmetric Clusters
@@ -448,7 +442,6 @@ If you have an existing cluster that becomes asymmetric due to independent shard
**Error Symptoms:**
- `mongodbatlas_cluster` data source fails with API error code `ASYMMETRIC_SHARD_UNSUPPORTED`
-- `mongodbatlas_advanced_cluster` data source without `use_replication_spec_per_shard = true` returns an error asking you to enable this attribute
#### Required Changes
@@ -459,23 +452,16 @@ data "mongodbatlas_cluster" "example" {
project_id = var.project_id
name = "my-cluster"
}
-
-# This fails and ask you to set use_replication_spec_per_shard = true
-data "mongodbatlas_advanced_cluster" "example" {
- project_id = var.project_id
- name = "my-cluster"
-}
```
**After (succeeds for asymmetric clusters):**
```hcl
# Remove mongodbatlas_cluster data source completely
-# Replace with mongodbatlas_advanced_cluster and enable the new schema
+# Replace with mongodbatlas_advanced_cluster
data "mongodbatlas_advanced_cluster" "example" {
project_id = var.project_id
name = "my-cluster"
- use_replication_spec_per_shard = true # Required for asymmetric clusters
}
```
@@ -484,7 +470,7 @@ data "mongodbatlas_advanced_cluster" "example" {
For modules or configurations that need to support both symmetric and asymmetric clusters, you can use conditional data source creation.
-**Note**: While `use_replication_spec_per_shard = true` supports both symmetric and asymmetric clusters, you may want to use the conditional pattern if you prefer to preserve the legacy data source representation for symmetric clusters, or if you need to maintain backward compatibility with existing module consumers.
+**Note**: While `data.mongodbatlas_advanced_cluster` supports both symmetric and asymmetric clusters, you may want to use the conditional pattern if you prefer to preserve the legacy data source representation for symmetric clusters, or if you need to maintain backward compatibility with existing module consumers.
```hcl
# Example: Conditional data source based on cluster configuration
@@ -506,7 +492,6 @@ data "mongodbatlas_advanced_cluster" "this" {
count = local.cluster_uses_new_sharding ? 1 : 0
name = mongodbatlas_advanced_cluster.this.name
project_id = mongodbatlas_advanced_cluster.this.project_id
- use_replication_spec_per_shard = true
depends_on = [mongodbatlas_advanced_cluster.this]
}
```
diff --git a/docs/guides/atlas-user-management.md b/docs/guides/atlas-user-management.md
new file mode 100644
index 0000000000..091cc00ca2
--- /dev/null
+++ b/docs/guides/atlas-user-management.md
@@ -0,0 +1,1061 @@
+---
+page_title: "Migration Guide: Atlas User Management"
+---
+
+# Migration Guide: Atlas User Management
+
+## Overview
+
+With MongoDB Atlas Terraform Provider `2.0.0`, several attributes and resources were deprecated in favor of new, assignment-based resources.
+These changes improve **clarity, separation of concerns, and alignment with Atlas APIs**.
+This guide covers migrating to the new resources and attributes for Atlas user management in the context of **organizations, teams, and projects**.
+
+## Quick Finder: What changed
+
+- **Org membership:** The `mongodbatlas_org_invitation` resource is deprecated. Use `mongodbatlas_cloud_user_org_assignment`.
+ → See [Org Invitation to Cloud User Org Assignment](#migr-org-invitation)
+
+- **Team membership:** The `usernames` attribute on `mongodbatlas_team` is deprecated. Use `mongodbatlas_cloud_user_team_assignment`.
+ → See [Team Usernames to Cloud User Team Assignment](#migr-team-usernames)
+
+- **Project team assignments:** The `teams` block inside `mongodbatlas_project` is deprecated. Use `mongodbatlas_team_project_assignment`.
+ → See [Project Teams to Team Project Assignment](#migr-project-teams)
+
+- **Project membership:** The `mongodbatlas_project_invitation` resource is deprecated. Use `mongodbatlas_cloud_user_project_assignment`.
+ → See [Project Invitation to Cloud User Project Assignment](#migr-project-invitation)
+
+- **Atlas User details:** The `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users` data sources are deprecated.
+ Use `mongodbatlas_cloud_user_org_assignment` for a single user in an org, and the `users` attributes on `mongodbatlas_organization`, `mongodbatlas_project`, or `mongodbatlas_team` for listings.
+ → See [Atlas User/Users Data Sources](#migr-atlas-user-users)
+
+These updates ensure that **organization membership, team membership, and project assignments** are modeled as explicit and independent resources — giving you more flexible control over Atlas access management.
+
+
+## Before You Begin
+- Backup your [Terraform state](https://developer.hashicorp.com/terraform/cli/commands/state) file
+- Use MongoDB Atlas Terraform Provider **v2.0.0** or later.
+- Terraform version requirements:
+ - **v1.5+** for **[import blocks](https://developer.hashicorp.com/terraform/language/import)** (earlier versions can use [`terraform import`](https://developer.hashicorp.com/terraform/cli/import))
+ - **v1.1+** for **[moved blocks](https://developer.hashicorp.com/terraform/language/moved)** (useful for modules)
+ - **v1.7+** for **[removed blocks](https://developer.hashicorp.com/terraform/language/resources/syntax#removing-resources)** (earlier versions can use [`terraform state rm`](https://developer.hashicorp.com/terraform/cli/commands/state/rm))
+
+---
+
+
+
+
+**Org Membership**
+
+## Org Invitation to Cloud User Org Assignment
+
+**Objective**: Migrate from the deprecated `mongodbatlas_org_invitation` resource and data source to the `mongodbatlas_cloud_user_org_assignment` resource. If you previously assigned teams via `teams_ids`, also migrate those to `mongodbatlas_cloud_user_team_assignment`.
+
+### What’s changing?
+
+- `mongodbatlas_org_invitation` only managed invitations and is deprecated. It didn’t manage the actual user membership or expose `user_id`.
+- New `mongodbatlas_cloud_user_org_assignment` manages the user’s organization membership (pending or active) and exposes both `username` and `user_id`. It supports import using either `ORG_ID/USERNAME` or `ORG_ID/USER_ID`.
+- If you previously used `teams_ids` on invitations, use `mongodbatlas_cloud_user_team_assignment` to manage team membership for each user.
+
+---
+### _Use-case 1: Existing org invite is still PENDING (resource exists in config)_
+
+Original configuration (note: `user_id` does not exist on `mongodbatlas_org_invitation`):
+
+```terraform
+locals {
+ org_id = ""
+ username = "user1@email.com"
+ roles = ["ORG_MEMBER"]
+}
+
+resource "mongodbatlas_org_invitation" "this" {
+ username = local.username
+ org_id = local.org_id
+ roles = local.roles
+ # teams_ids = local.team_ids # if applicable, also see Use-case #3 below
+}
+```
+
+### Option A) [Recommended] Moved block
+
+#### Step 1: Add `mongodbatlas_cloud_user_org_assignment` and `moved` block
+
+Handling migration in modules:
+- For module maintainers: Add the new `mongodbatlas_cloud_user_org_assignment` resource inside the module with a `moved {}` block from `mongodbatlas_org_invitation` to the new resource, remove the existing `mongodbatlas_org_invitation` resource (Step 2), and publish a new module version.
+- For module users: Simply bump the module version and run `terraform init -upgrade`, then `terraform plan` / `terraform apply`. Terraform performs an in-place state move without users writing import blocks or touching state.
+- Works at any scale (any number of module instances) and keeps the migration self-contained within the module. No per-environment import steps are required.
+
+```terraform
+resource "mongodbatlas_cloud_user_org_assignment" "this" {
+ org_id = local.org_id
+ username = local.username
+ roles = { org_roles = local.roles }
+}
+
+moved {
+ from = mongodbatlas_org_invitation.this
+ to = mongodbatlas_cloud_user_org_assignment.this
+}
+```
+
+
+#### Step 2: Remove `mongodbatlas_org_invitation` from config and state
+
+- With a moved block, `terraform plan` should show the move and no other changes. Then `terraform apply`.
+
+
+### Option B) Import by username
+
+#### Step 1: Add `mongodbatlas_cloud_user_org_assignment` and `import` block
+
+Handling migration in modules:
+- Terraform import blocks cannot live inside modules; they must be defined in the root module. See `https://github.com/hashicorp/terraform/issues/33474`.
+- Module maintainers cannot ship import steps. Each module user must add root-level import blocks for every instance to import, which is error-prone and repetitive.
+- This creates extra coordination for every environment and workspace. Prefer Option A whenever you can modify the module source.
+
+```terraform
+resource "mongodbatlas_cloud_user_org_assignment" "this" {
+ org_id = local.org_id
+ username = local.username
+ roles = { org_roles = local.roles }
+}
+
+import {
+ to = mongodbatlas_cloud_user_org_assignment.this
+ id = "${local.org_id}/${local.username}"
+}
+```
+
+#### Step 2: Remove `mongodbatlas_org_invitation` from config and state
+
+- With import, remove the old `mongodbatlas_org_invitation` block and delete it from state if still present: `terraform state rm mongodbatlas_org_invitation.this`.
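+
+If you are on Terraform v1.7 or later, a `removed` block is an alternative to `terraform state rm`. A minimal sketch:
+
+```terraform
+removed {
+  from = mongodbatlas_org_invitation.this
+
+  lifecycle {
+    destroy = false # forget the resource from state; do not delete anything in Atlas
+  }
+}
+```
+
+Run `terraform plan` and `terraform apply`; Terraform removes the resource from state without destroying the underlying invitation.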
+
+---
+
+### _Use-case 2: Invitations already ACCEPTED (no `mongodbatlas_org_invitation` in config)_
+
+When an invite is accepted, Atlas deletes the underlying invitation. To manage these users going forward, import them into `mongodbatlas_cloud_user_org_assignment`.
+
+#### Step 1: Fetch active org users (optional helper)
+
+```terraform
+data "mongodbatlas_organization" "org" {
+ org_id = var.org_id
+}
+
+locals {
+ active_users = {
+ for u in data.mongodbatlas_organization.org.users :
+ u.id => u if u.org_membership_status == "ACTIVE"
+ }
+}
+```
+
+#### Step 2: Define and import `mongodbatlas_cloud_user_org_assignment`
+
+Handling migration in modules:
+- Terraform import blocks cannot live inside modules; they must be defined in the root module. See `https://github.com/hashicorp/terraform/issues/33474`.
+
+Use the `local.active_users` map defined in Step 1 so you don’t have to manually curate a list:
+
+```terraform
+resource "mongodbatlas_cloud_user_org_assignment" "user" {
+ for_each = local.active_users # key = user_id, value = user object from data source
+
+ org_id = var.org_id
+ username = each.value.username
+
+ # Keep roles aligned with current assignments to avoid drift after import
+ roles = {
+ org_roles = each.value.roles[0].org_roles
+ }
+}
+
+# Import existing users (root module only)
+import {
+ for_each = local.active_users
+ to = mongodbatlas_cloud_user_org_assignment.user[each.key]
+ id = "${var.org_id}/${each.key}" # org_id/user_id
+}
+```
+
+Run `terraform plan` (you should see import operations), then `terraform apply`.
+
+---
+
+### _Use-case 3: You also set `teams_ids` on the original invitation_
+
+Original configuration where `mongodbatlas_org_invitation` defines `teams_ids`:
+
+```terraform
+locals {
+  org_id   = ""
+  username = "user1@email.com"
+  roles    = ["ORG_MEMBER"]
+  team_ids = ["<team_id_1>", "<team_id_2>"]
+}
+
+resource "mongodbatlas_org_invitation" "this" {
+  username  = local.username
+  org_id    = local.org_id
+  roles     = local.roles
+  teams_ids = local.team_ids
+}
+```
+
+Migrate team assignments to `mongodbatlas_cloud_user_team_assignment` in addition to Use-case 1 or 2 above.
+
+```terraform
+resource "mongodbatlas_cloud_user_team_assignment" "team" {
+  for_each = toset(local.team_ids)
+
+  org_id  = local.org_id
+  team_id = each.key
+  user_id = mongodbatlas_cloud_user_org_assignment.this.user_id
+}
+
+# Import existing team assignments (root module only)
+import {
+  for_each = toset(local.team_ids)
+  to       = mongodbatlas_cloud_user_team_assignment.team[each.key]
+  id       = "${local.org_id}/${each.key}/${local.username}" # OR use user_id in place of username
+}
+```
+
+Run `terraform plan` (you should see import operations), then `terraform apply`.
+
+Finally, remove any remaining `mongodbatlas_org_invitation` references from config and state.
+
+---
+
+### Data source migration
+
+Original configuration:
+
+```terraform
+locals {
+ org_id = ""
+ username = "user1@email.com"
+}
+
+data "mongodbatlas_org_invitation" "test" {
+ org_id = local.org_id
+ username = local.username
+ invitation_id = mongodbatlas_org_invitation.test.invitation_id
+}
+```
+
+Replace with the new data source:
+
+```terraform
+data "mongodbatlas_cloud_user_org_assignment" "user_1" {
+ org_id = local.org_id
+ username = local.username
+}
+```
+
+Then:
+
+1. Run `terraform apply` to ensure the new data source reads correctly.
+2. Replace all usages of `data.mongodbatlas_org_invitation.test` with `data.mongodbatlas_cloud_user_org_assignment.user_1`.
+3. Run `terraform plan` followed by `terraform apply`.
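+
+For example, an output that previously read from the invitation can be pointed at the new data source. The `roles.org_roles` attribute path below is an assumption mirroring the `roles = { org_roles = ... }` shape used by the resource examples in this guide:
+
+```terraform
+output "user_1_org_roles" {
+  value = data.mongodbatlas_cloud_user_org_assignment.user_1.roles.org_roles
+}
+```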
+
+
+
+### Examples
+
+For complete, working configurations that mirror the use-cases above, see the examples in the provider repository: [migrate_org_invitation_to_cloud_user_org_assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/v2.0.0/examples/migrate_org_invitation_to_cloud_user_org_assignment). These include root-level setups for multiple approaches (e.g., moved blocks and imports) across different versions.
+
+
+
+### Notes and tips
+
+- Import formats:
+ - Org assignment: `ORG_ID/USERNAME` or `ORG_ID/USER_ID`.
+ - Team assignment: `ORG_ID/TEAM_ID/USERNAME` or `ORG_ID/TEAM_ID/USER_ID`.
+- If you use modules, keep in mind import blocks must be placed at the root module.
+- After successful migration, ensure no references to `mongodbatlas_org_invitation` remain.
+
+
+
+
+
+
+**Team Membership**
+
+## Team Usernames to Cloud User Team Assignment
+
+**Objective**: Migrate from the deprecated `usernames` attribute on the `mongodbatlas_team` resource to the new `mongodbatlas_cloud_user_team_assignment` resource.
+
+### Why should I migrate?
+
+- **Future Compatibility:** The `usernames` attribute on `mongodbatlas_team` is deprecated and may be removed in future provider versions. Migrating ensures your Terraform configuration remains functional.
+- **Flexibility:** Manage teams and user assignments independently, without coupling membership changes to team creation or updates.
+- **Clarity:** Clear separation between the `mongodbatlas_team` resource (team definition) and `mongodbatlas_cloud_user_team_assignment` (membership management).
+
+### What’s changing?
+
+- `mongodbatlas_team` included a `usernames` argument that allowed assigning users to a team directly inside the resource. This argument is now deprecated.
+- The new `users` attribute in the `mongodbatlas_team` data source can be used to retrieve information about all users assigned to that team.
+- `mongodbatlas_cloud_user_team_assignment` manages the user’s team membership (pending or active) and exposes both `username` and `user_id`. It supports import using either `ORG_ID/TEAM_ID/USERNAME` or `ORG_ID/TEAM_ID/USER_ID`.
+
+---
+### From `mongodbatlas_team.usernames` to `mongodbatlas_cloud_user_team_assignment`
+
+#### Original configuration
+
+```terraform
+locals {
+ usernames = ["user1@email.com", "user2@email.com", "user3@email.com"]
+}
+
+resource "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+ usernames = local.usernames
+}
+```
+
+
+#### Step 1: Use `mongodbatlas_team` data source to retrieve user IDs
+
+We first need to retrieve each user's `user_id` via the new `users` attribute in the `mongodbatlas_team` data source.
+
+```terraform
+# Use data source to get team members (with user_id)
+locals {
+ usernames = ["user1@email.com", "user2@email.com", "user3@email.com"]
+ team_assignments = {
+ for user in data.mongodbatlas_team.this.users :
+ user.id => {
+ org_id = var.org_id
+ team_id = mongodbatlas_team.this.team_id
+ user_id = user.id
+ }
+ }
+}
+
+resource "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+ usernames = local.usernames
+}
+
+data "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ team_id = mongodbatlas_team.this.team_id
+}
+```
+
+#### Step 2: Add `mongodbatlas_cloud_user_team_assignment` and use import blocks
+
+```terraform
+locals {
+ usernames = ["user1@email.com", "user2@email.com", "user3@email.com"]
+ team_assignments = {
+ for user in data.mongodbatlas_team.this.users :
+ user.id => {
+ org_id = var.org_id
+ team_id = mongodbatlas_team.this.team_id
+ user_id = user.id
+ }
+ }
+}
+
+resource "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+ usernames = local.usernames
+}
+
+data "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ team_id = mongodbatlas_team.this.team_id
+}
+
+# New resource for each (user, team) assignment
+resource "mongodbatlas_cloud_user_team_assignment" "this" {
+ for_each = local.team_assignments
+
+ org_id = each.value.org_id
+ team_id = each.value.team_id
+ user_id = each.value.user_id # Use user_id instead of username
+}
+
+# Import existing team-user relationships into the new resource
+import {
+ for_each = local.team_assignments
+
+ to = mongodbatlas_cloud_user_team_assignment.this[each.key]
+ id = "${each.value.org_id}/${each.value.team_id}/${each.value.user_id}"
+}
+```
+
+#### Step 3: Remove deprecated `usernames` from `mongodbatlas_team`
+
+Once the new resources are in place:
+
+```terraform
+resource "mongodbatlas_team" "this" {
+ org_id = var.org_id
+  name   = var.team_name
+ # usernames = local.usernames # Remove this line
+}
+```
+
+#### Step 4: Run migration
+
+Run `terraform plan` (you should see **import** operations), then `terraform apply`.
+
+
+#### Step 5: Update any references to `mongodbatlas_team.usernames`
+
+Before:
+
+```terraform
+output "team_usernames" {
+ value = mongodbatlas_team.this.usernames
+}
+```
+
+After:
+
+```terraform
+output "team_usernames" {
+ value = [for u in data.mongodbatlas_team.this.users : u.username]
+}
+```
+
+Run `terraform plan`. There should be **no changes**.
+
+---
+
+### Data source migration
+
+If you previously used the `usernames` attribute in the `data.mongodbatlas_team` data source:
+
+**Original:**
+
+```terraform
+output "team_usernames" {
+ description = "Usernames in the MongoDB Atlas team"
+ value = data.mongodbatlas_team.this.usernames
+}
+```
+
+**Replace with:**
+
+```terraform
+output "team_usernames" {
+ description = "Usernames in the MongoDB Atlas team"
+ value = [for u in data.mongodbatlas_team.this.users : u.username]
+}
+```
+
+Run `terraform plan`. There should be **no changes**.
+
+---
+
+### Migration using Modules
+
+If you are using modules to manage teams and user assignments to teams, migrating from `mongodbatlas_team` to the new pattern requires special attention. Because the old `mongodbatlas_team.usernames` attribute corresponds to `mongodbatlas_cloud_user_team_assignment`, you cannot simply move the resource block inside your module and expect Terraform to handle the migration automatically. This section demonstrates how to migrate from a module using the `mongodbatlas_team` resource to a module using both `mongodbatlas_team` and the new `mongodbatlas_cloud_user_team_assignment` resources.
+
+**Key points for module users:**
+- You must use `terraform import` to bring existing user-team assignments into the new resources, even when they are managed inside a module.
+- The import command must match the resource address as used in your module (e.g., `module.<module_name>.mongodbatlas_cloud_user_team_assignment.<resource_name>`).
+- If you were using a list of usernames in your previous configuration, you also need to include the `mongodbatlas_team` data source and use the new `users` attribute to retrieve the corresponding user IDs, along with the team ID, for the import to work correctly.
+
+**Example import blocks for modules**
+```terraform
+import {
+  to = module.<module_name>.mongodbatlas_cloud_user_team_assignment.<resource_name>
+  id = "<org_id>/<team_id>/<user_id>"
+}
+import {
+  to = module.<module_name>.mongodbatlas_cloud_user_team_assignment.<other_resource_name>
+  id = "<org_id>/<team_id>/<other_user_id>"
+}
+```
+
+**Example import commands for modules:**
+```shell
+terraform import 'module.<module_name>.mongodbatlas_cloud_user_team_assignment.<resource_name>' <org_id>/<team_id>/<user_id>
+terraform import 'module.<module_name>.mongodbatlas_cloud_user_team_assignment.<other_resource_name>' <org_id>/<team_id>/<other_user_id>
+```
+
+#### 1. Old Module Usage (Legacy)
+
+```hcl
+module "user_team_assignment" {
+ source = "./old_module"
+ org_id = var.org_id
+ team_name = var.team_name
+ usernames = var.usernames
+}
+```
+
+#### 2. New Module Usage (Recommended)
+
+```hcl
+data "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+}
+
+locals {
+  team_assignments = {
+    for user in data.mongodbatlas_team.this.users :
+    user.id => {
+      org_id  = var.org_id
+      team_id = data.mongodbatlas_team.this.team_id
+      user_id = user.id
+    }
+  }
+}
+
+module "user_team_assignment" {
+  source           = "./new_module"
+  org_id           = var.org_id
+  team_name        = var.team_name
+  team_assignments = local.team_assignments
+}
+```
+
+#### 3. Migration Steps
+
+1. **Add the new module to your configuration:**
+ - Add the new module block as shown above, using the same input variables as appropriate.
+ - Also add the `data.mongodbatlas_team` data source and declare the `team_assignments` local variable to retrieve user IDs and team ID.
+
+2. **Import the existing user-team assignments into the new resources:**
+
+- An `import` block (available in Terraform 1.5 and later; `for_each` support in import blocks requires Terraform 1.7 and later) can be used to import the resources by iterating over the list of users, e.g.:
+
+  ```terraform
+  import {
+    for_each = local.team_assignments
+
+    to = module.user_team_assignment.mongodbatlas_cloud_user_team_assignment.this[each.key]
+    id = "${var.org_id}/${data.mongodbatlas_team.this.team_id}/${each.value.user_id}"
+  }
+  ```
+
+- Alternatively, run `terraform import` with the correct resource address for your module and each user-team assignment:
+
+  ```shell
+  terraform import 'module.user_team_assignment.mongodbatlas_cloud_user_team_assignment.this["<USER_ID>"]' <ORG_ID>/<TEAM_ID>/<USER_ID>
+  ```
+
+
+3. **Remove the old module block from your configuration.**
+4. **Run `terraform plan` to review the changes.**
+ - Ensure that Terraform imports the user-team assignments and does not plan to create them.
+ - Ensure that Terraform does not plan to destroy and recreate the `mongodbatlas_team` resource.
+5. **Run `terraform apply` to apply the migration.**
+
+For complete working examples, see:
+- [Old module example](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/migrate_user_team_assignment/module/old_module/)
+- [New module example](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/migrate_user_team_assignment/module/new_module/)
+
+---
+### Notes and tips
+
+- **Import format** for `mongodbatlas_cloud_user_team_assignment` (either the username or the user ID can be used):
+
+```
+ORG_ID/TEAM_ID/USERNAME
+ORG_ID/TEAM_ID/USER_ID
+```
+
+- **Importing inside modules:** Terraform import blocks cannot live inside modules (see this [Terraform issue](https://github.com/hashicorp/terraform/issues/33474)). Each module user must add root-level import blocks for every instance to import.
+
+- After successful migration, ensure **no references to** `mongodbatlas_team.usernames` remain.
+
+---
+### FAQ
+**Q: Can I assign the same user to multiple teams?**
+A: Yes. Create one `mongodbatlas_cloud_user_team_assignment` resource per team that the user belongs to.
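+
+For example, assigning one user to two teams (the team IDs and resource names below are illustrative placeholders):
+
+```terraform
+resource "mongodbatlas_cloud_user_team_assignment" "team_a" {
+  org_id  = var.org_id
+  team_id = "<TEAM_A_ID>"
+  user_id = "<USER_ID>"
+}
+
+resource "mongodbatlas_cloud_user_team_assignment" "team_b" {
+  org_id  = var.org_id
+  team_id = "<TEAM_B_ID>"
+  user_id = "<USER_ID>"
+}
+```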
+
+**Q: Where can I find a working example?**
+A: See [examples/mongodbatlas_cloud_user_team_assignment/main.tf](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/mongodbatlas_cloud_user_team_assignment/main.tf).
+
+---
+### Further Resources
+- [Cloud User Team Assignment Resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cloud_user_team_assignment)
+
+
+
+
+
+
+
+
+## Project Teams to Team Project Assignment
+
+**Objective:** Migrate from the deprecated `teams` attribute on the `mongodbatlas_project` resource to the new `mongodbatlas_team_project_assignment` resource.
+
+### Why should I migrate?
+
+- **Future compatibility:** The `teams` attribute inside `mongodbatlas_project` is deprecated and will be removed in a future provider release.
+- **Separation of concerns:** Manage projects and team-to-project role assignments independently.
+- **Clearer diffs:** Role or team modifications won't require re‑applying the entire project resource.
+
+
+### What's changing?
+
+- Historically, `mongodbatlas_project` accepted an inline `teams` block to assign one or more teams to a project with specific roles.
+- Now, each project-team role mapping must be managed with `mongodbatlas_team_project_assignment`.
+
+---
+### From `mongodbatlas_project.teams` to `mongodbatlas_team_project_assignment`
+
+#### Original configuration
+
+```hcl
+locals {
+ team_map = { # team_id => set(role_names)
+    "<TEAM_ID_1>" = ["GROUP_OWNER"]
+    "<TEAM_ID_2>" = ["GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_WRITE"]
+ }
+}
+
+resource "mongodbatlas_project" "this" {
+ name = var.project_name
+ org_id = var.org_id
+ project_owner_id = var.project_owner_id
+
+ dynamic "teams" {
+ for_each = local.team_map
+ content {
+ team_id = teams.key
+ role_names = teams.value
+ }
+ }
+}
+```
+
+#### Step 1: Ignore `teams` and remove from configuration
+
+-> **Note:** The `teams` attribute is a `SetNestedBlock` and cannot be marked `Optional`/`Computed` for a smooth migration. For now, `ignore_changes` is required during Step 1. Support for removing `teams` entirely will come in a future Atlas Provider release.
+
+Replace the `mongodbatlas_project.teams` block with:
+
+```hcl
+resource "mongodbatlas_project" "this" {
+ name = var.project_name
+ org_id = var.org_id
+ project_owner_id = var.project_owner_id
+
+ lifecycle {
+    ignore_changes = [teams]
+ }
+}
+```
+
+Then run:
+
+```shell
+terraform plan
+terraform apply
+```
+
+This removes the `teams` block from the config but keeps the assignments in Atlas unchanged until we explicitly manage them in new resources.
+
+
+#### Step 2: Add the new `mongodbatlas_team_project_assignment` resources
+
+```hcl
+resource "mongodbatlas_project" "this" {
+ name = var.project_name
+ org_id = var.org_id
+ project_owner_id = var.project_owner_id
+
+ lifecycle {
+    ignore_changes = [teams]
+ }
+}
+
+resource "mongodbatlas_team_project_assignment" "this" {
+ for_each = local.team_map
+
+ project_id = mongodbatlas_project.this.id
+ team_id = each.key
+ role_names = each.value
+}
+
+import {
+ for_each = local.team_map
+
+ to = mongodbatlas_team_project_assignment.this[each.key]
+ id = "${mongodbatlas_project.this.id}/${each.key}"
+}
+```
+
+Run `terraform plan` (you should see **import** operations), then `terraform apply`.
+
+#### Step 3: Verify and clean up
+
+- After successful import and apply, `terraform plan` should show **no changes**.
+- Keep the `ignore_changes = [teams]` lifecycle rule until the provider releases a version without the `teams` argument in `mongodbatlas_project`.
+
+---
+
+### Examples
+
+For complete, working configurations that demonstrate the migration process, see the examples in the provider repository: [migrate_team_project_assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/v2.0.0/examples/migrate_team_project_assignment).
+
+The examples include:
+- **v1**: Original configuration using deprecated `teams` attribute in `mongodbatlas_project` resource.
+- **v2**: Final configuration using `mongodbatlas_team_project_assignment` resource for team-to-project assignments.
+
+---
+### Notes and tips
+
+- **Import format** for `mongodbatlas_team_project_assignment`:
+```
+PROJECT_ID/TEAM_ID
+```
+- **Modules:** Terraform import blocks cannot live inside modules ([Terraform issue](https://github.com/hashicorp/terraform/issues/33474)).
+- If you manage team assignments in modules, import each at the root level using the correct resource address (e.g. `module.<MODULE_NAME>.mongodbatlas_team_project_assignment.<RESOURCE_NAME>`).
+- You can use `terraform plan` to confirm imports before applying.
+
+---
+
+### FAQ
+
+**Q: Do I need to delete the old `teams` from state?**
+A: No — using `ignore_changes` ensures they remain in Atlas until the provider removes the field. Then you can drop the lifecycle rule.
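+
+Once a provider release drops the `teams` argument, the project resource can be reduced to a plain definition with no lifecycle rule (a sketch reusing the variable names from the examples above):
+
+```terraform
+resource "mongodbatlas_project" "this" {
+  name             = var.project_name
+  org_id           = var.org_id
+  project_owner_id = var.project_owner_id
+}
+```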
+
+---
+
+### Further resources
+- [`mongodbatlas_team_project_assignment` docs](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/team_project_assignment)
+
+
+
+
+
+
+
+## Project Invitation to Cloud User Project Assignment
+
+**Objective**: Migrate from the deprecated `mongodbatlas_project_invitation` resource and data source to the `mongodbatlas_cloud_user_project_assignment` resource.
+
+### What’s changing?
+
+- `mongodbatlas_project_invitation` only managed invitations and is deprecated. If the user accepted the invitation and is now a project member, the provider removed the invitation from Terraform state and you should remove it from your configuration as well. See the resource [documentation](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/project_invitation) for more details.
+- `mongodbatlas_cloud_user_project_assignment` manages the user’s project membership (active members).
+- Pending project invitations are not discoverable with the new APIs. The only migration path for existing PENDING invites is to re-create them using `mongodbatlas_cloud_user_project_assignment` with the same `username` and `roles`.
+ - For details on the new resource, see the `mongodbatlas_cloud_user_project_assignment` resource documentation: https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cloud_user_project_assignment
+
+---
+### Migrating PENDING invitations
+
+Original configuration:
+
+```terraform
+locals {
+ username = "user1@email.com"
+ roles = ["GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_ONLY"]
+}
+
+resource "mongodbatlas_project_invitation" "this" {
+ project_id = var.project_id
+ username = local.username
+ roles = local.roles
+}
+```
+
+#### Step 1: Add the new resource alongside existing configuration
+
+Add the new resource to re-create the pending invite via the new API:
+
+```terraform
+resource "mongodbatlas_cloud_user_project_assignment" "this" {
+ project_id = var.project_id
+ username = local.username
+ roles = local.roles
+}
+```
+
+Use the same `roles` as the original invitation to avoid drift.
+
+#### Step 2: Remove the deprecated resource from the configuration and state
+
+#### Option A) [Recommended] `removed` block
+
+Remove the resource block and replace it with a `removed` block to cleanly remove the old resource from state:
+
+```terraform
+removed {
+ from = mongodbatlas_project_invitation.this
+
+ lifecycle {
+ destroy = false
+ }
+}
+```
+
+#### Option B) Manual state removal
+
+Remove the `mongodbatlas_project_invitation` resource from configuration and then remove it from the Terraform state using the command line (this does not affect the actual invitation in Atlas):
+
+```bash
+terraform state rm mongodbatlas_project_invitation.this
+```
+
+#### Step 3: Apply the changes
+
+Run `terraform apply` to create the assignment with the new resource. Afterwards, run `terraform plan` and ensure no further changes are pending.
+
+---
+
+### Examples
+
+For complete, working configurations that demonstrate the migration process, see the examples in the provider repository: [migrate_project_invitation_to_cloud_user_project_assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_project_invitation_to_cloud_user_project_assignment).
+
+The examples include:
+- **v1**: Original configuration using deprecated `mongodbatlas_project_invitation`
+- **v2**: Migration phase with re-creation using new resource and clean state removal
+- **v3**: Final clean configuration using only `mongodbatlas_cloud_user_project_assignment`
+
+These examples provide practical validation of the migration steps and demonstrate the re-creation approach for pending invitations.
+
+---
+
+### Notes and tips
+
+- After successful migration, ensure no references to `mongodbatlas_project_invitation` remain in configuration or state.
+- Pending invitations are not discoverable by the new APIs and resources; there is no data source replacement for reading pending invites. Re-create them using the new resource as shown above.
+- For additional details on how accepted invitations are handled, see the `mongodbatlas_project_invitation` resource [documentation](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/project_invitation).
+
+
+
+
+
+
+
+
+
+## Atlas User/Users Data Sources
+
+**Objective**: Migrate from the deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users` data sources to their respective replacements.
+
+### What’s changing?
+
+- `mongodbatlas_atlas_user` returned a user profile by `user_id` or `username` and is deprecated. Replace it with `mongodbatlas_cloud_user_org_assignment` which reads a user's assignment in a specific organization using either `username` or `user_id` together with `org_id`. For details, see the `mongodbatlas_cloud_user_org_assignment` data source [documentation](../data-sources/cloud_user_org_assignment).
+
+- `mongodbatlas_atlas_users` returned lists of users by `org_id`, `project_id`, or `team_id` and is deprecated. Replace it with the `users` attribute available on `mongodbatlas_organization`, `mongodbatlas_project`, or `mongodbatlas_team` data sources, respectively.
+- Attribute structure differences: The new organization users API does not return `email_address` as a separate field and replaces the consolidated `roles` with structured `org_roles` and `project_role_assignments`.
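+
+For illustration, the old consolidated fields map to the new structured attributes roughly as follows (a sketch; `user_1` is a hypothetical `mongodbatlas_cloud_user_org_assignment` data source):
+
+```terraform
+output "user_email" {
+  # `email_address` is no longer returned separately; `username` holds the user's email
+  value = data.mongodbatlas_cloud_user_org_assignment.user_1.username
+}
+
+output "user_org_roles" {
+  # Replaces the old consolidated `roles` list filtered by org_id
+  value = data.mongodbatlas_cloud_user_org_assignment.user_1.roles.org_roles
+}
+```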
+
+---
+
+### Migrate reads to `mongodbatlas_cloud_user_org_assignment`
+
+Original configuration:
+
+```terraform
+data "mongodbatlas_atlas_user" "test" {
+  user_id = "<USER_ID>"
+}
+
+# OR
+
+data "mongodbatlas_atlas_user" "test" {
+  username = "<USERNAME>"
+}
+```
+
+#### Step 1: Add the new data source alongside the existing one
+
+Use either `username` or `user_id` with the target `org_id`:
+
+```terraform
+# Keep existing data source temporarily
+data "mongodbatlas_atlas_user" "test" {
+  user_id = "<USER_ID>"
+}
+
+# Add new data source
+data "mongodbatlas_cloud_user_org_assignment" "user_1" {
+  user_id = "<USER_ID>"
+  org_id  = "<ORG_ID>"
+}
+```
+
+#### Step 2: Verify the new data source works
+
+Run `terraform plan` to ensure the new data source will read correctly without errors.
+
+#### Step 3: Replace references incrementally
+
+Replace references from `data.mongodbatlas_atlas_user.test` to `data.mongodbatlas_cloud_user_org_assignment.user_1`.
+
+**Important**: Update attribute references as the structure has changed:
+
+Key attribute changes:
+
+| Old Attribute | New Attribute |
+|---------------|---------------|
+| `email_address` | `username` |
+| `roles` (filtered by org_id) | `roles.org_roles` |
+| `roles` (filtered by group_id) | `roles.project_role_assignments[*].project_roles` |
+
+**Examples**:
+- Email: `data.mongodbatlas_atlas_user.test.email_address` → `data.mongodbatlas_cloud_user_org_assignment.user_1.username`
+- Org roles: Use `data.mongodbatlas_cloud_user_org_assignment.user_1.roles.org_roles` directly
+- Project roles: Access via `roles.project_role_assignments` list, filtering by `project_id` as needed
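+
+A filtering sketch for the last point, assuming a `var.project_id` input:
+
+```terraform
+locals {
+  project_roles_for_project = flatten([
+    for a in data.mongodbatlas_cloud_user_org_assignment.user_1.roles.project_role_assignments :
+    a.project_roles if a.project_id == var.project_id
+  ])
+}
+```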
+
+#### Step 4: Remove the old data source
+
+Once all references are updated and working, remove the old data source from your configuration:
+
+```terraform
+# Remove this block
+# data "mongodbatlas_atlas_user" "test" {
+#   user_id = "<USER_ID>"
+# }
+```
+
+#### Step 5: Apply and verify
+
+Run `terraform plan` to ensure no unexpected changes, then `terraform apply`.
+
+---
+
+### Migrate list reads from `mongodbatlas_atlas_users`
+
+Original configuration:
+
+```terraform
+data "mongodbatlas_atlas_users" "test" {
+  org_id = "<ORG_ID>"
+}
+
+# OR
+
+data "mongodbatlas_atlas_users" "test" {
+  project_id = "<PROJECT_ID>"
+}
+
+# OR
+
+data "mongodbatlas_atlas_users" "test" {
+  team_id = "<TEAM_ID>"
+  org_id  = "<ORG_ID>"
+}
+```
+
+#### Step 1: Add new data sources alongside existing ones
+
+Add the appropriate replacement data source(s) while keeping the old one temporarily:
+
+Organization users:
+```terraform
+# Keep existing temporarily
+data "mongodbatlas_atlas_users" "test" {
+  org_id = "<ORG_ID>"
+}
+
+# Add new data source
+data "mongodbatlas_organization" "org" {
+  org_id = "<ORG_ID>"
+}
+
+locals {
+ org_users = data.mongodbatlas_organization.org.users
+}
+```
+
+Project users:
+```terraform
+# Keep existing temporarily
+data "mongodbatlas_atlas_users" "test" {
+  project_id = "<PROJECT_ID>"
+}
+
+# Add new data source
+data "mongodbatlas_project" "proj" {
+  project_id = "<PROJECT_ID>"
+}
+
+locals {
+ project_users = data.mongodbatlas_project.proj.users
+}
+```
+
+Team users:
+```terraform
+# Keep existing temporarily
+data "mongodbatlas_atlas_users" "test" {
+  team_id = "<TEAM_ID>"
+  org_id  = "<ORG_ID>"
+}
+
+# Add new data source
+data "mongodbatlas_team" "team" {
+  team_id = "<TEAM_ID>"
+  org_id  = "<ORG_ID>"
+}
+
+locals {
+ team_users = data.mongodbatlas_team.team.users
+}
+```
+
+#### Step 2: Verify new data sources work
+
+Run `terraform plan` to ensure the new data sources read correctly and return expected user data.
+
+#### Step 3: Replace references incrementally
+
+Replace `data.mongodbatlas_atlas_users.test.results` with the appropriate `...users` collection above.
+
+**Important**: Update attribute references as the structure has changed:
+
+| Old Attribute | New Attribute |
+|---------------|---------------|
+| `results[*].email_address` | `users[*].username` |
+| `results[*].roles` (filtered) | `users[*].roles.org_roles` or `users[*].roles` |
+
+**Examples**:
+- Email list: `data.mongodbatlas_atlas_users.test.results[*].email_address` → `data.mongodbatlas_organization.org.users[*].username`
+- User list: `data.mongodbatlas_atlas_users.test.results` → `data.mongodbatlas_organization.org.users` (or `.project.proj.users`, `.team.team.users`)
+- Org roles: Use `users[*].roles.org_roles` from organization data source
+- Project roles: Use `users[*].roles` from project data source, or `users[*].roles.project_role_assignments` from organization data source
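+
+For instance, a username-to-org-roles map can be rebuilt from the organization data source (a sketch using the attributes above):
+
+```terraform
+locals {
+  org_user_roles = {
+    for u in data.mongodbatlas_organization.org.users :
+    u.username => u.roles.org_roles
+  }
+}
+```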
+
+#### Step 4: Remove the old data source
+
+Once all references are updated and working, remove the old data source from your configuration:
+
+```terraform
+# Remove this block
+# data "mongodbatlas_atlas_users" "test" {
+#   org_id = "<ORG_ID>"
+# }
+```
+
+#### Step 5: Apply and verify
+
+Run `terraform plan` to ensure no unexpected changes, then `terraform apply`.
+
+---
+
+### Examples
+
+For complete, working configurations that demonstrate the migration process, see the examples in the provider repository: [migrate_atlas_user_and_atlas_users](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_atlas_user_and_atlas_users).
+
+The examples include:
+- **v1**: Original configuration using deprecated data sources
+- **v2**: Migration phase with side-by-side comparison and validation
+- **v3**: Final clean configuration using only new data sources
+
+These examples provide practical validation of the migration steps and demonstrate the attribute mappings in working Terraform code.
+
+---
+
+### Notes
+
+- The new data source requires the `org_id` context to read the user's organization assignment.
+- After migration, ensure no remaining references to `mongodbatlas_atlas_user` exist in your configuration.
+
+
diff --git a/docs/guides/cluster-to-advanced-cluster-migration-guide.md b/docs/guides/cluster-to-advanced-cluster-migration-guide.md
index a5abef505e..fd0f8cee95 100644
--- a/docs/guides/cluster-to-advanced-cluster-migration-guide.md
+++ b/docs/guides/cluster-to-advanced-cluster-migration-guide.md
@@ -4,7 +4,7 @@ page_title: "Migration Guide: Cluster to Advanced Cluster"
# Migration Guide: Cluster to Advanced Cluster
-**Objective**: This guide explains how to replace the `mongodbatlas_cluster` resource with the `mongodbatlas_advanced_cluster` resource. For data source migrations, refer to the [output changes](#output-changes) section. If you're transitioning to independent sharding, additional guidance is available in the [Advanced Cluster New Sharding Configurations Migration Guide](advanced-cluster-new-sharding-schema#data-source-transition-for-asymmetric-clusters).
+**Objective**: This guide explains how to replace the deprecated `mongodbatlas_cluster` resource with the `mongodbatlas_advanced_cluster` resource. For data source migrations, refer to the [output changes](#output-changes) section. If you're transitioning to independent sharding, additional guidance is available in the [Advanced Cluster New Sharding Configurations Migration Guide](advanced-cluster-new-sharding-schema#data-source-transition-for-asymmetric-clusters).
## Why do we have both `mongodbatlas_cluster` and `mongodbatlas_advanced_cluster` resources?
@@ -16,18 +16,19 @@ More information about the main changes between the two resources can be found [
Due to its schema simplicity, `mongodbatlas_cluster` resource is unable to support most of the latest MongoDB Atlas features, such as [Multi-Cloud Clusters](https://www.mongodb.com/blog/post/introducing-multicloud-clusters-on-mongodb-atlas), [Asymmetric Sharding](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), [Independent Scaling of Analytics Node Tiers](https://www.mongodb.com/blog/post/introducing-ability-independently-scale-atlas-analytics-node-tiers) and more.
On the other hand, not only does `mongodbatlas_advanced_cluster` cover everything that `mongodbatlas_cluster` can do, but it offers all existing MongoDB Atlas functionalities and will continue to do so going forward.
-Having that in mind, to access all the latest functionalities and stay up to date with our best offering we recommend you to start planning your move to `mongodbatlas_advanced_cluster`. To maintain our focus on enhancing the overall experience with `mongodbatlas_advanced_cluster`, we will be phasing out `mongodbatlas_cluster` in the upcoming major provider version, 2.0.0, with the timeline yet to be determined. Begin your planning now to ensure you're ready for this transition.
+Having that in mind, to access all the latest functionalities and stay up to date with our best offering we recommend you to start planning your move to `mongodbatlas_advanced_cluster`. To maintain our focus on enhancing the overall experience with `mongodbatlas_advanced_cluster`, `mongodbatlas_cluster` is deprecated starting from provider version 2.0.0, and will be removed from the provider in the following major versions. Begin your planning now to ensure you're ready for this transition.
-### What is the `mongodbatlas_advanced_cluster` Preview of MongoDB Atlas Provider 2.0.0?
+### What is the `mongodbatlas_advanced_cluster` Preview of MongoDB Atlas Provider 2.0.0 released in 1.29.0?
-To make it easier to migrate to `mongodbatlas_advanced_cluster`, we decided to enable support for the [`moved` block](https://developer.hashicorp.com/terraform/language/moved). This functionality needs the resource to be implemented using the [Terraform Plugin Framework](https://developer.hashicorp.com/terraform/plugin/framework), whereas our existing implementation [uses the SDKv2](https://developer.hashicorp.com/terraform/plugin/sdkv2). Given the considerable changes between the two frameworks and the breaking changes it causes, we have decided to release a preview version of the `mongodbatlas_advanced_cluster` usable under an environment variable and keep the existing implementation as-is to avoid breaking existing users.
-Once the MongoDB Atlas Provider 2.0.0 is released, only the new version will remain and the environment variable won't be needed.
-More information about the preview version of `mongodbatlas_advanced_cluster` can be found in the [resource documentation page](../resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+To make it easier to migrate to `mongodbatlas_advanced_cluster`, we enabled support for the [`moved` block](https://developer.hashicorp.com/terraform/language/moved). This functionality required the resource to be implemented using the [Terraform Plugin Framework](https://developer.hashicorp.com/terraform/plugin/framework). However, our previous implementation [used the SDKv2](https://developer.hashicorp.com/terraform/plugin/sdkv2).
+Given the considerable changes between the two frameworks and the breaking changes it causes we released a preview version of the `mongodbatlas_advanced_cluster`, in our provider versions 1.29.0 and later, which you can use under an environment variable and keep the existing implementation as-is to avoid breaking existing users.
+Now with the release of MongoDB Atlas Provider 2.0.0, only the new version remains and the environment variable is no longer needed.
+To learn more about `mongodbatlas_advanced_cluster`, see the [resource documentation page](../resources/advanced_cluster).
## How should I move to `mongodbatlas_advanced_cluster`?
To move from `mongodbatlas_cluster` to `mongodbatlas_advanced_cluster` we offer two alternatives:
-1. [(Recommended) Use the `moved` block using the Preview of MongoDB Atlas Provider 2.0.0 for `mongodbatlas_advanced_cluster`](#migration-using-the-moved-block-recommended)
+1. [(Recommended) Use the `moved` block](#migration-using-the-moved-block-recommended)
2. [Manually use the import command with the `mongodbatlas_advanced_cluster` resource](#migration-using-import)
### Best Practices Before Migrating
@@ -40,9 +41,7 @@ This is our recommended method to migrate from `mongodbatlas_cluster` to `mongod
**Prerequisites:**
- Terraform version 1.8 or later is required, more information in the [State Move page](https://developer.hashicorp.com/terraform/plugin/framework/resources/state-move).
- - MongoDB Atlas Provider version 1.29 or later is required.
- - Ability to set **Environment Variables** in your working space
- - More information can be found in the [resource documentation page](../resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+ - MongoDB Atlas Provider version 2.0 or later is required.
The process to migrate from `mongodbatlas_cluster` to `mongodbatlas_advanced_cluster` using the `moved` block varies if you are using `modules` or the resource directly. Module maintainers can upgrade their implementation to `mongodbatlas_advanced_cluster` by making this operation transparent to their users. To learn how, review the examples from a [module maintainer](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_cluster_to_advanced_cluster/module_maintainer) and [module user](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_cluster_to_advanced_cluster/module_user) point of view.
@@ -51,12 +50,11 @@ If you are managing the resource directly, see [this example](https://github.com
The basic experience when using the `moved` block is as follows:
1. Before starting, run `terraform plan` to make sure that there are no planned changes.
2. Add the `mongodbatlas_advanced_cluster` resource definition.
- - Set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true` in order to use the Preview for MongoDB Atlas Provider 2.0.0. You can also define the environment variable in your local development environment so your tools can use the new format and help you with linting and auto-completion.
- - You can use the [Atlas CLI plugin](https://github.com/mongodb-labs/atlas-cli-plugin-terraform) to generate the `mongodbatlas_advanced_cluster` resource definition. This is the recommended method as it will generate a clean configuration while keeping the original Terraform expressions. Please be aware of the [plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform#limitations), see the [section below](#alternatives-to-using-the-mongodb-atlas-cli-plugin-to-generate-the-mongodbatlas_advanced_cluster-resource-definition) for the available alternatives.
+ - You can use the [Atlas CLI plugin](https://github.com/mongodb-labs/atlas-cli-plugin-terraform?tab=readme-ov-file#1-clustertoadvancedcluster-clu2adv) to generate the `mongodbatlas_advanced_cluster` resource definition. This is the recommended method as it will generate a clean configuration while keeping the original Terraform expressions. Please be aware of the [plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/blob/main/docs/command_clu2adv.md#limitations), see the [section below](#alternatives-to-using-the-mongodb-atlas-cli-plugin-to-generate-the-mongodbatlas_advanced_cluster-resource-definition) for the available alternatives.
3. Comment out or delete the `mongodbatlas_cluster` resource definition.
4. Update the references from your previous cluster resource: `mongodbatlas_cluster.this.XXXX` to the new `mongodbatlas_advanced_cluster.this.XXX`.
- Double check [output-changes](#output-changes) to ensure the underlying configuration stays unchanged.
- - If you are using output variables that use the new resource `mongodbatlas_advanced_cluster.this`, the plan output can be more verbose than expected (extra `Note: Objects have changed outside of Terraform` section). Consider adding/updating output variables only **after** performing the move (see more in the [Github Issue](https://github.com/hashicorp/terraform/issues/36796).
+ - If you use output variables that use the new resource `mongodbatlas_advanced_cluster.this`, the plan output can be more verbose than expected (extra `Note: Objects have changed outside of Terraform` section). Consider adding or updating output variables only **after** performing the move (see more in the [Github Issue](https://github.com/hashicorp/terraform/issues/36796)).
5. Add the `moved` block to your configuration file, e.g.:
```terraform
moved {
@@ -80,7 +78,7 @@ moved {
## Migration using import
-**Note**: We recommend the [`moved` block](#migration-using-the-moved-block-recommended) method as it's more convenient and less error-prone. If you continue with this method, you can still use the Preview for MongoDB Atlas Provider 2.0.0 by setting the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true` to avoid having to migrate again in the future.
+**Note**: We recommend the [`moved` block](#migration-using-the-moved-block-recommended) method as it's more convenient and less error-prone.
This method uses [Terraform native tools](https://developer.hashicorp.com/terraform/language/import/generating-configuration) and works if you:
1. Have an existing cluster without any Terraform configuration and want to import and manage your cluster with Terraform.
@@ -112,17 +110,14 @@ import {
# ....
backup_enabled = true
cluster_type = "REPLICASET"
- disk_size_gb = 10
name = "legacy-cluster"
project_id = "664619d870c247237f4b86a6"
state_name = "IDLE"
termination_protection_enabled = false
version_release_system = "LTS"
- advanced_configuration {
- default_read_concern = null
+ advanced_configuration = {
default_write_concern = null
- fail_index_key_too_long = false
javascript_enabled = true
minimum_enabled_tls_protocol = "TLS1_2"
no_table_scan = false
@@ -133,19 +128,18 @@ import {
transaction_lifetime_limit_seconds = 0
}
- replication_specs {
+ replication_specs = [{
container_id = {
"AWS:US_EAST_1" = "669644ae01bf814e3d25b963"
}
- id = "66978026668b7619f6f48cf2"
zone_name = "ZoneName managed by Terraform"
- region_configs {
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- auto_scaling {
+ auto_scaling = {
compute_enabled = false
compute_max_instance_size = null
compute_min_instance_size = null
@@ -153,24 +147,24 @@ import {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
disk_iops = 3000
ebs_volume_type = null
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
disk_iops = 3000
ebs_volume_type = null
instance_size = "M10"
node_count = 1
}
- }
- }
+ }]
+ }]
}
```
This file includes all configurable values in the schema, but none of the previous configuration defined for your `mongodbatlas_cluster`. Therefore, the new configuration will likely be a lot more verbose and contain none of your original [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions).
-Alternatively you can use the [Atlas CLI plugin](https://github.com/mongodb-labs/atlas-cli-plugin-terraform) to generate the `mongodbatlas_advanced_cluster` resource definition from a `mongodbatlas_cluster` definition. This will generate a clean configuration keeping the original Terraform expressions. Please be aware of the [plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform#limitations) and always review the generated configuration.
+Alternatively you can use the [Atlas CLI plugin](https://github.com/mongodb-labs/atlas-cli-plugin-terraform?tab=readme-ov-file#1-clustertoadvancedcluster-clu2adv) to generate the `mongodbatlas_advanced_cluster` resource definition from a `mongodbatlas_cluster` definition. This will generate a clean configuration keeping the original Terraform expressions. Please be aware of the [plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/blob/main/docs/command_clu2adv.md#limitations) and always review the generated configuration.
5. Update the references from your previous cluster resource: `mongodbatlas_cluster.this.XXXX` to the new `mongodbatlas_advanced_cluster.this.XXX`.
- Double check [output-changes](#output-changes) to ensure the underlying configuration stays unchanged.
6. Run `terraform apply`. You should see the resource(s) imported: `Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.`
@@ -221,37 +215,6 @@ resource "mongodbatlas_cluster" "this" {
### Example 2: New Configuration (`mongodbatlas_advanced_cluster`)
-```terraform
-resource "mongodbatlas_advanced_cluster" "this" {
- project_id = var.project_id
- name = "advanced-cluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # 4 Backup Configuration
-
- replication_specs {
- region_configs {
- auto_scaling { # 3 Auto Scaling
- disk_gb_enabled = true
- }
- region_name = "US_EAST_1"
- priority = 7
- provider_name = "AWS" # 2 Provider Settings
-
- electable_specs { # 1 Replication Spec Configuration
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs { # 1 Replication Spec Configuration
- instance_size = "M10"
- node_count = 1
- }
- }
- }
-}
-```
-
-### Example 3: New Configuration (`mongodbatlas_advanced_cluster`) using the Preview of MongoDB Atlas Provider 2.0.0
-
```terraform
resource "mongodbatlas_advanced_cluster" "this" {
project_id = var.project_id
@@ -300,6 +263,7 @@ resource "mongodbatlas_advanced_cluster" "this" {
- `id`:
- Before: `id` in the `mongodbatlas_cluster` resource had an internal encoded resource identifier. `id` in the data source had the Atlas cluster id.
- After: Use `cluster_id` attribute instead to get the Atlas cluster id.
+- [These attributes](migrate-to-advanced-cluster-2.0#configuration-changes-when-upgrading-datamongodbatlas_advanced_cluster-and-datamongodbatlas_advanced_clusters-from-v1x) are no longer supported in `mongodbatlas_advanced_cluster`. References to these attributes must be removed.
## Alternatives to using the MongoDB Atlas CLI plugin to generate the `mongodbatlas_advanced_cluster` resource definition
@@ -307,6 +271,6 @@ While the [Atlas CLI plugin](https://github.com/mongodb-labs/atlas-cli-plugin-te
- **Option 1**: Follow the steps 3. and 4. of the ["migration using import"](#migration-using-import) section by temporarily adding an `import block` and executing the `terraform plan -generate-config-out=adv_cluster.tf` command. Once you have the generated configuration for `mongodbatlas_advanced_cluster` you can use it in your configuration files and remove the `import block`. **Note**: Terraform modules don't support `import` blocks so this option is not possible if you are a module maintainer.
-- **Option 2**: Simplify your `mongodbatlas_cluster` resource definition by removing the [Atlas CLI plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform#limitations). Given the output, proceed with restoring the remaining configuration in the `mongodbatlas_advanced_cluster` resource.
+- **Option 2**: Simplify your `mongodbatlas_cluster` resource definition by removing the parts affected by the [Atlas CLI plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/blob/main/docs/command_clu2adv.md#limitations). Then restore the remaining configuration in the `mongodbatlas_advanced_cluster` resource.
-- **Option 3**: Generate the new configuration for `mongodbatlas_advanced_cluster` manually, looking at the examples we provide in our [resource documentation page](../resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+- **Option 3**: Generate the new configuration for `mongodbatlas_advanced_cluster` manually, looking at the examples we provide in our [resource documentation page](../resources/advanced_cluster).
diff --git a/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md b/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md
index 466eb03107..e4690c6184 100644
--- a/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md
+++ b/docs/guides/flex-cluster-to-dedicated-cluster-migraton-guide.md
@@ -36,22 +36,22 @@ Complete the following procedure to resolves the configuration drift in Terrafor
cluster_type = "REPLICASET"
name = "clusterName"
project_id = "664619d870c247237f4b86a6"
- replication_specs {
+ replication_specs = [{
zone_name = "Zone 1"
- region_configs {
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_WEST_1"
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 0
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
```
6. Re-use existing [Terraform expressions](https://developer.hashicorp.com/terraform/language/expressions). All fields in the generated configuration have static values. Look in your previous configuration for:
diff --git a/docs/guides/migrate-to-advanced-cluster-2.0.md b/docs/guides/migrate-to-advanced-cluster-2.0.md
new file mode 100644
index 0000000000..6b9e66e85d
--- /dev/null
+++ b/docs/guides/migrate-to-advanced-cluster-2.0.md
@@ -0,0 +1,177 @@
+---
+page_title: "Migration Guide: Advanced Cluster (v1.x → v2.0.0)"
+---
+
+# Migration Guide: Advanced Cluster (v1.x → v2.0.0)
+
+This guide helps you migrate from the legacy schema of the `mongodbatlas_advanced_cluster` resource to the new schema introduced in v2.0.0 of the provider. The new implementation uses:
+
+1. The recommended Terraform Plugin Framework, which, in addition to providing a better user experience and other features, adds support for the `moved` block between different resource types.
+2. New sharding configurations that support scaling shards independently (see the [Migration Guide: Advanced Cluster New Sharding Configurations](advanced-cluster-new-sharding-schema#migration-sharded)).
+
+~> **IMPORTANT:** A preview of the new schema was already available in versions 1.29.0 and later by setting the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`. If you are already using the new schema preview with the new sharding configurations **and not using deprecated attributes**, no additional changes are required; the environment variable is simply no longer needed.
+
+## Configuration changes when upgrading from v1.x
+
+This section describes the configuration changes between the legacy `mongodbatlas_advanced_cluster` schema and the new one released in v2.0.0.
+
+1. The following deprecated attributes have been removed:
+- `id`
+- `disk_size_gb`
+- `replication_specs.#.num_shards`
+- `replication_specs.#.id`
+- `advanced_configuration.default_read_concern`
+- `advanced_configuration.fail_index_key_too_long`
+
+2. Elements `replication_specs` and `region_configs` are now list attributes instead of blocks, so each is an array of objects. Even if there is only one object, it still needs to be in an array. For example,
+```terraform
+replication_specs {
+ region_configs {
+ electable_specs {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_WEST_1"
+ }
+ region_configs {
+ electable_specs {
+ instance_size = "M10"
+ node_count = 2
+ }
+ provider_name = "AWS"
+ priority = 6
+ region_name = "US_EAST_1"
+ }
+}
+```
+goes to:
+```terraform
+replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_WEST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ provider_name = "AWS"
+ priority = 6
+ region_name = "US_EAST_1"
+ }
+ ]
+ }
+]
+```
+
+3. `mongodbatlas_advanced_cluster` now supports only the new sharding configuration that allows scaling shards independently. If your configuration defines the `num_shards` attribute (removed in 2.0.0), please also see the [Migration Guide: Advanced Cluster New Sharding Configurations](advanced-cluster-new-sharding-schema#migration-sharded).
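+For example, a legacy two-shard configuration that used `num_shards = 2` is expressed in the new schema as one `replication_specs` element per shard (instance sizes and regions below are illustrative):
+```terraform
+replication_specs = [
+  { # shard 1
+    region_configs = [{
+      electable_specs = {
+        instance_size = "M30"
+        node_count    = 3
+      }
+      provider_name = "AWS"
+      priority      = 7
+      region_name   = "US_EAST_1"
+    }]
+  },
+  { # shard 2, previously implied by num_shards = 2
+    region_configs = [{
+      electable_specs = {
+        instance_size = "M30"
+        node_count    = 3
+      }
+      provider_name = "AWS"
+      priority      = 7
+      region_name   = "US_EAST_1"
+    }]
+  }
+]
+```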
+
+4. Elements `connection_strings`, `timeouts`, `advanced_configuration`, `bi_connector_config`, `pinned_fcv`, `electable_specs`, `read_only_specs`, `analytics_specs`, `auto_scaling` and `analytics_auto_scaling` are now single attributes instead of blocks, so each is an object. For example,
+```terraform
+advanced_configuration {
+ default_write_concern = "majority"
+ javascript_enabled = true
+}
+```
+goes to:
+```terraform
+advanced_configuration = {
+ default_write_concern = "majority"
+ javascript_enabled = true
+}
+```
+In references to them, the `[0]` or `.0` index is dropped. For example,
+```terraform
+output "standard" {
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard
+}
+output "javascript_enabled" {
+ value = mongodbatlas_advanced_cluster.cluster.advanced_configuration.0.javascript_enabled
+}
+```
+goes to:
+```terraform
+output "standard" {
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard
+}
+output "javascript_enabled" {
+ value = mongodbatlas_advanced_cluster.cluster.advanced_configuration.javascript_enabled
+}
+```
+
+5. Elements `tags` and `labels` are now maps instead of blocks. For example,
+```terraform
+tags {
+ key = "env"
+ value = "dev"
+}
+tags {
+ key = "tag 2"
+ value = "val"
+}
+tags {
+ key = var.tag_key
+ value = "another_val"
+}
+
+```
+goes to:
+```terraform
+tags = {
+  env = "dev" # quotes are optional for keys without blanks
+  "tag 2" = "val" # keys containing blanks must be quoted
+  (var.tag_key) = "another_val" # wrap key expressions in parentheses so they are evaluated
+}
+```
+
+6. The `id` attribute, which was an internal encoded resource identifier, has been removed. Use `cluster_id` instead.
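+For example, an output that previously read the encoded `id` now reads `cluster_id` (the resource name `this` is illustrative):
+```terraform
+output "cluster_id" {
+  # before: mongodbatlas_advanced_cluster.this.id
+  value = mongodbatlas_advanced_cluster.this.cluster_id
+}
+```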
+
+### Configuration changes when upgrading `data.mongodbatlas_advanced_cluster` and `data.mongodbatlas_advanced_clusters` from v1.x
+
+1. The following deprecated attributes have been removed (same as the resource):
+ - `id`
+ - `disk_size_gb`
+ - `replication_specs.#.num_shards`
+ - `replication_specs.#.id`
+ - `advanced_configuration.default_read_concern`
+ - `advanced_configuration.fail_index_key_too_long`
+
+2. The deprecated `use_replication_spec_per_shard` attribute has been removed. The data sources now return only the new sharding configuration of the clusters.
+
+3. The `id` attribute, which was an internal encoded resource identifier, has been removed. Use `cluster_id` instead.
+
+
+## How to migrate
+
+If you currently use `mongodbatlas_cluster`, see our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
+
+If you currently use `mongodbatlas_advanced_cluster` from v1.x.x of our provider, we recommend the following steps:
+
+~> **IMPORTANT:** Before you migrate, create a backup of your [Terraform state file](https://developer.hashicorp.com/terraform/cli/commands/state). The state file will update to the new format and the old format will no longer be supported.
+
+After you upgrade from v1.x.x to v2.0.0+, running `terraform plan` returns syntax errors. This is expected because the configuration hasn't been updated to the latest schema yet. At this point, update the configuration by following all of the steps below, and finally run `terraform apply`:
+
+- **Step #1:** Apply definition changes [explained on this page](#configuration-changes-when-upgrading-from-v1x) until there are no errors and no planned changes.
+ - **[Recommended]** You can also use the [Atlas CLI plugin](https://github.com/mongodb-labs/atlas-cli-plugin-terraform?tab=readme-ov-file#2-advancedclustertov2-adv2v2) to generate the `mongodbatlas_advanced_cluster` resource definition. This is the recommended method as it will generate a clean configuration while keeping the original Terraform expressions. Please be aware of the [plugin limitations](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/blob/main/docs/command_adv2v2.md#limitations).
+
+- **Step #2:** Remove any deprecated attributes (and their references) mentioned [above](#configuration-changes-when-upgrading-from-v1x).
+
+~> NOTE: For nested attributes that have been removed, such as `replication_specs.#.num_shards`, Terraform may NOT throw an explicit error even if these attributes are left in the configuration. This is a [known Terraform issue](https://github.com/hashicorp/terraform-plugin-framework/issues/1210). Make sure to remove any such attributes from the configuration to avoid confusion.
+
+~> **IMPORTANT:** Don't apply until the plan is empty. If it shows other changes, you must update the `mongodbatlas_advanced_cluster` configuration until it matches the original configuration.
+
+- **Step #3:** Even though there are no plan changes shown at this point, run `terraform apply`. This will update the `mongodbatlas_advanced_cluster` state to support the new schema.
+
+## Important notes
+
+Please refer to our [Considerations and Best Practices](#considerations-and-best-practices) section for additional guidance on this resource.
diff --git a/docs/guides/serverless-shared-migration-guide.md b/docs/guides/serverless-shared-migration-guide.md
index 4e36b534fe..3372c07a1f 100644
--- a/docs/guides/serverless-shared-migration-guide.md
+++ b/docs/guides/serverless-shared-migration-guide.md
@@ -115,17 +115,17 @@ The following steps resolve the configuration drift in Terraform without affecti
name = "freeClusterName"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M0"
}
provider_name = "TENANT"
backing_provider_name = "AWS"
region_name = "US_EAST_1"
priority = 7
- }
- }
+ }]
+ }]
}
```
2. Run `terraform apply` to create the new resource.
diff --git a/docs/index.md b/docs/index.md
index e2c36df679..7731a957a9 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -5,10 +5,7 @@ The provider needs to be configured with the proper credentials before it can be
Use the navigation to the left to read about the available provider resources and data sources.
-You may want to consider pinning the [provider version](https://www.terraform.io/docs/configuration/providers.html#provider-versions) to ensure you have a chance to review and prepare for changes.
-Speaking of changes, see [CHANGELOG](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/CHANGELOG.md) for current version information.
-
-For the best experience, we recommend using the latest [HashiCorp Terraform Core Version](https://github.com/hashicorp/terraform). For more details see [HashiCorp Terraform Version Compatibility Matrix](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs#hashicorp-terraform-versionhttpswwwterraformiodownloadshtml-compatibility-matrix).
+See [CHANGELOG](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/CHANGELOG.md) for current version information.
## Example Usage
@@ -20,6 +17,13 @@ provider "mongodbatlas" {
}
# Create the resources
```
+
+### Provider and Terraform version constraints
+
+We recommend that you pin your Atlas [provider version](https://developer.hashicorp.com/terraform/language/providers/requirements#version) to at least the [major version](#versioning-strategy) (e.g. `~> 2.0`) to avoid accidental upgrades to incompatible new versions. Starting with `2.0.0`, the [MongoDB Atlas Provider Versioning Policy](#mongodb-atlas-provider-versioning-policy) ensures that minor and patch versions do not include [Breaking Changes](#definition-of-breaking-changes).
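+For example, a `required_providers` block pinning the provider to the 2.x major version:
+```terraform
+terraform {
+  required_providers {
+    mongodbatlas = {
+      source  = "mongodb/mongodbatlas"
+      version = "~> 2.0"
+    }
+  }
+}
+```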
+
+For Terraform itself, we recommend using the latest [HashiCorp Terraform Core Version](https://github.com/hashicorp/terraform). For more details, see the [HashiCorp Terraform Version Compatibility Matrix](#hashicorp-terraform-version-compatibility-matrix).
+
## Configure Atlas Programmatic Access
In order to set up authentication with the MongoDB Atlas provider, you must generate a programmatic API key for MongoDB Atlas with the appropriate [role](https://docs.atlas.mongodb.com/reference/user-roles/).
diff --git a/docs/resources/access_list_api_key.md b/docs/resources/access_list_api_key.md
index c91a00c87f..f43a6ddbdf 100644
--- a/docs/resources/access_list_api_key.md
+++ b/docs/resources/access_list_api_key.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Resource: mongodbatlas_access_list_api_key
`mongodbatlas_access_list_api_key` provides an IP Access List entry resource. The access list grants access from IPs, CIDRs or AWS Security Groups (if VPC Peering is enabled) to clusters within the Project.
@@ -32,6 +36,9 @@ resource "mongodbatlas_access_list_api_key" "test" {
}
```
+### Further Examples
+- [Create Programmatic API Key](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_api_key)
+
## Argument Reference
* `org_id` - (Required) Unique 24-hexadecimal digit string that identifies the organization that contains your projects.
diff --git a/docs/resources/advanced_cluster (preview provider 2.0.0).md b/docs/resources/advanced_cluster (preview provider 2.0.0).md
deleted file mode 100644
index 4722982caf..0000000000
--- a/docs/resources/advanced_cluster (preview provider 2.0.0).md
+++ /dev/null
@@ -1,1022 +0,0 @@
-# Resource: mongodbatlas_advanced_cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-`mongodbatlas_advanced_cluster` provides an Advanced Cluster resource. The resource lets you create, edit and delete advanced clusters. The resource requires your Project ID.
-
-This page describes the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`, the page for the current version can be found [here](./advanced_cluster).
-
-## How to enable
-
-In order to enable the Preview for MongoDB Atlas Provider 2.0.0 for `mongodbatlas_advanced_cluster`, set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`. This will allow you to use the new `mongodbatlas_advanced_cluster` resource. You can also define the environment variable in your local development environment so your tools can use the new format and help you with linting and auto-completion.
-
-This environment variable only affects the `mongodbatlas_advanced_cluster` resource and corresponding data sources. It doesn't affect other resources. `mongodbatlas_advanced_cluster` definition will use the new format and new features like `moved block` from `mongodbatlas_cluster` to `mongodbatlas_advanced_cluster` will be available.
-
-## Configuration changes
-
-In this section you can find the configuration changes between the current `mongodbatlas_advanced_cluster` and the one in Preview for MongoDB Atlas Provider 2.0.0.
-
-1. Elements `replication_specs` and `region_configs` are now list attributes instead of blocks so they are an array of objects. If there is only one object, it still needs to be in an array. For example,
-```terraform
-replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_1"
- }
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "AWS"
- priority = 6
- region_name = "US_EAST_1"
- }
-}
-```
-goes to:
-```terraform
-replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_1"
- },
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "AWS"
- priority = 6
- region_name = "US_EAST_1"
- }
- ]
- }
-]
-```
-
-2. Elements `connection_strings`, `timeouts`, `advanced_configuration`, `bi_connector_config`, `pinned_fcv`, `electable_specs`, `read_only_specs`, `analytics_specs`, `auto_scaling` and `analytics_auto_scaling` are now single attributes instead of blocks so they are an object. For example,
-```terraform
-advanced_configuration {
- default_write_concern = "majority"
- javascript_enabled = true
-}
-```
-goes to:
-```terraform
-advanced_configuration = {
- default_write_concern = "majority"
- javascript_enabled = true
-}
-```
-If there are references to them, `[0]` or `.0` are dropped. For example,
-```terraform
-output "standard" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard
-}
-output "javascript_enabled" {
- value = mongodbatlas_advanced_cluster.cluster.advanced_configuration.0.javascript_enabled
-}
-```
-goes to:
-```terraform
-output "standard" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard
-}
-output "javascript_enabled" {
- value = mongodbatlas_advanced_cluster.cluster.advanced_configuration.javascript_enabled
-}
-```
-
-3. Elements `tags` and `labels` are now `maps` instead of `blocks`. For example,
-```terraform
-tags {
- key = "env"
- value = "dev"
-}
-tags {
- key = "tag 2"
- value = "val"
-}
-tags {
- key = var.tag_key
- value = "another_val"
-}
-
-```
-goes to:
-```terraform
-tags = {
- env = "dev" # key strings without blanks can be enclosed in quotes but not required
- "tag 2" = "val" # enclose key strings with blanks in quotes
- (var.tag_key) = "another_val" # enclose key expressions in brackets so they can be evaluated
-}
-```
-
-## How to migrate
-
-If you're currently utilizing `mongodbatlas_cluster`, see our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
-
-If you're currently utilizing `mongodbatlas_advanced_cluster`, you may also proactively address the upcoming breaking changes that will affect all `mongodbatlas_advanced_cluster` resources when the next major provider version, 2.0.0, is released (timeline yet to be announced).
-If you decide to go ahead, we recommend to follow these steps in order:
-
-1. If you are using the deprecated sharding configuration (with [`num_shards`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#num_shards-1)), you should first migrate to the new [Independent Shard Scaling]((https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema)) schema. See our [Advanced Cluster New Sharding Configurations Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema) for details.
-
-~> **IMPORTANT:** Before doing any migration, create a backup of your [Terraform state file](https://developer.hashicorp.com/terraform/cli/commands/state). The state file will be updated to the new format and the old format will no longer be supported.
-
-2. Enable the Preview for MongoDB Atlas Provider 2.0.0 by following these steps:
- - Run `terraform plan` to make sure that there are no planned changes.
- - Set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true` in order to use the Preview for MongoDB Atlas Provider 2.0.0.
- - If you run `terraform plan` again, you'll see syntax errors: this is expected since the definition file hasn't been updated yet using the latest schema.
- - At this point, you can apply definition changes [explained on this page](#configuration-changes) until there are no errors and no planned changes. **Important**: Don't apply until the plan is empty. If it shows other changes, you must update the `mongodbatlas_advanced_cluster` configuration until it matches the original configuration.
- - Run `terraform apply` to apply the changes. Although there are no plan changes shown to the user, the `mongodbatlas_advanced_cluster` state will be updated to support the Preview for MongoDB Atlas Provider 2.0.0.
-
-~> **IMPORTANT:** If you migrate to the [Preview for MongoDB Atlas Provider 2.0.0](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529) while still using the [deprecated sharding configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), you will be required to perform the migration to the new [Independent Shard Scaling](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema) schema when version 2.0.0 is released.
-
-## Important notes
-
-Please refer to our [Considerations and Best Practices](#considerations-and-best-practices) section for additional guidance on this resource.
-
-~> **IMPORTANT:** We recommend all new MongoDB Atlas Terraform users start with the [`mongodbatlas_advanced_cluster`](advanced_cluster) resource. Key differences between [`mongodbatlas_cluster`](cluster) and [`mongodbatlas_advanced_cluster`](advanced_cluster) include support for [Multi-Cloud Clusters](https://www.mongodb.com/blog/post/introducing-multicloud-clusters-on-mongodb-atlas), [Asymmetric Sharding](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), and [Independent Scaling of Analytics Node Tiers](https://www.mongodb.com/blog/post/introducing-ability-independently-scale-atlas-analytics-node-tiers). For existing [`mongodbatlas_cluster`](cluster) resource users see our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
-
--> **NOTE:** If Backup Compliance Policy is enabled for the project for which this backup schedule is defined, you cannot modify the backup schedule for an individual cluster below the minimum requirements set in the Backup Compliance Policy. See [Backup Compliance Policy Prohibited Actions and Considerations](https://www.mongodb.com/docs/atlas/backup/cloud-backup/backup-compliance-policy/#configure-a-backup-compliance-policy).
-
--> **NOTE:** A network container is created for each provider/region combination on the advanced cluster. This can be referenced via a computed attribute for use with other resources. Refer to the `replication_specs[#].container_id` attribute in the [Attributes Reference](#attributes_reference) for more information.
-
--> **NOTE:** To enable Cluster Extended Storage Sizes use the `is_extended_storage_sizes_enabled` parameter in the [mongodbatlas_project resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/project).
-
--> **NOTE:** The Low-CPU instance clusters are prefixed with `R`, for example `R40`. For complete list of Low-CPU instance clusters see Cluster Configuration Options under each [Cloud Provider](https://www.mongodb.com/docs/atlas/reference/cloud-providers).
-
--> **NOTE:** Groups and projects are synonymous terms. You might find group_id in the official documentation.
-
--> **NOTE:** This resource supports Flex clusters. Additionally, you can upgrade [M0 clusters to Flex](#example-tenant-cluster-upgrade-to-flex) and [Flex clusters to Dedicated](#Example-Flex-Cluster-Upgrade). When creating a Flex cluster, make sure to set the priority value to 7.
-
-## Example Usage
-
-### Example single provider and single region
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "test" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs = {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
- ]
- }
- ]
-}
-```
-
-### Example Tenant Cluster
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "test" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M0"
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-```
-
-**NOTE**: Upgrading the tenant cluster to a Flex cluster or a dedicated cluster is supported. When upgrading to a Flex cluster, change the `provider_name` from "TENANT" to "FLEX". See [Example Tenant Cluster Upgrade to Flex](#example-tenant-cluster-upgrade-to-flex) below. When upgrading to a dedicated cluster, change the `provider_name` to your preferred provider (AWS, GCP or Azure) and remove the variable `backing_provider_name`. See the [Example Tenant Cluster Upgrade](#Example-Tenant-Cluster-Upgrade) below. You can upgrade a tenant cluster only to a single provider on an M10-tier cluster or greater.
-
-When upgrading from the tenant, *only* the upgrade changes will be applied. This helps avoid a corrupt state file in the event that the upgrade succeeds but subsequent updates fail within the same `terraform apply`. To apply additional cluster changes, run a secondary `terraform apply` after the upgrade succeeds.
-
-
-### Example Tenant Cluster Upgrade
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "test" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- }
- provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-```
-
-### Example Tenant Cluster Upgrade to Flex
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example-flex" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-```
-
-### Example Flex Cluster
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "example-flex" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-```
-
-**NOTE**: Upgrading the Flex cluster is supported. When upgrading from a Flex cluster, change the `provider_name` from "TENANT" to your preferred provider (AWS, GCP or Azure) and remove the variable `backing_provider_name`. See the [Example Flex Cluster Upgrade](#Example-Flex-Cluster-Upgrade) below. You can upgrade a Flex cluster only to a single provider on an M10-tier cluster or greater.
-
-When upgrading from a flex cluster, *only* the upgrade changes will be applied. This helps avoid a corrupt state file in the event that the upgrade succeeds but subsequent updates fail within the same `terraform apply`. To apply additional cluster changes, run a secondary `terraform apply` after the upgrade succeeds.
-
-
-### Example Flex Cluster Upgrade
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "test" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- }
- provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-}
-```
-
-### Example Multi-Cloud Cluster
-```terraform
-resource "mongodbatlas_advanced_cluster" "test" {
- project_id = "PROJECT ID"
- name = "NAME OF CLUSTER"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs = {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "GCP"
- priority = 6
- region_name = "NORTH_AMERICA_NORTHEAST_1"
- }
- ]
- }
- ]
-}
-```
-### Example of a Multi-Cloud Sharded Cluster with 2 shards
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = var.cluster_name
- cluster_type = "SHARDED"
- backup_enabled = true
-
- replication_specs = [
- { # shard 1
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- ]
- },
- { # shard 2
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- ]
- }
- ]
-
- advanced_configuration = {
- javascript_enabled = true
- oplog_size_mb = 991
- sample_refresh_interval_bi_connector = 300
- }
-}
-```
-
-### Example of a Global Cluster with 2 zones
-```terraform
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = var.cluster_name
- cluster_type = "GEOSHARDED"
- backup_enabled = true
-
- replication_specs = [
- { # shard 1 - zone n1
- zone_name = "zone n1"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- ]
- },
- { # shard 2 - zone n1
- zone_name = "zone n1"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- ]
- },
- { # shard 1 - zone n2
- zone_name = "zone n2"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
- }
- ]
- },
- { # shard 2 - zone n2
- zone_name = "zone n2"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
- }
- ]
- }
- ]
-
- advanced_configuration = {
- javascript_enabled = true
- oplog_size_mb = 999
- sample_refresh_interval_bi_connector = 300
- }
-}
-```
-
-
-### Example - Return a Connection String
-Standard
-```terraform
-output "standard" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard
-}
-# Example return string: standard = "mongodb://cluster-atlas-shard-00-00.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-01.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-02.ygo1m.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-12diht-shard-0"
-```
-Standard SRV
-```terraform
-output "standard_srv" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard_srv
-}
-# Example return string: standard_srv = "mongodb+srv://cluster-atlas.ygo1m.mongodb.net"
-```
-Private with Network peering and Custom DNS AWS enabled
-```terraform
-output "private" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings.private
-}
-# Example return string: private = "mongodb://cluster-atlas-shard-00-00-pri.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-01-pri.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-02-pri.ygo1m.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-12diht-shard-0"
-```
-Private SRV with Network peering and Custom DNS AWS enabled
-```terraform
-output "private_srv" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings.private_srv
-}
-# Example return string: private_srv = "mongodb+srv://cluster-atlas-pri.ygo1m.mongodb.net"
-```
-
-By endpoint_service_id
-```terraform
-locals {
- endpoint_service_id = google_compute_network.default.name
- private_endpoints = coalesce(mongodbatlas_advanced_cluster.cluster.connection_strings.private_endpoint, [])
- connection_strings = [
- for pe in local.private_endpoints : pe.srv_connection_string
- if contains([for e in pe.endpoints : e.endpoint_id], local.endpoint_service_id)
- ]
-}
-output "endpoint_service_connection_string" {
- value = length(local.connection_strings) > 0 ? local.connection_strings[0] : ""
-}
-# Example return string: connection_string = "mongodb+srv://cluster-atlas-pl-0.ygo1m.mongodb.net"
-```
-Refer to the following for full privatelink endpoint connection string examples:
-* [GCP Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/gcp)
-* [Azure Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/azure)
-* [AWS, Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster)
-* [AWS, Regionalized Private Endpoints](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded)
-
-## Argument Reference
-
-* `project_id` - (Required) Unique ID for the project to create the cluster.
-* `name` - (Required) Name of the cluster as it appears in Atlas. Once the cluster is created, its name cannot be changed. **WARNING** Changing the name will result in destruction of the existing cluster and the creation of a new cluster.
-
-* `backup_enabled` - (Optional) Flag that indicates whether the cluster can perform backups.
- If `true`, the cluster can perform backups. You must set this value to `true` for NVMe clusters.
-
- Backup uses:
- [Cloud Backups](https://docs.atlas.mongodb.com/backup/cloud-backup/overview/#std-label-backup-cloud-provider) for dedicated clusters.
- [Flex Cluster Backups](https://www.mongodb.com/docs/atlas/backup/cloud-backup/flex-cluster-backup/) for flex clusters.
- If `backup_enabled` is `false` (default), the cluster doesn't use Atlas backups.
-
-* `retain_backups_enabled` - (Optional) Set to true to retain backup snapshots for the deleted cluster. This parameter applies to the Delete operation and only affects M10 and above clusters. If you encounter the `CANNOT_DELETE_SNAPSHOT_WITH_BACKUP_COMPLIANCE_POLICY` error code, see [how to delete a cluster with Backup Compliance Policy](../guides/delete-cluster-with-backup-compliance-policy.md).
-
-**NOTE** Prior versions of the provider used the parameter name `bi_connector`. State will be migrated to the new value automatically; you only need to update the parameter name in your Terraform configuration.
-
-* `bi_connector_config` - (Optional) Configuration settings applied to BI Connector for Atlas on this cluster. The MongoDB Connector for Business Intelligence for Atlas (BI Connector) is only available for M10 and larger clusters. The BI Connector is a powerful tool which provides users SQL-based access to their MongoDB databases. As a result, the BI Connector performs operations which may be CPU and memory intensive. Given the limited hardware resources on M10 and M20 cluster tiers, you may experience performance degradation of the cluster when enabling the BI Connector. If this occurs, upgrade to an M30 or larger cluster or disable the BI Connector. See [below](#bi_connector_config).
-* `cluster_type` - (Required) Type of the cluster that you want to create.
- Accepted values include:
- - `REPLICASET` Replica set
- - `SHARDED` Sharded cluster
- - `GEOSHARDED` Global Cluster
-
-* `disk_size_gb` - (Optional) Capacity, in gigabytes, of the host's root volume. Increase this number to add capacity, up to a maximum possible value of 4096 (4 TB). This value must be a positive number. You can't set this value for clusters with local [NVMe SSDs](https://docs.atlas.mongodb.com/cluster-tier/#std-label-nvme-storage). The minimum disk size for dedicated clusters is 10 GB for AWS and GCP. If you specify diskSizeGB with a lower disk size, Atlas defaults to the minimum disk size value. If your cluster includes Azure nodes, this value must correspond to an existing Azure disk type (8, 16, 32, 64, 128, 256, 512, 1024, 2048, or 4095). Atlas calculates storage charges differently depending on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require additional storage space beyond this limitation, consider [upgrading your cluster](https://docs.atlas.mongodb.com/scale-cluster/#std-label-scale-cluster-instance) to a higher tier. If your cluster spans cloud service providers, this value defaults to the minimum default of the providers involved. **(DEPRECATED)** Use `replication_specs[#].region_configs[#].(analytics_specs|electable_specs|read_only_specs).disk_size_gb` instead. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide).
-* `encryption_at_rest_provider` - (Optional) Possible values are AWS, GCP, AZURE or NONE. Only needed if you desire to manage the keys, see [Encryption at Rest using Customer Key Management](https://docs.atlas.mongodb.com/security-kms-encryption/) for complete documentation. You must configure encryption at rest for the Atlas project before enabling it on any cluster in the project. For Documentation, see [AWS](https://docs.atlas.mongodb.com/security-aws-kms/), [GCP](https://docs.atlas.mongodb.com/security-kms-encryption/) and [Azure](https://docs.atlas.mongodb.com/security-azure-kms/#std-label-security-azure-kms). Requirements are if `replication_specs[#].region_configs[#].Specs.instance_size` is M10 or greater and `backup_enabled` is false or omitted.
-* `tags` - (Optional) Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
-* `labels` - (Optional) Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **DEPRECATED** Use `tags` instead.
-* `mongo_db_major_version` - (Optional) Version of the cluster to deploy. Atlas supports all the MongoDB versions that have **not** reached [End of Life](https://www.mongodb.com/legal/support-policy/lifecycles) for M10+ clusters. If omitted, Atlas deploys the cluster with the default version. For more details, see [documentation](https://www.mongodb.com/docs/atlas/reference/faq/database/#which-versions-of-mongodb-do-service-clusters-use-). Atlas always deploys the cluster with the latest stable release of the specified version. If you set a value to this parameter and set `version_release_system` to `CONTINUOUS`, the resource returns an error. Either clear this parameter or set `version_release_system` to `LTS`.
-* `pinned_fcv` - (Optional) Pins the Feature Compatibility Version (FCV) to the current MongoDB version with a provided expiration date. To unpin the FCV the `pinned_fcv` attribute must be removed. This operation can take several minutes as the request processes through the MongoDB data plane. Once FCV is unpinned it will not be possible to downgrade the `mongo_db_major_version`. It is advised that updates to `pinned_fcv` are done isolated from other cluster changes. If a plan contains multiple changes, the FCV change will be applied first. If FCV is unpinned past the expiration date the `pinned_fcv` attribute must be removed. The following [knowledge hub article](https://kb.corp.mongodb.com/article/000021785/) and [FCV documentation](https://www.mongodb.com/docs/atlas/tutorial/major-version-change/#manage-feature-compatibility--fcv--during-upgrades) can be referenced for more details. See [below](#pinned_fcv).
-* `pit_enabled` - (Optional) Flag that indicates if the cluster uses Continuous Cloud Backup.
-* `replication_specs` - List of settings that configure your cluster regions. This attribute has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. If for each `replication_specs` a `num_shards` is configured with a value greater than 1 (using deprecated sharding configurations), then each object represents a zone with one or more shards. The `replication_specs` configuration for all shards within the same zone must be the same, with the exception of `instance_size` and `disk_iops` that can scale independently. Note that independent `disk_iops` values are only supported for AWS provisioned IOPS, or Azure regions that support Extended IOPS. See [below](#replication_specs).
-* `root_cert_type` - (Optional) - Certificate Authority that MongoDB Atlas clusters use. You can specify ISRGROOTX1 (for ISRG Root X1).
-* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
-* `version_release_system` - (Optional) - Release cadence that Atlas uses for this cluster. This parameter defaults to `LTS`. If you set this field to `CONTINUOUS`, you must omit the `mongo_db_major_version` field. Atlas accepts:
- - `CONTINUOUS`: Atlas creates your cluster using the most recent MongoDB release. Atlas automatically updates your cluster to the latest major and rapid MongoDB releases as they become available.
- - `LTS`: Atlas creates your cluster using the latest patch release of the MongoDB version that you specify in the mongoDBMajorVersion field. Atlas automatically updates your cluster to subsequent patch releases of this MongoDB version. Atlas doesn't update your cluster to newer rapid or major MongoDB releases as they become available.
-* `paused` (Optional) - Flag that indicates whether the cluster is paused or not. You can pause M10 or larger clusters. You cannot initiate pausing for a shared/tenant tier cluster. If you try to update a `paused` cluster you will get a `CANNOT_UPDATE_PAUSED_CLUSTER` error. See [Considerations for Paused Clusters](https://docs.atlas.mongodb.com/pause-terminate-cluster/#considerations-for-paused-clusters).
- **NOTE** Pause lasts for up to 30 days. If you don't resume the cluster within 30 days, Atlas resumes the cluster. When the cluster resumes, Terraform detects the changed state. If you wish to keep the cluster paused, reapply your Terraform configuration. If you prefer to allow the automated change of state to unpaused, use:
- `lifecycle {
- ignore_changes = [paused]
- }`
-* `timeouts`- (Optional) The duration of time to wait for Cluster to be created, updated, or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Advanced Cluster create & delete is `3h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
-* `accept_data_risks_and_force_replica_set_reconfig` - (Optional) If reconfiguration is necessary to regain a primary due to a regional outage, submit this field alongside your topology reconfiguration to request a new regional outage resistant topology. Forced reconfigurations during an outage of the majority of electable nodes carry a risk of data loss if replicated writes (even majority committed writes) have not been replicated to the new primary node. MongoDB Atlas docs contain more information. To proceed with an operation which carries that risk, set `accept_data_risks_and_force_replica_set_reconfig` to the current date. Learn more about Reconfiguring a Replica Set during a regional outage [here](https://dochub.mongodb.org/core/regional-outage-reconfigure-replica-set).
-* `global_cluster_self_managed_sharding` - (Optional) Flag that indicates if cluster uses Atlas-Managed Sharding (false, default) or Self-Managed Sharding (true). It can only be enabled for Global Clusters (`GEOSHARDED`). It cannot be changed once the cluster is created. Use this mode if you're an advanced user and the default configuration is too restrictive for your workload. If you select this option, you must manually configure the sharding strategy, more information [here](https://www.mongodb.com/docs/atlas/tutorial/create-global-cluster/#select-your-sharding-configuration).
-* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster. Valid values are `WORKLOAD_TYPE`, `SEQUENTIAL` and `NODE_TYPE`. By default, Atlas scales under `WORKLOAD_TYPE`. This mode allows Atlas to scale your analytics nodes in parallel to your operational nodes. When configured as `SEQUENTIAL`, Atlas scales all nodes sequentially. This mode is intended for steady-state workloads and applications performing latency-sensitive secondary reads. When configured as `NODE_TYPE`, Atlas scales your electable nodes in parallel with your read-only and analytics nodes. This mode is intended for large, dynamic workloads requiring frequent and timely cluster tier scaling. This is the fastest scaling strategy, but it might impact latency of workloads when performing extensive secondary reads. [Modify the Replica Set Scaling Mode](https://dochub.mongodb.org/core/scale-nodes)
-* `redact_client_log_data` - (Optional) Flag that enables or disables log redaction, see the [manual](https://www.mongodb.com/docs/manual/administration/monitoring/#log-redaction) for more information. Use this in conjunction with Encryption at Rest and TLS/SSL (Transport Encryption) to assist compliance with regulatory requirements. **Note**: Changing this setting on a cluster will trigger a rolling restart as soon as the cluster is updated.
-* `config_server_management_mode` - (Optional) Config Server Management Mode for creating or updating a sharded cluster. Valid values are `ATLAS_MANAGED` (default) and `FIXED_TO_DEDICATED`. When configured as `ATLAS_MANAGED`, Atlas may automatically switch the cluster's config server type for optimal performance and savings. When configured as `FIXED_TO_DEDICATED`, the cluster will always use a dedicated config server. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-* `delete_on_create_timeout` - (Optional) Flag that indicates whether to delete the cluster if the cluster creation times out. Default is false.
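-
-As an illustrative sketch (values are placeholders, not recommendations), several of the optional arguments above can be combined on a dedicated cluster; `pinned_fcv.expiration_date` expects an ISO 8601 timestamp:
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "this" {
- project_id   = var.project_id
- name         = "cluster-with-options"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- node_count    = 3
- }
- provider_name = "AWS"
- region_name   = "US_EAST_1"
- priority      = 7
- }
- ]
- }
- ]
-
- # Pin the FCV until the given expiration date (illustrative value).
- pinned_fcv = {
- expiration_date = "2025-12-31T00:00:00Z"
- }
-
- termination_protection_enabled = true
- delete_on_create_timeout       = true
-
- # Extend the default 3h create timeout (illustrative value).
- timeouts = {
- create = "4h"
- }
-}
-```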
-
-### bi_connector_config
-
-Specifies BI Connector for Atlas configuration.
-
-```terraform
-bi_connector_config = {
- enabled = true
- read_preference = "secondary"
-}
-```
-
-* `enabled` - (Optional) Specifies whether or not BI Connector for Atlas is enabled on the cluster.
-  - Set to `true` to enable BI Connector for Atlas.
-  - Set to `false` to disable BI Connector for Atlas.
-
-* `read_preference` - (Optional) Specifies the read preference to be used by BI Connector for Atlas on the cluster. Each BI Connector for Atlas read preference contains a distinct combination of [readPreference](https://docs.mongodb.com/manual/core/read-preference/) and [readPreferenceTags](https://docs.mongodb.com/manual/core/read-preference/#tag-sets) options. For details on BI Connector for Atlas read preferences, refer to the [BI Connector Read Preferences Table](https://docs.atlas.mongodb.com/tutorial/create-global-writes-cluster/#bic-read-preferences).
-
- - Set to "primary" to have BI Connector for Atlas read from the primary.
-
- - Set to "secondary" to have BI Connector for Atlas read from a secondary member. Default if there are no analytics nodes in the cluster.
-
- - Set to "analytics" to have BI Connector for Atlas read from an analytics node. Default if the cluster contains analytics nodes.
-
-### Advanced Configuration Options
-
--> **NOTE:** Prior to setting these options please ensure you read https://docs.atlas.mongodb.com/cluster-config/additional-options/.
-
--> **NOTE:** Once you set some `advanced_configuration` attributes, we recommend explicitly setting those attributes to their intended values instead of removing them from the configuration. For example, if you set `javascript_enabled` to `false` and later want to return to the default value (`true`), you must set it back to `true` explicitly instead of removing it.
-
-Include **desired options** within advanced_configuration:
-
-```terraform
-// Nest options within advanced_configuration
- advanced_configuration = {
- javascript_enabled = false
- minimum_enabled_tls_protocol = "TLS1_2"
- }
-```
-
-* `default_read_concern` - (Optional) [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. **(DEPRECATED)** MongoDB 6.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
-* `default_write_concern` - (Optional) [Default level of acknowledgment requested from MongoDB for write operations](https://docs.mongodb.com/manual/reference/write-concern/) set for this cluster. MongoDB 6.0 clusters default to [majority](https://docs.mongodb.com/manual/reference/write-concern/).
-* `fail_index_key_too_long` - **(DEPRECATED)** (Optional) When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
-* `javascript_enabled` - (Optional) When true (default), the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
-* `minimum_enabled_tls_protocol` - (Optional) Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are:
- - TLS1_0
- - TLS1_1
- - TLS1_2
-* `no_table_scan` - (Optional) When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations.
-* `oplog_size_mb` - (Optional) The custom oplog size of the cluster. If omitted, the cluster uses the default oplog size calculated by Atlas.
-* `oplog_min_retention_hours` - (Optional) Minimum retention window for cluster's oplog expressed in hours. A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates.
-* **Note** A minimum oplog retention is required when seeking to change a cluster's class to Local NVMe SSD. To learn more and for the latest guidance, see [`oplogMinRetentionHours`](https://www.mongodb.com/docs/manual/core/replica-set-oplog/#std-label-replica-set-minimum-oplog-size).
-* `sample_size_bi_connector` - (Optional) Number of documents per database to sample when gathering schema information. Defaults to 100. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
-* `sample_refresh_interval_bi_connector` - (Optional) Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300. The specified value must be a positive integer. Available only for Atlas deployments in which BI Connector for Atlas is enabled.
-* `transaction_lifetime_limit_seconds` - (Optional) Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds.
-* `change_stream_options_pre_and_post_images_expire_after_seconds` - (Optional) The minimum pre- and post-image retention time in seconds. This option corresponds to the `changeStreamOptions.preAndPostImages.expireAfterSeconds` cluster parameter. Defaults to `-1` (off). This setting controls the retention policy of change stream pre- and post-images. Pre- and post-images are the versions of a document before and after document modification, respectively. `expireAfterSeconds` controls how long MongoDB retains pre- and post-images. When set to -1 (off), MongoDB uses the default retention policy: pre- and post-images are retained until the corresponding change stream events are removed from the oplog. To set the minimum pre- and post-image retention time, specify an integer value greater than zero. Setting this too low could increase the risk of interrupting Realm sync or triggers processing. This parameter is only supported for MongoDB version 6.0 and above.
-* `default_max_time_ms` - (Optional) Default time limit in milliseconds for individual read operations to complete. This option corresponds to the [defaultMaxTimeMS](https://www.mongodb.com/docs/upcoming/reference/cluster-parameters/defaultMaxTimeMS/) cluster parameter. This parameter is supported only for MongoDB version 8.0 and above.
-* `tls_cipher_config_mode` - (Optional) The TLS cipher suite configuration mode. Valid values include `CUSTOM` or `DEFAULT`. The `DEFAULT` mode uses the default cipher suites. The `CUSTOM` mode allows you to specify custom cipher suites for both TLS 1.2 and TLS 1.3. To unset, this should be set back to `DEFAULT`.
-* `custom_openssl_cipher_config_tls12` - (Optional) The custom OpenSSL cipher suite list for TLS 1.2. This field is only valid when `tls_cipher_config_mode` is set to `CUSTOM`.
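-
-As a sketch with illustrative values (not tuned recommendations), several of these options can be combined in a single `advanced_configuration` block, mirroring the example above:
-
-```terraform
- advanced_configuration = {
-   javascript_enabled                 = false
-   minimum_enabled_tls_protocol       = "TLS1_2"
-   no_table_scan                      = true
-   oplog_min_retention_hours          = 24
-   transaction_lifetime_limit_seconds = 120
-
-   # Retain change stream pre-/post-images for 100 seconds (MongoDB 6.0+).
-   change_stream_options_pre_and_post_images_expire_after_seconds = 100
-
-   # Limit individual read operations to 60 seconds (MongoDB 8.0+).
-   default_max_time_ms = 60000
- }
-```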
-
-
-### tags
-
- ```terraform
- tags = {
- "Key 1" = "Value 1"
- "Key 2" = "Value 2"
- Key3 = "Value 3"
- }
-```
-
-Key-value pairs between 1 and 255 characters in length for tagging and categorizing the cluster.
-
-* `key` - (Required) Constant that defines the set of the tag.
-* `value` - (Required) Variable that belongs to the set of the tag.
-
-To learn more, see [Resource Tags](https://dochub.mongodb.org/core/add-cluster-tag-atlas).
-
-### labels
-
- ```terraform
- labels = {
- "Key 1" = "Value 1"
- "Key 2" = "Value 2"
- Key3 = "Value 3"
- }
-```
-
-Key-value pairs that categorize the cluster. Each key and value has a maximum length of 255 characters. You cannot set the key `Infrastructure Tool`; it is used for internal purposes to track aggregate usage.
-
-* `key` - The key that you want to write.
-* `value` - The value that you want to write.
-
--> **NOTE:** MongoDB Atlas doesn't display your labels.
-
-
-### replication_specs
-
-```terraform
-// Example Multi-Cloud
-replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs = {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "GCP"
- priority = 6
- region_name = "NORTH_AMERICA_NORTHEAST_1"
- }
- ]
- }
-]
-```
-
-* `id` - **(DEPRECATED)** Unique identifier of the replication document for a zone in a Global Cluster. This value corresponds to the legacy sharding schema (no independent shard scaling) and is different from the Shard ID you may see in the Atlas UI. This value is not populated (empty string) when a sharded cluster has independently scaled shards.
-* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI. When using old sharding configuration (replication spec with `num_shards` greater than 1) this value is not populated.
-* `num_shards` - (Optional) Provide this value if you set a `cluster_type` of SHARDED or GEOSHARDED. Omit this value if you selected a `cluster_type` of REPLICASET. This API resource accepts 1 through 50, inclusive. This parameter defaults to 1. If you specify a `num_shards` value of 1 and a `cluster_type` of SHARDED, Atlas deploys a single-shard [sharded cluster](https://docs.atlas.mongodb.com/reference/glossary/#std-term-sharded-cluster). Don't create a sharded cluster with a single shard for production environments. Single-shard sharded clusters don't provide the same benefits as multi-shard configurations.
-If you are upgrading a replica set to a sharded cluster, you cannot increase the number of shards in the same update request. You should wait until after the cluster has completed upgrading to sharded and you have reconnected all application clients to the MongoDB router before adding additional shards. Otherwise, your data might become inconsistent once MongoDB Cloud begins distributing data across shards. To learn more, see [Convert a replica set to a sharded cluster documentation](https://www.mongodb.com/docs/atlas/scale-cluster/#convert-a-replica-set-to-a-sharded-cluster) and [Convert a replica set to a sharded cluster tutorial](https://www.mongodb.com/docs/upcoming/tutorial/convert-replica-set-to-replicated-shard-cluster). **(DEPRECATED)** To learn more, see the [1.18.0 Upgrade Guide](../guides/1.18.0-upgrade-guide).
-* `region_configs` - (Optional) Configuration for the hardware specifications for nodes set for a given region. Each `region_configs` object describes the region's priority in elections and the number and type of MongoDB nodes that Atlas deploys to the region. Each `region_configs` object must have either an `analytics_specs` object, `electable_specs` object, or `read_only_specs` object. See [below](#region_configs).
-* `zone_name` - (Optional) Name for the zone in a Global Cluster.
-* `zone_id` - Unique 24-hexadecimal digit string that identifies the zone in a Global Cluster. If clusterType is GEOSHARDED, this value indicates the zone that the given shard belongs to and can be used to configure Global Cluster backup policies.
-
-
-### region_configs
-
-* `analytics_specs` - (Optional) Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. Analytics nodes handle analytic data such as reporting queries from BI Connector for Atlas. Analytics nodes are read-only and can never become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary). If you don't specify this parameter, no analytics nodes deploy to this region. See [below](#specs).
-* `auto_scaling` - (Optional) Configuration for the collection of settings that configures auto-scaling information for the cluster. The values for the `auto_scaling` attribute must be the same for all `region_configs` of a cluster. See [below](#auto_scaling).
-* `analytics_auto_scaling` - (Optional) Configuration for the Collection of settings that configures analytics-auto-scaling information for the cluster. The values for the `analytics_auto_scaling` attribute must be the same for all `region_configs` of a cluster. See [below](#analytics_auto_scaling).
-* `backing_provider_name` - (Optional) Cloud service provider on which you provision the host for a multi-tenant or Flex cluster. Use this only when `provider_name` is `TENANT` (with `instance_size` `M0`) or `FLEX`.
-* `electable_specs` - (Optional) Hardware specifications for electable nodes in the region. All `electable_specs` in the `region_configs` of a `replication_specs` must have the same `instance_size`. Electable nodes can become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary) and can enable local reads. If you do not specify this option, no electable nodes are deployed to the region. See [below](#specs).
-* `priority` - (Optional) Election priority of the region. For regions with only read-only nodes, set this value to 0.
- * If you have multiple `region_configs` objects (your cluster is multi-region or multi-cloud), they must have priorities in descending order. The highest priority is 7.
- * If your region has set `region_configs[#].electable_specs.node_count` to 1 or higher, it must have a priority of exactly one (1) less than another region in the `replication_specs[#].region_configs[#]` array. The highest-priority region must have a priority of 7. The lowest possible priority is 1.
-* `provider_name` - (Optional) Cloud service provider on which the servers are provisioned.
- The possible values are:
- - `AWS` - Amazon AWS
- - `GCP` - Google Cloud Platform
- - `AZURE` - Microsoft Azure
- - `TENANT` - M0 multi-tenant cluster. Use `replication_specs.[#].region_configs[#].backing_provider_name` to set the cloud service provider.
-* `read_only_specs` - (Optional) Hardware specifications for read-only nodes in the region. All `read_only_specs` in the `region_configs` of a `replication_specs` must have the same `instance_size` as `electable_specs`. Read-only nodes can never become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary) but can enable local reads. If you don't specify this parameter, no read-only nodes are deployed to the region. See [below](#specs).
-* `region_name` - (Optional) Physical location of your MongoDB cluster. The region you choose can affect network latency for clients accessing your databases. Requires the **Atlas region name**, see the reference list for [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/).
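-
-To illustrate the priority rules above, a hypothetical two-region `region_configs` list with descending priorities might look like the following sketch (region names and instance sizes are placeholders):
-
-```terraform
-region_configs = [
-  {
-    electable_specs = {
-      instance_size = "M10"
-      node_count    = 3
-    }
-    provider_name = "AWS"
-    priority      = 7   # highest-priority region must be 7
-    region_name   = "US_EAST_1"
-  },
-  {
-    electable_specs = {
-      instance_size = "M10"
-      node_count    = 2
-    }
-    provider_name = "AWS"
-    priority      = 6   # exactly one less than the region above
-    region_name   = "US_WEST_2"
-  }
-]
-```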
-
-### electable_specs
-
-* `instance_size` - (Required) Hardware specification for the instance sizes in this region. Each instance size has a default storage and memory capacity. The instance size you select applies to all the data-bearing hosts in your instance size. Electable nodes and read-only nodes (known as "base nodes") within a single shard must use the same instance size. Analytics nodes can scale independently from base nodes within a shard. Both base nodes and analytics nodes can scale independently from their equivalents in other shards.
-* `disk_iops` - (Optional) Target IOPS (Input/Output Operations Per Second) desired for storage attached to this hardware. Define this attribute only if you selected AWS as your cloud service provider, `instance_size` is set to "M30" or greater (not including "Mxx_NVME" tiers), and `ebs_volume_type` is "PROVISIONED". You can't set this attribute for a multi-cloud cluster.
-* `ebs_volume_type` - (Optional) Type of storage you want to attach to your AWS-provisioned cluster. Set only if you selected AWS as your cloud service provider. You can't set this parameter for a multi-cloud cluster. Valid values are:
- * `STANDARD` volume types can't exceed the default IOPS rate for the selected volume size.
- * `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
-* `node_count` - (Optional) Number of nodes of the given type for MongoDB Atlas to deploy to the region.
-* `disk_size_gb` - (Optional) Storage capacity of the host's root volume, expressed in gigabytes. This value must be equal for all shards and node types. If the specified disk size is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using `disk_size_gb` with Standard IOPS could lead to errors and configuration issues. Therefore, use it only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the `disk_size_gb` parameter specifies the storage capacity, but the IOPS are set independently. Using `disk_size_gb` exclusively with Provisioned IOPS helps avoid these issues.
-
-
-### analytics_specs
-
-* `instance_size` - (Required) Hardware specification for the instance sizes in this region. Each instance size has a default storage and memory capacity. The instance size you select applies to all the data-bearing hosts in your instance size. Electable nodes and read-only nodes (known as "base nodes") within a single shard must use the same instance size. Analytics nodes can scale independently from base nodes within a shard. Both base nodes and analytics nodes can scale independently from their equivalents in other shards.
-* `disk_iops` - (Optional) Target IOPS (Input/Output Operations Per Second) desired for storage attached to this hardware. Define this attribute only if you selected AWS as your cloud service provider, `instance_size` is set to "M30" or greater (not including "Mxx_NVME" tiers), and `ebs_volume_type` is "PROVISIONED". You can't set this attribute for a multi-cloud cluster.
-* `ebs_volume_type` - (Optional) Type of storage you want to attach to your AWS-provisioned cluster. Set only if you selected AWS as your cloud service provider. You can't set this parameter for a multi-cloud cluster. Valid values are:
- * `STANDARD` volume types can't exceed the default IOPS rate for the selected volume size.
- * `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
-* `node_count` - (Optional) Number of nodes of the given type for MongoDB Atlas to deploy to the region.
-* `disk_size_gb` - (Optional) Storage capacity of the host's root volume, expressed in gigabytes. This value must be equal for all shards and node types. If the specified disk size is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using `disk_size_gb` with Standard IOPS could lead to errors and configuration issues. Therefore, use it only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the `disk_size_gb` parameter specifies the storage capacity, but the IOPS are set independently. Using `disk_size_gb` exclusively with Provisioned IOPS helps avoid these issues.
-
-### read_only_specs
-
-* `instance_size` - (Required) Hardware specification for the instance sizes in this region. Each instance size has a default storage and memory capacity. The instance size you select applies to all the data-bearing hosts in your instance size. Electable nodes and read-only nodes (known as "base nodes") within a single shard must use the same instance size. Analytics nodes can scale independently from base nodes within a shard. Both base nodes and analytics nodes can scale independently from their equivalents in other shards.
-* `disk_iops` - (Optional) Target IOPS (Input/Output Operations Per Second) desired for storage attached to this hardware. Define this attribute only if you selected AWS as your cloud service provider, `instance_size` is set to "M30" or greater (not including "Mxx_NVME" tiers), and `ebs_volume_type` is "PROVISIONED". You can't set this attribute for a multi-cloud cluster. This parameter defaults to the cluster tier's standard IOPS value.
-* `ebs_volume_type` - (Optional) Type of storage you want to attach to your AWS-provisioned cluster. Set only if you selected AWS as your cloud service provider. You can't set this parameter for a multi-cloud cluster. Valid values are:
- * `STANDARD` volume types can't exceed the default IOPS rate for the selected volume size.
- * `PROVISIONED` volume types must fall within the allowable IOPS range for the selected volume size.
-* `node_count` - (Optional) Number of nodes of the given type for MongoDB Atlas to deploy to the region.
-* `disk_size_gb` - (Optional) Storage capacity of the host's root volume, expressed in gigabytes. This value must be equal for all shards and node types. If the specified disk size is below the minimum (10 GB), this parameter defaults to the minimum disk size value. Storage charge calculations depend on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier. **Note:** Using `disk_size_gb` with Standard IOPS could lead to errors and configuration issues. Therefore, use it only with the [Provisioned IOPS volume type](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster#PROVISIONED). When using Provisioned IOPS, the `disk_size_gb` parameter specifies the storage capacity, but the IOPS are set independently. Using `disk_size_gb` exclusively with Provisioned IOPS helps avoid these issues.
-
-### auto_scaling
-
-* `disk_gb_enabled` - (Optional) Flag that indicates whether this cluster enables disk auto-scaling. This parameter defaults to false.
-
-* `compute_enabled` - (Optional) Flag that indicates whether instance size auto-scaling is enabled. This parameter defaults to false. If a sharded cluster is making use of the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), auto-scaling of the instance size will be independent for each individual shard. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details. Conversely, if a sharded cluster uses the deprecated `num_shards` attribute (with values > 1), instance size auto-scaling will be performed uniformly across all shards in the cluster.
-
-~> **IMPORTANT:** If `disk_gb_enabled` or `compute_enabled` is true, Atlas automatically scales the cluster up or down.
-This will cause the value of `replication_specs[#].region_config[#].(electable_specs|read_only_specs).disk_size_gb` or `replication_specs[#].region_config[#].(electable_specs|read_only_specs).instance_size` returned to potentially be different than what is specified in the Terraform config. If you then apply a plan, not noting this, Terraform will scale the cluster back to the original values in the config.
-To prevent unintended changes when enabling autoscaling, use a lifecycle ignore customization as shown in the example below. To explicitly change `disk_size_gb` or `instance_size` values, comment out the `lifecycle` block and run `terraform apply`. Please be sure to uncomment the `lifecycle` block once done to prevent any accidental changes.
-
-```terraform
-// Example: ignore disk_size_gb and instance_size changes in a replica set
-lifecycle {
- ignore_changes = [
- replication_specs[0].region_configs[0].electable_specs.disk_size_gb,
- replication_specs[0].region_configs[0].electable_specs.instance_size,
- replication_specs[0].region_configs[0].electable_specs.disk_iops // instance_size change can affect disk_iops in case that you are using it
- ]
-}
-```
-
-* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs[#].region_configs[#].auto_scaling.compute_enabled` is true. If you enable this option, specify a value for `replication_specs[#].region_configs[#].auto_scaling.compute_min_instance_size`.
-* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10). Atlas requires this parameter if `replication_specs[#].region_configs[#].auto_scaling.compute_scale_down_enabled` is true.
-* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs[#].region_configs[#].auto_scaling.compute_enabled` is true.
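-
-As a sketch, the attributes above can be combined to enable compute and disk auto-scaling between two bounds (the instance sizes are placeholders):
-
-```terraform
-auto_scaling = {
-  disk_gb_enabled            = true
-  compute_enabled            = true
-  compute_scale_down_enabled = true
-  # min size required when compute_scale_down_enabled is true,
-  # max size required when compute_enabled is true
-  compute_min_instance_size = "M30"
-  compute_max_instance_size = "M50"
-}
-```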
-
-### analytics_auto_scaling
-
-* `disk_gb_enabled` - (Optional) Flag that indicates whether this cluster enables disk auto-scaling. This parameter defaults to false.
-* `compute_enabled` - (Optional) Flag that indicates whether analytics instance size auto-scaling is enabled. This parameter defaults to false. If a sharded cluster is making use of the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), auto-scaling of analytics instance size will be independent for each individual shard. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details. Conversely, if a sharded cluster uses the deprecated `num_shards` attribute (with values > 1), analytics instance size auto-scaling will be performed uniformly across all shards in the cluster.
-
-~> **IMPORTANT:** If `disk_gb_enabled` or `compute_enabled` is true, Atlas automatically scales the cluster up or down.
-This will cause the value of `replication_specs[#].region_config[#].analytics_specs.disk_size_gb` or `replication_specs[#].region_config[#].analytics_specs.instance_size` returned to potentially be different than what is specified in the Terraform config. If you then apply a plan, not noting this, Terraform will scale the cluster back to the original values in the config.
-To prevent unintended changes when enabling autoscaling, use a lifecycle ignore customization as shown in the example below. To explicitly change `disk_size_gb` or `instance_size` values, comment out the `lifecycle` block and run `terraform apply`. Please be sure to uncomment the `lifecycle` block once done to prevent any accidental changes.
-
-```terraform
-// Example: ignore disk_size_gb and instance_size changes in a replica set
-lifecycle {
- ignore_changes = [
- replication_specs[0].region_configs[0].analytics_specs.disk_size_gb,
- replication_specs[0].region_configs[0].analytics_specs.instance_size,
- replication_specs[0].region_configs[0].analytics_specs.disk_iops // instance_size change can affect disk_iops in case that you are using it
- ]
-}
-```
-
-* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_enabled` is true. If you enable this option, specify a value for `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_min_instance_size`.
-* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10). Atlas requires this parameter if `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_scale_down_enabled` is true.
-* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_enabled` is true.
-
-### pinned_fcv
-
-* `expiration_date` - (Required) Expiration date of the fixed FCV. This value is in the ISO 8601 timestamp format (e.g. "2024-12-04T16:25:00Z"). Note that this field cannot exceed 4 weeks from the pinned date.
-* `version` - Feature compatibility version of the cluster.
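-
-A minimal sketch of pinning the FCV; the date below is a placeholder and must be an ISO 8601 timestamp no more than 4 weeks after the pin date:
-
-```terraform
-pinned_fcv = {
-  # placeholder expiration date, at most 4 weeks out
-  expiration_date = "2024-12-04T16:25:00Z"
-}
-```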
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `cluster_id` - The cluster ID.
-* `mongo_db_version` - Version of MongoDB the cluster runs, in `major-version`.`minor-version` format.
-* `id` - Terraform's unique identifier used internally for state management.
-* `connection_strings` - Set of connection strings that your applications use to connect to this cluster. More information in [Connection-strings](https://docs.mongodb.com/manual/reference/connection-string/). Use the parameters in this object to connect your applications to this cluster. To learn more about the formats of connection strings, see [Connection String Options](https://docs.atlas.mongodb.com/reference/faq/connection-changes/). NOTE: Atlas returns the contents of this object after the cluster is operational, not while it builds the cluster.
-
- **NOTE** Connection strings are returned as a list, so to refer to a specific attribute value, add index notation. Example: `mongodbatlas_advanced_cluster.cluster-test.connection_strings.0.standard_srv`
-
- Private connection strings are not available until the respective `mongodbatlas_privatelink_endpoint_service` resources are fully applied. Add a `depends_on = [mongodbatlas_privatelink_endpoint_service.example]` to ensure `connection_strings` are available following `terraform apply`. If the expected connection string(s) do not contain a value, a `terraform refresh` may need to be performed to obtain the value. One can also view the status of the peered connection in the [Atlas UI](https://docs.atlas.mongodb.com/security-vpc-peering/).
-
- - `connection_strings.standard` - Public mongodb:// connection string for this cluster.
- - `connection_strings.standard_srv` - Public mongodb+srv:// connection string for this cluster. The mongodb+srv protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.standard`.
- - `connection_strings.private` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb:// connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- - `connection_strings.private_srv` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb+srv:// connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- - `connection_strings.private_endpoint` - Private endpoint connection strings. Each object describes the connection strings you can use to connect to this cluster through a private endpoint. Atlas returns this parameter only if you deployed a private endpoint to all regions to which you deployed this cluster's nodes.
- - `connection_strings.private_endpoint[#].connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
- - `connection_strings.private_endpoint[#].srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`.
- - `connection_strings.private_endpoint[#].srv_shard_optimized_connection_string` - Private-endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this URI format, you don't need to change the URI if the nodes change. Use this URI format if your application and Atlas cluster support it. If they don't, use `connection_strings.private_endpoint[#].srv_connection_string`.
- - `connection_strings.private_endpoint[#].type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
- - `connection_strings.private_endpoint[#].endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
- - `connection_strings.private_endpoint[#].endpoints[#].endpoint_id` - Unique identifier of the private endpoint.
- - `connection_strings.private_endpoint[#].endpoints[#].provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
- - `connection_strings.private_endpoint[#].endpoints[#].region` - Region to which you deployed the private endpoint.
-* `state_name` - Current state of the cluster. The possible states are:
- - IDLE
- - CREATING
- - UPDATING
- - DELETING
- - DELETED
- - REPAIRING
-* `replication_specs[#].container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the id of the container created when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example: `"AWS:US_EAST_1" = "61e0797dde08fb498ca11a71"`.
-* `config_server_type` - Describes a sharded cluster's config server type. Valid values are `DEDICATED` and `EMBEDDED`. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-* `pinned_fcv.version` - Feature compatibility version of the cluster.
-
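-A sketch of referencing an indexed connection string as a Terraform output (the resource name `cluster-test` follows the note above and is a placeholder):
-
-```terraform
-output "standard_srv" {
-  # index into the connection_strings list to pick a specific entry
-  value = mongodbatlas_advanced_cluster.cluster-test.connection_strings[0].standard_srv
-}
-```
-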
-
-## Import
-
-Clusters can be imported using project ID and cluster name, in the format `PROJECTID-CLUSTERNAME`, e.g.
-
-```
-$ terraform import mongodbatlas_advanced_cluster.my_cluster 1112222b3bf99403840e8934-Cluster0
-```
-
-See detailed information for arguments and attributes: [MongoDB API Advanced Clusters](https://docs.atlas.mongodb.com/reference/api/cluster-advanced/create-one-cluster-advanced/)
-
-~> **IMPORTANT:**
-
-When a cluster is imported, the resulting schema structure will always return the new schema, including `replication_specs` per independent shard of the cluster.
-
-## Move
-
-`mongodbatlas_cluster` resources can be moved to `mongodbatlas_advanced_cluster` in Terraform v1.8 and later, e.g.:
-
-```terraform
-moved {
- from = mongodbatlas_cluster.cluster
- to = mongodbatlas_advanced_cluster.cluster
-}
-```
-
-More information about moving resources can be found in our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide) and in the Terraform documentation [here](https://developer.hashicorp.com/terraform/language/moved) and [here](https://developer.hashicorp.com/terraform/language/modules/develop/refactoring).
-
-## Considerations and Best Practices
-
-### "known after apply" verbosity
-
-When making changes to your cluster, your Terraform plan might show `known after apply` entries for attributes that you have not modified; this has no side effects. It happens because some of the changes you make can affect other values of the cluster, so the provider cannot know the future value until MongoDB Atlas returns it in the response. For example, a change in `instance_size` can affect `disk_iops`. This behavior is related to how the [Terraform Plugin Framework](https://developer.hashicorp.com/terraform/plugin/framework) handles computed attributes in the resource schema.
-
-If you want to reduce the `known after apply` verbosity in Terraform plan output, explicitly declare expected values for those attributes in your configuration where possible. This approach gives Terraform more information upfront, resulting in clearer, more predictable plan output.
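-
-As a sketch, explicitly declaring an otherwise computed value (here a hypothetical AWS `disk_iops` with the Provisioned IOPS volume type, using placeholder values) gives Terraform that information upfront:
-
-```terraform
-electable_specs = {
-  instance_size   = "M30"
-  node_count      = 3
-  ebs_volume_type = "PROVISIONED"
-  # declared explicitly so the plan does not show "known after apply" for it
-  disk_iops = 3000
-}
-```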
-
-### Remove or disable functionality
-
-To disable or remove functionality, we recommend explicitly setting the relevant attributes to their intended values instead of removing them from the configuration. This ensures there is no ambiguity about the final Terraform resource state. For example, if you have a `read_only_specs` block in your cluster definition like this one:
-```terraform
-...
-region_configs = [
- {
- read_only_specs = {
- instance_size = "M10"
- node_count = 1
- }
- electable_specs = {
- instance_size = "M10"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_1"
- }
-]
-...
-```
-and your intention is to delete the read-only nodes, you should set the `node_count` attribute to `0` instead of removing the block:
-```terraform
-...
-region_configs = [
- {
- read_only_specs = {
- instance_size = "M10"
- node_count = 0
- }
- electable_specs = {
- instance_size = "M10"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_1"
- }
-]
-...
-```
-Similarly, if you have compute and disk auto-scaling enabled:
-```terraform
-...
-auto_scaling = {
- disk_gb_enabled = true
- compute_enabled = true
- compute_scale_down_enabled = true
- compute_min_instance_size = "M30"
- compute_max_instance_size = "M50"
-}
-...
-```
-and you want to disable them, you should set the `disk_gb_enabled` and `compute_enabled` attributes to `false` instead of removing the block:
-```terraform
-...
-auto_scaling = {
- disk_gb_enabled = false
- compute_enabled = false
- compute_scale_down_enabled = false
-}
-...
-```
diff --git a/docs/resources/advanced_cluster.md b/docs/resources/advanced_cluster.md
index 64532f5aed..40b32b10c1 100644
--- a/docs/resources/advanced_cluster.md
+++ b/docs/resources/advanced_cluster.md
@@ -1,16 +1,20 @@
+---
+subcategory: "Clusters"
+---
+
# Resource: mongodbatlas_advanced_cluster
-`mongodbatlas_advanced_cluster` provides an Advanced Cluster resource. The resource lets you create, edit and delete advanced clusters. The resource requires your Project ID.
+`mongodbatlas_advanced_cluster` provides an Advanced Cluster resource. The resource lets you create, edit and delete advanced clusters.
+
-This page describes the current version of `mongodbatlas_advanced_cluster`, the page for the **Preview for MongoDB Atlas Provider 2.0.0** can be found [here](./advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+~> **IMPORTANT:** If you are upgrading from provider versions 1.x.x to 2.0.0 or later, you must update your `mongodbatlas_advanced_cluster` resource configuration. Please refer to [this guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/migrate-to-advanced-cluster-2.0) for details. This new implementation uses the recommended Terraform Plugin Framework, which, in addition to providing a better user experience and other features, adds support for the `moved` block between different resource types.
-Please refer to our [Considerations and Best Practices](#considerations-and-best-practices) section for additional guidance on this resource.
-~> **IMPORTANT:** We recommend all new MongoDB Atlas Terraform users start with the [`mongodbatlas_advanced_cluster`](advanced_cluster) resource. Key differences between [`mongodbatlas_cluster`](cluster) and [`mongodbatlas_advanced_cluster`](advanced_cluster) include support for [Multi-Cloud Clusters](https://www.mongodb.com/blog/post/introducing-multicloud-clusters-on-mongodb-atlas), [Asymmetric Sharding](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), and [Independent Scaling of Analytics Node Tiers](https://www.mongodb.com/blog/post/introducing-ability-independently-scale-atlas-analytics-node-tiers). For existing [`mongodbatlas_cluster`](cluster) resource users see our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
+~> **IMPORTANT:** We recommend all new MongoDB Atlas Terraform users start with the [`mongodbatlas_advanced_cluster`](advanced_cluster) resource. Key differences between [`mongodbatlas_cluster`](cluster) and [`mongodbatlas_advanced_cluster`](advanced_cluster) include support for [Multi-Cloud Clusters](https://www.mongodb.com/blog/post/introducing-multicloud-clusters-on-mongodb-atlas), [Asymmetric Sharding](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), and [Independent Scaling of Analytics Node Tiers](https://www.mongodb.com/blog/post/introducing-ability-independently-scale-atlas-analytics-node-tiers). For existing [`mongodbatlas_cluster`](cluster) resource users see our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
-> **NOTE:** If Backup Compliance Policy is enabled for the project for which this backup schedule is defined, you cannot modify the backup schedule for an individual cluster below the minimum requirements set in the Backup Compliance Policy. See [Backup Compliance Policy Prohibited Actions and Considerations](https://www.mongodb.com/docs/atlas/backup/cloud-backup/backup-compliance-policy/#configure-a-backup-compliance-policy).
--> **NOTE:** A network container is created for each provider/region combination on the advanced cluster. This can be referenced via a computed attribute for use with other resources. Refer to the `replication_specs.#.container_id` attribute in the [Attributes Reference](#attributes_reference) for more information.
+-> **NOTE:** A network container is created for each provider/region combination on the advanced cluster. This can be referenced via a computed attribute for use with other resources. Refer to the `replication_specs[#].container_id` attribute in the [Attributes Reference](#attributes_reference) for more information.
-> **NOTE:** To enable Cluster Extended Storage Sizes use the `is_extended_storage_sizes_enabled` parameter in the [mongodbatlas_project resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/project).
@@ -22,7 +26,6 @@ Please refer to our [Considerations and Best Practices](#considerations-and-best
## Example Usage
-
### Example single provider and single region
```terraform
@@ -30,21 +33,25 @@ resource "mongodbatlas_advanced_cluster" "test" {
project_id = "PROJECT ID"
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ }
+ ]
}
- }
+ ]
}
```
@@ -56,21 +63,25 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M0"
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M0"
+ }
+ provider_name = "TENANT"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
```
-**NOTE**: Upgrading the tenant cluster to a Flex cluster or a dedicated cluster is supported. When upgrading to a Flex cluster, change the `provider_name` from "TENANT" to "FLEX". See [Example Tenant Cluster Upgrade to Flex](#example-tenant-cluster-upgrade-to-flex) below. When upgrading to a dedicated cluster, change the `provider_name` to your preferred provider (AWS, GCP or Azure) and remove the variable `backing_provider_name`. See the [Example Tenant Cluster Upgrade](#Example-Tenant-Cluster-Upgrade) below. You can upgrade a tenant cluster only to a single provider on an M10-tier cluster or greater.
+-> **NOTE:** Upgrading the tenant cluster to a Flex cluster or a dedicated cluster is supported. When upgrading to a Flex cluster, change the `provider_name` from "TENANT" to "FLEX". See [Example Tenant Cluster Upgrade to Flex](#example-tenant-cluster-upgrade-to-flex) below. When upgrading to a dedicated cluster, change the `provider_name` to your preferred provider (AWS, GCP or Azure) and remove the variable `backing_provider_name`. See the [Example Tenant Cluster Upgrade](#example-tenant-cluster-upgrade) below. You can upgrade a tenant cluster only to a single provider on an M10-tier cluster or greater.
When upgrading from the tenant, *only* the upgrade changes will be applied. This helps avoid a corrupt state file in the event that the upgrade succeeds but subsequent updates fail within the same `terraform apply`. To apply additional cluster changes, run a secondary `terraform apply` after the upgrade succeeds.
@@ -83,16 +94,20 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- }
- provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ }
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
```
@@ -104,14 +119,18 @@ resource "mongodbatlas_advanced_cluster" "example-flex" {
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ provider_name = "FLEX"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
```
@@ -123,14 +142,18 @@ resource "mongodbatlas_advanced_cluster" "example-flex" {
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ provider_name = "FLEX"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
```
@@ -147,16 +170,20 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- }
- provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ }
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
}
```
@@ -167,30 +194,34 @@ resource "mongodbatlas_advanced_cluster" "test" {
name = "NAME OF CLUSTER"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ provider_name = "GCP"
+ priority = 6
+ region_name = "NORTH_AMERICA_NORTHEAST_1"
+ }
+ ]
}
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "GCP"
- priority = 6
- region_name = "NORTH_AMERICA_NORTHEAST_1"
- }
- }
+ ]
}
```
### Example of a Multi Cloud Sharded Cluster with 2 shards
@@ -202,51 +233,54 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "SHARDED"
backup_enabled = true
- replication_specs { # shard 1
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
+ replication_specs = [
+ { # shard 1
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
+ },
+ { # shard 2
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
}
- }
-
- replication_specs { # shard 2
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
+ ]
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = true
oplog_size_mb = 991
sample_refresh_interval_bi_connector = 300
@@ -262,103 +296,105 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "GEOSHARDED"
backup_enabled = true
- replication_specs { # shard 1 - zone n1
- zone_name = "zone n1"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # shard 2 - zone n1
- zone_name = "zone n1"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # shard 1 - zone n2
- zone_name = "zone n2"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
- }
- }
-
- replication_specs { # shard 2 - zone n2
- zone_name = "zone n2"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
+ replication_specs = [
+ { # shard 1 - zone n1
+ zone_name = "zone n1"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
+ },
+ { # shard 2 - zone n1
+ zone_name = "zone n1"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
+ },
+ { # shard 1 - zone n2
+ zone_name = "zone n2"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "EUROPE_NORTH"
+ }
+ ]
+ },
+ { # shard 2 - zone n2
+ zone_name = "zone n2"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }, {
+            electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "EUROPE_NORTH"
+ }
+ ]
}
- }
+ ]
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = true
oplog_size_mb = 999
sample_refresh_interval_bi_connector = 300
@@ -371,28 +407,28 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
Standard
```terraform
output "standard" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard
}
# Example return string: standard = "mongodb://cluster-atlas-shard-00-00.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-01.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-02.ygo1m.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-12diht-shard-0"
```
Standard srv
```terraform
output "standard_srv" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard_srv
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard_srv
}
# Example return string: standard_srv = "mongodb+srv://cluster-atlas.ygo1m.mongodb.net"
```
Private with Network peering and Custom DNS AWS enabled
```terraform
output "private" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].private
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings.private
}
# Example return string: private = "mongodb://cluster-atlas-shard-00-00-pri.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-01-pri.ygo1m.mongodb.net:27017,cluster-atlas-shard-00-02-pri.ygo1m.mongodb.net:27017/?ssl=true&authSource=admin&replicaSet=atlas-12diht-shard-0"
```
Private srv with Network peering and Custom DNS AWS enabled
```terraform
output "private_srv" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].private_srv
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings.private_srv
}
# Example return string: private_srv = "mongodb+srv://cluster-atlas-pri.ygo1m.mongodb.net"
```
@@ -401,7 +437,7 @@ By endpoint_service_id
```terraform
locals {
endpoint_service_id = google_compute_network.default.name
- private_endpoints = try(flatten([for cs in data.mongodbatlas_advanced_cluster.cluster[0].connection_strings : cs.private_endpoint]), [])
+ private_endpoints = coalesce(mongodbatlas_advanced_cluster.cluster.connection_strings.private_endpoint, [])
connection_strings = [
for pe in local.private_endpoints : pe.srv_connection_string
if contains([for e in pe.endpoints : e.endpoint_id], local.endpoint_service_id)
@@ -418,6 +454,16 @@ Refer to the following for full privatelink endpoint connection string examples:
* [AWS, Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster)
* [AWS, Regionalized Private Endpoints](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded)
+
+### Further Examples
+- [Asymmetric Sharded Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster)
+- [Auto-Scaling Per Shard](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard)
+- [Global Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_advanced_cluster/global-cluster)
+- [Multi-Cloud](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_advanced_cluster/multi-cloud)
+- [Tenant Upgrade](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_advanced_cluster/tenant-upgrade)
+- [Version Upgrade with Pinned FCV](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv)
+- [Migrate Cluster to Advanced Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_cluster_to_advanced_cluster/basic)
+
## Argument Reference
* `project_id` - (Required) Unique ID for the project to create the cluster.
@@ -431,9 +477,9 @@ Refer to the following for full privatelink endpoint connection string examples:
[Flex Cluster Backups](https://www.mongodb.com/docs/atlas/backup/cloud-backup/flex-cluster-backup/) for flex clusters.
If "`backup_enabled`" is `false` (default), the cluster doesn't use Atlas backups.
-* `retain_backups_enabled` - (Optional) Set to true to retain backup snapshots for the deleted cluster. This parameter applies to the Delete operation and only affects M10 and above clusters. If you encounter the `CANNOT_DELETE_SNAPSHOT_WITH_BACKUP_COMPLIANCE_POLICY` error code, see [how to delete a cluster with Backup Compliance Policy](../guides/delete-cluster-with-backup-compliance-policy.md).
+* `retain_backups_enabled` - (Optional) Set to true to retain backup snapshots for the deleted cluster. This parameter applies to the Delete operation and only affects M10 and above clusters. If you encounter the `CANNOT_DELETE_SNAPSHOT_WITH_BACKUP_COMPLIANCE_POLICY` error code, see [how to delete a cluster with Backup Compliance Policy](../guides/delete-cluster-with-backup-compliance-policy.md).
-**NOTE** Prior version of provider had parameter as `bi_connector` state will migrate it to new value you only need to update parameter in your terraform file
+-> **NOTE:** Prior versions of the provider named this parameter `bi_connector`. State is migrated to the new name automatically; you only need to update the parameter name in your Terraform configuration.
* `bi_connector_config` - (Optional) Configuration settings applied to BI Connector for Atlas on this cluster. The MongoDB Connector for Business Intelligence for Atlas (BI Connector) is only available for M10 and larger clusters. The BI Connector is a powerful tool which provides users SQL-based access to their MongoDB databases. As a result, the BI Connector performs operations which may be CPU and memory intensive. Given the limited hardware resources on M10 and M20 cluster tiers, you may experience performance degradation of the cluster when enabling the BI Connector. If this occurs, upgrade to an M30 or larger cluster or disable the BI Connector. See [below](#bi_connector_config).
* `cluster_type` - (Required)Type of the cluster that you want to create.
@@ -442,14 +488,13 @@ Refer to the following for full privatelink endpoint connection string examples:
- `SHARDED` Sharded cluster
- `GEOSHARDED` Global Cluster
-* `disk_size_gb` - (Optional) Capacity, in gigabytes, of the host's root volume. Increase this number to add capacity, up to a maximum possible value of 4096 (4 TB). This value must be a positive number. You can't set this value with clusters with local [NVMe SSDs](https://docs.atlas.mongodb.com/cluster-tier/#std-label-nvme-storage). The minimum disk size for dedicated clusters is 10 GB for AWS and GCP. If you specify diskSizeGB with a lower disk size, Atlas defaults to the minimum disk size value. If your cluster includes Azure nodes, this value must correspond to an existing Azure disk type (8, 16, 32, 64, 128, 256, 512, 1024, 2048, or 4095). Atlas calculates storage charges differently depending on whether you choose the default value or a custom value. The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require additional storage space beyond this limitation, consider [upgrading your cluster](https://docs.atlas.mongodb.com/scale-cluster/#std-label-scale-cluster-instance) to a higher tier. If your cluster spans cloud service providers, this value defaults to the minimum default of the providers involved. **(DEPRECATED)** Use `replication_specs.#.region_configs.#.(analytics_specs|electable_specs|read_only_specs).disk_size_gb` instead. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide).
-* `encryption_at_rest_provider` - (Optional) Possible values are AWS, GCP, AZURE or NONE. Only needed if you desire to manage the keys, see [Encryption at Rest using Customer Key Management](https://docs.atlas.mongodb.com/security-kms-encryption/) for complete documentation. You must configure encryption at rest for the Atlas project before enabling it on any cluster in the project. For Documentation, see [AWS](https://docs.atlas.mongodb.com/security-aws-kms/), [GCP](https://docs.atlas.mongodb.com/security-kms-encryption/) and [Azure](https://docs.atlas.mongodb.com/security-azure-kms/#std-label-security-azure-kms). Requirements are if `replication_specs.#.region_configs.#.Specs.instance_size` is M10 or greater and `backup_enabled` is false or omitted.
+* `encryption_at_rest_provider` - (Optional) Possible values are AWS, GCP, AZURE or NONE. Only needed if you want to manage the encryption keys yourself; see [Encryption at Rest using Customer Key Management](https://docs.atlas.mongodb.com/security-kms-encryption/) for complete documentation. You must configure encryption at rest for the Atlas project before enabling it on any cluster in the project. For documentation, see [AWS](https://docs.atlas.mongodb.com/security-aws-kms/), [GCP](https://docs.atlas.mongodb.com/security-kms-encryption/) and [Azure](https://docs.atlas.mongodb.com/security-azure-kms/#std-label-security-azure-kms). This setting requires that `replication_specs[#].region_configs[#].(analytics_specs|electable_specs|read_only_specs).instance_size` is M10 or greater and that `backup_enabled` is false or omitted.
* `tags` - (Optional) Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#tags).
* `labels` - (Optional) Set that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the cluster. See [below](#labels). **DEPRECATED** Use `tags` instead.
* `mongo_db_major_version` - (Optional) Version of the cluster to deploy. Atlas supports all the MongoDB versions that have **not** reached [End of Live](https://www.mongodb.com/legal/support-policy/lifecycles) for M10+ clusters. If omitted, Atlas deploys the cluster with the default version. For more details, see [documentation](https://www.mongodb.com/docs/atlas/reference/faq/database/#which-versions-of-mongodb-do-service-clusters-use-). Atlas always deploys the cluster with the latest stable release of the specified version. If you set a value to this parameter and set `version_release_system` `CONTINUOUS`, the resource returns an error. Either clear this parameter or set `version_release_system`: `LTS`.
* `pinned_fcv` - (Optional) Pins the Feature Compatibility Version (FCV) to the current MongoDB version with a provided expiration date. To unpin the FCV the `pinned_fcv` attribute must be removed. This operation can take several minutes as the request processes through the MongoDB data plane. Once FCV is unpinned it will not be possible to downgrade the `mongo_db_major_version`. It is advised that updates to `pinned_fcv` are done isolated from other cluster changes. If a plan contains multiple changes, the FCV change will be applied first. If FCV is unpinned past the expiration date the `pinned_fcv` attribute must be removed. The following [knowledge hub article](https://kb.corp.mongodb.com/article/000021785/) and [FCV documentation](https://www.mongodb.com/docs/atlas/tutorial/major-version-change/#manage-feature-compatibility--fcv--during-upgrades) can be referenced for more details. See [below](#pinned_fcv).
* `pit_enabled` - (Optional) Flag that indicates if the cluster uses Continuous Cloud Backup.
-* `replication_specs` - List of settings that configure your cluster regions. This attribute has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. If for each `replication_specs` a `num_shards` is configured with a value greater than 1 (using deprecated sharding configurations), then each object represents a zone with one or more shards. The `replication_specs` configuration for all shards within the same zone must be the same, with the exception of `instance_size` and `disk_iops` that can scale independently. Note that independent `disk_iops` values are only supported for AWS provisioned IOPS, or Azure regions that support Extended IOPS. See [below](#replication_specs).
+* `replication_specs` - List of settings that configure your cluster regions. This attribute has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations. The `replication_specs` configuration for all shards within the same zone must be the same, with the exception of `instance_size` and `disk_iops` that can scale independently. Note that independent `disk_iops` values are only supported for AWS provisioned IOPS, or Azure regions that support Extended IOPS. See [below](#replication_specs).
* `root_cert_type` - (Optional) - Certificate Authority that MongoDB Atlas clusters use. You can specify ISRGROOTX1 (for ISRG Root X1).
* `termination_protection_enabled` - Flag that indicates whether termination protection is enabled on the cluster. If set to true, MongoDB Cloud won't delete the cluster. If set to false, MongoDB Cloud will delete the cluster.
* `version_release_system` - (Optional) - Release cadence that Atlas uses for this cluster. This parameter defaults to `LTS`. If you set this field to `CONTINUOUS`, you must omit the `mongo_db_major_version` field. Atlas accepts:
@@ -466,14 +511,14 @@ Refer to the following for full privatelink endpoint connection string examples:
* `replica_set_scaling_strategy` - (Optional) Replica set scaling mode for your cluster. Valid values are `WORKLOAD_TYPE`, `SEQUENTIAL` and `NODE_TYPE`. By default, Atlas scales under `WORKLOAD_TYPE`. This mode allows Atlas to scale your analytics nodes in parallel to your operational nodes. When configured as `SEQUENTIAL`, Atlas scales all nodes sequentially. This mode is intended for steady-state workloads and applications performing latency-sensitive secondary reads. When configured as `NODE_TYPE`, Atlas scales your electable nodes in parallel with your read-only and analytics nodes. This mode is intended for large, dynamic workloads requiring frequent and timely cluster tier scaling. This is the fastest scaling strategy, but it might impact latency of workloads when performing extensive secondary reads. [Modify the Replica Set Scaling Mode](https://dochub.mongodb.org/core/scale-nodes)
* `redact_client_log_data` - (Optional) Flag that enables or disables log redaction, see the [manual](https://www.mongodb.com/docs/manual/administration/monitoring/#log-redaction) for more information. Use this in conjunction with Encryption at Rest and TLS/SSL (Transport Encryption) to assist compliance with regulatory requirements. **Note**: Changing this setting on a cluster will trigger a rolling restart as soon as the cluster is updated.
* `config_server_management_mode` - (Optional) Config Server Management Mode for creating or updating a sharded cluster. Valid values are `ATLAS_MANAGED` (default) and `FIXED_TO_DEDICATED`. When configured as `ATLAS_MANAGED`, Atlas may automatically switch the cluster's config server type for optimal performance and savings. When configured as `FIXED_TO_DEDICATED`, the cluster will always use a dedicated config server. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
-* `delete_on_create_timeout`- (Optional) Flag that indicates whether to delete the cluster if the cluster creation times out. Default is false.
+* `delete_on_create_timeout` - (Optional) Indicates whether to delete the resource if a timeout is reached while waiting for creation to complete. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.
### bi_connector_config
Specifies BI Connector for Atlas configuration.
```terraform
-bi_connector_config {
+bi_connector_config = {
enabled = true
read_preference = "secondary"
}
@@ -496,23 +541,19 @@ bi_connector_config {
-> **NOTE:** Prior to setting these options please ensure you read https://docs.atlas.mongodb.com/cluster-config/additional-options/.
--> **NOTE:** This argument has been changed to type list so make sure that you have the proper syntax. The list can have only one item maximum.
-
-> **NOTE:** Once you set some `advanced_configuration` attributes, we recommended to explicitly set those attributes to their intended value instead of removing them from the configuration. For example, if you set `javascript_enabled` to `false`, and later you want to go back to the default value (true), you must set it back to `true` instead of removing it.
Include **desired options** within advanced_configuration:
```terraform
// Nest options within advanced_configuration
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = false
minimum_enabled_tls_protocol = "TLS1_2"
}
```
-* `default_read_concern` - (Optional) [Default level of acknowledgment requested from MongoDB for read operations](https://docs.mongodb.com/manual/reference/read-concern/) set for this cluster. **(DEPRECATED)** MongoDB 6.0 and later clusters default to `local`. To use a custom read concern level, please refer to your driver documentation.
* `default_write_concern` - (Optional) [Default level of acknowledgment requested from MongoDB for write operations](https://docs.mongodb.com/manual/reference/write-concern/) set for this cluster. MongoDB 6.0 clusters default to [majority](https://docs.mongodb.com/manual/reference/write-concern/).
-* `fail_index_key_too_long` - **(DEPRECATED)** (Optional) When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
* `javascript_enabled` - (Optional) When true (default), the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
* `minimum_enabled_tls_protocol` - (Optional) Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are:
- TLS1_0
@@ -534,13 +575,10 @@ Include **desired options** within advanced_configuration:
### tags
```terraform
- tags {
- key = "Key 1"
- value = "Value 1"
- }
- tags {
- key = "Key 2"
- value = "Value 2"
+ tags = {
+ "Key 1" = "Value 1"
+ "Key 2" = "Value 2"
+ Key3 = "Value 3"
}
```
@@ -554,13 +592,10 @@ To learn more, see [Resource Tags](https://dochub.mongodb.org/core/add-cluster-t
### labels
```terraform
- labels {
- key = "Key 1"
- value = "Value 1"
- }
- labels {
- key = "Key 2"
- value = "Value 2"
+ labels = {
+ "Key 1" = "Value 1"
+ "Key 2" = "Value 2"
+ Key3 = "Value 3"
}
```
@@ -574,38 +609,41 @@ Key-value pairs that categorize the cluster. Each key and value has a maximum le
### replication_specs
+~> **NOTE:** We recommend reviewing our [Best Practices](#remove-or-disable-functionality) before disabling or removing any elements of `replication_specs`.
+
```terraform
//Example Multicloud
-replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "GCP"
- priority = 6
- region_name = "NORTH_AMERICA_NORTHEAST_1"
+replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ provider_name = "GCP"
+ priority = 6
+ region_name = "NORTH_AMERICA_NORTHEAST_1"
+ }
+ ]
}
-}
+]
```
-* `id` - **(DEPRECATED)** Unique identifer of the replication document for a zone in a Global Cluster. This value corresponds to the legacy sharding schema (no independent shard scaling) and is different from the Shard ID you may see in the Atlas UI. This value is not populated (empty string) when a sharded cluster has independently scaled shards.
-* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI. When using old sharding configuration (replication spec with `num_shards` greater than 1) this value is not populated.
-* `num_shards` - (Optional) Provide this value if you set a `cluster_type` of SHARDED or GEOSHARDED. Omit this value if you selected a `cluster_type` of REPLICASET. This API resource accepts 1 through 50, inclusive. This parameter defaults to 1. If you specify a `num_shards` value of 1 and a `cluster_type` of SHARDED, Atlas deploys a single-shard [sharded cluster](https://docs.atlas.mongodb.com/reference/glossary/#std-term-sharded-cluster). Don't create a sharded cluster with a single shard for production environments. Single-shard sharded clusters don't provide the same benefits as multi-shard configurations.
-If you are upgrading a replica set to a sharded cluster, you cannot increase the number of shards in the same update request. You should wait until after the cluster has completed upgrading to sharded and you have reconnected all application clients to the MongoDB router before adding additional shards. Otherwise, your data might become inconsistent once MongoDB Cloud begins distributing data across shards. To learn more, see [Convert a replica set to a sharded cluster documentation](https://www.mongodb.com/docs/atlas/scale-cluster/#convert-a-replica-set-to-a-sharded-cluster) and [Convert a replica set to a sharded cluster tutorial](https://www.mongodb.com/docs/upcoming/tutorial/convert-replica-set-to-replicated-shard-cluster). **(DEPRECATED)** To learn more, see the [1.18.0 Upgrade Guide](../guides/1.18.0-upgrade-guide).
+* `external_id` - Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI.
* `region_configs` - (Optional) Configuration for the hardware specifications for nodes set for a given region. Each `region_configs` object describes the region's priority in elections and the number and type of MongoDB nodes that Atlas deploys to the region. Each `region_configs` object must have either an `analytics_specs` object, `electable_specs` object, or `read_only_specs` object. See [below](#region_configs).
* `zone_name` - (Optional) Name for the zone in a Global Cluster.
* `zone_id` - Unique 24-hexadecimal digit string that identifies the zone in a Global Cluster. If clusterType is GEOSHARDED, this value indicates the zone that the given shard belongs to and can be used to configure Global Cluster backup policies.
@@ -613,6 +651,8 @@ If you are upgrading a replica set to a sharded cluster, you cannot increase the
### region_configs
+~> **NOTE:** We recommend reviewing our [Best Practices](#remove-or-disable-functionality) before disabling or removing any elements of region_configs.
+
* `analytics_specs` - (Optional) Hardware specifications for [analytics nodes](https://docs.atlas.mongodb.com/reference/faq/deployment/#std-label-analytics-nodes-overview) needed in the region. Analytics nodes handle analytic data such as reporting queries from BI Connector for Atlas. Analytics nodes are read-only and can never become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary). If you don't specify this parameter, no analytics nodes deploy to this region. See [below](#specs).
* `auto_scaling` - (Optional) Configuration for the collection of settings that configures auto-scaling information for the cluster. The values for the `auto_scaling` attribute must be the same for all `region_configs` of a cluster. See [below](#auto_scaling).
* `analytics_auto_scaling` - (Optional) Configuration for the Collection of settings that configures analytics-auto-scaling information for the cluster. The values for the `analytics_auto_scaling` attribute must be the same for all `region_configs` of a cluster. See [below](#analytics_auto_scaling).
@@ -620,13 +660,13 @@ If you are upgrading a replica set to a sharded cluster, you cannot increase the
* `electable_specs` - (Optional) Hardware specifications for electable nodes in the region. All `electable_specs` in the `region_configs` of a `replication_specs` must have the same `instance_size`. Electable nodes can become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary) and can enable local reads. If you do not specify this option, no electable nodes are deployed to the region. See [below](#specs).
* `priority` - (Optional) Election priority of the region. For regions with only read-only nodes, set this value to 0.
* If you have multiple `region_configs` objects (your cluster is multi-region or multi-cloud), they must have priorities in descending order. The highest priority is 7.
- * If your region has set `region_configs.#.electable_specs.0.node_count` to 1 or higher, it must have a priority of exactly one (1) less than another region in the `replication_specs.#.region_configs.#` array. The highest-priority region must have a priority of 7. The lowest possible priority is 1.
+ * If your region has set `region_configs[#].electable_specs.node_count` to 1 or higher, it must have a priority of exactly one (1) less than another region in the `replication_specs[#].region_configs[#]` array. The highest-priority region must have a priority of 7. The lowest possible priority is 1.
* `provider_name` - (Optional) Cloud service provider on which the servers are provisioned.
The possible values are:
- `AWS` - Amazon AWS
- `GCP` - Google Cloud Platform
- `AZURE` - Microsoft Azure
- - `TENANT` - M0 multi-tenant cluster. Use `replication_specs.#.region_configs.#.backing_provider_name` to set the cloud service provider.
+ - `TENANT` - M0 multi-tenant cluster. Use `replication_specs[#].region_configs[#].backing_provider_name` to set the cloud service provider.
* `read_only_specs` - (Optional) Hardware specifications for read-only nodes in the region. All `read_only_specs` in the `region_configs` of a `replication_specs` must have the same `instance_size` as `electable_specs`. Read-only nodes can never become the [primary](https://docs.atlas.mongodb.com/reference/glossary/#std-term-primary), but can enable local reads. If you don't specify this parameter, no read-only nodes are deployed to the region. See [below](#specs).
* `region_name` - (Optional) Physical location of your MongoDB cluster. The region you choose can affect network latency for clients accessing your databases. Requires the **Atlas region name**, see the reference list for [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/).
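+
+As a sketch, two electable regions with priorities in descending order (the provider, regions, and node counts are illustrative):
+
+```terraform
+replication_specs = [{
+  region_configs = [
+    {
+      provider_name = "AWS"
+      region_name   = "US_EAST_1"
+      priority      = 7 # highest-priority region must be 7
+      electable_specs = {
+        instance_size = "M10"
+        node_count    = 2
+      }
+    },
+    {
+      provider_name = "AWS"
+      region_name   = "US_WEST_2"
+      priority      = 6 # exactly one less than the next-higher region
+      electable_specs = {
+        instance_size = "M10"
+        node_count    = 1
+      }
+    }
+  ]
+}]
+```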
@@ -665,51 +705,50 @@ If you are upgrading a replica set to a sharded cluster, you cannot increase the
* `disk_gb_enabled` - (Optional) Flag that indicates whether this cluster enables disk auto-scaling. This parameter defaults to false.
-* `compute_enabled` - (Optional) Flag that indicates whether instance size auto-scaling is enabled. This parameter defaults to false. If a sharded cluster is making use of the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), auto-scaling of the instance size will be independent for each individual shard. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details. On the contrary, if a sharded cluster makes use of deprecated `num_shards` attribute (with values > 1), instance size auto-scaling will be performed uniformly across all shards in the cluster.
+* `compute_enabled` - (Optional) Flag that indicates whether instance size auto-scaling is enabled. This parameter defaults to false. If a sharded cluster is making use of the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), auto-scaling of the instance size will be independent for each individual shard. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details.
~> **IMPORTANT:** If `disk_gb_enabled` or `compute_enabled` is true, Atlas automatically scales the cluster up or down.
-This will cause the value of `replication_specs.#.region_configs.#.(electable_specs|read_only_specs).disk_size_gb` or `replication_specs.#.region_configs.#.(electable_specs|read_only_specs).instance_size` returned to potentially be different than what is specified in the Terraform config. If you then apply a plan, not noting this, Terraform will scale the cluster back to the original values in the config.
+This can cause the value of `replication_specs[#].region_configs[#].(electable_specs|read_only_specs).disk_size_gb` or `replication_specs[#].region_configs[#].(electable_specs|read_only_specs).instance_size` returned by Atlas to differ from what is specified in the Terraform config. If you then apply a plan without noting this, Terraform will scale the cluster back to the original values in the config.
To prevent unintended changes when enabling autoscaling, use a lifecycle ignore customization as shown in the example below. To explicitly change `disk_size_gb` or `instance_size` values, comment out the `lifecycle` block and run `terraform apply`. Please be sure to uncomment the `lifecycle` block once done to prevent any accidental changes.
```terraform
// Example: ignore disk_size_gb and instance_size changes in a replica set
lifecycle {
ignore_changes = [
- replication_specs[0].region_configs[0].electable_specs[0].disk_size_gb,
- replication_specs[0].region_configs[0].electable_specs[0].instance_size,
- replication_specs[0].region_configs[0].electable_specs[0].disk_iops // instance_size change can affect disk_iops in case that you are using it
+ replication_specs[0].region_configs[0].electable_specs.disk_size_gb,
+ replication_specs[0].region_configs[0].electable_specs.instance_size,
+ replication_specs[0].region_configs[0].electable_specs.disk_iops // instance_size change can affect disk_iops in case that you are using it
]
}
```
-* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs.#.region_configs.#.auto_scaling.0.compute_enabled` : true. If you enable this option, specify a value for `replication_specs.#.region_configs.#.auto_scaling.0.compute_min_instance_size`.
-* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10). Atlas requires this parameter if `replication_specs.#.region_configs.#.auto_scaling.0.compute_scale_down_enabled` is true.
-* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs.#.region_configs.#.auto_scaling.0.compute_enabled` is true.
+* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs[#].region_configs[#].auto_scaling.compute_enabled` is true. If you enable this option, specify a value for `replication_specs[#].region_configs[#].auto_scaling.compute_min_instance_size`.
+* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10). Atlas requires this parameter if `replication_specs[#].region_configs[#].auto_scaling.compute_scale_down_enabled` is true.
+* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs[#].region_configs[#].auto_scaling.compute_enabled` is true.
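+
+A minimal `auto_scaling` sketch combining the attributes above (the instance sizes are illustrative):
+
+```terraform
+auto_scaling = {
+  disk_gb_enabled            = true
+  compute_enabled            = true
+  compute_scale_down_enabled = true
+  compute_min_instance_size  = "M10" # required because compute_scale_down_enabled is true
+  compute_max_instance_size  = "M40" # required because compute_enabled is true
+}
+```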
### analytics_auto_scaling
* `disk_gb_enabled` - (Optional) Flag that indicates whether this cluster enables disk auto-scaling. This parameter defaults to false.
-* `compute_enabled` - (Optional) Flag that indicates whether analytics instance size auto-scaling is enabled. This parameter defaults to false. If a sharded cluster is making use of the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), auto-scaling of analytics instance size will be independent for each individual shard. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details. On the contrary, if a sharded cluster makes use of deprecated `num_shards` attribute (with values > 1), analytics instance size auto-scaling will be performed uniformily across all shards in the cluster.
+* `compute_enabled` - (Optional) Flag that indicates whether analytics instance size auto-scaling is enabled. This parameter defaults to false. If a sharded cluster is making use of the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), auto-scaling of analytics instance size will be independent for each individual shard. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details.
~> **IMPORTANT:** If `disk_gb_enabled` or `compute_enabled` is true, Atlas automatically scales the cluster up or down.
-This will cause the value of `replication_specs.#.region_configs.#.analytics_specs.0.disk_size_gb` or `replication_specs.#.region_configs.#.analytics_specs.0.instance_size` returned to potentially be different than what is specified in the Terraform config. If you then apply a plan, not noting this, Terraform will scale the cluster back to the original values in the config.
+This can cause the value of `replication_specs[#].region_configs[#].analytics_specs.disk_size_gb` or `replication_specs[#].region_configs[#].analytics_specs.instance_size` returned by Atlas to differ from what is specified in the Terraform config. If you then apply a plan without noting this, Terraform will scale the cluster back to the original values in the config.
To prevent unintended changes when enabling autoscaling, use a lifecycle ignore customization as shown in the example below. To explicitly change `disk_size_gb` or `instance_size` values, comment out the `lifecycle` block and run `terraform apply`. Please be sure to uncomment the `lifecycle` block once done to prevent any accidental changes.
```terraform
// Example: ignore disk_size_gb and instance_size changes in a replica set
lifecycle {
ignore_changes = [
- replication_specs[0].region_configs[0].analytics_specs[0].disk_size_gb,
- replication_specs[0].region_configs[0].analytics_specs[0].instance_size,
- replication_specs[0].region_configs[0].analytics_specs[0].disk_iops // instance_size change can affect disk_iops in case that you are using it
-
+ replication_specs[0].region_configs[0].analytics_specs.disk_size_gb,
+ replication_specs[0].region_configs[0].analytics_specs.instance_size,
+ replication_specs[0].region_configs[0].analytics_specs.disk_iops // instance_size change can affect disk_iops in case that you are using it
]
}
```
-* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_enabled` : true. If you enable this option, specify a value for `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_min_instance_size`.
-* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10). Atlas requires this parameter if `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_scale_down_enabled` is true.
-* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs.#.region_configs.#.analytics_auto_scaling.0.compute_enabled` is true.
+* `compute_scale_down_enabled` - (Optional) Flag that indicates whether the instance size may scale down. Atlas requires this parameter if `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_enabled` is true. If you enable this option, specify a value for `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_min_instance_size`.
+* `compute_min_instance_size` - (Optional) Minimum instance size to which your cluster can automatically scale (such as M10). Atlas requires this parameter if `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_scale_down_enabled` is true.
+* `compute_max_instance_size` - (Optional) Maximum instance size to which your cluster can automatically scale (such as M40). Atlas requires this parameter if `replication_specs[#].region_configs[#].analytics_auto_scaling.compute_enabled` is true.
### pinned_fcv
@@ -722,7 +761,6 @@ In addition to all arguments above, the following attributes are exported:
* `cluster_id` - The cluster ID.
* `mongo_db_version` - Version of MongoDB the cluster runs, in `major-version`.`minor-version` format.
-* `id` - The Terraform's unique identifier used internally for state management.
* `connection_strings` - Set of connection strings that your applications use to connect to this cluster. More information in [Connection-strings](https://docs.mongodb.com/manual/reference/connection-string/). Use the parameters in this object to connect your applications to this cluster. To learn more about the formats of connection strings, see [Connection String Options](https://docs.atlas.mongodb.com/reference/faq/connection-changes/). NOTE: Atlas returns the contents of this object after the cluster is operational, not while it builds the cluster.
**NOTE** Connection strings are returned as a list, so to refer to a specific attribute value, add index notation. Example: `mongodbatlas_advanced_cluster.cluster-test.connection_strings[0].standard_srv`
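
For example, to expose the standard SRV connection string as a root module output (the resource name is illustrative):

```terraform
output "standard_srv" {
  # connection_strings is a list, so index the first element
  value = mongodbatlas_advanced_cluster.cluster-test.connection_strings[0].standard_srv
}
```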
@@ -734,14 +772,14 @@ In addition to all arguments above, the following attributes are exported:
- `connection_strings.private` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- `connection_strings.private_srv` - [Network-peering-endpoint-aware](https://docs.atlas.mongodb.com/security-vpc-peering/#vpc-peering) mongodb+srv://connection strings for each interface VPC endpoint you configured to connect to this cluster. Returned only if you created a network peering connection to this cluster.
- `connection_strings.private_endpoint` - Private endpoint connection strings. Each object describes the connection strings you can use to connect to this cluster through a private endpoint. Atlas returns this parameter only if you deployed a private endpoint to all regions to which you deployed this cluster's nodes.
- - `connection_strings.private_endpoint.#.connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
- - `connection_strings.private_endpoint.#.srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS . Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to: Append the seed list or Change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`
- - `connection_strings.private_endpoint.#.srv_shard_optimized_connection_string` - Private endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the Uniform Resource Identifier (URI) if the nodes change. Use this Uniform Resource Identifier (URI) format if your application and Atlas cluster support it. If it doesn't, use and consult the documentation for connectionStrings.privateEndpoint[#].srvConnectionString.
- - `connection_strings.private_endpoint.#.type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
- - `connection_strings.private_endpoint.#.endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
- - `connection_strings.private_endpoint.#.endpoints.#.endpoint_id` - Unique identifier of the private endpoint.
- - `connection_strings.private_endpoint.#.endpoints.#.provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
- - `connection_strings.private_endpoint.#.endpoints.#.region` - Region to which you deployed the private endpoint.
+ - `connection_strings.private_endpoint[#].connection_string` - Private-endpoint-aware `mongodb://`connection string for this private endpoint.
+ - `connection_strings.private_endpoint[#].srv_connection_string` - Private-endpoint-aware `mongodb+srv://` connection string for this private endpoint. The `mongodb+srv` protocol tells the driver to look up the seed list of hosts in DNS. Atlas synchronizes this list with the nodes in a cluster. If the connection string uses this URI format, you don't need to append the seed list or change the URI if the nodes change. Use this URI format if your driver supports it. If it doesn't, use `connection_strings.private_endpoint[#].connection_string`
+ - `connection_strings.private_endpoint[#].srv_shard_optimized_connection_string` - Private-endpoint-aware connection string optimized for sharded clusters that uses the `mongodb+srv://` protocol to connect to MongoDB Cloud through a private endpoint. If the connection string uses this Uniform Resource Identifier (URI) format, you don't need to change the URI if the nodes change. Use this URI format if your application and Atlas cluster support it. If it doesn't, use `connection_strings.private_endpoint[#].srv_connection_string` instead and consult its documentation.
+ - `connection_strings.private_endpoint[#].type` - Type of MongoDB process that you connect to with the connection strings. Atlas returns `MONGOD` for replica sets, or `MONGOS` for sharded clusters.
+ - `connection_strings.private_endpoint[#].endpoints` - Private endpoint through which you connect to Atlas when you use `connection_strings.private_endpoint[#].connection_string` or `connection_strings.private_endpoint[#].srv_connection_string`
+ - `connection_strings.private_endpoint[#].endpoints[#].endpoint_id` - Unique identifier of the private endpoint.
+ - `connection_strings.private_endpoint[#].endpoints[#].provider_name` - Cloud provider to which you deployed the private endpoint. Atlas returns `AWS` or `AZURE`.
+ - `connection_strings.private_endpoint[#].endpoints[#].region` - Region to which you deployed the private endpoint.
* `state_name` - Current state of the cluster. The possible states are:
- IDLE
- CREATING
@@ -749,7 +787,7 @@ In addition to all arguments above, the following attributes are exported:
- DELETING
- DELETED
- REPAIRING
-* `replication_specs.#.container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the id of the container created when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example `AWS:US_EAST_1" = "61e0797dde08fb498ca11a71`.
+* `replication_specs[#].container_id` - A key-value map of the Network Peering Container ID(s) for the configuration specified in `region_configs`. The Container ID is the ID of the container created when the first cluster in the region (AWS/Azure) or project (GCP) was created. The syntax is `"providerName:regionName" = "containerId"`. Example: `"AWS:US_EAST_1" = "61e0797dde08fb498ca11a71"`.
* `config_server_type` - Describes a sharded cluster's config server type. Valid values are `DEDICATED` and `EMBEDDED`. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
* `pinned_fcv.version` - Feature compatibility version of the cluster.
@@ -767,50 +805,73 @@ See detailed information for arguments and attributes: [MongoDB API Advanced Clu
~> **IMPORTANT:**
• When a cluster is imported, the resulting schema structure will always follow the new schema, including one `replication_specs` element per independent shard of the cluster.
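+
+With Terraform v1.5 and later, the cluster can also be imported declaratively with an `import` block; the `PROJECT_ID-CLUSTER_NAME` ID format shown here is an assumption to verify against the import instructions above:
+
+```terraform
+import {
+  to = mongodbatlas_advanced_cluster.cluster
+  id = "<PROJECT_ID>-<CLUSTER_NAME>" # assumed import ID format
+}
+```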
+## Move
+
+`mongodbatlas_cluster` resources can be moved to `mongodbatlas_advanced_cluster` in Terraform v1.8 and later, e.g.:
+
+```terraform
+moved {
+ from = mongodbatlas_cluster.cluster
+ to = mongodbatlas_advanced_cluster.cluster
+}
+```
+
+More information about moving resources can be found in our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide) and in the Terraform documentation on [moved blocks](https://developer.hashicorp.com/terraform/language/moved) and [refactoring](https://developer.hashicorp.com/terraform/language/modules/develop/refactoring).
+
## Considerations and Best Practices
+### "known after apply" verbosity
+
+When making changes to your cluster, your Terraform plan might show `known after apply` entries for attributes that you have not modified. This happens because some of the changes you make can affect other values of the cluster, so the provider cannot know those values until MongoDB Atlas returns them in the response. For example, a change in `instance_size` can affect `disk_iops`. This behavior is related to how the [Terraform Plugin Framework](https://developer.hashicorp.com/terraform/plugin/framework) handles computed attributes in the resource schema.
+
+If you want to reduce the `known after apply` verbosity in Terraform plan output, explicitly declare expected values for those attributes in your configuration where possible. This approach gives Terraform more information upfront, resulting in clearer, more predictable plan output.
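+
+For example, if `disk_iops` keeps showing as `known after apply`, you can declare its current value explicitly (the value here is illustrative):
+
+```terraform
+electable_specs = {
+  instance_size = "M30"
+  node_count    = 3
+  disk_iops     = 3000 # explicitly declared so the plan shows a known value
+}
+```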
+
### Remove or disable functionality
To disable or remove functionality, we recommend explicitly setting the relevant attributes to their intended values instead of removing them from the configuration. This ensures there is no ambiguity about the final Terraform resource state. For example, if you have a `read_only_specs` block in your cluster definition like this one:
```terraform
...
-region_configs {
- read_only_specs {
- instance_size = "M10"
- node_count = 1
- }
- electable_specs {
- instance_size = "M10"
- node_count = 3
+region_configs = [
+ {
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_WEST_1"
}
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_1"
-}
+]
...
```
and your intention is to delete the read-only nodes, you should set the `node_count` attribute to `0` instead of removing the block:
```terraform
...
-region_configs {
- read_only_specs {
- instance_size = "M10"
- node_count = 0
- }
- electable_specs {
- instance_size = "M10"
- node_count = 3
+region_configs = [
+ {
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 0
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_WEST_1"
}
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_1"
-}
+]
...
```
Similarly, if you have compute and disk auto-scaling enabled:
```terraform
...
-auto_scaling {
+auto_scaling = {
disk_gb_enabled = true
compute_enabled = true
compute_scale_down_enabled = true
@@ -822,7 +883,7 @@ auto_scaling {
and you want to disable them, you should set the `disk_gb_enabled` and `compute_enabled` attributes to `false` instead of removing the block:
```terraform
...
-auto_scaling {
+auto_scaling = {
disk_gb_enabled = false
compute_enabled = false
compute_scale_down_enabled = false
diff --git a/docs/resources/alert_configuration.md b/docs/resources/alert_configuration.md
index c65671d7ce..56de0365dd 100644
--- a/docs/resources/alert_configuration.md
+++ b/docs/resources/alert_configuration.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Alert Configurations"
+---
+
# Resource: mongodbatlas_alert_configuration
`mongodbatlas_alert_configuration` provides an Alert Configuration resource to define the conditions that trigger an alert and the methods of notification within a MongoDB Atlas project.
@@ -132,6 +136,10 @@ resource "mongodbatlas_alert_configuration" "test" {
}
```
+### Further Examples
+- [Alert Configuration](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_alert_configuration)
+
## Argument Reference
* `project_id` - (Required) The ID of the project where the alert configuration will be created.
diff --git a/docs/resources/api_key.md b/docs/resources/api_key.md
index 13c5d7b555..140e1334d0 100644
--- a/docs/resources/api_key.md
+++ b/docs/resources/api_key.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Resource: mongodbatlas_api_key
`mongodbatlas_api_key` provides an Organization API key resource. This allows an Organization API key to be created.
@@ -14,6 +18,9 @@ resource "mongodbatlas_api_key" "test" {
}
```
+### Further Examples
+- [Create Programmatic API Key](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_api_key)
+
## Argument Reference
* `org_id` - Unique identifier for the organization whose API keys you want to retrieve. Use the /orgs endpoint to retrieve all organizations to which the authenticated user has access.
diff --git a/docs/resources/api_key_project_assignment.md b/docs/resources/api_key_project_assignment.md
index 1f3c8ea33d..b7f026a929 100644
--- a/docs/resources/api_key_project_assignment.md
+++ b/docs/resources/api_key_project_assignment.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Resource: mongodbatlas_api_key_project_assignment
`mongodbatlas_api_key_project_assignment` provides an API Key Project Assignment resource. The resource lets you create, edit, and delete Organization API key assignments to projects.
@@ -41,6 +45,9 @@ resource "mongodbatlas_access_list_api_key" "this" {
}
```
+### Further Examples
+- [Assign API Key to Project](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_api_key_assignment)
+
## Schema
diff --git a/docs/resources/auditing.md b/docs/resources/auditing.md
index 444375770d..fe3420932b 100644
--- a/docs/resources/auditing.md
+++ b/docs/resources/auditing.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Auditing"
+---
+
# Resource: mongodbatlas_auditing
`mongodbatlas_auditing` provides an Auditing resource. This allows auditing to be created.
diff --git a/docs/resources/backup_compliance_policy.md b/docs/resources/backup_compliance_policy.md
index 5a6a2dd0dc..5b84a5729f 100644
--- a/docs/resources/backup_compliance_policy.md
+++ b/docs/resources/backup_compliance_policy.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Resource: mongodbatlas_backup_compliance_policy
`mongodbatlas_backup_compliance_policy` provides a resource that enables you to set up a Backup Compliance Policy resource. [Backup Compliance Policy ](https://www.mongodb.com/docs/atlas/backup/cloud-backup/backup-compliance-policy) prevents any user, regardless of role, from modifying or deleting specific cluster settings, backups, and backup configurations. When enabled, the Backup Compliance Policy will be applied as the minimum policy for all clusters and backups in the project. It can only be disabled by contacting MongoDB support. This feature is only supported for cluster tiers M10+.
@@ -22,17 +26,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = var.region
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
@@ -132,6 +136,9 @@ resource "mongodbatlas_backup_compliance_policy" "backup_policy" {
}
```
+### Further Examples
+- [Backup Compliance Policy](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_backup_compliance_policy/resource)
+
## Argument Reference
* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies your project.
@@ -211,5 +218,3 @@ $ terraform import mongodbatlas_backup_compliance_policy.backup_policy 5d0f1f73c
```
For more information see: [MongoDB Atlas API Reference](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Cloud-Backups/operation/updateDataProtectionSettings) and [Backup Compliance Policy Prohibited Actions](https://www.mongodb.com/docs/atlas/backup/cloud-backup/backup-compliance-policy/#prohibited-actions).
-
-
diff --git a/docs/resources/cloud_backup_schedule.md b/docs/resources/cloud_backup_schedule.md
index 8999b7064e..b21e0d85fe 100644
--- a/docs/resources/cloud_backup_schedule.md
+++ b/docs/resources/cloud_backup_schedule.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Resource: mongodbatlas_cloud_backup_schedule
`mongodbatlas_cloud_backup_schedule` provides a cloud backup schedule resource. The resource lets you create, read, update and delete a cloud backup schedule.
@@ -24,17 +28,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_CENTRAL_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
@@ -71,17 +75,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_CENTRAL_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
@@ -107,17 +111,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_CENTRAL_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
@@ -170,17 +174,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # must be enabled in order to use cloud_backup_schedule resource
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_CENTRAL_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_schedule" "test" {
@@ -212,6 +216,11 @@ resource "mongodbatlas_cloud_backup_schedule" "test" {
}
```
+
+### Further Examples
+- [Cloud Backup Schedule](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_schedule)
+
## Argument Reference
* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
@@ -228,12 +237,12 @@ resource "mongodbatlas_cloud_backup_schedule" "test" {
* `policy_item_weekly` - (Optional) Weekly policy item. See [below](#policy_item_weekly)
* `policy_item_monthly` - (Optional) Monthly policy item. See [below](#policy_item_monthly)
* `policy_item_yearly` - (Optional) Yearly policy item. See [below](#policy_item_yearly)
-* `auto_export_enabled` - Flag that indicates whether MongoDB Cloud automatically exports Cloud Backup Snapshots to the Export Bucket. Once enabled, it must be disabled by explicitly setting the value to `false`. Value can be one of the following:
+* `auto_export_enabled` - (Optional) Flag that indicates whether MongoDB Cloud automatically exports Cloud Backup Snapshots to the Export Bucket. Value can be one of the following:
* true - Enables automatic export of cloud backup snapshots to the Export Bucket.
* false - Disables automatic export of cloud backup snapshots to the Export Bucket. (default)
* `use_org_and_group_names_in_export_prefix` - Specify true to use organization and project names instead of organization and project UUIDs in the path for the metadata files that Atlas uploads to your bucket after it finishes exporting the snapshots. To learn more about the metadata files that Atlas uploads, see [Export Cloud Backup Snapshot](https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/#std-label-cloud-provider-snapshot-export).
* `copy_settings` - List that contains a document for each copy setting item in the desired backup policy. See [below](#copy_settings)
-* `export` - Policy for automatically exporting Cloud Backup Snapshots. `auto_export_enabled` must be set to true when defining this attribute. See [below](#export)
+* `export` - Policy for automatically exporting Cloud Backup Snapshots. See [below](#export)
### export
* `export_bucket_id` - Unique identifier of the mongodbatlas_cloud_backup_snapshot_export_bucket export_bucket_id value.
* `frequency_type` - Frequency associated with the export snapshot item: `weekly`, `monthly`, `yearly`, `daily` (requires reaching out to Customer Support)
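The `auto_export_enabled` and `export` arguments above work together; a minimal sketch, assuming an export bucket managed by a `mongodbatlas_cloud_backup_snapshot_export_bucket` resource named `test` (the resource name is illustrative):

```terraform
resource "mongodbatlas_cloud_backup_schedule" "with_export" {
  project_id   = mongodbatlas_advanced_cluster.my_cluster.project_id
  cluster_name = mongodbatlas_advanced_cluster.my_cluster.name

  auto_export_enabled = true # enable automatic export of cloud backup snapshots

  export {
    export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id
    frequency_type   = "monthly"
  }
}
```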
@@ -281,7 +290,6 @@ resource "mongodbatlas_cloud_backup_schedule" "test" {
* `frequencies` - (Required) List that describes which types of snapshots to copy. i.e. "HOURLY" "DAILY" "WEEKLY" "MONTHLY" "ON_DEMAND"
* `region_name` - (Required) Target region to copy snapshots belonging to replicationSpecId to. Please supply the 'Atlas Region' which can be found under https://www.mongodb.com/docs/atlas/reference/cloud-providers/ 'regions' link
* `zone_id` - Unique 24-hexadecimal digit string that identifies the zone in a cluster. For global clusters, there can be multiple zones to choose from. For sharded clusters and replica set clusters, there is only one zone in the cluster. To find appropriate value for `zone_id`, do a GET request to Return One Cluster from One Project and consult the replicationSpecs array [Return One Cluster From One Project](#operation/getCluster). Alternately, use `mongodbatlas_advanced_cluster` data source or resource and reference `replication_specs.#.zone_id`.
-* `replication_spec_id` - Unique 24-hexadecimal digit string that identifies the replication object for a zone in a cluster. For global clusters, there can be multiple zones to choose from. For sharded clusters and replica set clusters, there is only one zone in the cluster. To find the Replication Spec Id, consult the replicationSpecs array returned from [Return One Multi-Cloud Cluster in One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getcluster). **(DEPRECATED)** Use `zone_id` instead. To learn more, see the [1.18.0 upgrade guide](../guides/1.18.0-upgrade-guide.md#transition-cloud-backup-schedules-for-clusters-to-use-zones).
* `should_copy_oplogs` - (Required) Flag that indicates whether to copy the oplogs to the target region. You can use the oplogs to perform point-in-time restores.
## Attributes Reference
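A `copy_settings` item that copies daily snapshots to a second region might look like the following sketch; the region value is illustrative, and `cloud_provider` is assumed to be part of the same block per the provider schema:

```terraform
resource "mongodbatlas_cloud_backup_schedule" "with_copy" {
  project_id   = mongodbatlas_advanced_cluster.my_cluster.project_id
  cluster_name = mongodbatlas_advanced_cluster.my_cluster.name

  copy_settings {
    cloud_provider     = "AWS"
    frequencies        = ["DAILY"]
    region_name        = "US_EAST_1"
    zone_id            = mongodbatlas_advanced_cluster.my_cluster.replication_specs[0].zone_id
    should_copy_oplogs = false # set true to enable point-in-time restores in the copy region
  }
}
```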
diff --git a/docs/resources/cloud_backup_snapshot.md b/docs/resources/cloud_backup_snapshot.md
index 77b8ff17e2..524c84a60c 100644
--- a/docs/resources/cloud_backup_snapshot.md
+++ b/docs/resources/cloud_backup_snapshot.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Resource: mongodbatlas_cloud_backup_snapshot
`mongodbatlas_cloud_backup_snapshot` provides a resource to take a cloud backup snapshot on demand.
@@ -16,17 +20,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_WEST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_snapshot" "test" {
@@ -50,6 +54,10 @@ resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" {
}
```
+### Further Examples
+- [Restore from backup snapshot at point in time](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time)
+- [Restore from backup snapshot using an advanced cluster resource](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster)
+
## Argument Reference
* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
@@ -57,6 +65,7 @@ resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" {
* `description` - (Required) Description of the on-demand snapshot.
* `retention_in_days` - (Required) The number of days that Atlas should retain the on-demand snapshot. Must be at least 1.
* `timeouts`- (Optional) The duration of time to wait for Atlas to create a Cloud Backup Snapshot. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. Defaults to `1h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
+* `delete_on_create_timeout` - (Optional) Flag that indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
## Attributes Reference
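The two optional settings above can be combined so a snapshot waits up to a chosen limit and cleans up on timeout; a sketch (the `30m` value is illustrative):

```terraform
resource "mongodbatlas_cloud_backup_snapshot" "test" {
  project_id        = mongodbatlas_advanced_cluster.my_cluster.project_id
  cluster_name      = mongodbatlas_advanced_cluster.my_cluster.name
  description       = "myDescription"
  retention_in_days = 1

  delete_on_create_timeout = true # delete the snapshot if creation times out

  timeouts {
    create = "30m"
  }
}
```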
diff --git a/docs/resources/cloud_backup_snapshot_export_bucket.md b/docs/resources/cloud_backup_snapshot_export_bucket.md
index ee330b8943..31e81c0e74 100644
--- a/docs/resources/cloud_backup_snapshot_export_bucket.md
+++ b/docs/resources/cloud_backup_snapshot_export_bucket.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Resource: mongodbatlas_cloud_backup_snapshot_export_bucket
`mongodbatlas_cloud_backup_snapshot_export_bucket` allows you to create an export snapshot bucket for the specified project.
@@ -33,6 +37,10 @@ resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" {
}
```
+### Further Examples
+- [AWS Cloud Backup Snapshot Export Bucket](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/aws)
+- [Azure Cloud Backup Snapshot Export Bucket](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_export_bucket/azure)
+
## Argument Reference
* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
diff --git a/docs/resources/cloud_backup_snapshot_export_job.md b/docs/resources/cloud_backup_snapshot_export_job.md
index a013f7c1ac..2f3131b293 100644
--- a/docs/resources/cloud_backup_snapshot_export_job.md
+++ b/docs/resources/cloud_backup_snapshot_export_job.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Resource: mongodbatlas_cloud_backup_snapshot_export_job
`mongodbatlas_cloud_backup_snapshot_export_job` allows you to create a cloud backup snapshot export job for the specified project.
@@ -81,6 +85,9 @@ resource "mongodbatlas_cloud_backup_schedule" "backup" {
}
```
+### Further Examples
+- [Cloud Backup Snapshot Export Job](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_export_job)
+
## Argument Reference
* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies the project which contains the Atlas cluster whose snapshot you want to export.
diff --git a/docs/resources/cloud_backup_snapshot_restore_job.md b/docs/resources/cloud_backup_snapshot_restore_job.md
index 8db3c1592e..f7fe07de91 100644
--- a/docs/resources/cloud_backup_snapshot_restore_job.md
+++ b/docs/resources/cloud_backup_snapshot_restore_job.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Backups"
+---
+
# Resource: mongodbatlas_cloud_backup_snapshot_restore_job
`mongodbatlas_cloud_backup_snapshot_restore_job` provides a resource to create a new restore job from a cloud backup snapshot of a specified cluster. The restore job must define one of three delivery types:
@@ -25,20 +29,20 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_WEST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
+resource "mongodbatlas_cloud_backup_snapshot" "test" {
project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
description = "myDescription"
@@ -46,9 +50,9 @@ resource "mongodbatlas_cloud_provider_snapshot" "test" {
}
resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name
- snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id
+ project_id = mongodbatlas_cloud_backup_snapshot.test.project_id
+ cluster_name = mongodbatlas_cloud_backup_snapshot.test.cluster_name
+ snapshot_id = mongodbatlas_cloud_backup_snapshot.test.snapshot_id
delivery_type_config {
automated = true
target_cluster_name = "MyCluster"
@@ -66,20 +70,20 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_WEST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
+resource "mongodbatlas_cloud_backup_snapshot" "test" {
project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
description = "myDescription"
@@ -87,9 +91,9 @@ resource "mongodbatlas_cloud_provider_snapshot" "test" {
}
resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name
- snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id
+ project_id = mongodbatlas_cloud_backup_snapshot.test.project_id
+ cluster_name = mongodbatlas_cloud_backup_snapshot.test.cluster_name
+ snapshot_id = mongodbatlas_cloud_backup_snapshot.test.snapshot_id
delivery_type_config {
download = true
}
@@ -104,17 +108,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "EU_WEST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_snapshot" "test" {
@@ -139,9 +143,9 @@ resource "mongodbatlas_cloud_backup_snapshot_restore_job" "test" {
}
```
-### Available complete examples
-- [Restore from backup snapshot at point in time](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time)
-- [Restore from backup snapshot using an advanced cluster resource](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster)
+### Further Examples
+- [Restore from backup snapshot at point in time](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time)
+- [Restore from backup snapshot using an advanced cluster resource](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster)
## Argument Reference
diff --git a/docs/resources/cloud_provider_access.md b/docs/resources/cloud_provider_access.md
index c460c1f3e0..2a96f38af4 100644
--- a/docs/resources/cloud_provider_access.md
+++ b/docs/resources/cloud_provider_access.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cloud Provider Access"
+---
+
# Resource: Cloud Provider Access Configuration Paths
The Terraform MongoDB Atlas Provider offers the following path to perform an authorization for a cloud provider role -
@@ -55,6 +59,12 @@ resource "mongodbatlas_cloud_provider_access_setup" "test_role" {
```
+### Further Examples
+- [AWS Cloud Provider Access](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_provider_access/aws)
+- [Azure Cloud Provider Access](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_provider_access/azure)
+- [GCP Cloud Provider Access](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_provider_access/gcp)
+
## Argument Reference
* `project_id` - (Required) The unique ID for the project
@@ -63,6 +73,8 @@ resource "mongodbatlas_cloud_provider_access_setup" "test_role" {
* `atlas_azure_app_id` - Azure Active Directory Application ID of Atlas. This property is required when `provider_name = "AZURE".`
* `service_principal_id`- UUID string that identifies the Azure Service Principal. This property is required when `provider_name = "AZURE".`
* `tenant_id` - UUID String that identifies the Azure Active Directory Tenant ID. This property is required when `provider_name = "AZURE".`
+* `timeouts` - (Optional) The duration of time to wait for the resource to be created. The default timeout is `1h`. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
+* `delete_on_create_timeout` - (Optional) Flag that indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
## Attributes Reference
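For Azure, the three Azure-specific arguments above are supplied together on the setup resource; a minimal sketch, assuming they are passed via variables and grouped in an `azure_config` block per the provider schema:

```terraform
resource "mongodbatlas_cloud_provider_access_setup" "azure_role" {
  project_id    = var.project_id
  provider_name = "AZURE"

  azure_config {
    atlas_azure_app_id   = var.atlas_azure_app_id
    service_principal_id = var.service_principal_id
    tenant_id            = var.tenant_id
  }
}
```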
diff --git a/docs/resources/cloud_provider_snapshot.md b/docs/resources/cloud_provider_snapshot.md
deleted file mode 100644
index 269e65abe6..0000000000
--- a/docs/resources/cloud_provider_snapshot.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This resource is deprecated, use `mongodbatlas_cloud_backup_snapshot`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Resource: mongodbatlas_cloud_provider_snapshot
-
-`mongodbatlas_cloud_provider_snapshot` provides a resource to take a cloud backup snapshot on demand.
-On-demand snapshots happen immediately, unlike scheduled snapshots which occur at regular intervals. If there is already an on-demand snapshot with a status of queued or inProgress, you must wait until Atlas has completed the on-demand snapshot before taking another.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "MyCluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # enable cloud backup snapshots
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_WEST_2"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
- description = "myDescription"
- retention_in_days = 1
- timeout = "10m"
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name
- snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id
- delivery_type_config {
- download = true
- }
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
-* `cluster_name` - (Required) The name of the Atlas cluster that contains the snapshots you want to retrieve.
-* `description` - (Required) Description of the on-demand snapshot.
-* `retention_in_days` - (Required) The number of days that Atlas should retain the on-demand snapshot. Must be at least 1.
-* `timeout`- (Optional) The duration of time to wait to finish the on-demand snapshot. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. Default value for the timeout is `10m`
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `snapshot_id` - Unique identifier of the snapshot.
-* `id` - Unique identifier used for terraform for internal manages.
-* `created_at` - UTC ISO 8601 formatted point in time when Atlas took the snapshot.
-* `description` - Description of the snapshot. Only present for on-demand snapshots.
-* `expires_at` - UTC ISO 8601 formatted point in time when Atlas will delete the snapshot.
-* `master_key_uuid` - Unique ID of the AWS KMS Customer Master Key used to encrypt the snapshot. Only visible for clusters using Encryption at Rest via Customer KMS.
-* `mongod_version` - Version of the MongoDB server.
-* `snapshot_type` - Specified the type of snapshot. Valid values are onDemand and scheduled.
-* `status` - Current status of the snapshot. One of the following values will be returned: queued, inProgress, completed, failed.
-* `storage_size_bytes` - Specifies the size of the snapshot in bytes.
-* `type` - Specifies the type of cluster: replicaSet or shardedCluster.
-
-## Import
-
-Cloud Backup Snapshot entries can be imported using project project_id, cluster_name and snapshot_id (Unique identifier of the snapshot), in the format `PROJECTID-CLUSTERNAME-SNAPSHOTID`, e.g.
-
-```
-$ terraform import mongodbatlas_cloud_provider_snapshot.test 5d0f1f73cf09a29120e173cf-MyClusterTest-5d116d82014b764445b2f9b5
-```
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/backup/backups/)
diff --git a/docs/resources/cloud_provider_snapshot_backup_policy.md b/docs/resources/cloud_provider_snapshot_backup_policy.md
deleted file mode 100644
index f28a17a553..0000000000
--- a/docs/resources/cloud_provider_snapshot_backup_policy.md
+++ /dev/null
@@ -1,274 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This resource is deprecated, use `mongodbatlas_cloud_backup_schedule`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Resource: mongodbatlas_cloud_provider_snapshot_backup_policy
-
-`mongodbatlas_cloud_provider_snapshot_backup_policy` provides a resource that enables you to view and modify the snapshot schedule and retention settings for an Atlas cluster with Cloud Backup enabled. A default policy is created automatically when Cloud Backup is enabled for the cluster.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-# Examples - Modifying Polices
-When Cloud Backup is enabled for a cluster MongoDB Atlas automatically creates a default Cloud Backup schedule for the cluster with four policy items; hourly, daily, weekly, and monthly. Because of this default creation this provider automatically saves the Cloud Backup Snapshot Policy into the Terraform state when a cluster is created/modified to use Cloud Backup. If the default works well for you then you do not need to do anything other than create a cluster with Cloud Backup enabled and your Terraform state will have this information if you need it. However, if you want the policy to be different than the default we've provided some examples to help below.
-
-## Example Usage - Create a Cluster and Modify the 4 Default Policies Simultaneously
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "MyCluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # must be enabled in order to use cloud_provider_snapshot_backup_policy resource
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_CENTRAL_1"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
-
- reference_hour_of_day = 3
- reference_minute_of_hour = 45
- restore_window_days = 4
-
- //Keep all 4 default policies but modify the units and values
- //Could also just reflect the policy defaults here for later management
- policies {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id
- frequency_interval = 1
- frequency_type = "hourly"
- retention_unit = "days"
- retention_value = 1
- }
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id
- frequency_interval = 1
- frequency_type = "daily"
- retention_unit = "days"
- retention_value = 2
- }
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id
- frequency_interval = 4
- frequency_type = "weekly"
- retention_unit = "weeks"
- retention_value = 3
- }
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id
- frequency_interval = 5
- frequency_type = "monthly"
- retention_unit = "months"
- retention_value = 4
- }
- }
-}
-```
-
-~> **IMPORTANT:** `policies.#.policy_item.#.id` is obtained when the cluster is created. The example here shows the default order of the default policy when Cloud Backup is enabled (`cloud_backup` is set to true). The default policy is viewable in the Terraform State file.
-
-## Example Usage - Create a Cluster and Modify 3 Default Policies and Remove 1 Default Policy Simultaneously
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "MyCluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # must be enabled in order to use cloud_provider_snapshot_backup_policy resource
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_CENTRAL_1"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
-
- reference_hour_of_day = 3
- reference_minute_of_hour = 45
- restore_window_days = 4
-
-
- policies {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id
- frequency_interval = 1
- frequency_type = "hourly"
- retention_unit = "days"
- retention_value = 1
- }
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id
- frequency_interval = 1
- frequency_type = "daily"
- retention_unit = "days"
- retention_value = 2
- }
-
- # Item removed
- # policy_item {
- # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id
- # frequency_interval = 4
- # frequency_type = "weekly"
- # retention_unit = "weeks"
- # retention_value = 3
- # }
-
- policy_item {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id
- frequency_interval = 5
- frequency_type = "monthly"
- retention_unit = "months"
- retention_value = 4
- }
- }
-}
-```
-
--> **NOTE:** If you want the Cloud Backup Snapshot Policy to vary in the number of policies from the default when creating the cluster, perhaps you want to remove one policy item and modify the remaining three, simply follow this example here to remove a policy and modify three.
-
-~> **IMPORTANT:** If we decide to remove the 3rd item, as in our above example marked with `#`, we need to consider that once the cluster is modified or `terraform refresh` is run the item `2` in the array will be replaced with content of the 4th item, so it could cause an inconsistency. This may be avoided by using hardcoded id values which will better handle this situation. (See below for an example of a hardcoded value)
-
-## Example Usage - Remove 3 Default Policies Items After the Cluster Has Already Been Created and Modify One Policy
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "MyCluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # must be enabled in order to use cloud_provider_snapshot_backup_policy resource
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_CENTRAL_1"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_backup_policy" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
-
- reference_hour_of_day = 3
- reference_minute_of_hour = 45
- restore_window_days = 4
-
-
- policies {
- id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.id
-
- # Item removed
- # policy_item {
- # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.0.id
- # frequency_interval = 1
- # frequency_type = "hourly"
- # retention_unit = "days"
- # retention_value = 1
- # }
-
- # Item removed
- # policy_item {
- # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.1.id
- # frequency_interval = 1
- # frequency_type = "daily"
- # retention_unit = "days"
- # retention_value = 2
- # }
-
- # Item removed
- # policy_item {
- # id = mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.2.id
- # frequency_interval = 4
- # frequency_type = "weekly"
- # retention_unit = "weeks"
- # retention_value = 3
- # }
-
- policy_item {
- id = 5f0747cad187d8609a72f546
- frequency_interval = 5
- frequency_type = "monthly"
- retention_unit = "months"
- retention_value = 4
- }
- }
-}
-```
-
--> **NOTE:** In this example we decided to remove the first 3 items so we can't use `mongodbatlas_advanced_cluster.my_cluster.snapshot_backup_policy.0.policies.0.policy_item.3.id` to retrieve the monthly id value of the cluster state due to once the cluster being modified or makes a `terraform refresh` will cause that the three items will remove from the state, so we will get an error due to the index 3 doesn't exists any more and our monthly policy item is moved to the first place of the array. So we use `5f0747cad187d8609a72f546`, which is an example of an id MongoDB Atlas returns for the policy item we want to keep. Here it is hard coded because you need to either use the actual value from the Terraform state or look to map the policy item you want to keep to it's current placement in the state file array.
-
-## Argument Reference
-
-* `project_id` - (Required) The unique identifier of the project for the Atlas cluster.
-* `cluster_name` - (Required) The name of the Atlas cluster that contains the snapshot backup policy you want to retrieve.
-* `reference_hour_of_day` - (Optional) UTC Hour of day between 0 and 23, inclusive, representing which hour of the day that Atlas takes snapshots for backup policy items.
-* `reference_minute_of_hour` - (Optional) UTC Minutes after referenceHourOfDay that Atlas takes snapshots for backup policy items. Must be between 0 and 59, inclusive.
-* `restore_window_days` - (Optional) Number of days back in time you can restore to with point-in-time accuracy. Must be a positive, non-zero integer.
-* `update_snapshots` - (Optional) Specify true to apply the retention changes in the updated backup policy to snapshots that Atlas took previously.
-
-### Policies
-* `policies` - (Required) Contains a document for each backup policy item in the desired updated backup policy.
-* `policies.#.id` - (Required) Unique identifier of the backup policy that you want to update. policies.#.id is a value obtained via the mongodbatlas_advanced_cluster resource. `cloud_backup` of the mongodbatlas_advanced_cluster resource must be set to true. See the example above for how to refer to the mongodbatlas_advanced_cluster resource for policies.#.id
-
-#### Policy Item
-* `policies.#.policy_item` - (Required) Array of backup policy items.
-* `policies.#.policy_item.#.id` - (Required) Unique identifier of the backup policy item. `policies.#.policy_item.#.id` is a value obtained via the mongodbatlas_advanced_cluster resource. `cloud_backup` of the mongodbatlas_advanced_cluster resource must be set to true. See the example above for how to refer to the mongodbatlas_advanced_cluster resource for policies.#.policy_item.#.id
-* `policies.#.policy_item.#.frequency_interval` - (Required) Desired frequency of the new backup policy item specified by frequencyType.
-* `policies.#.policy_item.#.frequency_type` - (Required) Frequency associated with the backup policy item. One of the following values: hourly, daily, weekly or monthly.
-* `policies.#.policy_item.#.retention_unit` - (Required) Scope of the backup policy item: days, weeks, or months.
-* `policies.#.policy_item.#.retention_value` - (Required) Value to associate with retentionUnit.
-
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `cluster_id` - Unique identifier of the Atlas cluster.
-* `next_snapshot` - Timestamp in the number of seconds that have elapsed since the UNIX epoch when Atlas takes the next snapshot.
-
-## Import
-
-Cloud Backup Snapshot Policy entries can be imported using project project_id and cluster_name, in the format `PROJECTID-CLUSTERNAME`, e.g.
-
-```
-$ terraform import mongodbatlas_cloud_provider_snapshot_backup_policy.test 5d0f1f73cf09a29120e173cf-MyClusterTest
-```
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/schedule/modify-one-schedule/)
\ No newline at end of file
diff --git a/docs/resources/cloud_provider_snapshot_restore_job.md b/docs/resources/cloud_provider_snapshot_restore_job.md
deleted file mode 100644
index 00d0f36875..0000000000
--- a/docs/resources/cloud_provider_snapshot_restore_job.md
+++ /dev/null
@@ -1,161 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This resource is deprecated, use `mongodbatlas_cloud_backup_snapshot_restore_job`
-**Note:** This resource have now been fully deprecated as part of v1.10.0 release
-
-# Resource: mongodbatlas_cloud_provider_snapshot_restore_job
-
-`mongodbatlas_cloud_provider_snapshot_restore_job` provides a resource to create a new restore job from a cloud backup snapshot of a specified cluster. The restore job can be one of three types:
-* **automated:** Atlas automatically restores the snapshot with snapshotId to the Atlas cluster with name targetClusterName in the Atlas project with targetGroupId.
-
-* **download:** Atlas provides a URL to download a .tar.gz of the snapshot with snapshotId. The contents of the archive contain the data files for your Atlas cluster.
-
-* **pointInTime:** Atlas performs a Continuous Cloud Backup restore.
-
--> **Important:** If you specify `deliveryType` : `automated` or `deliveryType` : `pointInTime` in your request body to create an automated restore job, Atlas removes all existing data on the target cluster prior to the restore.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
-## Example Usage
-
-### Example automated delivery type.
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "MyCluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # enable cloud backup snapshots
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_WEST_2"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
- description = "myDescription"
- retention_in_days = 1
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name
- snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id
- delivery_type_config {
- automated = true
- target_cluster_name = "MyCluster"
- target_project_id = "5cf5a45a9ccf6400e60981b6"
- }
- depends_on = [mongodbatlas_cloud_provider_snapshot.test]
-}
-```
-
-### Example download delivery type.
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "my_cluster" {
- project_id = ""
- name = "MyCluster"
- cluster_type = "REPLICASET"
- backup_enabled = true # enable cloud backup snapshots
-
- replication_specs {
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "EU_WEST_2"
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- }
- }
-}
-
-resource "mongodbatlas_cloud_provider_snapshot" "test" {
- project_id = mongodbatlas_advanced_cluster.my_cluster.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
- description = "myDescription"
- retention_in_days = 1
-}
-
-resource "mongodbatlas_cloud_provider_snapshot_restore_job" "test" {
- project_id = mongodbatlas_cloud_provider_snapshot.test.project_id
- cluster_name = mongodbatlas_cloud_provider_snapshot.test.cluster_name
- snapshot_id = mongodbatlas_cloud_provider_snapshot.test.snapshot_id
- delivery_type_config {
- download = true
- }
-}
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique identifier of the project for the Atlas cluster whose snapshot you want to restore.
-* `cluster_name` - (Required) The name of the Atlas cluster whose snapshot you want to restore.
-* `snapshot_id` - (Required) Unique identifier of the snapshot to restore.
-
-### Download
-Atlas provides a URL to download a .tar.gz of the snapshot with snapshotId.
-
-### Automated
-Atlas automatically restores the snapshot with snapshotId to the Atlas cluster with name targetClusterName in the Atlas project with targetGroupId. if you want to use automated delivery type, you must to set the following arguments:
-
-* `target_cluster_name` - (Required) Name of the target Atlas cluster to which the restore job restores the snapshot. Only required if deliveryType is automated.
-* `target_project_id` - (Required) Unique ID of the target Atlas project for the specified targetClusterName. Only required if deliveryType is automated.
-
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `snapshot_restore_job_id` - The unique identifier of the restore job.
-* `cancelled` - Indicates whether the restore job was canceled.
-* `created_at` - UTC ISO 8601 formatted point in time when Atlas created the restore job.
-* `delivery_type_config` - Type of restore job to create. Possible values are: automated and download.
-* `delivery_url` - One or more URLs for the compressed snapshot files for manual download. Only visible if deliveryType is download.
-* `expired` - Indicates whether the restore job expired.
-* `expires_at` - UTC ISO 8601 formatted point in time when the restore job expires.
-* `finished_at` - UTC ISO 8601 formatted point in time when the restore job completed.
-* `id` - The Terraform's unique identifier used internally for state management.
-* `links` - One or more links to sub-resources and/or related resources. The relations between URLs are explained in the Web Linking Specification.
-* `snapshot_id` - Unique identifier of the source snapshot ID of the restore job.
-* `target_project_id` - Name of the target Atlas project of the restore job. Only visible if deliveryType is automated.
-* `target_cluster_name` - Name of the target Atlas cluster to which the restore job restores the snapshot. Only visible if deliveryType is automated.
-* `timestamp` - Timestamp in ISO 8601 date and time format in UTC when the snapshot associated to snapshotId was taken.
-* `oplogTs` - Timestamp in the number of seconds that have elapsed since the UNIX epoch from which to you want to restore this snapshot.
- Three conditions apply to this parameter:
- * Enable Continuous Cloud Backup on your cluster.
- * Specify oplogInc.
- * Specify either oplogTs and oplogInc or pointInTimeUTCSeconds, but not both.
-* `oplogInc` - Oplog operation number from which to you want to restore this snapshot. This is the second part of an Oplog timestamp.
- Three conditions apply to this parameter:
- * Enable Continuous Cloud Backup on your cluster.
- * Specify oplogTs.
- * Specify either oplogTs and oplogInc or pointInTimeUTCSeconds, but not both.
-* `pointInTimeUTCSeconds` - Timestamp in the number of seconds that have elapsed since the UNIX epoch from which you want to restore this snapshot.
- Two conditions apply to this parameter:
- * Enable Continuous Cloud Backup on your cluster.
- * Specify either pointInTimeUTCSeconds or oplogTs and oplogInc, but not both.
-
-## Import
-
-Cloud Backup Snapshot Restore Job entries can be imported using project project_id, cluster_name and snapshot_id (Unique identifier of the snapshot), in the format `PROJECTID-CLUSTERNAME-JOBID`, e.g.
-
-```
-$ terraform import mongodbatlas_cloud_provider_snapshot_restore_job.test 5cf5a45a9ccf6400e60981b6-MyCluster-5d1b654ecf09a24b888f4c79
-```
-
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/cloud-backup/restore/restores/)
\ No newline at end of file
diff --git a/docs/resources/cloud_user_org_assignment.md b/docs/resources/cloud_user_org_assignment.md
new file mode 100644
index 0000000000..bd2b5ebae4
--- /dev/null
+++ b/docs/resources/cloud_user_org_assignment.md
@@ -0,0 +1,91 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# Resource: mongodbatlas_cloud_user_org_assignment
+
+`mongodbatlas_cloud_user_org_assignment` provides a Cloud User Organization Assignment resource. The resource lets you assign a user to an organization, update or remove that assignment, and import existing assignments.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint cannot be managed with this resource. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser) for details.
+To manage such users with this resource, refer to our [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_cloud_user_org_assignment" "example" {
+ org_id = var.org_id
+ username = var.user_email
+ roles = {
+ org_roles = ["ORG_MEMBER"]
+ }
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "example_username" {
+ org_id = var.org_id
+ username = mongodbatlas_cloud_user_org_assignment.example.username
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "example_user_id" {
+ org_id = var.org_id
+ user_id = mongodbatlas_cloud_user_org_assignment.example.user_id
+}
+```
+
+### Further Examples
+- [Cloud User Organization Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_org_assignment)
+
+
+## Schema
+
+### Required
+
+- `org_id` (String) Unique 24-hexadecimal digit string that identifies the organization that contains your projects. Use the [/orgs](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-organizations) endpoint to retrieve all organizations to which the authenticated user has access.
+- `roles` (Attributes) Organization and project level roles to assign the MongoDB Cloud user within one organization. (see [below for nested schema](#nestedatt--roles))
+- `username` (String) Email address that represents the username of the MongoDB Cloud user.
+
+### Read-Only
+
+- `country` (String) Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+- `created_at` (String) Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+- `first_name` (String) First or given name that belongs to the MongoDB Cloud user.
+- `invitation_created_at` (String) Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `invitation_expires_at` (String) Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `inviter_username` (String) Username of the MongoDB Cloud user who sent the invitation to join the organization.
+- `last_auth` (String) Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+- `last_name` (String) Last name, family name, or surname that belongs to the MongoDB Cloud user.
+- `mobile_number` (String) Mobile phone number that belongs to the MongoDB Cloud user.
+- `org_membership_status` (String) String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or they are already active in the organization.
+- `team_ids` (Set of String) List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+- `user_id` (String) Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+
+
+### Nested Schema for `roles`
+
+Optional:
+
+- `org_roles` (Set of String) One or more organization level roles to assign the MongoDB Cloud user.
+
+Read-Only:
+
+- `project_role_assignments` (Attributes List) List of project level role assignments to assign the MongoDB Cloud user. (see [below for nested schema](#nestedatt--roles--project_role_assignments))
+
+
+### Nested Schema for `roles.project_role_assignments`
+
+Read-Only:
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies the project to which these roles belong.
+- `project_roles` (Set of String) One or more project-level roles assigned to the MongoDB Cloud user.
+
+## Import
+
+The Cloud User Org Assignment resource can be imported using either the Org ID and Username or the Org ID and User ID, in the format `ORG_ID/USERNAME` or `ORG_ID/USER_ID`.
+
+```
+$ terraform import mongodbatlas_cloud_user_org_assignment.test 63cfbf302333a3011d98592e/test-user@example.com
+OR
+$ terraform import mongodbatlas_cloud_user_org_assignment.test 63cfbf302333a3011d98592e/5f18367ccb7a503a2b481b7a
+```
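+
+With Terraform 1.5 or later, the same import can also be expressed declaratively with an `import` block; the org ID and username below are the placeholder values from the CLI example above:
+
+```terraform
+import {
+  to = mongodbatlas_cloud_user_org_assignment.test
+  id = "63cfbf302333a3011d98592e/test-user@example.com"
+}
+```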
+
+For more information see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createorganizationuser) Documentation.
diff --git a/docs/resources/cloud_user_project_assignment.md b/docs/resources/cloud_user_project_assignment.md
new file mode 100644
index 0000000000..6f3397fded
--- /dev/null
+++ b/docs/resources/cloud_user_project_assignment.md
@@ -0,0 +1,76 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# Resource: mongodbatlas_cloud_user_project_assignment
+
+`mongodbatlas_cloud_user_project_assignment` provides a Cloud User Project Assignment resource. It lets you manage the association between a cloud user and a project, enabling you to import, assign, remove, or update the user's membership.
+
+Depending on the user's current membership status in the project's organization, MongoDB Cloud handles invitations and access in different ways:
+- If the user has a pending invitation to join the project's organization, MongoDB Cloud modifies it and grants project access.
+- If the user doesn't have an invitation to join the organization, MongoDB Cloud sends a new invitation that grants the user organization and project access.
+- If the user is already active in the project's organization, MongoDB Cloud grants access to the project.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint cannot be managed with this resource. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectteam) for details.
+To manage such users with this resource, refer to our [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_cloud_user_project_assignment" "example" {
+ project_id = var.project_id
+ username = var.user_email
+ roles = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_cloud_user_project_assignment" "example_username" {
+ project_id = var.project_id
+ username = mongodbatlas_cloud_user_project_assignment.example.username
+}
+
+data "mongodbatlas_cloud_user_project_assignment" "example_user_id" {
+ project_id = var.project_id
+ user_id = mongodbatlas_cloud_user_project_assignment.example.user_id
+}
+```
+
+### Further Examples
+- [Cloud User Project Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_project_assignment)
+
+
+## Schema
+
+### Required
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listprojects) endpoint to retrieve all projects to which the authenticated user has access.
+
+**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.
+- `roles` (Set of String) One or more project-level roles to assign the MongoDB Cloud user.
+- `username` (String) Email address that represents the username of the MongoDB Cloud user.
+
+### Read-Only
+
+- `country` (String) Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+- `created_at` (String) Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+- `first_name` (String) First or given name that belongs to the MongoDB Cloud user.
+- `invitation_created_at` (String) Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `invitation_expires_at` (String) Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `inviter_username` (String) Username of the MongoDB Cloud user who sent the invitation to join the organization.
+- `last_auth` (String) Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+- `last_name` (String) Last name, family name, or surname that belongs to the MongoDB Cloud user.
+- `mobile_number` (String) Mobile phone number that belongs to the MongoDB Cloud user.
+- `org_membership_status` (String) String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or they are already active in the organization.
+- `user_id` (String) Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+
+## Import
+
+The Cloud User Project Assignment resource can be imported using either the Project ID and Username or the Project ID and User ID, in the format `PROJECT_ID/USERNAME` or `PROJECT_ID/USER_ID`.
+
+```
+$ terraform import mongodbatlas_cloud_user_project_assignment.test 9f3a7c2e54b8d1a0e6f4b3c2/test-user@example.com
+OR
+$ terraform import mongodbatlas_cloud_user_project_assignment.test 9f3a7c2e54b8d1a0e6f4b3c2/5f18367ccb7a503a2b481b7a
+```
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addprojectuser) Documentation.
diff --git a/docs/resources/cloud_user_team_assignment.md b/docs/resources/cloud_user_team_assignment.md
new file mode 100644
index 0000000000..b6991dc266
--- /dev/null
+++ b/docs/resources/cloud_user_team_assignment.md
@@ -0,0 +1,89 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# Resource: mongodbatlas_cloud_user_team_assignment
+
+`mongodbatlas_cloud_user_team_assignment` provides a Cloud User Team Assignment resource. It lets you manage the association between a cloud user and a team, enabling you to import, assign, remove, or update the user's membership.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint cannot be managed with this resource. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listteamusers) for details.
+To manage such users with this resource, refer to our [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](../guides/atlas-user-management).
+
+## Example Usages
+
+```terraform
+resource "mongodbatlas_cloud_user_team_assignment" "example" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = var.user_id
+}
+
+data "mongodbatlas_cloud_user_team_assignment" "example_user_id" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = mongodbatlas_cloud_user_team_assignment.example.user_id
+}
+
+data "mongodbatlas_cloud_user_team_assignment" "example_username" {
+ org_id = var.org_id
+ team_id = var.team_id
+ username = mongodbatlas_cloud_user_team_assignment.example.username
+}
+```
+
+### Further Examples
+- [Cloud User Team Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_team_assignment)
+
+
+## Schema
+
+### Required
+
+- `org_id` (String) Unique 24-hexadecimal digit string that identifies the organization that contains your projects. Use the [/orgs](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-organizations) endpoint to retrieve all organizations to which the authenticated user has access.
+- `team_id` (String) Unique 24-hexadecimal digit string that identifies the team to which you want to assign the MongoDB Cloud user. Use the [/teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-teams) endpoint to retrieve all teams to which the authenticated user has access.
+- `user_id` (String) Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.
+
+### Read-Only
+
+- `country` (String) Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.
+- `created_at` (String) Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.
+- `first_name` (String) First or given name that belongs to the MongoDB Cloud user.
+- `invitation_created_at` (String) Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `invitation_expires_at` (String) Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.
+- `inviter_username` (String) Username of the MongoDB Cloud user who sent the invitation to join the organization.
+- `last_auth` (String) Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.
+- `last_name` (String) Last name, family name, or surname that belongs to the MongoDB Cloud user.
+- `mobile_number` (String) Mobile phone number that belongs to the MongoDB Cloud user.
+- `org_membership_status` (String) String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or they are already active in the organization.
+- `roles` (Attributes) Organization and project level roles to assign the MongoDB Cloud user within one organization. (see [below for nested schema](#nestedatt--roles))
+- `team_ids` (Set of String) List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.
+- `username` (String) Email address that represents the username of the MongoDB Cloud user.
+
+
+### Nested Schema for `roles`
+
+Read-Only:
+
+- `org_roles` (Set of String) One or more organization level roles to assign the MongoDB Cloud user.
+- `project_role_assignments` (Attributes Set) List of project level role assignments to assign the MongoDB Cloud user. (see [below for nested schema](#nestedatt--roles--project_role_assignments))
+
+
+### Nested Schema for `roles.project_role_assignments`
+
+Read-Only:
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies the project to which these roles belong.
+- `project_roles` (Set of String) One or more project-level roles assigned to the MongoDB Cloud user.
+
+## Import
+
+The Cloud User Team Assignment resource can be imported using either the Org ID, Team ID, and User ID or the Org ID, Team ID, and Username, in the format `ORG_ID/TEAM_ID/USER_ID` or `ORG_ID/TEAM_ID/USERNAME`.
+
+```
+$ terraform import mongodbatlas_cloud_user_team_assignment.test 63cfbf302333a3011d98592e/9f3c1e7a4d8b2f6051acde47/5f18367ccb7a503a2b481b7a
+OR
+$ terraform import mongodbatlas_cloud_user_team_assignment.test 63cfbf302333a3011d98592e/9f3c1e7a4d8b2f6051acde47/test-user@example.com
+```
+
+For more information see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addusertoteam) Documentation.
diff --git a/docs/resources/cluster.md b/docs/resources/cluster.md
index 63ccd6b25c..df8ff9959e 100644
--- a/docs/resources/cluster.md
+++ b/docs/resources/cluster.md
@@ -1,8 +1,12 @@
+---
+subcategory: "Clusters"
+---
+
# Resource: mongodbatlas_cluster
`mongodbatlas_cluster` provides a Cluster resource. The resource lets you create, edit and delete clusters. The resource requires your Project ID.
-~> **IMPORTANT:** We recommend all new MongoDB Atlas Terraform users start with the [`mongodbatlas_advanced_cluster`](advanced_cluster) resource. Key differences between [`mongodbatlas_cluster`](cluster) and [`mongodbatlas_advanced_cluster`](advanced_cluster) include support for [Multi-Cloud Clusters](https://www.mongodb.com/blog/post/introducing-multicloud-clusters-on-mongodb-atlas), [Asymmetric Sharding](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema), and [Independent Scaling of Analytics Node Tiers](https://www.mongodb.com/blog/post/introducing-ability-independently-scale-atlas-analytics-node-tiers). For existing [`mongodbatlas_cluster`](cluster) resource users see our [Migration Guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
+~> **DEPRECATION:** This resource is deprecated and will be removed in the next major release. Please use `mongodbatlas_advanced_cluster`. For more details, see [our migration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
-> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
@@ -252,6 +256,12 @@ Refer to the following for full privatelink endpoint connection string examples:
* [AWS, Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster)
* [AWS, Regionalized Private Endpoints](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded)
+
+### Further Examples
+- [NVMe Upgrade (Dedicated Cluster)](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cluster/nvme-upgrade)
+- [Tenant to Dedicated Upgrade (Cluster)](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cluster/tenant-upgrade)
+
+
## Argument Reference
* `project_id` - (Required) The unique ID for the project to create the cluster.
diff --git a/docs/resources/cluster_outage_simulation.md b/docs/resources/cluster_outage_simulation.md
index ee2a5bc3d3..6627d93592 100644
--- a/docs/resources/cluster_outage_simulation.md
+++ b/docs/resources/cluster_outage_simulation.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Cluster Outage Simulation"
+---
+
# Resource: mongodbatlas_cluster_outage_simulation
`mongodbatlas_cluster_outage_simulation` provides a Cluster Outage Simulation resource. For more details see https://www.mongodb.com/docs/atlas/tutorial/test-resilience/simulate-regional-outage/
@@ -33,6 +37,9 @@ resource "mongodbatlas_cluster_outage_simulation" "outage_simulation" {
}
```
+### Further Examples
+- [Cluster Outage Simulation](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cluster_outage_simulation)
+
## Argument Reference
* `project_id` - (Required) The unique ID for the project that contains the cluster that is/will undergoing outage simulation.
@@ -43,6 +50,8 @@ resource "mongodbatlas_cluster_outage_simulation" "outage_simulation" {
* `GCP`
* `AZURE`
* `region_name` - (Required) The Atlas name of the region to undergo an outage simulation.
+* `timeouts` - (Optional) The duration of time to wait for the Cluster Outage Simulation to be created or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Cluster Outage Simulation create and delete is `25m`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
+* `delete_on_create_timeout` - (Optional) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
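+
+As an illustrative sketch (the project ID and cluster name are placeholders), these two optional arguments can be combined as follows:
+
+```terraform
+resource "mongodbatlas_cluster_outage_simulation" "outage_simulation" {
+  project_id   = "<PROJECT-ID>"
+  cluster_name = "Cluster0"
+
+  outage_filters {
+    cloud_provider = "AWS"
+    region_name    = "US_EAST_1"
+  }
+
+  # Wait up to 40 minutes (instead of the default 25m) and clean up if creation times out.
+  delete_on_create_timeout = true
+  timeouts {
+    create = "40m"
+    delete = "40m"
+  }
+}
+```
+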
## Attributes Reference
diff --git a/docs/resources/custom_db_role.md b/docs/resources/custom_db_role.md
index b0548bb6f5..466c0d7213 100644
--- a/docs/resources/custom_db_role.md
+++ b/docs/resources/custom_db_role.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Custom Database Roles"
+---
+
# Resource: mongodbatlas_custom_db_role
`mongodbatlas_custom_db_role` provides a Custom DB Role resource. The customDBRoles resource lets you retrieve, create and modify the custom MongoDB roles in your cluster. Use custom MongoDB roles to specify custom sets of actions which cannot be described by the built-in Atlas database user privileges.
diff --git a/docs/resources/custom_dns_configuration_cluster_aws.md b/docs/resources/custom_dns_configuration_cluster_aws.md
index 12294f8156..fdc40e7df7 100644
--- a/docs/resources/custom_dns_configuration_cluster_aws.md
+++ b/docs/resources/custom_dns_configuration_cluster_aws.md
@@ -1,3 +1,7 @@
+---
+subcategory: "AWS Clusters DNS"
+---
+
# Resource: mongodbatlas_custom_dns_configuration_cluster_aws
`mongodbatlas_custom_dns_configuration_cluster_aws` provides a Custom DNS Configuration for Atlas Clusters on AWS resource. This represents a Custom DNS Configuration for Atlas Clusters on AWS that can be updated in an Atlas project.
diff --git a/docs/resources/data_lake_pipeline.md b/docs/resources/data_lake_pipeline.md
index f0b828a40b..4bcba7ec7f 100644
--- a/docs/resources/data_lake_pipeline.md
+++ b/docs/resources/data_lake_pipeline.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Data Lake Pipelines"
---
-**WARNING:** Data Lake is deprecated. To learn more, see
+~> **DEPRECATION:** Data Lake is deprecated. To learn more, see
# Resource: mongodbatlas_data_lake_pipeline
@@ -25,17 +25,17 @@ resource "mongodbatlas_advanced_cluster" "automated_backup_test" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "GCP"
region_name = "US_EAST_4"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_data_lake_pipeline" "pipeline" {
@@ -68,6 +68,9 @@ resource "mongodbatlas_data_lake_pipeline" "pipeline" {
}
```
+### Further Examples
+- [Data Lake Pipeline](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_data_lake_pipeline)
+
## Argument Reference
* `project_id` - (Required) The unique ID for the project to create a data lake pipeline.
diff --git a/docs/resources/database_user.md b/docs/resources/database_user.md
index 8e690dd797..5c8c8dd99a 100644
--- a/docs/resources/database_user.md
+++ b/docs/resources/database_user.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Database Users"
+---
+
# Resource: mongodbatlas_database_user
`mongodbatlas_database_user` provides a Database User resource. This represents a database user which will be applied to all clusters within the project.
@@ -115,6 +119,10 @@ resource "mongodbatlas_database_user" "test" {
Note: OIDC support is only available in [MongoDB 7.0](https://www.mongodb.com/evolved#mdbsevenzero) or later. To learn more, see the [MongoDB Atlas documentation](https://www.mongodb.com/docs/atlas/security-oidc/).
+### Further Examples
+- [Database User](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_database_user)
+
## Argument Reference
* `auth_database_name` - (Required) Database against which Atlas authenticates the user. A user must provide both a username and authentication database to log into MongoDB.
diff --git a/docs/resources/encryption_at_rest.md b/docs/resources/encryption_at_rest.md
index b2f99fe2d5..769d5d187f 100644
--- a/docs/resources/encryption_at_rest.md
+++ b/docs/resources/encryption_at_rest.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# Resource: mongodbatlas_encryption_at_rest
`mongodbatlas_encryption_at_rest` allows management of Encryption at Rest for an Atlas project using Customer Key Management configuration. The following providers are supported:
@@ -67,17 +71,17 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
backup_enabled = true
encryption_at_rest_provider = "AWS"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
data "mongodbatlas_encryption_at_rest" "test" {
@@ -165,7 +169,11 @@ resource "mongodbatlas_encryption_at_rest" "test" {
}
```
-For a complete example that includes GCP KMS resource creation and IAM binding setup, see the [GCP encryption at rest example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/gcp/).
+### Further Examples
+- [AWS KMS Encryption at Rest](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/aws)
+- [Azure Key Vault Encryption at Rest](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/azure)
+- [GCP KMS Encryption at Rest](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/gcp/)
+
## Schema
diff --git a/docs/resources/encryption_at_rest_private_endpoint.md b/docs/resources/encryption_at_rest_private_endpoint.md
index 0accc86d81..a40ae13f28 100644
--- a/docs/resources/encryption_at_rest_private_endpoint.md
+++ b/docs/resources/encryption_at_rest_private_endpoint.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# Resource: mongodbatlas_encryption_at_rest_private_endpoint
`mongodbatlas_encryption_at_rest_private_endpoint` provides a resource for managing a private endpoint used for encryption at rest with customer-managed keys. This ensures all traffic between Atlas and customer key management systems takes place over private network interfaces.
@@ -90,6 +94,10 @@ resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" {
}
```
+### Further Examples
+- [AWS KMS Encryption at Rest Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/aws)
+- [Azure Key Vault Encryption at Rest Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure)
+
## Schema
@@ -99,6 +107,11 @@ resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" {
- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project.
- `region_name` (String) Cloud provider region in which the Encryption At Rest private endpoint is located.
+### Optional
+
+- `delete_on_create_timeout` (Boolean) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
+- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))
+
### Read-Only
- `error_message` (String) Error message for failures associated with the Encryption At Rest private endpoint.
@@ -106,6 +119,14 @@ resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" {
- `private_endpoint_connection_name` (String) Connection name of the Azure Private Endpoint.
- `status` (String) State of the Encryption At Rest private endpoint.
+
+### Nested Schema for `timeouts`
+
+Optional:
+
+- `create` (String) A string that can be [parsed as a duration](https://pkg.go.dev/time#ParseDuration) consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
+- `delete` (String) A string that can be [parsed as a duration](https://pkg.go.dev/time#ParseDuration) consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
+
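+A minimal sketch of the optional timeout settings, assuming an existing `mongodbatlas_encryption_at_rest` configuration named `ear` (a hypothetical name). Note that this resource uses attribute (not block) syntax for `timeouts`:
+
+```terraform
+resource "mongodbatlas_encryption_at_rest_private_endpoint" "endpoint" {
+  project_id     = mongodbatlas_encryption_at_rest.ear.project_id # assumes an existing encryption_at_rest resource
+  cloud_provider = "AZURE"
+  region_name    = "US_EAST_2"
+
+  # Attribute syntax: timeouts is a nested attribute on this resource, not a block.
+  timeouts = {
+    create = "20m"
+    delete = "20m"
+  }
+  delete_on_create_timeout = true
+}
+```
+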
## Import
Encryption At Rest Private Endpoint resource can be imported using the project ID, cloud provider, and private endpoint ID. The format must be `{project_id}-{cloud_provider}-{private_endpoint_id}` e.g.
diff --git a/docs/resources/event_trigger.md b/docs/resources/event_trigger.md
index be19c74613..c4b425775a 100644
--- a/docs/resources/event_trigger.md
+++ b/docs/resources/event_trigger.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Event Trigger"
+---
+
# Resource: mongodbatlas_event_trigger
`mongodbatlas_event_trigger` provides an Event Trigger resource.
diff --git a/docs/resources/federated_database_instance.md b/docs/resources/federated_database_instance.md
index cceb9bfbb9..0a98c880ce 100644
--- a/docs/resources/federated_database_instance.md
+++ b/docs/resources/federated_database_instance.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Resource: mongodbatlas_federated_database_instance
`mongodbatlas_federated_database_instance` provides a Federated Database Instance resource.
@@ -35,6 +39,10 @@ resource "mongodbatlas_federated_database_instance" "test" {
}
```
+### Further Examples
+- [AWS Federated Database Instance](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_database_instance/aws)
+- [Azure Federated Database Instance](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_database_instance/azure)
+
## Example Usages with Amazon S3 bucket as storage database
diff --git a/docs/resources/federated_query_limit.md b/docs/resources/federated_query_limit.md
index de011327c2..fdbbfa4f1c 100644
--- a/docs/resources/federated_query_limit.md
+++ b/docs/resources/federated_query_limit.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Resource: mongodbatlas_federated_query_limit
`mongodbatlas_federated_query_limit` provides a Federated Database Instance Query Limits resource. To learn more about Atlas Data Federation see https://www.mongodb.com/docs/atlas/data-federation/overview/.
@@ -18,6 +22,9 @@ resource "mongodbatlas_federated_query_limit" "test" {
}
```
+### Further Examples
+- [Federated Query Limit](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_query_limit)
+
## Argument Reference
* `project_id` - (Required) The unique ID for the project to create a Federated Database Instance.
diff --git a/docs/resources/federated_settings_identity_provider.md b/docs/resources/federated_settings_identity_provider.md
index cef768d251..b9b4f4a8e5 100644
--- a/docs/resources/federated_settings_identity_provider.md
+++ b/docs/resources/federated_settings_identity_provider.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Resource: mongodbatlas_federated_settings_identity_provider
`mongodbatlas_federated_settings_identity_provider` provides an Atlas federated settings identity provider resource, which provides a subset of settings to be maintained after import of the existing resource.
@@ -37,6 +41,8 @@ resource "mongodbatlas_federated_settings_identity_provider" "oidc" {
user_claim = "sub"
}
```
+### Further Examples
+- [Azure Federated Settings Identity Provider](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_settings_identity_provider/azure)
## Argument Reference
@@ -80,4 +86,4 @@ Identity Provider **must** be imported before using federation_settings_id-idp_i
$ terraform import mongodbatlas_federated_settings_identity_provider.identity_provider 6287a663c660f52b1c441c6c-0oad4fas87jL5Xnk12971234
```
-For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api/federation-configuration/)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api/federation-configuration/)
diff --git a/docs/resources/federated_settings_org_config.md b/docs/resources/federated_settings_org_config.md
index 924c5a3252..977af6257c 100644
--- a/docs/resources/federated_settings_org_config.md
+++ b/docs/resources/federated_settings_org_config.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Resource: mongodbatlas_federated_settings_org_config
`mongodbatlas_federated_settings_org_config` provides a Federated Settings Org Configuration resource. This allows the federated authentication configuration for an Atlas organization to be managed.
@@ -22,6 +26,10 @@ data "mongodbatlas_federated_settings_org_configs" "org_configs_ds" {
}
```
+### Further Examples
+- [Azure Federated Identity Provider with Org Config](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_settings_identity_provider/azure)
+- [Federated Settings Org Role Mappings](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_settings_org_role_mapping)
+
## Argument Reference
* `federation_settings_id` - (Required) Unique 24-hexadecimal digit string that identifies the federated authentication configuration.
@@ -57,4 +65,3 @@ $ terraform import mongodbatlas_federated_settings_org_config.org_connection 627
```
For more information see: [MongoDB Atlas API Reference.](https://www.mongodb.com/docs/atlas/reference/api/federation-configuration/)
-
diff --git a/docs/resources/federated_settings_org_role_mapping.md b/docs/resources/federated_settings_org_role_mapping.md
index 79b63ce7bc..ccbccec90d 100644
--- a/docs/resources/federated_settings_org_role_mapping.md
+++ b/docs/resources/federated_settings_org_role_mapping.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Federated Authentication"
+---
+
# Resource: mongodbatlas_federated_settings_org_role_mapping
`mongodbatlas_federated_settings_org_role_mapping` provides a Role Mapping resource. This allows an organization role mapping to be created.
@@ -27,6 +31,9 @@ resource "mongodbatlas_federated_settings_org_role_mapping" "org_group_role_mapp
}
```
+### Further Examples
+- [Okta and MongoDB Atlas Federated Settings Configuration](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_federated_settings_org_role_mapping)
+
## Argument Reference
* `federation_settings_id` - (Required) Unique 24-hexadecimal digit string that identifies the federated authentication configuration.
diff --git a/docs/resources/flex_cluster.md b/docs/resources/flex_cluster.md
index a70a883a9c..f059aec571 100644
--- a/docs/resources/flex_cluster.md
+++ b/docs/resources/flex_cluster.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Clusters"
+---
+
# Resource: mongodbatlas_flex_cluster
`mongodbatlas_flex_cluster` provides a Flex Cluster resource. The resource lets you create, update, delete and import a flex cluster.
@@ -35,6 +39,9 @@ output "mongodbatlas_flex_clusters_names" {
}
```
+### Further Examples
+- [Flex Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_flex_cluster)
+
## Schema
@@ -46,8 +53,10 @@ output "mongodbatlas_flex_clusters_names" {
### Optional
+- `delete_on_create_timeout` (Boolean) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
- `tags` (Map of String) Map that contains key-value pairs between 1 to 255 characters in length for tagging and categorizing the instance.
- `termination_protection_enabled` (Boolean) Flag that indicates whether termination protection is enabled on the cluster. If set to `true`, MongoDB Cloud won't delete the cluster. If set to `false`, MongoDB Cloud will delete the cluster.
+- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))
### Read-Only
@@ -74,6 +83,16 @@ Read-Only:
- `provider_name` (String) Human-readable label that identifies the cloud service provider.
+
+### Nested Schema for `timeouts`
+
+Optional:
+
+- `create` (String) A string that can be [parsed as a duration](https://pkg.go.dev/time#ParseDuration) consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
+- `delete` (String) A string that can be [parsed as a duration](https://pkg.go.dev/time#ParseDuration) consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours). Setting a timeout for a Delete operation is only applicable if changes are saved into state before the destroy operation occurs.
+- `update` (String) A string that can be [parsed as a duration](https://pkg.go.dev/time#ParseDuration) consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
+
+
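+A minimal sketch (the project ID variable and cluster name are assumptions) showing the optional `timeouts` and `delete_on_create_timeout` attributes on a flex cluster:
+
+```terraform
+resource "mongodbatlas_flex_cluster" "example" {
+  project_id = var.project_id # assumed to be defined elsewhere
+  name       = "flex-cluster-example"
+  provider_settings = {
+    backing_provider_name = "AWS"
+    region_name           = "US_EAST_1"
+  }
+  termination_protection_enabled = true
+
+  # Nested timeouts attribute, per the schema above.
+  timeouts = {
+    create = "30m"
+  }
+  delete_on_create_timeout = true
+}
+```
+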
### Nested Schema for `backup_settings`
diff --git a/docs/resources/global_cluster_config.md b/docs/resources/global_cluster_config.md
index 7357109515..da6db530d9 100644
--- a/docs/resources/global_cluster_config.md
+++ b/docs/resources/global_cluster_config.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Global Clusters"
+---
+
# Resource: mongodbatlas_global_cluster_config
`mongodbatlas_global_cluster_config` provides a Global Cluster Configuration resource.
@@ -19,33 +23,32 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "GEOSHARDED"
backup_enabled = true
- replication_specs {
+ replication_specs = [{
zone_name = "Zone 1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_CENTRAL_1"
- }
- }
-
- replication_specs {
+ }]
+ },
+ {
zone_name = "Zone 2"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_2"
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_global_cluster_config" "config" {
@@ -93,7 +96,6 @@ In addition to all arguments above, the following attributes are exported:
* `id` - Terraform's unique identifier used internally for state management.
* `custom_zone_mapping_zone_id` - A map of all custom zone mappings defined for the Global Cluster to `replication_specs.*.zone_id`. Atlas automatically maps each location code to the closest geographical zone. Custom zone mappings allow administrators to override these automatic mappings. If your Global Cluster does not have any custom zone mappings, this document is empty.
-* `custom_zone_mapping` - (Deprecated) A map of all custom zone mappings defined for the Global Cluster to `replication_specs.*.id`. This attribute is deprecated, use `custom_zone_mapping_zone_id` instead. This attribute is not set when a cluster uses independent shard scaling. To learn more, see the [Sharding Configuration guide](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).
## Import
diff --git a/docs/resources/ldap_configuration.md b/docs/resources/ldap_configuration.md
index a6d2642801..e6e171710e 100644
--- a/docs/resources/ldap_configuration.md
+++ b/docs/resources/ldap_configuration.md
@@ -1,3 +1,7 @@
+---
+subcategory: "LDAP Configuration"
+---
+
# Resource: mongodbatlas_ldap_configuration
`mongodbatlas_ldap_configuration` provides an LDAP Configuration resource. This allows an LDAP configuration for an Atlas project to be created and managed. This endpoint doesn’t verify connectivity using the provided LDAP over TLS configuration details. To verify a configuration before saving it, use the resource to [verify](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/resources/ldap_verify.md) the LDAP configuration.
diff --git a/docs/resources/ldap_verify.md b/docs/resources/ldap_verify.md
index 681a2d0223..90195ef238 100644
--- a/docs/resources/ldap_verify.md
+++ b/docs/resources/ldap_verify.md
@@ -1,3 +1,7 @@
+---
+subcategory: "LDAP Configuration"
+---
+
# Resource: mongodbatlas_ldap_verify
`mongodbatlas_ldap_verify` provides an LDAP Verify resource. This allows verification of an LDAP configuration over TLS for an Atlas project. Atlas retains only the most recent request for each project.
@@ -16,17 +20,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud backup snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_ldap_verify" "test" {
@@ -66,4 +70,4 @@ LDAP Configuration must be imported using project ID and request ID, e.g.
$ terraform import mongodbatlas_ldap_verify.test 5d09d6a59ccf6445652a444a-5d09d6a59ccf6445652a444a
```
-For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
\ No newline at end of file
+For more information see: [MongoDB Atlas API Reference.](https://docs.atlas.mongodb.com/reference/api/ldaps-configuration-request-verification)
diff --git a/docs/resources/maintenance_window.md b/docs/resources/maintenance_window.md
index bea1a9c461..899de34343 100644
--- a/docs/resources/maintenance_window.md
+++ b/docs/resources/maintenance_window.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Maintenance Windows"
+---
+
# Resource: mongodbatlas_maintenance_window
`mongodbatlas_maintenance_window` provides a resource to schedule the maintenance window for your MongoDB Atlas project and/or to defer scheduled maintenance up to two times. Please refer to the [Maintenance Windows](https://www.mongodb.com/docs/atlas/tutorial/cluster-maintenance-window/#configure-maintenance-window) documentation for more details.
@@ -37,19 +41,19 @@ Once maintenance is scheduled for your cluster, you cannot change your maintenan
}
```
+### Further Examples
+- [Configure Maintenance Window](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_maintenance_window)
+
## Argument Reference
* `project_id` - The unique identifier of the project for the Maintenance Window.
* `day_of_week` - (Required) Day of the week when you would like the maintenance window to start as a 1-based integer: Su=1, M=2, T=3, W=4, T=5, F=6, Sa=7.
-* `hour_of_day` - Hour of the day when you would like the maintenance window to start. This parameter uses the 24-hour clock, where midnight is 0, noon is 12 (Time zone is UTC). Defaults to 0.
-* `start_asap` - Flag indicating whether project maintenance has been directed to start immediately. If you request that maintenance begin immediately, this field returns true from the time the request was made until the time the maintenance event completes.
+* `hour_of_day` - (Required) Hour of the day when you would like the maintenance window to start. This parameter uses the 24-hour clock, where midnight is 0, noon is 12 (Time zone is UTC).
* `defer` - Defer the next scheduled maintenance for the given project for one week.
* `auto_defer` - Defer any scheduled maintenance for the given project for one week.
* `auto_defer_once_enabled` - Flag that indicates whether you want to defer all maintenance windows one week before they would be triggered.
* `protected_hours` - (Optional) Defines the time period during which there will be no standard updates to the clusters. See [Protected Hours](#protected-hours).
--> **NOTE:** The `start_asap` attribute can't be used because of breaks the Terraform flow, but you can enable via API.
-
### Protected Hours
* `start_hour_of_day` - Zero-based integer that represents the beginning hour of the day for the protected hours window.
- `end_hour_of_day` - Zero-based integer that represents the end hour of the day for the protected hours window.
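+
+The arguments above can be sketched together as follows (the project ID is a placeholder, and the `protected_hours` block syntax is shown for illustration):
+
+```terraform
+resource "mongodbatlas_maintenance_window" "example" {
+  project_id  = "<PROJECT-ID>"
+  day_of_week = 3 # Tuesday (Su=1 ... Sa=7)
+  hour_of_day = 4 # 04:00 UTC
+
+  # No standard updates between 09:00 and 17:00 UTC.
+  protected_hours {
+    start_hour_of_day = 9
+    end_hour_of_day   = 17
+  }
+}
+```
+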
@@ -60,6 +64,9 @@ In addition to all arguments above, the following attributes are exported:
* `number_of_deferrals` - Number of times the current maintenance event for this project has been deferred; there can be a maximum of 2 deferrals.
* `time_zone_id` - Identifier for the current time zone of the maintenance window. This can only be updated via the Project Settings UI.
+* `start_asap` - Flag indicating whether project maintenance has been directed to start immediately. If requested, this field returns true from the time the request was made until the time the maintenance event completes.
+
+-> **NOTE:** The `start_asap` attribute can only be enabled via API.
## Import
diff --git a/docs/resources/mongodb_employee_access_grant.md b/docs/resources/mongodb_employee_access_grant.md
index bd93c4e1a3..8a4d2b9183 100644
--- a/docs/resources/mongodb_employee_access_grant.md
+++ b/docs/resources/mongodb_employee_access_grant.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Clusters"
+---
+
# Resource: mongodbatlas_mongodb_employee_access_grant
`mongodbatlas_mongodb_employee_access_grant` provides a MongoDB Employee Access Grant resource. The resource lets you create, delete, update and import a MongoDB employee access grant.
@@ -26,6 +30,9 @@ output "expiration_time" {
}
```
+### Further Examples
+- [Grant log access to MongoDB employees](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_mongodb_employee_access_grant)
+
## Schema
diff --git a/docs/resources/network_container.md b/docs/resources/network_container.md
index bfe429aa9a..166fc65593 100644
--- a/docs/resources/network_container.md
+++ b/docs/resources/network_container.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Network Peering"
+---
+
# Resource: mongodbatlas_network_container
`mongodbatlas_network_container` provides a Network Peering Container resource. The resource lets you create, edit and delete network peering containers. You must delete network peering containers before creating clusters in your project. You can't delete a network peering container if your project contains clusters. The resource requires your Project ID. Each cloud provider requires slightly different attributes so read the argument reference carefully.
@@ -47,6 +51,9 @@ resource "mongodbatlas_network_container" "test" {
}
```
+### Further Examples
+- [GCP and MongoDB Atlas VPC Peering](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_network_peering/gcp)
+
## Argument Reference
* `project_id` - (Required) Unique identifier for the Atlas project for this Network Peering Container.
diff --git a/docs/resources/network_peering.md b/docs/resources/network_peering.md
index 5851617ccc..d156a5837a 100644
--- a/docs/resources/network_peering.md
+++ b/docs/resources/network_peering.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Network Peering"
+---
+
# Resource: mongodbatlas_network_peering
`mongodbatlas_network_peering` provides a Network Peering Connection resource. The resource lets you create, edit and delete network peering connections. The resource requires your Project ID.
@@ -108,17 +112,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "GCP"
region_name = "US_EAST_4"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
depends_on = [ google_compute_network_peering.peering ]
}
@@ -167,17 +171,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AZURE"
region_name = "US_EAST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
depends_on = [ mongodbatlas_network_peering.test ]
}
@@ -196,17 +200,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
# the following assumes an AWS provider is configured
@@ -248,17 +252,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "GCP"
region_name = "US_EAST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
# Create the peering connection request
@@ -300,17 +304,17 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AZURE"
region_name = "US_EAST_2"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
# Create the peering connection request
@@ -325,11 +329,19 @@ resource "mongodbatlas_network_peering" "test" {
}
```
+### Further Examples
+- [AWS Network Peering](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_network_peering/aws)
+- [Azure Network Peering](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_network_peering/azure)
+- [GCP Network Peering](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_network_peering/gcp)
+
## Argument Reference
* `project_id` - (Required) The unique ID for the MongoDB Atlas project.
* `container_id` - (Required) Unique identifier of the MongoDB Atlas container for the provider (GCP) or provider/region (AWS, AZURE). You can create a MongoDB Atlas container using the network_container resource, or it can be obtained from the values returned by a cluster if a cluster was created before the first container.
* `provider_name` - (Required) Cloud provider to whom the peering connection is being made. (Possible Values `AWS`, `AZURE`, `GCP`).
+* `timeouts`- (Optional) The duration of time to wait for the resource to be created, updated, or deleted. The default timeout is `1h`. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
+* `delete_on_create_timeout`- (Optional) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that the deletion can finish. Default is `true`.
**AWS ONLY:**
diff --git a/docs/resources/online_archive.md b/docs/resources/online_archive.md
index 7e83b09913..b3b0277060 100644
--- a/docs/resources/online_archive.md
+++ b/docs/resources/online_archive.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Online Archive"
+---
+
# Resource: mongodbatlas_online_archive
`mongodbatlas_online_archive` resource provides access to create, edit, pause and resume an online archive for a collection.
@@ -102,6 +106,8 @@ resource "mongodbatlas_online_archive" "test" {
}
}
```
+### Further Examples
+- [Online Archive Example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_online_archive)
## Argument Reference
* `project_id` - (Required) The unique ID for the project
@@ -116,6 +122,8 @@ resource "mongodbatlas_online_archive" "test" {
* `partition_fields` - (Recommended) Fields to use to partition data. You can specify up to two frequently queried fields (or up to three fields when one of them is `date_field`) to use for partitioning data. Queries that don’t contain the specified fields require a full collection scan of all archived documents, which takes longer and increases your costs. To learn more about how partition improves query performance, see [Data Structure in S3](https://docs.mongodb.com/datalake/admin/optimize-query-performance/#data-structure-in-s3). The value of a partition field can be up to a maximum of 700 characters. Documents with values exceeding 700 characters are not archived. See [partition fields](#partition).
* `paused` - (Optional) State of the online archive. This is required for pausing an active online archive or resuming a paused online archive. If the collection has another active online archive, the resume request fails.
* `sync_creation` - (Optional) Flag that indicates whether the provider will wait for the state of the online archive to reach `IDLE` or `ACTIVE` when creating an online archive. Defaults to `false`.
+* `timeouts`- (Optional) The duration of time to wait for Online Archive to be created. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Online Archive create is `3h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
+* `delete_on_create_timeout`- (Optional) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that the deletion can finish. Default is `true`.
### Criteria
diff --git a/docs/resources/org_invitation.md b/docs/resources/org_invitation.md
index e5d69cda04..889c7f6d71 100644
--- a/docs/resources/org_invitation.md
+++ b/docs/resources/org_invitation.md
@@ -1,7 +1,13 @@
+---
+subcategory: "Organizations"
+---
+
# Resource: mongodbatlas_org_invitation
`mongodbatlas_org_invitation` invites a user to join an Atlas organization.
+~> **DEPRECATION:** This resource is deprecated. Migrate to `mongodbatlas_cloud_user_org_assignment` for managing organization membership. See the [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
+
Each invitation for an Atlas user includes roles that Atlas grants the user when they accept the invitation.
The [MongoDB Documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/#organization-roles) describes the roles a user can have.
@@ -38,6 +44,9 @@ resource "mongodbatlas_org_invitation" "test1" {
}
```
+### Further Examples
+- [Migrate Org Invitation to Cloud User Org Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_org_invitation_to_cloud_user_org_assignment)
+
## Argument Reference
* `org_id` - (Required) Unique 24-hexadecimal digit string that identifies the organization to which you want to invite a user.
diff --git a/docs/resources/organization.md b/docs/resources/organization.md
index 4b3902ffb2..34d055a245 100644
--- a/docs/resources/organization.md
+++ b/docs/resources/organization.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Organizations"
+---
+
# Resource: mongodbatlas_organization
`mongodbatlas_organization` provides programmatic management (including creation) of a MongoDB Atlas Organization resource.
@@ -17,6 +21,11 @@ resource "mongodbatlas_organization" "test" {
}
```
+### Further Examples
+- [Organization setup - step 1](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_organization/organization-step-1)
+- [Organization setup - step 2](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_organization/organization-step-2)
+- [Organization import](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_organization/organization-import)
+
## Argument Reference
* `name` - (Required) The name of the organization.
diff --git a/docs/resources/private_endpoint_regional_mode.md b/docs/resources/private_endpoint_regional_mode.md
index 754c4c1e56..3e37a2ec8b 100644
--- a/docs/resources/private_endpoint_regional_mode.md
+++ b/docs/resources/private_endpoint_regional_mode.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Resource: private_endpoint_regional_mode
`mongodbatlas_private_endpoint_regional_mode` provides a Private Endpoint Regional Mode resource. This represents a regionalized private endpoint setting for a Project. Enable it to allow region specific private endpoints.
@@ -22,53 +26,50 @@ resource "mongodbatlas_advanced_cluster" "cluster_atlas" {
cluster_type = "GEOSHARDED"
backup_enabled = true
- replication_specs { # Shard 1
+ replication_specs = [{ # Shard 1
zone_name = "Zone 1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = var.atlas_region_east
- }
-
- region_configs {
- electable_specs {
+ },
+ {
+ electable_specs = {
instance_size = "M30"
node_count = 2
}
provider_name = "AWS"
priority = 6
region_name = var.atlas_region_west
- }
- }
-
- replication_specs { # Shard 2
+ }]
+ },
+ { # Shard 2
zone_name = "Zone 1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = var.atlas_region_east
- }
-
- region_configs {
- electable_specs {
+ },
+ {
+ electable_specs = {
instance_size = "M30"
node_count = 2
}
provider_name = "AWS"
priority = 6
region_name = var.atlas_region_west
- }
- }
+ }]
+ }]
depends_on = [
mongodbatlas_privatelink_endpoint_service.test_west,
@@ -123,6 +124,9 @@ resource "aws_vpc_endpoint" "test_east" {
```
+### Further Examples
+- [AWS PrivateLink Geosharded Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded)
+
## Argument Reference
* `project_id` - (Required) Unique identifier for the project.
* `enabled` - (Optional) Flag that indicates whether the regionalized private endpoint setting is enabled for the project. Set this value to true to create more than one private endpoint in a cloud provider region to connect to multi-region and global Atlas sharded clusters. You can enable this setting only if your Atlas project contains no replica sets. You can't disable this setting if you have:
diff --git a/docs/resources/privatelink_endpoint.md b/docs/resources/privatelink_endpoint.md
index 9c0a03d310..7584e55273 100644
--- a/docs/resources/privatelink_endpoint.md
+++ b/docs/resources/privatelink_endpoint.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Resource: mongodbatlas_privatelink_endpoint
`mongodbatlas_privatelink_endpoint` provides a Private Endpoint resource. This represents a [Private Endpoint Service](https://www.mongodb.com/docs/atlas/security-private-endpoint/#private-endpoint-concepts) that can be created in an Atlas project.
@@ -30,8 +34,10 @@ resource "mongodbatlas_privatelink_endpoint" "test" {
}
```
-### Available complete examples
-- [Setup private connection to a MongoDB Atlas Cluster with AWS VPC](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster)
+### Further Examples
+- [AWS PrivateLink Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws)
+- [Azure PrivateLink Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/azure)
+- [GCP Private Service Connect Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/gcp)
## Argument Reference
@@ -40,7 +46,7 @@ resource "mongodbatlas_privatelink_endpoint" "test" {
* `region` - (Required) Cloud provider region in which you want to create the private endpoint connection.
Accepted values are: [AWS regions](https://docs.atlas.mongodb.com/reference/amazon-aws/#amazon-aws), [AZURE regions](https://docs.atlas.mongodb.com/reference/microsoft-azure/#microsoft-azure) and [GCP regions](https://docs.atlas.mongodb.com/reference/google-gcp/#std-label-google-gcp)
* `timeouts`- (Optional) The duration of time to wait for Private Endpoint to be created or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Private Endpoint create & delete is `1h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
-
+* `delete_on_create_timeout`- (Optional) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that the deletion can finish. Default is `true`.
## Attributes Reference
diff --git a/docs/resources/privatelink_endpoint_serverless.md b/docs/resources/privatelink_endpoint_serverless.md
deleted file mode 100644
index 9f1f534a29..0000000000
--- a/docs/resources/privatelink_endpoint_serverless.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This resource is deprecated and will be removed in March 2025. For more datails see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide)
-
-# Resource: privatelink_endpoint_serverless
-
-`privatelink_endpoint_serverless` Provides a Serverless PrivateLink Endpoint resource.
-This is the first of two resources required to configure PrivateLink for Serverless, the second is [mongodbatlas_privatelink_endpoint_service_serverless](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/privatelink_endpoint_service_serverless).
-
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
-
-## Example Usage
-
-### AWS Example
-```terraform
-
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AWS"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AWS"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST_1"
- continuous_backup_enabled = true
-}
-```
-
-
-## Argument Reference
-
-* `project_id` - (Required) Unique 24-digit hexadecimal string that identifies the project.
-* `instance_name` - (Required) Human-readable label that identifies the serverless instance.
-* `provider_name` - (Required) Cloud provider name; AWS is currently supported
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-* `endpoint_id` - Unique 24-hexadecimal digit string that identifies the private endpoint.
-* `endpoint_service_name` - Unique string that identifies the PrivateLink endpoint service.
-* `private_link_service_resource_id` - Root-relative path that identifies the Azure Private Link Service that MongoDB Cloud manages.
-* `cloud_provider_endpoint_id` - Unique string that identifies the private endpoint's network interface.
-* `comment` - Human-readable string to associate with this private endpoint.
-* `status` - Human-readable label that indicates the current operating status of the private endpoint. Values include: RESERVATION_REQUESTED, RESERVED, INITIATING, AVAILABLE, FAILED, DELETING.
-* `timeouts`- (Optional) The duration of time to wait for Private Endpoint Service to be created or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Private Endpoint create & delete is `2h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
-
-## Import
-
-Serverless privatelink endpoint can be imported using project ID and endpoint ID, in the format `project_id`--`endpoint_id`, e.g.
-
-```
-$ terraform import mongodbatlas_privatelink_endpoint_serverless.test 1112222b3bf99403840e8934--serverless_name--vpce-jjg5e24qp93513h03
-```
-
-For more information see: [MongoDB Atlas API - Serverless Private Endpoints](https://www.mongodb.com/docs/atlas/reference/api/serverless-private-endpoints-get-one/).
diff --git a/docs/resources/privatelink_endpoint_service.md b/docs/resources/privatelink_endpoint_service.md
index 9e6b0948d1..90d189dd47 100644
--- a/docs/resources/privatelink_endpoint_service.md
+++ b/docs/resources/privatelink_endpoint_service.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Private Endpoint Services"
+---
+
# Resource: mongodbatlas_privatelink_endpoint_service
`mongodbatlas_privatelink_endpoint_service` provides a Private Endpoint Interface Link resource. This represents a Private Endpoint Interface Link, which adds one [Interface Endpoint](https://www.mongodb.com/docs/atlas/security-private-endpoint/#private-endpoint-concepts) to a private endpoint connection in an Atlas project.
@@ -140,8 +144,10 @@ resource "mongodbatlas_privatelink_endpoint_service" "test" {
```
-### Available complete examples
-- [Setup private connection to a MongoDB Atlas Cluster with AWS VPC](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster)
+### Further Examples
+- [AWS PrivateLink Endpoint and Service](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/cluster)
+- [Azure Private Link Endpoint and Service](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/azure)
+- [GCP Private Service Connect Endpoint and Service](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/gcp)
## Argument Reference
@@ -153,6 +159,7 @@ resource "mongodbatlas_privatelink_endpoint_service" "test" {
* `gcp_project_id` - (Optional) Unique identifier of the GCP project in which you created your endpoints. Only for `GCP`.
* `endpoints` - (Optional) Collection of individual private endpoints that comprise your endpoint group. Only for `GCP`. See below.
* `timeouts`- (Optional) The duration of time to wait for Private Endpoint Service to be created or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Private Endpoint create & delete is `2h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
+* `delete_on_create_timeout`- (Optional) Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that the deletion can finish. Default is `true`.
### `endpoints`
* `ip_address` - (Optional) Private IP address of the endpoint you created in GCP.
diff --git a/docs/resources/privatelink_endpoint_service_data_federation_online_archive.md b/docs/resources/privatelink_endpoint_service_data_federation_online_archive.md
index fe04beaaf5..65edd45fb5 100644
--- a/docs/resources/privatelink_endpoint_service_data_federation_online_archive.md
+++ b/docs/resources/privatelink_endpoint_service_data_federation_online_archive.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Data Federation"
+---
+
# Resource: mongodbatlas_privatelink_endpoint_service_data_federation_online_archive
`mongodbatlas_privatelink_endpoint_service_data_federation_online_archive` provides a Private Endpoint Service resource for Data Federation and Online Archive. The resource allows you to create and manage a private endpoint for Federated Database Instances and Online Archives to the specified project.
@@ -35,6 +39,9 @@ resource "mongodbatlas_privatelink_endpoint_service_data_federation_online_archi
The `service_name` value for the region in question can be found in the [MongoDB Atlas Administration](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createdatafederationprivateendpoint) documentation.
+### Further Examples
+- [AWS PrivateLink for Data Federation and Online Archive](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_privatelink_endpoint/aws/data-federation-online-archive)
+
## Argument Reference
* `project_id` (Required) - Unique 24-hexadecimal digit string that identifies your project.
@@ -60,4 +67,3 @@ $ terraform import mongodbatlas_privatelink_endpoint_service_data_federation_onl
```
See [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Data-Federation/operation/createDataFederationPrivateEndpoint) Documentation for more information.
-
diff --git a/docs/resources/privatelink_endpoint_service_serverless.md b/docs/resources/privatelink_endpoint_service_serverless.md
deleted file mode 100644
index 9835d96f5f..0000000000
--- a/docs/resources/privatelink_endpoint_service_serverless.md
+++ /dev/null
@@ -1,130 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This resource is deprecated and will be removed in March 2025. For more datails see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide)
-
-# Resource: privatelink_endpoint_service_serverless
-
-`privatelink_endpoint_service_serverless` Provides a Serverless PrivateLink Endpoint Service resource.
-This is the second of two resources required to configure PrivateLink for Serverless, the first is [mongodbatlas_privatelink_endpoint_serverless](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/privatelink_endpoint_serverless).
-
--> **NOTE:** Groups and projects are synonymous terms. You may find group_id in the official documentation.
--> **NOTE:** Create waits for all serverless instances on the project to IDLE in order for their operations to complete. This ensures the latest connection strings can be retrieved following creation of this resource. Default timeout is 2hrs.
-
-## Example Usage
-
-## Example with AWS
-```terraform
-
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AWS"
-}
-
-
-resource "aws_vpc_endpoint" "ptfe_service" {
- vpc_id = "vpc-7fc0a543"
- service_name = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_service_name
- vpc_endpoint_type = "Interface"
- subnet_ids = ["subnet-de0406d2"]
- security_group_ids = ["sg-3f238186"]
-}
-
-resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = ""
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- cloud_provider_endpoint_id = aws_vpc_endpoint.ptfe_service.id
- provider_name = "AWS"
- comment = "New serverless endpoint"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AWS"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST_1"
- continuous_backup_enabled = true
-}
-```
-
-## Example with AZURE
-```terraform
-resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = var.project_id
- provider_name = "AZURE"
-}
-
-resource "azurerm_private_endpoint" "test" {
- name = "endpoint-test"
- location = data.azurerm_resource_group.test.location
- resource_group_name = var.resource_group_name
- subnet_id = azurerm_subnet.test.id
- private_service_connection {
- name = mongodbatlas_privatelink_endpoint_serverless.test.private_link_service_name
- private_connection_resource_id = mongodbatlas_privatelink_endpoint_serverless.test.private_link_service_resource_id
- is_manual_connection = true
- request_message = "Azure Private Link test"
- }
-
-}
-
-resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- cloud_provider_endpoint_id = azurerm_private_endpoint.test.id
- private_endpoint_ip_address = azurerm_private_endpoint.test.private_service_connection.0.private_ip_address
- provider_name = "AZURE"
- comment = "test"
-}
-
-resource "mongodbatlas_serverless_instance" "test" {
- project_id = ""
- name = "test-db"
- provider_settings_backing_provider_name = "AZURE"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST"
- continuous_backup_enabled = true
-}
-```
-
-### Available complete examples
-- [Setup private connection to a MongoDB Atlas Serverless Instance with AWS VPC](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/aws-privatelink-endpoint/serverless-instance)
-
-
-## Argument Reference
-
-* `project_id` - (Required) Unique 24-digit hexadecimal string that identifies the project.
-* `instance_name` - (Required) Human-readable label that identifies the serverless instance.
-* `endpoint_id` - (Required) Unique 24-hexadecimal digit string that identifies the private endpoint.
-* `cloud_provider_endpoint_id` - (Optional) Unique string that identifies the private endpoint's network interface.
-* `private_endpoint_ip_address` - (Optional) IPv4 address of the private endpoint in your Azure VNet that someone added to this private endpoint service.
-* `provider_name` - (Required) Cloud provider for which you want to create a private endpoint. Atlas accepts `AWS`, `AZURE`.
-* `comment` - (Optional) Human-readable string to associate with this private endpoint.
-* `timeouts`- (Optional) The duration of time to wait for Private Endpoint Service to be created or deleted. The timeout value is defined by a signed sequence of decimal numbers with a time unit suffix such as: `1h45m`, `300s`, `10m`, etc. The valid time units are: `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`. The default timeout for Private Endpoint create & delete is `2h`. Learn more about timeouts [here](https://www.terraform.io/plugin/sdkv2/resources/retries-and-customizable-timeouts).
-
-## Attributes Reference
-
-In addition to all arguments above, the following attributes are exported:
-
-* `endpoint_service_name` - Unique string that identifies the PrivateLink endpoint service.
-* `private_link_service_resource_id` - Root-relative path that identifies the Azure Private Link Service that MongoDB Cloud manages.
-* `private_endpoint_ip_address` - IPv4 address of the private endpoint in your Azure VNet that someone added to this private endpoint service.
-* `cloud_provider_endpoint_id` - Unique string that identifies the private endpoint's network interface.
-* `comment` - Human-readable string to associate with this private endpoint.
-* `error_message` - Human-readable error message that indicates the error condition associated with establishing the private endpoint connection.
-* `status` - Human-readable label that indicates the current operating status of the private endpoint. Values include: RESERVATION_REQUESTED, RESERVED, INITIATING, AVAILABLE, FAILED, DELETING.
-
-## Import
-
-Serverless privatelink endpoint can be imported using project ID and endpoint ID, in the format `project_id`--`endpoint_id`, e.g.
-
-```
-$ terraform import mongodbatlas_privatelink_endpoint_service_serverless.test 1112222b3bf99403840e8934--serverless_name--vpce-jjg5e24qp93513h03
-```
-
-For more information see: [MongoDB Atlas API - Serverless Private Endpoints](https://www.mongodb.com/docs/atlas/reference/api/serverless-private-endpoints-get-one/).
diff --git a/docs/resources/project.md b/docs/resources/project.md
index 542d65d639..537ccd08c1 100644
--- a/docs/resources/project.md
+++ b/docs/resources/project.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Projects"
+---
+
# Resource: mongodbatlas_project
`mongodbatlas_project` provides a Project resource. This allows project to be created.
@@ -45,6 +49,9 @@ resource "mongodbatlas_project" "test" {
}
```
+### Further Examples
+- [Atlas Project with custom limits](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_project)
+
## Argument Reference
* `name` - (Required) The name of the project you want to create.
@@ -84,6 +91,8 @@ To learn more, see [Resource Tags](https://www.mongodb.com/docs/atlas/tags/).
### Teams
Teams attribute is optional
+~> **DEPRECATION:** This attribute is deprecated and will be removed in the next major release. Please transition to `mongodbatlas_team_project_assignment`. For more details, see [Migration Guide: Project Teams Attribute to Team Project Assignment Resource](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management).
+
~> **NOTE:** Atlas limits the number of users to a maximum of 100 teams per project and a maximum of 250 teams per organization.
* `team_id` - (Required) The unique identifier of the team you want to associate with the project. The team and project must share the same parent organization.
diff --git a/docs/resources/project_api_key.md b/docs/resources/project_api_key.md
index 1105e42f8f..091c498ef1 100644
--- a/docs/resources/project_api_key.md
+++ b/docs/resources/project_api_key.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# Resource: mongodbatlas_project_api_key
`mongodbatlas_project_api_key` provides a Project API Key resource. This allows project API Key to be created.
@@ -37,6 +41,9 @@ resource "mongodbatlas_project_api_key" "test" {
}
```
+### Further Examples
+- [Legacy Module: Create and Assign Project API Key](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_api_key_assignment/module/old_module)
+
## Argument Reference
* `description` - (Required) Description of this Project API key.
diff --git a/docs/resources/project_invitation.md b/docs/resources/project_invitation.md
index 06ae619a4c..b7ae5dd093 100644
--- a/docs/resources/project_invitation.md
+++ b/docs/resources/project_invitation.md
@@ -1,7 +1,13 @@
+---
+subcategory: "Projects"
+---
+
# Resource: mongodbatlas_project_invitation
`mongodbatlas_project_invitation` invites a user to join an Atlas project.
+~> **DEPRECATION:** This resource is deprecated. Migrate to `mongodbatlas_cloud_user_project_assignment` for managing project membership. See the [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management).
+
Each invitation for an Atlas user includes roles that Atlas grants the user when they accept the invitation.
The [MongoDB Documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/#project-roles) describes the roles which can be assigned to a user.
@@ -31,6 +37,9 @@ resource "mongodbatlas_project_invitation" "test" {
}
```
+### Further Examples
+- [Migrate Project Invitation to Cloud User Project Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_project_invitation_to_cloud_user_project_assignment)
+
## Argument Reference
* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies the project to which you want to invite a user.
diff --git a/docs/resources/project_ip_access_list.md b/docs/resources/project_ip_access_list.md
index 5566f23b43..5aeea71710 100644
--- a/docs/resources/project_ip_access_list.md
+++ b/docs/resources/project_ip_access_list.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Project IP Access List"
+---
+
# Resource: mongodbatlas_project_ip_access_list
`mongodbatlas_project_ip_access_list` provides an IP Access List entry resource. The access list grants access from IPs, CIDRs or AWS Security Groups (if VPC Peering is enabled) to clusters within the Project.
@@ -58,6 +62,11 @@ resource "mongodbatlas_project_ip_access_list" "test" {
~> **IMPORTANT:** In order to use AWS Security Group(s) VPC Peering must be enabled like above example.
+
+### Further Examples
+- [Project IP Access List](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_project_ip_access_list)
+
+
## Argument Reference
* `project_id` - (Required) Unique identifier for the project to which you want to add one or more access list entries.
diff --git a/docs/resources/push_based_log_export.md b/docs/resources/push_based_log_export.md
index 893ee19ca7..0e73f9e3ce 100644
--- a/docs/resources/push_based_log_export.md
+++ b/docs/resources/push_based_log_export.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Push-Based Log Export"
+---
+
# Resource: mongodbatlas_push_based_log_export
`mongodbatlas_push_based_log_export` provides a resource for push-based log export feature. The resource lets you configure, enable & disable the project level settings for the push-based log export feature. Using this resource you
@@ -46,6 +50,9 @@ output "test" {
}
```
+### Further Examples
+- [Push-Based Log Export](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_push_based_log_export)
+
## Schema
@@ -59,6 +66,7 @@ output "test" {
### Optional
+- `delete_on_create_timeout` (Boolean) Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, the deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
- `prefix_path` (String) S3 directory in which vector writes in order to store the logs. An empty string denotes the root directory.
- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))
diff --git a/docs/resources/resource_policy.md b/docs/resources/resource_policy.md
index 12addc3c4f..3b4b9261e9 100644
--- a/docs/resources/resource_policy.md
+++ b/docs/resources/resource_policy.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Resource Policies"
+---
+
# Resource: mongodbatlas_resource_policy
`mongodbatlas_resource_policy` provides a Resource Policy resource. The resource lets you create, edit and delete resource policies to prevent misconfigurations and reduce the need for corrective interventions in your organization.
@@ -88,6 +92,9 @@ output "policy_ids" {
}
```
+### Further Examples
+- [Atlas Resource Policy](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_resource_policy)
+
## Schema
diff --git a/docs/resources/search_deployment.md b/docs/resources/search_deployment.md
index 3cc419144f..f5574b08f4 100644
--- a/docs/resources/search_deployment.md
+++ b/docs/resources/search_deployment.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# Resource: mongodbatlas_search_deployment
`mongodbatlas_search_deployment` provides a Search Deployment resource. The resource lets you create, edit and delete dedicated search nodes in a cluster.
@@ -19,17 +23,17 @@ resource "mongodbatlas_advanced_cluster" "example" {
name = "ClusterExample"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_search_deployment" "example" {
@@ -57,6 +61,9 @@ output "mongodbatlas_search_deployment_encryption_at_rest_provider" {
}
```
+### Further Examples
+- [Atlas Cluster with dedicated Search Nodes Deployment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_search_deployment)
+
## Schema
@@ -68,7 +75,7 @@ output "mongodbatlas_search_deployment_encryption_at_rest_provider" {
### Optional
-- `delete_on_create_timeout` (Boolean) Flag that indicates whether to delete the search deployment if the creation times out, default is false.
+- `delete_on_create_timeout` (Boolean) Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, the deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
- `skip_wait_on_update` (Boolean) If true, the resource update is executed without waiting until the [state](#state_name-1) is `IDLE`, making the operation faster. This might cause update errors to go unnoticed and lead to non-empty plans at the next terraform execution.
- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))
diff --git a/docs/resources/search_index.md b/docs/resources/search_index.md
index 14a8f33e36..cc7617cc83 100644
--- a/docs/resources/search_index.md
+++ b/docs/resources/search_index.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# Resource: mongodbatlas_search_index
`mongodbatlas_search_index` provides a Search Index resource. This allows indexes to be created.
diff --git a/docs/resources/serverless_instance.md b/docs/resources/serverless_instance.md
index ecb7c113b3..bf7d8cab66 100644
--- a/docs/resources/serverless_instance.md
+++ b/docs/resources/serverless_instance.md
@@ -1,8 +1,8 @@
---
-subcategory: "Deprecated"
+subcategory: "Serverless Instances"
---
-**WARNING:** This resource is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
+~> **DEPRECATION:** This resource is deprecated and will be removed in January 2026. For more details, see [Migration Guide: Transition out of Serverless Instances and Shared-tier clusters](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide).
# Resource: mongodbatlas_serverless_instance
@@ -25,15 +25,6 @@ resource "mongodbatlas_serverless_instance" "test" {
}
```
-**NOTE:** `mongodbatlas_serverless_instance` and `mongodbatlas_privatelink_endpoint_service_serverless` resources have a circular dependency in some respects.\
-That is, the `serverless_instance` must exist before the `privatelink_endpoint_service` can be created,\
-and the `privatelink_endpoint_service` must exist before the `serverless_instance` gets its respective `connection_strings_private_endpoint_srv` values.
-
-Because of this, the `serverless_instance` data source has particular value as a source of the `connection_strings_private_endpoint_srv`.\
-When using the data_source in-tandem with the afforementioned resources, we can create and retrieve the `connection_strings_private_endpoint_srv` in a single `terraform apply`.
-
-Follow this example to [setup private connection to a serverless instance using aws vpc](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/aws-privatelink-endpoint/serverless-instance) and get the connection strings in a single `terraform apply`
-
## Argument Reference
* `name` - (Required) Human-readable label that identifies the serverless instance.
diff --git a/docs/resources/stream_connection.md b/docs/resources/stream_connection.md
index 5fd86e4a27..2ad24458e6 100644
--- a/docs/resources/stream_connection.md
+++ b/docs/resources/stream_connection.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Resource: mongodbatlas_stream_connection
`mongodbatlas_stream_connection` provides a Stream Connection resource. The resource lets you create, edit, and delete stream instance connections.
@@ -19,6 +23,9 @@ resource "mongodbatlas_stream_connection" "test" {
}
```
+### Further Examples
+- [Atlas Stream Connection](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_connection)
+
### Example Cross Project Cluster Connection
```terraform
diff --git a/docs/resources/stream_instance.md b/docs/resources/stream_instance.md
index 149de90b8e..c9bf2a357b 100644
--- a/docs/resources/stream_instance.md
+++ b/docs/resources/stream_instance.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Resource: mongodbatlas_stream_instance
`mongodbatlas_stream_instance` provides a Stream Instance resource. The resource lets you create, edit, and delete stream instances in a project.
@@ -15,6 +19,9 @@ resource "mongodbatlas_stream_instance" "test" {
}
```
+### Further Examples
+- [Atlas Stream Instance](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_instance)
+
## Argument Reference
* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies your project.
diff --git a/docs/resources/stream_privatelink_endpoint.md b/docs/resources/stream_privatelink_endpoint.md
index aeac4488d0..43c4bb1358 100644
--- a/docs/resources/stream_privatelink_endpoint.md
+++ b/docs/resources/stream_privatelink_endpoint.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Resource: mongodbatlas_stream_privatelink_endpoint
`mongodbatlas_stream_privatelink_endpoint` describes a Privatelink Endpoint for Streams.
@@ -249,6 +253,13 @@ output "privatelink_endpoint_id" {
}
```
+### Further Examples
+- [AWS Confluent PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/confluent_serverless)
+- [Confluent Dedicated Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/confluent_dedicated_cluster)
+- [AWS MSK PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/aws_msk_cluster)
+- [AWS S3 PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/s3)
+- [Azure PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/azure)
+
## Schema
diff --git a/docs/resources/stream_processor.md b/docs/resources/stream_processor.md
index d2f73ed1d7..2f86250401 100644
--- a/docs/resources/stream_processor.md
+++ b/docs/resources/stream_processor.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# Resource: mongodbatlas_stream_processor
`mongodbatlas_stream_processor` provides a Stream Processor resource. The resource lets you create, delete, import, start and stop a stream processor in a stream instance.
@@ -118,6 +122,9 @@ output "stream_processors_results" {
}
```
+### Further Examples
+- [Atlas Stream Processor](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_processor)
+
## Schema
@@ -132,10 +139,12 @@ output "stream_processors_results" {
### Optional
+- `delete_on_create_timeout` (Boolean) Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, the deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
- `options` (Attributes) Optional configuration for the stream processor. (see [below for nested schema](#nestedatt--options))
- `state` (String) The state of the stream processor. Commonly occurring states are 'CREATED', 'STARTED', 'STOPPED' and 'FAILED'. Used to start or stop the Stream Processor. Valid values are `CREATED`, `STARTED` or `STOPPED`. When a Stream Processor is created without specifying the state, it will default to `CREATED` state. When a Stream Processor is updated without specifying the state, it will default to the Previous state.
**NOTE** When a Stream Processor is updated without specifying the state, it is stopped and then restored to previous state upon update completion.
+- `timeouts` (Attributes) (see [below for nested schema](#nestedatt--timeouts))
### Read-Only
@@ -158,6 +167,15 @@ Required:
- `connection_name` (String) Name of the connection to write DLQ messages to. Must be an Atlas connection.
- `db` (String) Name of the database to use for the DLQ.
+
+### Nested Schema for `timeouts`
+
+Optional:
+
+- `create` (String) A string that can be [parsed as a duration](https://pkg.go.dev/time#ParseDuration) consisting of numbers and unit suffixes, such as "30s" or "2h45m". Valid time units are "s" (seconds), "m" (minutes), "h" (hours).
+
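+As a sketch, the `create` timeout above is configured as a nested object attribute on the resource (shown as a fragment of a `mongodbatlas_stream_processor` configuration, not a complete block):
+
+```terraform
+  timeouts = {
+    create = "45m" # any Go-style duration using "s", "m", or "h" units
+  }
+```
+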
## Import
Stream Processor resource can be imported using the Project ID, Stream Instance name and Stream Processor name, in the format `INSTANCE_NAME-PROJECT_ID-PROCESSOR_NAME`, e.g.
```
diff --git a/docs/resources/team.md b/docs/resources/team.md
index 5b7a0e7368..73a7b38728 100644
--- a/docs/resources/team.md
+++ b/docs/resources/team.md
@@ -1,3 +1,7 @@
+---
+subcategory: "Teams"
+---
+
# Resource: mongodbatlas_team
`mongodbatlas_team` provides a Team resource. The resource lets you create, edit and delete Teams. Also, Teams can be assigned to multiple projects, and team members’ access to the project is determined by the team’s project role.
@@ -16,11 +20,15 @@ resource "mongodbatlas_team" "test" {
}
```
+### Further Examples
+- [Team and user assignment (module maintainer) v1](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_user_team_assignment/module_maintainer/v1)
+- [Team and user assignment (module maintainer) v2](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/migrate_user_team_assignment/module_maintainer/v2)
+
## Argument Reference
* `org_id` - (Required) The unique identifier for the organization you want to associate the team with.
* `name` - (Required) The name of the team you want to create.
-* `usernames` - (Required) The Atlas usernames (email address). You can only add Atlas users who are part of the organization. Users who have not accepted an invitation to join the organization cannot be added as team members. There is a maximum of 250 Atlas users per team.
+* `usernames` - **(DEPRECATED)** (Optional) The Atlas usernames (email address). You can only add Atlas users who are part of the organization. Users who have not accepted an invitation to join the organization cannot be added as team members. There is a maximum of 250 Atlas users per team. This attribute is deprecated and will be removed in the next major release. Please transition to `mongodbatlas_cloud_user_team_assignment`. For more details, see [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management).
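+
+The deprecated `usernames` list can be replaced with one `mongodbatlas_cloud_user_team_assignment` resource per user. A minimal sketch (the `org_id`, `team_id`, and `user_id` argument names are assumptions; see the migration guide for the exact schema):
+
+```terraform
+# One assignment per team member instead of the `usernames` list
+resource "mongodbatlas_cloud_user_team_assignment" "member" {
+  for_each = toset(var.user_ids)
+
+  org_id  = var.org_id
+  team_id = mongodbatlas_team.test.team_id
+  user_id = each.value
+}
+```
+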
## Attributes Reference
diff --git a/docs/resources/team_project_assignment.md b/docs/resources/team_project_assignment.md
new file mode 100644
index 0000000000..e4221bb8d6
--- /dev/null
+++ b/docs/resources/team_project_assignment.md
@@ -0,0 +1,45 @@
+---
+subcategory: "Teams"
+---
+
+# Resource: mongodbatlas_team_project_assignment
+
+`mongodbatlas_team_project_assignment` provides a Team Project Assignment resource. It lets you assign a team to a project, update the team's roles in that project, and remove the assignment.
+
+## Example Usage
+
+```terraform
+resource "mongodbatlas_team_project_assignment" "this" {
+ project_id = var.project_id
+ team_id = var.team_id
+ role_names = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_team_project_assignment" "this" {
+ project_id = mongodbatlas_team_project_assignment.this.project_id
+ team_id = mongodbatlas_team_project_assignment.this.team_id
+}
+```
+
+### Further Examples
+- [Team Project Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_team_project_assignment)
+
+## Schema
+
+### Required
+
+- `project_id` (String) Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listprojects) endpoint to retrieve all projects to which the authenticated user has access.
+
+**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.
+- `role_names` (Set of String) One or more project-level roles assigned to the team.
+- `team_id` (String) Unique 24-hexadecimal character string that identifies the team.
+
+## Import
+
+The Team Project Assignment resource can be imported using the Project ID and Team ID, in the format `PROJECT_ID/TEAM_ID`.
+
+```
+$ terraform import mongodbatlas_team_project_assignment.test 9f3a7c2e54b8d1a0e6f4b3c2/a4d9f7b18e52c0fa36b7e9cd
+```
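+
+With Terraform 1.5 or later, the same import can be expressed declaratively with an `import` block instead of the CLI command:
+
+```terraform
+import {
+  to = mongodbatlas_team_project_assignment.test
+  id = "9f3a7c2e54b8d1a0e6f4b3c2/a4d9f7b18e52c0fa36b7e9cd"
+}
+```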
+
+For more information, see the [MongoDB Atlas API - Teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addallteamstoproject) documentation.
diff --git a/docs/resources/teams.md b/docs/resources/teams.md
deleted file mode 100644
index 5db231b8c1..0000000000
--- a/docs/resources/teams.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-subcategory: "Deprecated"
----
-
-**WARNING:** This resource is deprecated, use `mongodbatlas_team`
-
-# Resource: mongodbatlas_teams
-
-This resource is deprecated. Please transition to using `mongodbatlas_team` which defines the same underlying implementation, aligning the name of the resource with the implementation which manages a single team.
diff --git a/docs/resources/third_party_integration.md b/docs/resources/third_party_integration.md
index 65130f62f5..2b24604066 100644
--- a/docs/resources/third_party_integration.md
+++ b/docs/resources/third_party_integration.md
@@ -1,80 +1,87 @@
-# Resource: mongodbatlas_third_party_integration
-
-`mongodbatlas_third_party_integration` Provides a Third-Party Integration Settings for the given type.
-
--> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
-
--> **NOTE:** Slack integrations now use the OAuth2 verification method and must be initially configured, or updated from a legacy integration, through the Atlas third-party service integrations page. Legacy tokens will soon no longer be supported.[Read more about slack setup](https://docs.atlas.mongodb.com/tutorial/third-party-service-integrations/)
-
-~> **IMPORTANT** Each project can only have one configuration per {INTEGRATION-TYPE}.
-
-~> **IMPORTANT:** All arguments including the secrets will be stored in the raw state as plain-text. [Read more about sensitive data in state.](https://www.terraform.io/docs/state/sensitive-data.html)
-
-
-## Example Usage
-
-```terraform
-
-resource "mongodbatlas_third_party_integration" "test_datadog" {
- project_id = ""
- type = "DATADOG"
- api_key = ""
- region = ""
-}
-
-```
-
-## Argument Reference
-
-* `project_id` - (Required) The unique ID for the project to get all Third-Party service integrations
-* `type` - (Required) Third-Party Integration Settings type
- * PAGER_DUTY
- * DATADOG
- * OPS_GENIE
- * VICTOR_OPS
- * WEBHOOK
- * MICROSOFT_TEAMS
- * PROMETHEUS
-
-
-* `PAGER_DUTY`
- * `service_key` - Your Service Key.
- * `region` (Required) - PagerDuty region that indicates the API Uniform Resource Locator (URL) to use, either "US" or "EU". PagerDuty will use "US" by default.
-* `DATADOG`
- * `api_key` - Your API Key.
- * `region` (Required) - Two-letter code that indicates which API URL to use. See the `region` request parameter of [MongoDB API Third-Party Service Integration documentation](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createthirdpartyintegration) for more details.
- * `send_collection_latency_metrics` - Toggle sending collection latency metrics that includes database names and collection names and latency metrics on reads, writes, commands, and transactions. Default: `false`.
- * `send_database_metrics` - Toggle sending database metrics that includes database names and metrics on the number of collections, storage size, and index size. Default: `false`.
- * `send_user_provided_resource_tags` - Toggle sending user provided group and cluster resource tags with the datadog metrics. Default: `false`.
-* `OPS_GENIE`
- * `api_key` - Your API Key.
- * `region` (Required) - Two-letter code that indicates which API URL to use. See the `region` request parameter of [MongoDB API Third-Party Service Integration documentation](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createthirdpartyintegration) for more details.
-* `VICTOR_OPS`
- * `api_key` - Your API Key.
- * `routing_key` - An optional field for your Routing Key.
-* `WEBHOOK`
- * `url` - Your webhook URL.
- * `secret` - An optional field for your webhook secret.
-* `MICROSOFT_TEAMS`
- * `microsoft_teams_webhook_url` - Your Microsoft Teams incoming webhook URL.
-* `PROMETHEUS`
- * `user_name` - Your Prometheus username.
- * `password` - Your Prometheus password.
- * `service_discovery` - Indicates which service discovery method is used, either file or http.
- * `enabled` - Whether your cluster has Prometheus enabled.
-
--> **NOTE:** For certain attributes with default values, it's recommended to explicitly set them back to their default instead of removing them from the configuration. For example, if `send_collection_latency_metrics` is set to `true` and you want to revert to the default (`false`), set it to `false` rather than removing it.
-
-## Attributes Reference
-
-* `id` - Unique identifier of the integration.
-
-## Import
-
-Third-Party Integration Settings can be imported using project ID and the integration type, in the format `project_id`-`type`, e.g.
-
-```
-$ terraform import mongodbatlas_third_party_integration.test_datadog 1112222b3bf99403840e8934-DATADOG
-```
-
-See [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Third-Party-Integrations/operation/createThirdPartyIntegration) Documentation for more information.
+---
+subcategory: "Third-Party Integrations"
+---
+
+# Resource: mongodbatlas_third_party_integration
+
+`mongodbatlas_third_party_integration` provides Third-Party Integration Settings for the given type.
+
+-> **NOTE:** Groups and projects are synonymous terms. You may find `groupId` in the official documentation.
+
+-> **NOTE:** Slack integrations now use the OAuth2 verification method and must be initially configured, or updated from a legacy integration, through the Atlas third-party service integrations page. Legacy tokens will soon no longer be supported. [Read more about Slack setup](https://docs.atlas.mongodb.com/tutorial/third-party-service-integrations/)
+
+~> **IMPORTANT:** Each project can only have one configuration per {INTEGRATION-TYPE}.
+
+~> **IMPORTANT:** All arguments including the secrets will be stored in the raw state as plain-text. [Read more about sensitive data in state.](https://www.terraform.io/docs/state/sensitive-data.html)
+
+
+## Example Usage
+
+```terraform
+
+resource "mongodbatlas_third_party_integration" "test_datadog" {
+ project_id = ""
+ type = "DATADOG"
+ api_key = ""
+ region = ""
+}
+
+```
+
+### Further Examples
+- [Third-Party Integration Examples](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_third_party_integration)
+
+## Argument Reference
+
+* `project_id` - (Required) The unique ID for the project in which to configure the Third-Party Integration.
+* `type` - (Required) Third-Party Integration Settings type
+ * PAGER_DUTY
+ * DATADOG
+ * OPS_GENIE
+ * VICTOR_OPS
+ * WEBHOOK
+ * MICROSOFT_TEAMS
+ * PROMETHEUS
+
+
+* `PAGER_DUTY`
+ * `service_key` - Your Service Key.
+ * `region` (Required) - PagerDuty region that indicates the API Uniform Resource Locator (URL) to use, either "US" or "EU". PagerDuty will use "US" by default.
+* `DATADOG`
+ * `api_key` - Your API Key.
+ * `region` (Required) - Two-letter code that indicates which API URL to use. See the `region` request parameter of [MongoDB API Third-Party Service Integration documentation](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createthirdpartyintegration) for more details.
+ * `send_collection_latency_metrics` - Toggle sending collection latency metrics that includes database names and collection names and latency metrics on reads, writes, commands, and transactions. Default: `false`.
+ * `send_database_metrics` - Toggle sending database metrics that includes database names and metrics on the number of collections, storage size, and index size. Default: `false`.
+ * `send_user_provided_resource_tags` - Toggle sending user provided group and cluster resource tags with the datadog metrics. Default: `false`.
+* `OPS_GENIE`
+ * `api_key` - Your API Key.
+ * `region` (Required) - Two-letter code that indicates which API URL to use. See the `region` request parameter of [MongoDB API Third-Party Service Integration documentation](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createthirdpartyintegration) for more details.
+* `VICTOR_OPS`
+ * `api_key` - Your API Key.
+ * `routing_key` - An optional field for your Routing Key.
+* `WEBHOOK`
+ * `url` - Your webhook URL.
+ * `secret` - An optional field for your webhook secret.
+* `MICROSOFT_TEAMS`
+ * `microsoft_teams_webhook_url` - Your Microsoft Teams incoming webhook URL.
+* `PROMETHEUS`
+ * `user_name` - Your Prometheus username.
+ * `password` - Your Prometheus password.
+ * `service_discovery` - Indicates which service discovery method is used, either file or http.
+ * `enabled` - Whether your cluster has Prometheus enabled.
+
+-> **NOTE:** For certain attributes with default values, it's recommended to explicitly set them back to their default instead of removing them from the configuration. For example, if `send_collection_latency_metrics` is set to `true` and you want to revert to the default (`false`), set it to `false` rather than removing it.
+
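+Putting the per-type arguments above together, a PROMETHEUS configuration might look like the following sketch (the credential values are placeholders):
+
+```terraform
+resource "mongodbatlas_third_party_integration" "test_prometheus" {
+  project_id        = var.project_id
+  type              = "PROMETHEUS"
+  user_name         = "prometheus_user"
+  password          = var.prometheus_password
+  service_discovery = "http"
+  enabled           = true
+}
+```
+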
+## Attributes Reference
+
+* `id` - Unique identifier of the integration.
+
+## Import
+
+Third-Party Integration Settings can be imported using project ID and the integration type, in the format `project_id`-`type`, e.g.
+
+```
+$ terraform import mongodbatlas_third_party_integration.test_datadog 1112222b3bf99403840e8934-DATADOG
+```
+
+See [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Third-Party-Integrations/operation/createThirdPartyIntegration) Documentation for more information.
diff --git a/docs/resources/x509_authentication_database_user.md b/docs/resources/x509_authentication_database_user.md
index b7ff380cb4..91a1124ef4 100644
--- a/docs/resources/x509_authentication_database_user.md
+++ b/docs/resources/x509_authentication_database_user.md
@@ -1,3 +1,7 @@
+---
+subcategory: "X.509 Authentication"
+---
+
# Resource: mongodbatlas_x509_authentication_database_user
`mongodbatlas_x509_authentication_database_user` provides a X509 Authentication Database User resource. The mongodbatlas_x509_authentication_database_user resource lets you manage MongoDB users who authenticate using X.509 certificates. You can manage these X.509 certificates or let Atlas do it for you.
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index cc2cf7f891..3a2cd38c57 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -18,27 +18,30 @@ resource "mongodbatlas_advanced_cluster" "main" {
project_id = "64258fba5c9...e5e94617e"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M20"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M20"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M20"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 6
+ region_name = "EU_WEST_1"
+ }
+ ]
}
-
- region_configs {
- electable_specs {
- instance_size = "M20"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 6
- region_name = "EU_WEST_1"
- }
- }
+ ]
}
```
diff --git a/examples/migrate_atlas_user_and_atlas_users/README.md b/examples/migrate_atlas_user_and_atlas_users/README.md
new file mode 100644
index 0000000000..25c4343f1f
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/README.md
@@ -0,0 +1,55 @@
+# Migration Example: Atlas User to Cloud User Org Assignment
+
+This example demonstrates how to migrate from the deprecated `mongodbatlas_atlas_user` and `mongodbatlas_atlas_users` data sources to their replacements.
+
+## Migration Phases
+
+### v1: Initial State (Deprecated Data Sources)
+Shows the original configuration using deprecated data sources:
+- `mongodbatlas_atlas_user` for single user reads
+- `mongodbatlas_atlas_users` for user lists
+
+### v2: Migration Phase (Both Old and New)
+Demonstrates the migration approach:
+- Adds new data sources alongside old ones
+- Shows attribute mapping examples
+- Validates new data sources work before removing old ones
+
+### v3: Final State (New Data Sources Only)
+Clean final configuration using only:
+- `mongodbatlas_cloud_user_org_assignment` for single user reads
+- `mongodbatlas_organization.users`, `mongodbatlas_project.users`, `mongodbatlas_team.users` for user lists
+
+## Usage
+
+1. Start with v1 to understand the original setup
+2. Apply v2 configuration to add new data sources
+3. Verify the new data sources return expected data
+4. Update your references using the attribute mappings shown
+5. Apply v3 configuration for the final clean state
+
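+For step 2, the v2 additions can be sketched as follows, kept alongside the v1 data sources (the `users[*].username` attribute path is an assumption; verify it against the v2 example before relying on it):
+
+```terraform
+# New single-user read, replacing data.mongodbatlas_atlas_user
+data "mongodbatlas_cloud_user_org_assignment" "single_user" {
+  org_id  = var.org_id
+  user_id = var.user_id
+}
+
+# New org-wide user list, replacing data.mongodbatlas_atlas_users
+data "mongodbatlas_organization" "org" {
+  org_id = var.org_id
+}
+
+locals {
+  org_user_emails_new = data.mongodbatlas_organization.org.users[*].username
+}
+```
+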
+## Prerequisites
+
+- MongoDB Atlas Terraform Provider 2.0.0 or later
+- Valid MongoDB Atlas organization, project, and team IDs
+- Existing users in your organization
+
+## Variables
+
+Set these variables for all versions:
+
+```terraform
+public_key = "your-mongodb-atlas-public-key" # Optional, can use env vars
+private_key = "your-mongodb-atlas-private-key" # Optional, can use env vars
+org_id = "your-organization-id"
+project_id = "your-project-id"
+team_id = "your-team-id"
+user_id = "existing-user-id"
+username = "existing-user@example.com"
+```
+
+Alternatively, set environment variables:
+```bash
+export MONGODB_ATLAS_PUBLIC_KEY="your-public-key"
+export MONGODB_ATLAS_PRIVATE_KEY="your-private-key"
+```
diff --git a/examples/migrate_atlas_user_and_atlas_users/v1/main.tf b/examples/migrate_atlas_user_and_atlas_users/v1/main.tf
new file mode 100644
index 0000000000..a103fcd24a
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v1/main.tf
@@ -0,0 +1,49 @@
+############################################################
+# v1: Original configuration using deprecated data sources
+############################################################
+
+# Single user read using deprecated data source
+data "mongodbatlas_atlas_user" "single_user_by_id" {
+ user_id = var.user_id
+}
+
+data "mongodbatlas_atlas_user" "single_user_by_username" {
+ username = var.username
+}
+
+# User lists using deprecated data source
+data "mongodbatlas_atlas_users" "org_users" {
+ org_id = var.org_id
+}
+
+data "mongodbatlas_atlas_users" "project_users" {
+ project_id = var.project_id
+}
+
+data "mongodbatlas_atlas_users" "team_users" {
+ team_id = var.team_id
+ org_id = var.org_id
+}
+
+# Example usage of deprecated data sources
+locals {
+ # Single user examples
+ user_email_by_id = data.mongodbatlas_atlas_user.single_user_by_id.email_address
+ user_email_by_username = data.mongodbatlas_atlas_user.single_user_by_username.email_address
+
+ # User list examples
+ org_user_emails = data.mongodbatlas_atlas_users.org_users.results[*].email_address
+ project_user_emails = data.mongodbatlas_atlas_users.project_users.results[*].email_address
+ team_user_emails = data.mongodbatlas_atlas_users.team_users.results[*].email_address
+
+ # Role filtering examples (complex expressions)
+ user_org_roles = [
+ for r in data.mongodbatlas_atlas_user.single_user_by_id.roles : r.role_name
+ if r.org_id == var.org_id
+ ]
+
+ user_project_roles = [
+ for r in data.mongodbatlas_atlas_user.single_user_by_id.roles : r.role_name
+ if r.group_id == var.project_id
+ ]
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v1/outputs.tf b/examples/migrate_atlas_user_and_atlas_users/v1/outputs.tf
new file mode 100644
index 0000000000..6ecd0fedc7
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v1/outputs.tf
@@ -0,0 +1,52 @@
+# Single user outputs
+output "user_email_by_id" {
+ description = "User email retrieved by user ID"
+ value = local.user_email_by_id
+}
+
+output "user_email_by_username" {
+ description = "User email retrieved by username"
+ value = local.user_email_by_username
+}
+
+output "user_org_roles" {
+ description = "User's organization roles (filtered from consolidated roles)"
+ value = local.user_org_roles
+}
+
+output "user_project_roles" {
+ description = "User's project roles (filtered from consolidated roles)"
+ value = local.user_project_roles
+}
+
+# User list outputs
+output "org_user_emails" {
+ description = "All organization user emails"
+ value = local.org_user_emails
+}
+
+output "project_user_emails" {
+ description = "All project user emails"
+ value = local.project_user_emails
+}
+
+output "team_user_emails" {
+ description = "All team user emails"
+ value = local.team_user_emails
+}
+
+# Count outputs
+output "org_user_count" {
+ description = "Number of organization users"
+ value = length(data.mongodbatlas_atlas_users.org_users.results)
+}
+
+output "project_user_count" {
+ description = "Number of project users"
+ value = length(data.mongodbatlas_atlas_users.project_users.results)
+}
+
+output "team_user_count" {
+ description = "Number of team users"
+ value = length(data.mongodbatlas_atlas_users.team_users.results)
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v1/provider.tf b/examples/migrate_atlas_user_and_atlas_users/v1/provider.tf
new file mode 100644
index 0000000000..e5aeda8033
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v1/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
\ No newline at end of file
diff --git a/examples/migrate_atlas_user_and_atlas_users/v1/variables.tf b/examples/migrate_atlas_user_and_atlas_users/v1/variables.tf
new file mode 100644
index 0000000000..6de7eb3b82
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v1/variables.tf
@@ -0,0 +1,33 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "org_id" {
+ description = "MongoDB Atlas Organization ID"
+ type = string
+}
+
+variable "project_id" {
+ description = "MongoDB Atlas Project ID"
+ type = string
+}
+
+variable "team_id" {
+ description = "MongoDB Atlas Team ID"
+ type = string
+}
+
+variable "user_id" {
+ description = "MongoDB Atlas User ID"
+ type = string
+}
+
+variable "username" {
+ description = "MongoDB Atlas Username (email)"
+ type = string
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v1/versions.tf b/examples/migrate_atlas_user_and_atlas_users/v1/versions.tf
new file mode 100644
index 0000000000..ef1e7bbb88
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v1/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.5.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v2/README.md b/examples/migrate_atlas_user_and_atlas_users/v2/README.md
new file mode 100644
index 0000000000..5427ef1420
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v2/README.md
@@ -0,0 +1,36 @@
+# v2: Migration Phase
+
+This configuration demonstrates the migration approach by running both old and new data sources side-by-side.
+
+## What this shows
+
+- **Attribute mapping**: Direct comparison between old and new attribute structures
+- **Validation**: Outputs that verify the new data sources return equivalent data
+- **Migration readiness**: Checks to confirm you're ready to move to v3
+
+## Key comparisons
+
+### Single User Reads
+- `email_address` → `username`
+- Complex role filtering → Structured `roles.org_roles` and `roles.project_role_assignments`
+
+### User Lists
+- `results[*].email_address` → `users[*].username`
+- `results` → `users`
+
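+As a rough sketch, the user-list mapping above translates to expressions like these (taken from this step's `main.tf`):
+
+```terraform
+locals {
+  # Old: consolidated list from the deprecated data source
+  old_org_user_emails = data.mongodbatlas_atlas_users.org_users.results[*].email_address
+
+  # New: equivalent list from the organization data source
+  new_org_user_emails = data.mongodbatlas_organization.org.users[*].username
+}
+```
+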
+## Usage
+
+1. Apply this configuration: `terraform apply`
+2. Review the comparison outputs to verify data consistency
+3. Check `migration_validation.ready_for_v3` is `true`
+4. Once validated, proceed to v3
+
+## Expected outputs
+
+The outputs will show side-by-side comparisons of:
+- Email retrieval methods
+- Role access patterns
+- User list structures
+- Count validations
+
+If `migration_validation.ready_for_v3` is `true`, you can safely proceed to the final v3 configuration.
diff --git a/examples/migrate_atlas_user_and_atlas_users/v2/main.tf b/examples/migrate_atlas_user_and_atlas_users/v2/main.tf
new file mode 100644
index 0000000000..3b88ab4de2
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v2/main.tf
@@ -0,0 +1,99 @@
+############################################################
+# v2: Migration phase - both old and new data sources
+############################################################
+
+# OLD: Single user reads (keep temporarily for comparison)
+data "mongodbatlas_atlas_user" "single_user_by_id" {
+ user_id = var.user_id
+}
+
+data "mongodbatlas_atlas_user" "single_user_by_username" {
+ username = var.username
+}
+
+# NEW: Single user reads using cloud_user_org_assignment
+data "mongodbatlas_cloud_user_org_assignment" "user_by_id" {
+ org_id = var.org_id
+ user_id = var.user_id
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "user_by_username" {
+ org_id = var.org_id
+ username = var.username
+}
+
+# OLD: User lists (keep temporarily for comparison)
+data "mongodbatlas_atlas_users" "org_users" {
+ org_id = var.org_id
+}
+
+data "mongodbatlas_atlas_users" "project_users" {
+ project_id = var.project_id
+}
+
+data "mongodbatlas_atlas_users" "team_users" {
+ team_id = var.team_id
+ org_id = var.org_id
+}
+
+# NEW: User lists using organization/project/team data sources
+data "mongodbatlas_organization" "org" {
+ org_id = var.org_id
+}
+
+data "mongodbatlas_project" "proj" {
+ project_id = var.project_id
+}
+
+data "mongodbatlas_team" "team" {
+ team_id = var.team_id
+ org_id = var.org_id
+}
+
+# Migration examples showing attribute mapping
+locals {
+ # Single user attribute mapping examples
+
+ # Email address mapping
+ old_user_email_by_id = data.mongodbatlas_atlas_user.single_user_by_id.email_address
+ new_user_email_by_id = data.mongodbatlas_cloud_user_org_assignment.user_by_id.username
+
+ # Organization roles mapping
+ old_user_org_roles = [
+ for r in data.mongodbatlas_atlas_user.single_user_by_id.roles : r.role_name
+ if r.org_id == var.org_id
+ ]
+ new_user_org_roles = data.mongodbatlas_cloud_user_org_assignment.user_by_id.roles.org_roles
+
+ # Project roles mapping (complex with the old data source, direct with the new)
+ old_user_project_roles = [
+ for r in data.mongodbatlas_atlas_user.single_user_by_id.roles : r.role_name
+ if r.group_id == var.project_id
+ ]
+ # Find project role assignments that match the project_id
+ matching_project_roles = [
+ for pra in data.mongodbatlas_cloud_user_org_assignment.user_by_id.roles.project_role_assignments :
+ pra.project_roles if pra.project_id == var.project_id
+ ]
+
+ # Use the first match if available, otherwise empty list
+ new_user_project_roles = length(local.matching_project_roles) > 0 ? local.matching_project_roles[0] : []
+
+ # User list attribute mapping examples
+
+ # Organization users
+ old_org_user_emails = data.mongodbatlas_atlas_users.org_users.results[*].email_address
+ new_org_user_emails = data.mongodbatlas_organization.org.users[*].username
+
+ # Project users
+ old_project_user_emails = data.mongodbatlas_atlas_users.project_users.results[*].email_address
+ new_project_user_emails = data.mongodbatlas_project.proj.users[*].username
+
+ # Team users
+ old_team_user_emails = data.mongodbatlas_atlas_users.team_users.results[*].email_address
+ new_team_user_emails = data.mongodbatlas_team.team.users[*].username
+
+ # Validation: Compare old vs new results
+ email_mapping_matches = local.old_user_email_by_id == local.new_user_email_by_id
+ org_users_count_matches = length(local.old_org_user_emails) == length(local.new_org_user_emails)
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v2/outputs.tf b/examples/migrate_atlas_user_and_atlas_users/v2/outputs.tf
new file mode 100644
index 0000000000..62ba8850ae
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v2/outputs.tf
@@ -0,0 +1,76 @@
+# Comparison outputs to validate migration
+output "email_mapping_comparison" {
+ description = "Compare old vs new email retrieval"
+ value = {
+ old_email = local.old_user_email_by_id
+ new_email = local.new_user_email_by_id
+ matches = local.email_mapping_matches
+ }
+}
+
+output "org_roles_comparison" {
+ description = "Compare old vs new organization roles"
+ value = {
+ old_roles = local.old_user_org_roles
+ new_roles = local.new_user_org_roles
+ }
+}
+
+output "project_roles_comparison" {
+ description = "Compare old vs new project roles"
+ value = {
+ old_roles = local.old_user_project_roles
+ new_roles = local.new_user_project_roles
+ }
+}
+
+output "org_users_comparison" {
+ description = "Compare old vs new organization user lists"
+ value = {
+ old_emails = local.old_org_user_emails
+ new_emails = local.new_org_user_emails
+ old_count = length(local.old_org_user_emails)
+ new_count = length(local.new_org_user_emails)
+ count_matches = local.org_users_count_matches
+ }
+}
+
+output "project_users_comparison" {
+ description = "Compare old vs new project user lists"
+ value = {
+ old_emails = local.old_project_user_emails
+ new_emails = local.new_project_user_emails
+ old_count = length(local.old_project_user_emails)
+ new_count = length(local.new_project_user_emails)
+ }
+}
+
+output "team_users_comparison" {
+ description = "Compare old vs new team user lists"
+ value = {
+ old_emails = local.old_team_user_emails
+ new_emails = local.new_team_user_emails
+ old_count = length(local.old_team_user_emails)
+ new_count = length(local.new_team_user_emails)
+ }
+}
+
+# Migration validation
+output "migration_validation" {
+ description = "Overall migration validation results"
+ value = {
+ email_mapping_works = local.email_mapping_matches
+ org_users_count_matches = local.org_users_count_matches
+ ready_for_v3 = local.email_mapping_matches && local.org_users_count_matches
+ }
+}
+
+# Additional comparisons using username-based queries
+output "username_based_comparison" {
+ description = "Compare username-based queries between old and new data sources"
+ value = {
+ old_user_by_username = data.mongodbatlas_atlas_user.single_user_by_username.email_address
+ new_user_by_username = data.mongodbatlas_cloud_user_org_assignment.user_by_username.username
+ matches = data.mongodbatlas_atlas_user.single_user_by_username.email_address == data.mongodbatlas_cloud_user_org_assignment.user_by_username.username
+ }
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v2/provider.tf b/examples/migrate_atlas_user_and_atlas_users/v2/provider.tf
new file mode 100644
index 0000000000..e5aeda8033
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v2/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
\ No newline at end of file
diff --git a/examples/migrate_atlas_user_and_atlas_users/v2/variables.tf b/examples/migrate_atlas_user_and_atlas_users/v2/variables.tf
new file mode 100644
index 0000000000..6de7eb3b82
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v2/variables.tf
@@ -0,0 +1,33 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "org_id" {
+ description = "MongoDB Atlas Organization ID"
+ type = string
+}
+
+variable "project_id" {
+ description = "MongoDB Atlas Project ID"
+ type = string
+}
+
+variable "team_id" {
+ description = "MongoDB Atlas Team ID"
+ type = string
+}
+
+variable "user_id" {
+ description = "MongoDB Atlas User ID"
+ type = string
+}
+
+variable "username" {
+ description = "MongoDB Atlas Username (email)"
+ type = string
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v2/versions.tf b/examples/migrate_atlas_user_and_atlas_users/v2/versions.tf
new file mode 100644
index 0000000000..ef1e7bbb88
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v2/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.5.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v3/README.md b/examples/migrate_atlas_user_and_atlas_users/v3/README.md
new file mode 100644
index 0000000000..aa2db2df68
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v3/README.md
@@ -0,0 +1,30 @@
+# v3: Final State
+
+This is the clean, final configuration using only the new data sources.
+
+## What changed from v1
+
+### Simpler attribute access
+- `email_address` → `username`
+- Complex role filtering → Direct access via `roles.org_roles` and `roles.project_role_assignments`
+- `results[*]` → `users[*]`
+
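+For example, the v1 role filtering collapses into direct attribute access in v3 (expressions taken from the two configurations):
+
+```terraform
+# v1: filter the consolidated role list client-side
+locals {
+  user_org_roles_v1 = [
+    for r in data.mongodbatlas_atlas_user.single_user_by_id.roles : r.role_name
+    if r.org_id == var.org_id
+  ]
+}
+
+# v3: read the structured attribute directly
+locals {
+  user_org_roles_v3 = data.mongodbatlas_cloud_user_org_assignment.user_by_id.roles.org_roles
+}
+```
+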
+### Cleaner code
+- No complex list comprehensions for basic role access
+- Structured role data instead of flat lists
+- More intuitive attribute names
+
+### Better performance
+- User reads are scoped to an organization, which maps to more efficient API calls
+- Structured data reduces client-side filtering
+
+## Key improvements
+
+1. **Structured roles**: Organization and project roles are clearly separated
+2. **Direct access**: No need to filter consolidated role lists
+3. **Consistent naming**: `username` instead of `email_address`
+4. **Better organization**: User lists come from their natural containers (org/project/team)
+
+## Usage
+
+This configuration represents the target state after migration. All references to deprecated data sources have been removed and replaced with their modern equivalents.
diff --git a/examples/migrate_atlas_user_and_atlas_users/v3/main.tf b/examples/migrate_atlas_user_and_atlas_users/v3/main.tf
new file mode 100644
index 0000000000..0c72108c08
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v3/main.tf
@@ -0,0 +1,57 @@
+############################################################
+# v3: Final state - only new data sources
+############################################################
+
+# Single user reads using cloud_user_org_assignment
+data "mongodbatlas_cloud_user_org_assignment" "user_by_id" {
+ org_id = var.org_id
+ user_id = var.user_id
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "user_by_username" {
+ org_id = var.org_id
+ username = var.username
+}
+
+# User lists using organization/project/team data sources
+data "mongodbatlas_organization" "org" {
+ org_id = var.org_id
+}
+
+data "mongodbatlas_project" "proj" {
+ project_id = var.project_id
+}
+
+data "mongodbatlas_team" "team" {
+ team_id = var.team_id
+ org_id = var.org_id
+}
+
+# Clean, simplified local values using new data sources
+locals {
+ # Single user examples (simplified)
+ user_email_by_id = data.mongodbatlas_cloud_user_org_assignment.user_by_id.username
+ user_email_by_username = data.mongodbatlas_cloud_user_org_assignment.user_by_username.username
+
+ # User list examples (simplified)
+ org_user_emails = data.mongodbatlas_organization.org.users[*].username
+ project_user_emails = data.mongodbatlas_project.proj.users[*].username
+ team_user_emails = data.mongodbatlas_team.team.users[*].username
+
+ # Role examples (much cleaner than v1)
+ user_org_roles = data.mongodbatlas_cloud_user_org_assignment.user_by_id.roles.org_roles
+
+ # Find project role assignments that match the project_id
+ matching_project_roles = [
+ for pra in data.mongodbatlas_cloud_user_org_assignment.user_by_id.roles.project_role_assignments :
+ pra.project_roles if pra.project_id == var.project_id
+ ]
+ # Use the first match if available, otherwise empty list
+ user_project_roles = length(local.matching_project_roles) > 0 ? local.matching_project_roles[0] : []
+
+ # All project role assignments
+ user_all_project_roles = {
+ for pra in data.mongodbatlas_cloud_user_org_assignment.user_by_id.roles.project_role_assignments :
+ pra.project_id => pra.project_roles
+ }
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v3/outputs.tf b/examples/migrate_atlas_user_and_atlas_users/v3/outputs.tf
new file mode 100644
index 0000000000..71539d8f4c
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v3/outputs.tf
@@ -0,0 +1,81 @@
+# Single user outputs
+output "user_email_by_id" {
+ description = "User email retrieved by user ID"
+ value = local.user_email_by_id
+}
+
+output "user_email_by_username" {
+ description = "User email retrieved by username"
+ value = local.user_email_by_username
+}
+
+output "user_org_roles" {
+ description = "User's organization roles (structured)"
+ value = local.user_org_roles
+}
+
+output "user_project_roles" {
+ description = "User's roles for specific project"
+ value = local.user_project_roles
+}
+
+output "user_all_project_roles" {
+ description = "User's roles across all projects"
+ value = local.user_all_project_roles
+}
+
+# User list outputs
+output "org_user_emails" {
+ description = "All organization user emails"
+ value = local.org_user_emails
+}
+
+output "project_user_emails" {
+ description = "All project user emails"
+ value = local.project_user_emails
+}
+
+output "team_user_emails" {
+ description = "All team user emails"
+ value = local.team_user_emails
+}
+
+# Count outputs
+output "org_user_count" {
+ description = "Number of organization users"
+ value = length(data.mongodbatlas_organization.org.users)
+}
+
+output "project_user_count" {
+ description = "Number of project users"
+ value = length(data.mongodbatlas_project.proj.users)
+}
+
+output "team_user_count" {
+ description = "Number of team users"
+ value = length(data.mongodbatlas_team.team.users)
+}
+
+# User details from different scopes
+output "org_users_with_roles" {
+ description = "Organization users with their roles"
+ value = [
+ for user in data.mongodbatlas_organization.org.users : {
+ username = user.username
+ user_id = user.id
+ org_roles = user.roles.org_roles
+ project_assignments = user.roles.project_role_assignments
+ }
+ ]
+}
+
+output "project_users_with_roles" {
+ description = "Project users with their roles"
+ value = [
+ for user in data.mongodbatlas_project.proj.users : {
+ username = user.username
+ user_id = user.id
+ roles = user.roles
+ }
+ ]
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v3/provider.tf b/examples/migrate_atlas_user_and_atlas_users/v3/provider.tf
new file mode 100644
index 0000000000..e5aeda8033
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v3/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
\ No newline at end of file
diff --git a/examples/migrate_atlas_user_and_atlas_users/v3/variables.tf b/examples/migrate_atlas_user_and_atlas_users/v3/variables.tf
new file mode 100644
index 0000000000..6de7eb3b82
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v3/variables.tf
@@ -0,0 +1,33 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "org_id" {
+ description = "MongoDB Atlas Organization ID"
+ type = string
+}
+
+variable "project_id" {
+ description = "MongoDB Atlas Project ID"
+ type = string
+}
+
+variable "team_id" {
+ description = "MongoDB Atlas Team ID"
+ type = string
+}
+
+variable "user_id" {
+ description = "MongoDB Atlas User ID"
+ type = string
+}
+
+variable "username" {
+ description = "MongoDB Atlas Username (email)"
+ type = string
+}
diff --git a/examples/migrate_atlas_user_and_atlas_users/v3/versions.tf b/examples/migrate_atlas_user_and_atlas_users/v3/versions.tf
new file mode 100644
index 0000000000..ef1e7bbb88
--- /dev/null
+++ b/examples/migrate_atlas_user_and_atlas_users/v3/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.5.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_cluster_to_advanced_cluster/basic/README.md b/examples/migrate_cluster_to_advanced_cluster/basic/README.md
index fcd88d9592..39df456874 100644
--- a/examples/migrate_cluster_to_advanced_cluster/basic/README.md
+++ b/examples/migrate_cluster_to_advanced_cluster/basic/README.md
@@ -7,14 +7,14 @@ This example demonstrates how to migrate a `mongodbatlas_cluster` resource to `m
In this example we use specific files, but the same approach can be applied to any configuration file with `mongodbatlas_cluster` resource(s).
The main steps are:
-1. [Enable the `mongodbatlas_advanced_cluster` preview for MongoDB Atlas Provider 2.0.0](#enable-the-mongodbatlas_advanced_cluster-preview)
+1. *(Only required if using a provider version earlier than 2.0)* [Enable the `mongodbatlas_advanced_cluster` preview for MongoDB Atlas Provider 2.0.0](#enable-the-mongodbatlas_advanced_cluster-preview)
2. [Create the `mongodbatlas_cluster`](#create-the-mongodbatlas_cluster) (skip if you already have a configuration with one or more `mongodbatlas_cluster` resources).
3. [Use the Atlas CLI Plugin Terraform to create the `mongodbatlas_advanced_cluster` configuration](#use-the-atlas-cli-plugin-terraform-to-create-the-mongodbatlas_advanced_cluster-resource).
4. [Manually update the Terraform configuration](#manual-updates-to-the-terraform-configuration).
5. [Perform the Move](#perform-the-move).
- [Troubleshooting](#troubleshooting).
-## Enable the `mongodbatlas_advanced_cluster` preview
+## Enable the `mongodbatlas_advanced_cluster` preview *(Only required if using a provider version earlier than 2.0)*
Enable the `mongodbatlas_advanced_cluster` preview for MongoDB Atlas Provider 2.0.0 by setting the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`. More information can be found in the [resource documentation page](../resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
@@ -66,7 +66,7 @@ moved {
## Perform the Move
1. Ensure you are using the MongoDB Atlas Terraform provider 1.29 or later.
-2. Ensure you are using V2 schema: `export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`.
+2. *(Only required if using a provider version earlier than 2.0)* Ensure you are using V2 schema: `export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`.
3. Run `terraform validate` to ensure there are no missing reference updates. You might see errors like:
- `Error: Reference to undeclared resource`: You forgot to update the resource type to `mongodbatlas_advanced_cluster`
```text
diff --git a/examples/migrate_cluster_to_advanced_cluster/module_user/README.md b/examples/migrate_cluster_to_advanced_cluster/module_user/README.md
index 636db5d506..64aedd2e94 100644
--- a/examples/migrate_cluster_to_advanced_cluster/module_user/README.md
+++ b/examples/migrate_cluster_to_advanced_cluster/module_user/README.md
@@ -58,7 +58,7 @@ terraform apply -var-file=../v1_v2.tfvars
```bash
cd v2
cp ../v1/terraform.tfstate . # if you are not using a remote state
-export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # necessary for the `moved` block to work
+export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # skip this if using version 2.0 or later
terraform init -upgrade # in case your Atlas Provider version needs to be upgraded
terraform apply -var-file=../v1_v2.tfvars # notice the same variables used as in `v1`
```
@@ -81,7 +81,7 @@ The example changes:
```bash
cd v3
cp ../v2/terraform.tfstate . # if you are not using a remote state
-export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # necessary for the `moved` block to work
+export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # skip this if using version 2.0 or later
terraform init -upgrade # in case your Atlas Provider version needs to be upgraded
terraform plan -var-file=../v3_no_plan_changes.tfvars # updated variables to enable latest mongodb_advanced_cluster features
```
@@ -101,7 +101,7 @@ The example changes:
```bash
cd v3
cp ../v2/terraform.tfstate . # if you are not using a remote state
-export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # necessary for the `moved` block to work
+export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # skip this if using version 2.0 or later
terraform init -upgrade # in case your Atlas Provider version needs to be upgraded
terraform apply -var-file=../v3.tfvars # updated variables to enable latest mongodb_advanced_cluster features
```
@@ -117,7 +117,7 @@ This example renames the variable `replication_specs_new` to `replication_specs`
```bash
cd v4
cp ../v3/terraform.tfstate . # if you are not using a remote state
-export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # necessary to use the latest schema
+export MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true # skip this if using version 2.0 or later
terraform init -upgrade # in case your Atlas Provider version needs to be upgraded
terraform plan -var-file=../v4.tfvars
```
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/README.md b/examples/migrate_org_invitation_to_cloud_user_org_assignment/README.md
new file mode 100644
index 0000000000..24d4a63705
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/README.md
@@ -0,0 +1,16 @@
+# Combined Example: Org Invitation → Cloud User Org Assignment
+
+This combined example is organized into step subfolders (v1–v3):
+
+- v1/: Initial state with:
+ - a pending `mongodbatlas_org_invitation` (with `teams_ids`), and
+ - an accepted (ACTIVE) user present in the org (no invitation in state).
+- v2/: Migration step showcasing both paths:
+ - moved block for the pending invitation (module-friendly, recommended), and
+ - import blocks for accepted (ACTIVE) users and team assignments.
+- v3/: Cleaned-up final configuration after v2 is applied:
+ - remove the `mongodbatlas_org_invitation` resource,
+ - remove moved and import blocks,
+ - keep only `mongodbatlas_cloud_user_org_assignment` and `mongodbatlas_cloud_user_team_assignment`.
+
+Navigate into each version folder to see the step-specific configuration.
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/README.md b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/README.md
new file mode 100644
index 0000000000..a8584fab29
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/README.md
@@ -0,0 +1,5 @@
+# v1: Initial State
+
+State:
+- `mongodbatlas_org_invitation` manages a pending user (with `teams_ids`).
+- An accepted (ACTIVE) user exists in the organization (no invitation in state), referenced via `data.mongodbatlas_organization`.
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/main.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/main.tf
new file mode 100644
index 0000000000..5cd6994999
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/main.tf
@@ -0,0 +1,29 @@
+############################################################
+# v1: Initial State
+# - One pending invitation managed via mongodbatlas_org_invitation (with teams)
+# - One active user present in org (no invitation resource in state)
+############################################################
+
+# Pending invitation (with teams)
+resource "mongodbatlas_org_invitation" "pending" {
+ org_id = var.org_id
+ username = var.pending_username
+ roles = var.roles
+ teams_ids = var.pending_team_ids
+}
+
+# The active user is only referenced via the organization data source (not managed in state)
+data "mongodbatlas_organization" "org" {
+ org_id = var.org_id
+}
+
+locals {
+ active_users = {
+ for u in data.mongodbatlas_organization.org.users :
+ u.username => u if u.org_membership_status == "ACTIVE" && u.username == var.active_username
+ }
+}
+
+output "active_users" {
+ value = local.active_users
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/provider.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/variables.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/variables.tf
new file mode 100644
index 0000000000..91cf9c35fe
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/variables.tf
@@ -0,0 +1,24 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "org_id" { type = string }
+# Pending invite user
+variable "pending_username" { type = string }
+variable "roles" {
+ type = set(string)
+ default = ["ORG_MEMBER"]
+}
+# Teams for pending invite
+variable "pending_team_ids" {
+ type = set(string)
+ default = []
+}
+
+# Active user already in org (no invitation resource remains in state)
+variable "active_username" { type = string }
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/versions.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/versions.tf
new file mode 100644
index 0000000000..ef1e7bbb88
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v1/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.5.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/README.md b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/README.md
new file mode 100644
index 0000000000..79937b83c7
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/README.md
@@ -0,0 +1,6 @@
+# v2: Migrate using Moved and Import blocks
+
+State:
+- Pending invitation → move state from `mongodbatlas_org_invitation` to `mongodbatlas_cloud_user_org_assignment` using a Terraform `moved` block (no recreate).
+- Accepted (ACTIVE) user → declare the resource and use `import` blocks to adopt the existing assignment (`org_id,user_id`).
+- Teams → manage memberships via `mongodbatlas_cloud_user_team_assignment`; import existing mappings (`org_id,team_id,user_id`).
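+
+In sketch form (mirroring this step's `main.tf`):
+
+```terraform
+# Pending invitation: state move, no destroy/recreate
+moved {
+  from = mongodbatlas_org_invitation.pending
+  to   = mongodbatlas_cloud_user_org_assignment.pending
+}
+
+# Accepted (ACTIVE) user: adopt the existing assignment into state
+import {
+  for_each = local.active_users
+  to       = mongodbatlas_cloud_user_org_assignment.active[each.key]
+  id       = "${var.org_id}/${each.key}"
+}
+```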
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/main.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/main.tf
new file mode 100644
index 0000000000..8eaa2d90ab
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/main.tf
@@ -0,0 +1,58 @@
+############################################################
+# v2: Migration
+# - Pending invitation → cloud_user_org_assignment via moved block
+# - Demonstrate import path for ACTIVE users and team assignments
+############################################################
+
+# New resource + moved block (recommended)
+resource "mongodbatlas_cloud_user_org_assignment" "pending" {
+ org_id = var.org_id
+ username = var.pending_username
+ roles = { org_roles = var.roles }
+}
+
+moved {
+ from = mongodbatlas_org_invitation.pending
+ to = mongodbatlas_cloud_user_org_assignment.pending
+}
+
+# Import ACTIVE users discovered via data source
+data "mongodbatlas_organization" "org" {
+ org_id = var.org_id
+}
+
+locals {
+ active_users = {
+ for u in data.mongodbatlas_organization.org.users :
+ u.id => u if u.org_membership_status == "ACTIVE" && u.username == var.active_username
+ }
+}
+
+resource "mongodbatlas_cloud_user_org_assignment" "active" {
+ for_each = local.active_users
+
+ org_id = var.org_id
+ username = each.value.username
+ roles = { org_roles = each.value.roles[0].org_roles }
+}
+
+import {
+ for_each = local.active_users
+ to = mongodbatlas_cloud_user_org_assignment.active[each.key]
+ id = "${var.org_id}/${each.key}"
+}
+
+# Team assignments for the pending user (after moved/import)
+resource "mongodbatlas_cloud_user_team_assignment" "teams" {
+ for_each = var.pending_team_ids
+
+ org_id = var.org_id
+ team_id = each.key
+ user_id = mongodbatlas_cloud_user_org_assignment.pending.user_id
+}
+
+import {
+ for_each = var.pending_team_ids
+ to = mongodbatlas_cloud_user_team_assignment.teams[each.key]
+ id = "${var.org_id}/${each.key}/${var.pending_username}"
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/provider.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/variables.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/variables.tf
new file mode 100644
index 0000000000..5e59f859a8
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/variables.tf
@@ -0,0 +1,20 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "org_id" { type = string }
+variable "pending_username" { type = string }
+variable "roles" {
+ type = set(string)
+ default = ["ORG_MEMBER"]
+}
+variable "pending_team_ids" {
+ type = set(string)
+ default = []
+}
+variable "active_username" { type = string }
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/versions.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/versions.tf
new file mode 100644
index 0000000000..ef1e7bbb88
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v2/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+  required_version = ">= 1.7.0" # import blocks with for_each require Terraform 1.7+
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/README.md b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/README.md
new file mode 100644
index 0000000000..6f05edaa78
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/README.md
@@ -0,0 +1,6 @@
+# v3: Cleaned Up Configuration
+
+Final state after migration:
+- Only `mongodbatlas_cloud_user_org_assignment` and (optionally) `mongodbatlas_cloud_user_team_assignment` remain.
+- No `mongodbatlas_org_invitation` resources.
+- No import blocks.
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/main.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/main.tf
new file mode 100644
index 0000000000..c94661b8bb
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/main.tf
@@ -0,0 +1,28 @@
+############################################################
+# v3: Cleaned Up Configuration
+# - Remove org_invitation and moved/import blocks after v2 is applied
+# - Keep only cloud_user_org_assignment and team assignments
+############################################################
+
+# Pending user is now managed directly by cloud_user_org_assignment
+resource "mongodbatlas_cloud_user_org_assignment" "pending" {
+ org_id = var.org_id
+ username = var.pending_username
+ roles = { org_roles = var.roles }
+}
+
+# Active user is already imported; now managed directly
+resource "mongodbatlas_cloud_user_org_assignment" "active" {
+ org_id = var.org_id
+ username = var.active_username
+ roles = { org_roles = ["ORG_MEMBER"] }
+}
+
+# Team assignments are managed directly; no import blocks required
+resource "mongodbatlas_cloud_user_team_assignment" "teams" {
+ for_each = var.pending_team_ids
+
+ org_id = var.org_id
+ team_id = each.key
+ user_id = mongodbatlas_cloud_user_org_assignment.pending.user_id
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/provider.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/variables.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/variables.tf
new file mode 100644
index 0000000000..fbae20f609
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/variables.tf
@@ -0,0 +1,24 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "org_id" { type = string }
+
+# Pending invite user (now managed via cloud_user_org_assignment)
+variable "pending_username" { type = string }
+variable "roles" {
+ type = set(string)
+ default = ["ORG_MEMBER"]
+}
+variable "pending_team_ids" {
+ type = set(string)
+ default = []
+}
+
+# Active user already in org
+variable "active_username" { type = string }
diff --git a/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/versions.tf b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/versions.tf
new file mode 100644
index 0000000000..ef1e7bbb88
--- /dev/null
+++ b/examples/migrate_org_invitation_to_cloud_user_org_assignment/v3/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.5.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/README.md b/examples/migrate_project_invitation_to_cloud_user_project_assignment/README.md
new file mode 100644
index 0000000000..dd6a8a7792
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/README.md
@@ -0,0 +1,53 @@
+# Migration Example: Project Invitation to Cloud User Project Assignment
+
+This example demonstrates how to migrate from the deprecated `mongodbatlas_project_invitation` resource to the new `mongodbatlas_cloud_user_project_assignment` resource.
+
+## Migration Phases
+
+### v1: Initial State (Deprecated Resource)
+Shows the original configuration using deprecated `mongodbatlas_project_invitation` for pending invitations.
+
+### v2: Migration Phase (Re-creation with Removed Block)
+Demonstrates the migration approach:
+- Adds new `mongodbatlas_cloud_user_project_assignment` resource
+- Uses `removed` block to cleanly remove old resource from state
+- Shows both removed block and manual state removal options
+
+### v3: Final State (New Resource Only)
+Clean final configuration using only `mongodbatlas_cloud_user_project_assignment`.
+
+## Important Notes
+
+- **Pending invites only**: This migration applies only to PENDING project invitations that still exist in your Terraform configuration
+- **Re-creation approach**: The new resources and data sources cannot discover pending invites created by the deprecated resource, so we re-create them with the new resource
+- **Accepted invites**: If a user has already accepted an invitation, the provider removes it from state on refresh; delete it from your configuration as well (no migration needed)
+
+## Usage
+
+1. Start with v1 to understand the original setup with pending invitations
+2. Apply v2 configuration to re-create invites with new resource and remove old resource
+3. Apply v3 configuration for the final clean state
+
+## Prerequisites
+
+- MongoDB Atlas Terraform Provider 2.0.0 or later
+- Valid MongoDB Atlas project ID
+- Pending project invitations in your configuration (not yet accepted)
+
+## Variables
+
+Set these variables for all versions:
+
+```terraform
+public_key = "your-mongodb-atlas-public-key" # Optional, can use env vars
+private_key = "your-mongodb-atlas-private-key" # Optional, can use env vars
+project_id = "your-project-id"
+username = "user@example.com" # User with pending invitation
+roles = ["GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_ONLY"]
+```
+
+Alternatively, set environment variables:
+```bash
+export MONGODB_ATLAS_PUBLIC_KEY="your-public-key"
+export MONGODB_ATLAS_PRIVATE_KEY="your-private-key"
+```
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/main.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/main.tf
new file mode 100644
index 0000000000..7e75ae8f9c
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/main.tf
@@ -0,0 +1,25 @@
+############################################################
+# v1: Original configuration using deprecated resource
+############################################################
+
+# Pending project invitation using deprecated resource
+resource "mongodbatlas_project_invitation" "pending_user" {
+ project_id = var.project_id
+ username = var.username
+ roles = var.roles
+}
+
+# Example usage of the invitation
+locals {
+ invitation_id = mongodbatlas_project_invitation.pending_user.invitation_id
+ invited_user = mongodbatlas_project_invitation.pending_user.username
+ assigned_roles = mongodbatlas_project_invitation.pending_user.roles
+
+ # This shows how the deprecated resource was typically used
+ invitation_details = {
+ id = local.invitation_id
+ username = local.invited_user
+ roles = local.assigned_roles
+ project = var.project_id
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/outputs.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/outputs.tf
new file mode 100644
index 0000000000..fa5cc095d7
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/outputs.tf
@@ -0,0 +1,30 @@
+# Original invitation outputs
+output "invitation_id" {
+ description = "ID of the pending project invitation"
+ value = mongodbatlas_project_invitation.pending_user.invitation_id
+}
+
+output "invited_username" {
+ description = "Username of the invited user"
+ value = mongodbatlas_project_invitation.pending_user.username
+}
+
+output "assigned_roles" {
+ description = "Roles assigned to the invited user"
+ value = mongodbatlas_project_invitation.pending_user.roles
+}
+
+output "invitation_details" {
+ description = "Complete invitation details"
+ value = local.invitation_details
+}
+
+output "creation_timestamp" {
+ description = "When the invitation was created"
+ value = mongodbatlas_project_invitation.pending_user.created_at
+}
+
+output "expiration_timestamp" {
+ description = "When the invitation expires"
+ value = mongodbatlas_project_invitation.pending_user.expires_at
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/provider.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/variables.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/variables.tf
new file mode 100644
index 0000000000..cddb43d0a1
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/variables.tf
@@ -0,0 +1,24 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "project_id" {
+ description = "MongoDB Atlas Project ID"
+ type = string
+}
+
+variable "username" {
+ description = "MongoDB Atlas Username (email) for pending invitation"
+ type = string
+}
+
+variable "roles" {
+ description = "Project roles to assign to the user"
+ type = list(string)
+ default = ["GROUP_READ_ONLY"]
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/versions.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/versions.tf
new file mode 100644
index 0000000000..95d555827a
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v1/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.7.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/README.md b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/README.md
new file mode 100644
index 0000000000..499e9acd31
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/README.md
@@ -0,0 +1,51 @@
+# v2: Migration Phase
+
+This configuration demonstrates the migration approach using the `removed` block to cleanly transition from `mongodbatlas_project_invitation` to `mongodbatlas_cloud_user_project_assignment`.
+
+## What this shows
+
+- **Re-creation**: The new resource re-creates the pending invitation using the new API
+- **Clean removal**: Uses `removed` block to remove old resource from state without destroying the Atlas invitation
+- **Validation**: Outputs that verify the new resource works correctly
+- **New capabilities**: Shows additional features available with the new resource
+
+## Key differences
+
+### Old resource (`mongodbatlas_project_invitation`)
+- Only managed pending invitations
+- Removed from state when user accepted invitation
+- Limited to invitation lifecycle only
+
+### New resource (`mongodbatlas_cloud_user_project_assignment`)
+- Manages active project membership
+- Exposes `user_id` (not available in old resource)
+- Supports import for existing users
+- Works for both pending and active users
+
+## Migration approach
+
+1. **Add new resource**: Re-creates the pending invitation with new API
+2. **Remove old resource**: Uses `removed` block to clean up Terraform state
+3. **Validate**: Check that the new resource works as expected
+
+## Alternative removal method
+
+If you prefer manual state removal instead of the `removed` block:
+
+```bash
+# Remove from configuration first, then:
+terraform state rm mongodbatlas_project_invitation.pending_user
+```
+
+## Usage
+
+1. Apply this configuration: `terraform apply`
+2. Review the validation outputs to ensure migration success
+3. Check that the `migration_validation.ready_for_v3` output is `true`
+4. Once validated, proceed to v3
+
+## Expected behavior
+
+- The user should receive a new invitation email (since we're re-creating the invitation)
+- The old invitation remains valid until it expires or is accepted
+- Terraform state is cleaned up properly
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/main.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/main.tf
new file mode 100644
index 0000000000..bc96c01d88
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/main.tf
@@ -0,0 +1,33 @@
+############################################################
+# v2: Migration phase - re-create with new resource and remove old
+############################################################
+
+# NEW: Project assignment using the new resource
+# This re-creates the pending invitation using the new resource
+resource "mongodbatlas_cloud_user_project_assignment" "user_assignment" {
+ project_id = var.project_id
+ username = var.username
+ roles = var.roles
+}
+
+# REMOVE: Clean removal of deprecated resource from state
+removed {
+ from = mongodbatlas_project_invitation.pending_user
+
+ lifecycle {
+ destroy = false
+ }
+}
+
+# Migration validation
+locals {
+ # Verify the new resource works correctly
+ new_assignment_user = mongodbatlas_cloud_user_project_assignment.user_assignment.username
+ new_assignment_roles = mongodbatlas_cloud_user_project_assignment.user_assignment.roles
+
+ # Basic validation
+ username_matches = var.username == local.new_assignment_user
+ roles_match = toset(var.roles) == toset(local.new_assignment_roles)
+
+ migration_successful = local.username_matches && local.roles_match
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/outputs.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/outputs.tf
new file mode 100644
index 0000000000..243e93f435
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/outputs.tf
@@ -0,0 +1,46 @@
+# New resource outputs
+output "assigned_username" {
+ description = "Username from the new assignment"
+ value = mongodbatlas_cloud_user_project_assignment.user_assignment.username
+}
+
+output "assigned_roles" {
+ description = "Roles from the new assignment"
+ value = mongodbatlas_cloud_user_project_assignment.user_assignment.roles
+}
+
+output "user_id" {
+ description = "User ID from the new assignment (not available in old resource)"
+ value = mongodbatlas_cloud_user_project_assignment.user_assignment.user_id
+}
+
+# Migration validation outputs
+output "migration_validation" {
+ description = "Validation results for the migration"
+ value = {
+ username_matches = local.username_matches
+ roles_match = local.roles_match
+ migration_successful = local.migration_successful
+ ready_for_v3 = local.migration_successful
+ }
+}
+
+output "migration_comparison" {
+ description = "Compare configuration inputs vs actual assignment"
+ value = {
+ input_username = var.username
+ input_roles = var.roles
+ actual_username = local.new_assignment_user
+ actual_roles = local.new_assignment_roles
+ }
+}
+
+# New capabilities not available in old resource
+output "new_capabilities" {
+ description = "New capabilities available with cloud_user_project_assignment"
+ value = {
+ user_id_available = mongodbatlas_cloud_user_project_assignment.user_assignment.user_id != null
+ manages_active_users = true
+ supports_import = true
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/provider.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/variables.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/variables.tf
new file mode 100644
index 0000000000..cddb43d0a1
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/variables.tf
@@ -0,0 +1,24 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "project_id" {
+ description = "MongoDB Atlas Project ID"
+ type = string
+}
+
+variable "username" {
+ description = "MongoDB Atlas Username (email) for pending invitation"
+ type = string
+}
+
+variable "roles" {
+ description = "Project roles to assign to the user"
+ type = list(string)
+ default = ["GROUP_READ_ONLY"]
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/versions.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/versions.tf
new file mode 100644
index 0000000000..95d555827a
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v2/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.7.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/README.md b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/README.md
new file mode 100644
index 0000000000..6fc57e002b
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/README.md
@@ -0,0 +1,49 @@
+# v3: Final State
+
+This is the clean, final configuration using only the new `mongodbatlas_cloud_user_project_assignment` resource.
+
+## What changed from v1
+
+### Resource purpose
+- **Old**: Managed pending invitations only
+- **New**: Manages active project membership
+
+### Lifecycle behavior
+- **Old**: Removed from state when user accepted invitation
+- **New**: Remains in state, manages ongoing membership
+
+### Available data
+- **Old**: Only invitation details (invitation_id, expires_at, etc.)
+- **New**: User assignment details including user_id
+
+### Data source support
+- **Old**: Had data source for reading invitation details
+- **New**: Has data source for reading user assignments
+
+## Key improvements
+
+1. **Persistent management**: Resource doesn't disappear when user accepts invitation
+2. **User ID access**: Provides user_id for use in other resources
+3. **Import support**: Can import existing project members
+4. **Cleaner lifecycle**: No surprise state removals
+
+## Enhanced functionality
+
+The new resource provides additional capabilities:
+
+- **Data source**: Read existing user assignments
+- **Import**: Bring existing project members under Terraform management
+- **User ID exposure**: Reference users by ID in other resources
+- **Active membership**: Manage actual project membership, not just invitations
+
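+As a sketch, an existing project member could be adopted with an `import` block (the ID format is assumed to be `project_id/username` here; verify against the resource documentation before use):
+
+```terraform
+import {
+  to = mongodbatlas_cloud_user_project_assignment.user_assignment
+  id = "664619d870c247237f4b86a6/user@example.com" # hypothetical project_id/username
+}
+```
+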
+## Usage patterns
+
+This configuration demonstrates:
+- Basic user assignment to project
+- Data source usage for reading assignments
+- Local values for organizing assignment data
+- Output examples showing common use cases
+
+## Migration complete
+
+At this point, you have successfully migrated from the deprecated `mongodbatlas_project_invitation` resource to the modern `mongodbatlas_cloud_user_project_assignment` resource. All references to the old resource have been removed and replaced with the new resource.
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/main.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/main.tf
new file mode 100644
index 0000000000..cb13a16aaf
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/main.tf
@@ -0,0 +1,39 @@
+############################################################
+# v3: Final state - only new resource
+############################################################
+
+# Project user assignment using the new resource
+resource "mongodbatlas_cloud_user_project_assignment" "user_assignment" {
+ project_id = var.project_id
+ username = var.username
+ roles = var.roles
+}
+
+# Example of additional functionality available with new resource
+data "mongodbatlas_cloud_user_project_assignment" "user_lookup" {
+ project_id = var.project_id
+ username = mongodbatlas_cloud_user_project_assignment.user_assignment.username
+}
+
+# Clean, simplified local values
+locals {
+ # Basic assignment info
+ assigned_user = mongodbatlas_cloud_user_project_assignment.user_assignment.username
+ assigned_roles = mongodbatlas_cloud_user_project_assignment.user_assignment.roles
+ user_id = mongodbatlas_cloud_user_project_assignment.user_assignment.user_id
+
+ # Enhanced information from data source
+ user_details = {
+ username = data.mongodbatlas_cloud_user_project_assignment.user_lookup.username
+ user_id = data.mongodbatlas_cloud_user_project_assignment.user_lookup.user_id
+ roles = data.mongodbatlas_cloud_user_project_assignment.user_lookup.roles
+ }
+
+ # Assignment summary
+ assignment_summary = {
+ project_id = var.project_id
+ user = local.assigned_user
+ roles = local.assigned_roles
+ user_id = local.user_id
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/outputs.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/outputs.tf
new file mode 100644
index 0000000000..b6ea73dc37
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/outputs.tf
@@ -0,0 +1,49 @@
+# Project assignment outputs
+output "assigned_username" {
+ description = "Username of the assigned user"
+ value = mongodbatlas_cloud_user_project_assignment.user_assignment.username
+}
+
+output "assigned_roles" {
+ description = "Roles assigned to the user"
+ value = mongodbatlas_cloud_user_project_assignment.user_assignment.roles
+}
+
+output "user_id" {
+ description = "MongoDB Atlas User ID (available with new resource)"
+ value = mongodbatlas_cloud_user_project_assignment.user_assignment.user_id
+}
+
+output "assignment_summary" {
+ description = "Complete assignment summary"
+ value = local.assignment_summary
+}
+
+# Data source outputs (demonstrates read capability)
+output "user_details_from_data_source" {
+ description = "User details retrieved via data source"
+ value = local.user_details
+}
+
+# Demonstrates the advantages of the new resource
+output "new_resource_advantages" {
+ description = "Advantages of the new resource over deprecated project_invitation"
+ value = {
+ manages_active_membership = "Manages actual project membership, not just invitations"
+ exposes_user_id = "Provides user_id which wasn't available in project_invitation"
+ supports_data_source = "Has corresponding data source for reading assignments"
+ import_capable = "Can import existing project members"
+ no_state_removal = "Doesn't get removed from state when user accepts invitation"
+ }
+}
+
+# Usage examples for common patterns
+output "usage_examples" {
+ description = "Common usage patterns with the new resource"
+ value = {
+ basic_assignment = "Assign user to project with specific roles"
+ read_assignment = "Read existing user assignment from project"
+ user_id_reference = "Use user_id for other resource references"
+ role_management = "Update user roles within project"
+ }
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/provider.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/variables.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/variables.tf
new file mode 100644
index 0000000000..b418af1e82
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/variables.tf
@@ -0,0 +1,24 @@
+variable "public_key" {
+ type = string
+ default = ""
+}
+variable "private_key" {
+ type = string
+ default = ""
+}
+
+variable "project_id" {
+ description = "MongoDB Atlas Project ID"
+ type = string
+}
+
+variable "username" {
+ description = "MongoDB Atlas Username (email) for user assignment"
+ type = string
+}
+
+variable "roles" {
+ description = "Project roles to assign to the user"
+ type = list(string)
+ default = ["GROUP_READ_ONLY"]
+}
diff --git a/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/versions.tf b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/versions.tf
new file mode 100644
index 0000000000..95d555827a
--- /dev/null
+++ b/examples/migrate_project_invitation_to_cloud_user_project_assignment/v3/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.7.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_team_project_assignment/README.md b/examples/migrate_team_project_assignment/README.md
new file mode 100644
index 0000000000..f22f13550e
--- /dev/null
+++ b/examples/migrate_team_project_assignment/README.md
@@ -0,0 +1,40 @@
+# Migration Example: Team Project Attribute to Team Project Assignment
+
+This example demonstrates how to migrate from the deprecated `mongodbatlas_project.teams` attribute to the new `mongodbatlas_team_project_assignment` resource.
+
+## Migration Phases
+
+### v1: Initial State (Deprecated Resource)
+Shows the original configuration using deprecated `mongodbatlas_project.teams` attribute for team assignments.
+
+### v2: Final State (New Resource Only)
+Update the configuration to use `mongodbatlas_team_project_assignment` and migrate away from the deprecated `mongodbatlas_project.teams` attribute.
+
+## Usage
+
+1. Start with v1 to understand the original setup with team assignments
+2. Apply the v2 configuration to import existing assignments into the new resource and stop using the deprecated `teams` attribute
+
+## Prerequisites
+
+- MongoDB Atlas Terraform Provider 2.0.0 or later
+- Valid MongoDB Atlas organization and team IDs
+
+## Variables
+
+Set these variables for all versions:
+
+```terraform
+public_key   = "your-mongodb-atlas-public-key"  # Optional, can use env vars
+private_key  = "your-mongodb-atlas-private-key" # Optional, can use env vars
+org_id       = "your-org-id"
+team_id_1    = "team-id-1" # Team to assign
+team_id_2    = "team-id-2" # Another team to assign
+team_1_roles = ["GROUP_OWNER"]
+team_2_roles = ["GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_WRITE"]
+```
+
+Alternatively, set environment variables:
+```bash
+export MONGODB_ATLAS_PUBLIC_KEY="your-public-key"
+export MONGODB_ATLAS_PRIVATE_KEY="your-private-key"
+```
diff --git a/examples/migrate_team_project_assignment/v1/main.tf b/examples/migrate_team_project_assignment/v1/main.tf
new file mode 100644
index 0000000000..00fde318b5
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v1/main.tf
@@ -0,0 +1,38 @@
+############################################################
+# v1: Original configuration using deprecated attribute
+############################################################
+
+# Map of team IDs to their roles
+locals {
+ team_map = {
+ (var.team_id_1) = var.team_1_roles
+ (var.team_id_2) = var.team_2_roles
+ }
+}
+
+# Using deprecated team block inside mongodbatlas_project to assign teams to the project
+resource "mongodbatlas_project" "this" {
+ name = "this"
+ org_id = var.org_id
+
+ dynamic "teams" {
+ for_each = local.team_map
+ content {
+ team_id = teams.key
+ role_names = teams.value
+ }
+ }
+}
+
+output "project_teams" {
+ description = "List of teams assigned to the Atlas project, with their roles"
+ value = mongodbatlas_project.this.teams
+}
+
+output "project_teams_map" {
+ description = "Map of team IDs to their roles (from teams attribute)"
+ value = {
+ for t in mongodbatlas_project.this.teams :
+ t.team_id => t.role_names
+ }
+}
diff --git a/examples/migrate_team_project_assignment/v1/provider.tf b/examples/migrate_team_project_assignment/v1/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v1/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_team_project_assignment/v1/variables.tf b/examples/migrate_team_project_assignment/v1/variables.tf
new file mode 100644
index 0000000000..040a1961b2
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v1/variables.tf
@@ -0,0 +1,35 @@
+variable "org_id" {
+ description = "The ID of the MongoDB Atlas organization"
+ type = string
+}
+
+variable "team_id_1" {
+ description = "The ID of the first team"
+ type = string
+}
+
+variable "team_1_roles" {
+ description = "Roles to assign to the first team in the project"
+ type = list(string)
+}
+
+variable "team_id_2" {
+ description = "The ID of the second team"
+ type = string
+}
+
+variable "team_2_roles" {
+ description = "Roles to assign to the second team in the project"
+ type = list(string)
+}
+
+variable "public_key" {
+ description = "Public key for MongoDB Atlas API"
+ type = string
+ default = ""
+}
+variable "private_key" {
+ description = "Private key for MongoDB Atlas API"
+ type = string
+ default = ""
+}
diff --git a/examples/migrate_team_project_assignment/v1/versions.tf b/examples/migrate_team_project_assignment/v1/versions.tf
new file mode 100644
index 0000000000..95d555827a
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v1/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.7.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_team_project_assignment/v2/README.md b/examples/migrate_team_project_assignment/v2/README.md
new file mode 100644
index 0000000000..4f59454c80
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v2/README.md
@@ -0,0 +1,24 @@
+# v2: Final State
+
+This is the final configuration using only the new `mongodbatlas_team_project_assignment` resource while ignoring the deprecated `mongodbatlas_project.teams` attribute.
+
+## What changed from v1
+
+### Resource purpose
+- **Old**: Managed through the deprecated `mongodbatlas_project.teams` attribute
+- **New**: Uses the `mongodbatlas_team_project_assignment` resource for team-to-project assignments
+
+### Data source support
+- **Old**: Team assignments were read through the `teams` attribute of the project data source
+- **New**: The `mongodbatlas_team_project_assignment` data source reads individual team assignments
+
+## Usage patterns
+
+This configuration demonstrates:
+- Basic team assignment to project
+- Data source usage for reading assignments
+- Output examples showing how to print team assignments in various formats
+
+## Migration complete
+
+At this point, you have successfully migrated from the deprecated `mongodbatlas_project.teams` attribute to the new `mongodbatlas_team_project_assignment` resource. All references to the old attribute have been replaced with the new resource.
\ No newline at end of file
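The migration described above comes down to two pieces: an `ignore_changes` lifecycle rule on the old `teams` attribute, and the new assignment resource adopted via `import`. A condensed sketch of the pattern (variable and resource names assumed from this example):

```terraform
resource "mongodbatlas_project" "this" {
  name   = "this"
  org_id = var.org_id
  lifecycle {
    ignore_changes = [teams] # stop managing teams through the project resource
  }
}

resource "mongodbatlas_team_project_assignment" "this" {
  for_each   = local.team_map # map of team_id => role_names
  project_id = mongodbatlas_project.this.id
  team_id    = each.key
  role_names = each.value
}

# Adopt the existing API-side assignments without recreating them
import {
  for_each = local.team_map
  to       = mongodbatlas_team_project_assignment.this[each.key]
  id       = "${mongodbatlas_project.this.id}/${each.key}"
}
```

After applying, a follow-up `terraform plan` should report no changes; a non-empty plan means an assignment was not imported.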
diff --git a/examples/migrate_team_project_assignment/v2/main.tf b/examples/migrate_team_project_assignment/v2/main.tf
new file mode 100644
index 0000000000..0980824bc2
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v2/main.tf
@@ -0,0 +1,71 @@
+############################################################
+# v2: New resource usage
+############################################################
+
+# Map of team IDs to their roles
+locals {
+ team_map = {
+ (var.team_id_1) = var.team_1_roles
+ (var.team_id_2) = var.team_2_roles
+ }
+}
+
+# Ignore the deprecated teams block in mongodbatlas_project
+resource "mongodbatlas_project" "this" {
+ name = "this"
+ org_id = var.org_id
+ lifecycle {
+ ignore_changes = [teams]
+ }
+}
+
+# Use the new mongodbatlas_team_project_assignment resource
+resource "mongodbatlas_team_project_assignment" "this" {
+ for_each = local.team_map
+
+ project_id = mongodbatlas_project.this.id
+ team_id = each.key
+ role_names = each.value
+}
+
+# Import existing team-project relationships into the new resource
+import {
+ for_each = local.team_map
+ to = mongodbatlas_team_project_assignment.this[each.key]
+ id = "${mongodbatlas_project.this.id}/${each.key}"
+}
+
+# Example outputs showing team assignments in various formats
+output "team_project_assignments" {
+ description = "List of all team assignments for the MongoDB Atlas project"
+ value = [
+ for assignment in mongodbatlas_team_project_assignment.this :
+ {
+ team_id = assignment.team_id
+ role_names = assignment.role_names
+ }
+ ]
+}
+
+output "team_project_assignments_map" {
+ description = "Map of team_id to role_names for the MongoDB Atlas project"
+ value = {
+ for k, assignment in mongodbatlas_team_project_assignment.this :
+ assignment.team_id => assignment.role_names
+ }
+}
+
+# Data source to read current team assignments for the project
+data "mongodbatlas_team_project_assignment" "this" {
+ project_id = mongodbatlas_project.this.id
+ team_id = var.team_id_1 # Example for one team; repeat for others as needed
+}
+
+output "data_team_project_assignment" {
+ description = "Data source output for team assignment"
+ value = {
+ team_id = data.mongodbatlas_team_project_assignment.this.team_id
+ project_id = data.mongodbatlas_team_project_assignment.this.project_id
+ role_names = data.mongodbatlas_team_project_assignment.this.role_names
+ }
+}
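The data source above reads one team at a time; to read back every assignment, it can be generalized with `for_each` (a sketch, assuming the same `local.team_map` from this configuration):

```terraform
# Read every team's assignment for the project, keyed by team ID
data "mongodbatlas_team_project_assignment" "all" {
  for_each   = local.team_map
  project_id = mongodbatlas_project.this.id
  team_id    = each.key
}

output "all_team_assignments" {
  description = "Map of team_id to role_names, read back from the API"
  value = {
    for team_id, d in data.mongodbatlas_team_project_assignment.all :
    team_id => d.role_names
  }
}
```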
diff --git a/examples/migrate_team_project_assignment/v2/provider.tf b/examples/migrate_team_project_assignment/v2/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v2/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/migrate_team_project_assignment/v2/variables.tf b/examples/migrate_team_project_assignment/v2/variables.tf
new file mode 100644
index 0000000000..040a1961b2
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v2/variables.tf
@@ -0,0 +1,35 @@
+variable "org_id" {
+ description = "The ID of the MongoDB Atlas organization"
+ type = string
+}
+
+variable "team_id_1" {
+ description = "The ID of the first team"
+ type = string
+}
+
+variable "team_1_roles" {
+ description = "Roles to assign to the first team in the project"
+ type = list(string)
+}
+
+variable "team_id_2" {
+ description = "The ID of the second team"
+ type = string
+}
+
+variable "team_2_roles" {
+ description = "Roles to assign to the second team in the project"
+ type = list(string)
+}
+
+variable "public_key" {
+ description = "Public key for MongoDB Atlas API"
+ type = string
+ default = ""
+}
+variable "private_key" {
+ description = "Private key for MongoDB Atlas API"
+ type = string
+ default = ""
+}
diff --git a/examples/migrate_team_project_assignment/v2/versions.tf b/examples/migrate_team_project_assignment/v2/versions.tf
new file mode 100644
index 0000000000..95d555827a
--- /dev/null
+++ b/examples/migrate_team_project_assignment/v2/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_version = ">= 1.7.0"
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+}
diff --git a/examples/migrate_user_team_assignment/README.md b/examples/migrate_user_team_assignment/README.md
new file mode 100644
index 0000000000..fb41d246f2
--- /dev/null
+++ b/examples/migrate_user_team_assignment/README.md
@@ -0,0 +1,5 @@
+# MongoDB Atlas Provider — Cloud User Team Assignment Example
+
+Please refer to the [mongodbatlas_cloud_user_assignment example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_assignment/README.md) for an example of managing and assigning users to a team using Terraform.
+
+For module usage, see the [module example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_team_assignment/module/new_module/README.md).
diff --git a/examples/migrate_user_team_assignment/module_maintainer/v1/README.md b/examples/migrate_user_team_assignment/module_maintainer/v1/README.md
new file mode 100644
index 0000000000..939fcd2fce
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_maintainer/v1/README.md
@@ -0,0 +1,5 @@
+# Old Module Example: Cloud User Team (Legacy)
+
+This example demonstrates the legacy pattern (prior to v2.0.0) for assigning users to a team using the `mongodbatlas_team` resource. It is intended to show the "before" state for users migrating to the new recommended pattern.
+
+For migration steps, see the [Migration Guide](../../../docs/guides/atlas-user-management.md).
diff --git a/examples/migrate_user_team_assignment/module_maintainer/v1/main.tf b/examples/migrate_user_team_assignment/module_maintainer/v1/main.tf
new file mode 100644
index 0000000000..595283c863
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_maintainer/v1/main.tf
@@ -0,0 +1,5 @@
+resource "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+ usernames = var.usernames # DEPRECATED
+}
diff --git a/examples/migrate_user_team_assignment/module_maintainer/v1/variables.tf b/examples/migrate_user_team_assignment/module_maintainer/v1/variables.tf
new file mode 100644
index 0000000000..eae1014b5f
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_maintainer/v1/variables.tf
@@ -0,0 +1,15 @@
+variable "org_id" {
+ description = "MongoDB Atlas Organization ID"
+ type = string
+}
+
+variable "team_name" {
+ description = "Name of the team"
+ type = string
+}
+
+variable "usernames" {
+ description = "List of usernames to assign to the team"
+ type = list(string)
+ default = []
+}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/versions.tf b/examples/migrate_user_team_assignment/module_maintainer/v1/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/versions.tf
rename to examples/migrate_user_team_assignment/module_maintainer/v1/versions.tf
diff --git a/examples/migrate_user_team_assignment/module_maintainer/v2/README.md b/examples/migrate_user_team_assignment/module_maintainer/v2/README.md
new file mode 100644
index 0000000000..c33e52ee6c
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_maintainer/v2/README.md
@@ -0,0 +1,5 @@
+# Module Example: Cloud User Team Assignment
+
+This example demonstrates how to use the `mongodbatlas_cloud_user_team_assignment` resource within a Terraform module. It shows how to assign a user to a team using module inputs.
+
+If you are migrating from `mongodbatlas_team` resource, please see the [Migration Guide](../../../docs/guides/atlas-user-management.md) for important instructions on importing existing resources into your module-managed configuration.
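When adopting this module over team memberships that already exist in Atlas, the existing assignments can be brought under management with an `import` block addressed at the module's resource. A sketch, using the same `org_id/team_id/user_id` import ID format shown in this repository's migration example:

```terraform
# One import per existing membership; keys must match the module's for_each keys
import {
  for_each = var.team_assigments # same map passed to the module
  to       = module.user_team_assignment.mongodbatlas_cloud_user_team_assignment.this[each.key]
  id       = "${each.value.org_id}/${each.value.team_id}/${each.value.user_id}"
}
```

`import` blocks with `for_each` require Terraform 1.7 or later, which matches the `required_version` constraint in these examples.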
diff --git a/examples/migrate_user_team_assignment/module_maintainer/v2/main.tf b/examples/migrate_user_team_assignment/module_maintainer/v2/main.tf
new file mode 100644
index 0000000000..878a20775f
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_maintainer/v2/main.tf
@@ -0,0 +1,12 @@
+resource "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+}
+
+resource "mongodbatlas_cloud_user_team_assignment" "this" {
+ for_each = var.team_assigments
+
+ org_id = each.value.org_id
+ team_id = each.value.team_id
+ user_id = each.value.user_id
+}
diff --git a/examples/migrate_user_team_assignment/module_maintainer/v2/variables.tf b/examples/migrate_user_team_assignment/module_maintainer/v2/variables.tf
new file mode 100644
index 0000000000..4c9b0ba083
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_maintainer/v2/variables.tf
@@ -0,0 +1,17 @@
+variable "org_id" {
+ type = string
+ description = "MongoDB Atlas Organization ID"
+}
+
+variable "team_name" {
+ type = string
+ description = "Name of the Atlas team"
+}
+
+variable "team_assigments" {
+ type = map(object({
+ org_id = string
+ team_id = string
+ user_id = string
+ }))
+}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/versions.tf b/examples/migrate_user_team_assignment/module_maintainer/v2/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/versions.tf
rename to examples/migrate_user_team_assignment/module_maintainer/v2/versions.tf
diff --git a/examples/migrate_user_team_assignment/module_user/v1/main.tf b/examples/migrate_user_team_assignment/module_user/v1/main.tf
new file mode 100644
index 0000000000..d3accd3390
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_user/v1/main.tf
@@ -0,0 +1,12 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
+
+# Old module usage
+module "user_team_assignment" {
+ source = "../../module_maintainer/v1"
+ org_id = var.org_id
+ team_name = var.team_name
+ usernames = var.usernames
+}
diff --git a/examples/migrate_user_team_assignment/module_user/v1/variables.tf b/examples/migrate_user_team_assignment/module_user/v1/variables.tf
new file mode 100644
index 0000000000..e8fe303852
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_user/v1/variables.tf
@@ -0,0 +1,26 @@
+variable "org_id" {
+ type = string
+ description = "MongoDB Atlas Organization ID"
+}
+
+variable "team_name" {
+ type = string
+ description = "Name of the Atlas team"
+}
+
+variable "usernames" {
+ type = list(string)
+ description = "List of user emails to assign to the team"
+}
+
+variable "public_key" {
+ description = "Public API key to authenticate to Atlas"
+ type = string
+ default = ""
+}
+variable "private_key" {
+ description = "Private API key to authenticate to Atlas"
+ type = string
+ default = ""
+}
+
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/versions.tf b/examples/migrate_user_team_assignment/module_user/v1/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/versions.tf
rename to examples/migrate_user_team_assignment/module_user/v1/versions.tf
diff --git a/examples/migrate_user_team_assignment/module_user/v2/main.tf b/examples/migrate_user_team_assignment/module_user/v2/main.tf
new file mode 100644
index 0000000000..0ee6e56c06
--- /dev/null
+++ b/examples/migrate_user_team_assignment/module_user/v2/main.tf
@@ -0,0 +1,35 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
+
+# New module usage
+data "mongodbatlas_team" "this" {
+ org_id = var.org_id
+ name = var.team_name
+}
+
+locals {
+ team_assigments = {
+ for user in data.mongodbatlas_team.this.users :
+ user.id => {
+ org_id = var.org_id
+ team_id = data.mongodbatlas_team.this.team_id
+ user_id = user.id
+ }
+ }
+}
+
+module "user_team_assignment" {
+ source = "../../module_maintainer/v2"
+ org_id = var.org_id
+ team_name = var.team_name
+ team_assigments = local.team_assigments
+}
+
+import {
+ for_each = local.team_assigments
+
+ to = module.user_team_assignment.mongodbatlas_cloud_user_team_assignment.this[each.key]
+ id = "${var.org_id}/${data.mongodbatlas_team.this.team_id}/${each.value.user_id}"
+}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/variables.tf b/examples/migrate_user_team_assignment/module_user/v2/variables.tf
similarity index 60%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/variables.tf
rename to examples/migrate_user_team_assignment/module_user/v2/variables.tf
index 590efc9578..d654f5ba77 100644
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/variables.tf
+++ b/examples/migrate_user_team_assignment/module_user/v2/variables.tf
@@ -1,17 +1,21 @@
-variable "project_id" {
- description = "Atlas project id"
+variable "org_id" {
type = string
+ description = "MongoDB Atlas Organization ID"
}
+
+variable "team_name" {
+ type = string
+ description = "Name of the Atlas team"
+}
+
variable "public_key" {
description = "Public API key to authenticate to Atlas"
type = string
+ default = ""
}
variable "private_key" {
description = "Private API key to authenticate to Atlas"
type = string
+ default = ""
}
-variable "fcv_expiration_date" {
- description = "Expiration date of the pinned FCV"
- type = string
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/versions.tf b/examples/migrate_user_team_assignment/module_user/v2/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/versions.tf
rename to examples/migrate_user_team_assignment/module_user/v2/versions.tf
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/README.md
deleted file mode 100644
index ea4f00a4bc..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Examples for mongodbatlas_advanced_cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-This directory contains examples of using the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more info in the [resource doc page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/README.md
deleted file mode 100644
index eb897cfc36..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# MongoDB Atlas Provider -- Global Cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-This example creates a project and a Sharded Cluster with 4 independent shards with varying cluster tiers.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-## Dependencies
-
-* Terraform MongoDB Atlas Provider v1.29.0
-* A MongoDB Atlas account
-
-```
-Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
-```
-
-
-## Usage
-**1\. Ensure your MongoDB Atlas credentials are set up.**
-
-This can be done using environment variables:
-
-```bash
-export MONGODB_ATLAS_PUBLIC_KEY=""
-export MONGODB_ATLAS_PRIVATE_KEY=""
-```
-
-... or follow as in the `variables.tf` file and create **terraform.tfvars** file with all the variable values, ex:
-```
-public_key = ""
-private_key = ""
-atlas_org_id = ""
-```
-
-... or use [AWS Secrets Manager](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs#aws-secrets-manager)
-
-**2\. Review the Terraform plan.**
-
-Execute the below command and ensure you are happy with the plan.
-
-``` bash
-$ terraform plan
-```
-This project currently supports the below deployments:
-
-- An Atlas Project
-- A Sharded Cluster with independent shards with varying cluster tiers
-
-**3\. Execute the Terraform apply.**
-
-Now execute the plan to provision the Atlas Project and Cluster resources.
-
-``` bash
-$ terraform apply
-```
-
-**4\. Destroy the resources.**
-
-Once you are finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges.
-
-``` bash
-$ terraform destroy
-```
-
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/main.tf
deleted file mode 100644
index 1a4f7fbc26..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/main.tf
+++ /dev/null
@@ -1,88 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = var.cluster_name
- cluster_type = "SHARDED"
- backup_enabled = true
-
- replication_specs = [
- { # shard 1 - M30 instance size
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- },
- { # shard 2 - M30 instance size
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- },
- { # shard 3 - M40 instance size
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- },
- { # shard 4 - M40 instance size
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- }
- ]
-
- advanced_configuration = {
- javascript_enabled = true
- oplog_size_mb = 999
- sample_refresh_interval_bi_connector = 300
- }
-
- tags = {
- environment = "dev"
- }
-}
-
-resource "mongodbatlas_project" "project" {
- name = "Asymmetric Sharded Cluster"
- org_id = var.atlas_org_id
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/variables.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/variables.tf
deleted file mode 100644
index 05e875b6b0..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/asymmetric-sharded-cluster/variables.tf
+++ /dev/null
@@ -1,17 +0,0 @@
-variable "atlas_org_id" {
- description = "Atlas organization id"
- type = string
-}
-variable "public_key" {
- description = "Public API key to authenticate to Atlas"
- type = string
-}
-variable "private_key" {
- description = "Private API key to authenticate to Atlas"
- type = string
-}
-variable "cluster_name" {
- description = "Atlas cluster name"
- type = string
- default = "AsymmetricShardedCluster"
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/README.md
deleted file mode 100644
index 58b25ea53f..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# MongoDB Atlas Provider -- Sharded Cluster with Independent Shard Auto-scaling (Preview for MongoDB Atlas Provider 2.0.0)
-
-This example creates a Sharded Cluster with 2 shards defining electable and analytics nodes. Compute auto-scaling is enabled for both `electable_specs` and `analytics_specs`, while also leveraging the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema) by defining each shard with its individual `replication_specs`. This enables scaling of each shard to be independent. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-## Dependencies
-
-* Terraform MongoDB Atlas Provider v1.29.0
-* A MongoDB Atlas account
-
-```
-Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
-```
-
-
-## Usage
-**1\. If you haven't already, set up your MongoDB Atlas credentials.**
-
-This can be done using environment variables:
-
-```bash
-export MONGODB_ATLAS_PUBLIC_KEY=""
-export MONGODB_ATLAS_PRIVATE_KEY=""
-```
-
-... or follow as in the `variables.tf` file and create **terraform.tfvars** file with all the variable values, ex:
-```
-public_key = ""
-private_key = ""
-atlas_org_id = ""
-```
-
-Alternatively, you can use [AWS Secrets Manager](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs#aws-secrets-manager).
-
-**2\. Review the Terraform plan.**
-
-Execute the below command and ensure you are happy with the plan.
-
-``` bash
-$ terraform plan
-```
-This project currently supports the below deployments:
-
-- An Atlas Project
-- A Sharded Cluster with independent shards with varying cluster tiers
-
-**3\. Apply your changes.**
-
-Now execute the plan to provision the Atlas Project and Cluster resources.
-
-``` bash
-$ terraform apply
-```
-
-**4\. Destroy the resources.**
-
-Once you are finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges.
-
-``` bash
-$ terraform destroy
-```
-
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/main.tf
deleted file mode 100644
index d12e94453e..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/main.tf
+++ /dev/null
@@ -1,78 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "test" {
- project_id = mongodbatlas_project.project.id
- name = "AutoScalingCluster"
- cluster_type = "SHARDED"
- replication_specs = [
- { # first shard
- region_configs = [
- {
- auto_scaling = {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- analytics_auto_scaling = {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- electable_specs = {
- instance_size = "M40"
- node_count = 3
- }
- analytics_specs = {
- instance_size = "M40"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- zone_name = "Zone 1"
- },
- { # second shard
- region_configs = [
- {
- auto_scaling = {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- analytics_auto_scaling = {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- electable_specs = {
- instance_size = "M40"
- node_count = 3
- }
- analytics_specs = {
- instance_size = "M40"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- zone_name = "Zone 1"
- }
- ]
-
- lifecycle { # avoids non-empty plans as instance size start to scale from initial values
- ignore_changes = [
- replication_specs[0].region_configs[0].electable_specs.instance_size,
- replication_specs[0].region_configs[0].analytics_specs.instance_size,
- replication_specs[1].region_configs[0].electable_specs.instance_size,
- replication_specs[1].region_configs[0].analytics_specs.instance_size
- ]
- }
-}
-
-resource "mongodbatlas_project" "project" {
- name = "AutoScalingPerShardCluster"
- org_id = var.atlas_org_id
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/variables.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/variables.tf
deleted file mode 100644
index d34c0ba2be..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/auto-scaling-per-shard/variables.tf
+++ /dev/null
@@ -1,12 +0,0 @@
-variable "atlas_org_id" {
- description = "Atlas organization id"
- type = string
-}
-variable "public_key" {
- description = "Public API key to authenticate to Atlas"
- type = string
-}
-variable "private_key" {
- description = "Private API key to authenticate to Atlas"
- type = string
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/README.md
deleted file mode 100644
index b43738f271..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/README.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# MongoDB Atlas Provider -- Flex cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-This example creates a project and a Flex cluster using `mongodbatlas_advanced_cluster` resource. It is intended to show how to create a Flex cluster, upgrade an M0 cluster to Flex and upgrade a Flex cluster to a Dedicated cluster.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-Variables Required:
-- `atlas_org_id`: ID of the Atlas organization
-- `public_key`: Atlas public key
-- `private_key`: Atlas private key
-- `provider_name`: Name of provider to use for cluster (TENANT, AWS, GCP)
-- `backing_provider_name`: If provider_name is tenant, the backing provider (AWS, GCP)
-- `provider_instance_size_name`: Size of the cluster (Shared: M0. Dedicated: M10+.)
-
-For this example, first we'll start out on the Free tier, then upgrade to a flex cluster and finally to a Dedicated tier cluster.
-
-Utilize the following to execute a working example, replacing the org id, public and private key with your values:
-
-Apply with the following `terraform.tfvars` to first create a free tier cluster:
-```
-atlas_org_id =
-public_key =
-private_key =
-provider_name = "TENANT"
-backing_provider_name = "AWS"
-provider_instance_size_name = "M0"
-node_count = null
-```
-
-The configuration will be equivalent to:
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = "ClusterToUpgrade"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M0"
- node_count = null # equivalent to not setting a value
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-
- tags = {
- key = "environment"
- value = "dev"
- }
-}
-```
-
-Apply with the following `terraform.tfvars` to upgrade the free tier cluster you just created to flex tier:
-```
-atlas_org_id =
-public_key =
-private_key =
-provider_name = "FLEX"
-backing_provider_name = "AWS"
-provider_instance_size_name = null
-node_count = null
-```
-
-The configuration will be equivalent to:
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = "ClusterToUpgrade"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = null # equivalent to not setting a value
- node_count = null # equivalent to not setting a value
- }
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-
- tags = {
- key = "environment"
- value = "dev"
- }
-}
-```
-
-Apply with the following `terraform.tfvars` to upgrade the flex tier cluster you just created to dedicated tier:
-```
-atlas_org_id =
-public_key =
-private_key =
-provider_name = "AWS"
-backing_provider_name = null
-provider_instance_size_name = "M10"
-node_count = 3
-```
-
-The configuration will be equivalent to:
-
-```terraform
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = "ClusterToUpgrade"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- node_count = 3
- }
- provider_name = "AWS"
- backing_provider_name = null # equivalent to not setting a value
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-
- tags = {
- key = "environment"
- value = "dev"
- }
-}
-```
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/main.tf
deleted file mode 100644
index 382a4a3c9d..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/main.tf
+++ /dev/null
@@ -1,37 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = "ClusterToUpgrade"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = var.provider_instance_size_name
- node_count = var.node_count
- }
- provider_name = var.provider_name
- backing_provider_name = var.backing_provider_name
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-
- tags = {
- key = "environment"
- value = "dev"
- }
-}
-
-resource "mongodbatlas_project" "project" {
- name = "ClusterUpgradeTest"
- org_id = var.atlas_org_id
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/variables.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/variables.tf
deleted file mode 100644
index 34bc735daa..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/flex-upgrade/variables.tf
+++ /dev/null
@@ -1,32 +0,0 @@
-variable "atlas_org_id" {
- description = "Atlas organization id"
- type = string
-}
-variable "public_key" {
- description = "Public API key to authenticate to Atlas"
- type = string
-}
-variable "private_key" {
- description = "Private API key to authenticate to Atlas"
- type = string
-}
-variable "provider_name" {
- description = "Atlas cluster provider name"
- default = "AWS"
- type = string
-}
-variable "backing_provider_name" {
- description = "Atlas cluster backing provider name"
- type = string
-}
-variable "provider_instance_size_name" {
- description = "Atlas cluster provider instance name"
- default = "M10"
- type = string
-}
-
-variable "node_count" {
- description = "Number of nodes in the cluster"
- default = 3
- type = number
-}
\ No newline at end of file
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/README.md
deleted file mode 100644
index 441fdc43d8..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/README.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# MongoDB Atlas Provider -- Global Cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-This example creates a project and a Global Cluster with 2 zones where each zone has two shards.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-## Dependencies
-
-* Terraform MongoDB Atlas Provider v1.29.0
-* A MongoDB Atlas account
-
-```
-Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
-```
-
-
-## Usage
-**1\. Ensure your MongoDB Atlas credentials are set up.**
-
-This can be done using environment variables:
-
-```bash
-export MONGODB_ATLAS_PUBLIC_KEY=""
-export MONGODB_ATLAS_PRIVATE_KEY=""
-```
-
-... or follow as in the `variables.tf` file and create **terraform.tfvars** file with all the variable values, ex:
-```
-public_key = ""
-private_key = ""
-atlas_org_id = ""
-```
-
-... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager)
-
-**2\. Review the Terraform plan.**
-
-Execute the below command and ensure you are happy with the plan.
-
-``` bash
-$ terraform plan
-```
-This project currently supports the below deployments:
-
-- An Atlas Project
-- A Global Cluster
-
-**3\. Execute the Terraform apply.**
-
-Now execute the plan to provision the Atlas Project and Cluster resources.
-
-``` bash
-$ terraform apply
-```
-
-**4\. Destroy the resources.**
-
-Once you are finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges.
-
-``` bash
-$ terraform destroy
-```
-
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/main.tf
deleted file mode 100644
index 40af5e26c9..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/main.tf
+++ /dev/null
@@ -1,132 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = var.cluster_name
- cluster_type = "GEOSHARDED"
-
- # uncomment next line to use self-managed sharding, see doc for more info
- # global_cluster_self_managed_sharding = true
-
- backup_enabled = true
-
- replication_specs = [
- { # shard 1 - zone n1
- zone_name = "zone n1"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- ]
- },
- { # shard 2 - zone n1
-
- zone_name = "zone n1"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- ]
- },
- { # shard 1 - zone n2
-
- zone_name = "zone n2"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
- }
- ]
- },
- { # shard 2 - zone n2
-
- zone_name = "zone n2"
-
- region_configs = [
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- },
- {
- electable_specs = {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
- }
- ]
- }
- ]
-
- advanced_configuration = {
- javascript_enabled = true
- oplog_size_mb = 999
- sample_refresh_interval_bi_connector = 300
- }
-
- tags = {
- environment = "dev"
- }
-}
-
-resource "mongodbatlas_project" "project" {
- name = "Global Cluster"
- org_id = var.atlas_org_id
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/variables.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/variables.tf
deleted file mode 100644
index 72235e5d05..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/global-cluster/variables.tf
+++ /dev/null
@@ -1,17 +0,0 @@
-variable "atlas_org_id" {
- description = "Atlas organization id"
- type = string
-}
-variable "public_key" {
- description = "Public API key to authenticate to Atlas"
- type = string
-}
-variable "private_key" {
- description = "Private API key to authenticate to Atlas"
- type = string
-}
-variable "cluster_name" {
- description = "Atlas cluster name"
- type = string
- default = "GlobalCluster"
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/README.md
deleted file mode 100644
index 454faeefe6..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/README.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# MongoDB Atlas Provider -- Multi-Cloud Advanced Cluster (Preview for MongoDB Atlas Provider 2.0.0)
-
-This example creates a project and a Multi Cloud Advanced Cluster with 2 shards.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-
-## Dependencies
-
-* Terraform MongoDB Atlas Provider v1.29.0
-* A MongoDB Atlas account
-
-```
-Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
-```
-
-
-## Usage
-**1\. Ensure your MongoDB Atlas credentials are set up.**
-
-This can be done using environment variables:
-
-```bash
-export MONGODB_ATLAS_PUBLIC_KEY=""
-export MONGODB_ATLAS_PRIVATE_KEY=""
-```
-
-... or follow as in the `variables.tf` file and create **terraform.tfvars** file with all the variable values, ex:
-```
-public_key = ""
-private_key = ""
-atlas_org_id = ""
-```
-
-... or use [AWS Secrets Manager](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/docs/index.md#aws-secrets-manager)
-
-**2\. Review the Terraform plan.**
-
-Execute the below command and ensure you are happy with the plan.
-
-``` bash
-$ terraform plan
-```
-This project currently supports the below deployments:
-
-- An Atlas Project
-- A Multi-Cloud Cluster
-
-**3\. Execute the Terraform apply.**
-
-Now execute the plan to provision the Atlas Project and Cluster resources.
-
-``` bash
-$ terraform apply
-```
-
-**4\. Destroy the resources.**
-
-Once you are finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges.
-
-``` bash
-$ terraform destroy
-```
-
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/main.tf
deleted file mode 100644
index 464e73c641..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/main.tf
+++ /dev/null
@@ -1,87 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = var.cluster_name
- cluster_type = "SHARDED"
- backup_enabled = true
-
- replication_specs { # shard 1
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # shard 2
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- advanced_configuration {
- javascript_enabled = true
- oplog_size_mb = 999
- sample_refresh_interval_bi_connector = 300
- }
-
- tags {
- key = "environment"
- value = "dev"
- }
-}
-
-resource "mongodbatlas_project" "project" {
- name = "Multi-Cloud Cluster"
- org_id = var.atlas_org_id
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/variables.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/variables.tf
deleted file mode 100644
index b74780dab4..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/variables.tf
+++ /dev/null
@@ -1,17 +0,0 @@
-variable "atlas_org_id" {
- description = "Atlas organization id"
- type = string
-}
-variable "public_key" {
- description = "Public API key to authenticate to Atlas"
- type = string
-}
-variable "private_key" {
- description = "Private API key to authenticate to Atlas"
- type = string
-}
-variable "cluster_name" {
- description = "Atlas cluster name"
- type = string
- default = "MultiCloudCluster"
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/README.md
deleted file mode 100644
index 81dc6b9c4e..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# MongoDB Atlas Provider -- Advanced Cluster Tenant Upgrade (Preview for MongoDB Atlas Provider 2.0.0)
-
-This example creates a project and cluster. It is intended to show how to upgrade from shared, aka tenant, to dedicated tier.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-Variables Required:
-- `atlas_org_id`: ID of the Atlas organization
-- `public_key`: Atlas public key
-- `private_key`: Atlas private key
-- `provider_name`: Name of provider to use for cluster (TENANT, AWS, GCP)
-- `backing_provider_name`: If provider_name is tenant, the backing provider (AWS, GCP)
-- `provider_instance_size_name`: Size of the cluster (Free: M0, Dedicated: M10+.)
-
-For this example, first we'll start out on the shared tier, then upgrade to a dedicated tier.
-
-Utilize the following to execute a working example, replacing the org id, public and private key with your values:
-
-Apply with the following `terraform.tfvars` to first create a shared tier cluster:
-```
-atlas_org_id =
-public_key =
-private_key =
-provider_name = "TENANT"
-backing_provider_name = "AWS"
-provider_instance_size_name = "M0"
-```
-
-Apply with the following `terraform.tfvars` to upgrade the shared tier cluster you just created to dedicated tier:
-```
-atlas_org_id =
-public_key =
-private_key =
-provider_name = "AWS"
-provider_instance_size_name = "M10"
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/main.tf
deleted file mode 100644
index cd25a31ee8..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/main.tf
+++ /dev/null
@@ -1,35 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = mongodbatlas_project.project.id
- name = "ClusterToUpgrade"
- cluster_type = "REPLICASET"
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = var.provider_instance_size_name
- }
- provider_name = var.provider_name
- backing_provider_name = var.backing_provider_name
- region_name = "US_EAST_1"
- priority = 7
- }
- ]
- }
- ]
-
- tags = {
- environment = "dev"
- }
-}
-
-resource "mongodbatlas_project" "project" {
- name = "TenantUpgradeTest"
- org_id = var.atlas_org_id
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/variables.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/variables.tf
deleted file mode 100644
index 6d12f18d42..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/variables.tf
+++ /dev/null
@@ -1,27 +0,0 @@
-variable "atlas_org_id" {
- description = "Atlas organization id"
- type = string
-}
-variable "public_key" {
- description = "Public API key to authenticate to Atlas"
- type = string
-}
-variable "private_key" {
- description = "Private API key to authenticate to Atlas"
- type = string
-}
-variable "provider_name" {
- description = "Atlas cluster provider name"
- default = "AWS"
- type = string
-}
-variable "backing_provider_name" {
- description = "Atlas cluster backing provider name"
- default = null # so it's not set when upgrading
- type = string
-}
-variable "provider_instance_size_name" {
- description = "Atlas cluster provider instance name"
- default = "M10"
- type = string
-}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/README.md b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/README.md
deleted file mode 100644
index 73defa2b3c..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/README.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# MongoDB Atlas Provider -- Cluster with pinned FCV (Preview for MongoDB Atlas Provider 2.0.0)
-
-Example shows how to pin the FCV of a cluster making use of `pinned_fcv` block. This enables direct control to pin cluster’s FCV before performing an upgrade on the `mongo_db_major_version`. Users can then downgrade to the previous MongoDB version with minimal risk if desired, as the FCV is maintained.
-
-The unpin operation can be performed by removing the `pinned_fcv` block. **Note**: Once FCV is unpinned it will not be possible to downgrade the `mongo_db_major_version`. If FCV is unpinned past the expiration date the `pinned_fcv` attribute must be removed.
-
-The following [knowledge hub article](https://kb.corp.mongodb.com/article/000021785/) and [FCV documentation](https://www.mongodb.com/docs/atlas/tutorial/major-version-change/#manage-feature-compatibility--fcv--during-upgrades) can be referenced for more details.
-
-It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the enviroment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
-You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
-
-## Dependencies
-
-* Terraform MongoDB Atlas Provider v1.29.0
-* A MongoDB Atlas account
-
-```
-Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
-```
-
-
-## Usage
-**1\. Ensure your MongoDB Atlas credentials are set up.**
-
-Following `variables.tf` file create **terraform.tfvars** file with all the variable values, as demonstrated below:
-```
-public_key = ""
-private_key = ""
-atlas_project_id = ""
-fcv_expiration_date = ""
-```
-
-**2\. Review the Terraform plan.**
-
-Execute the following command.
-
-``` bash
-$ terraform plan
-```
-This project currently supports the following deployments:
-
-- A Cluster with pinned FCV configured.
-
-**3\. Execute the Terraform apply.**
-
-Execute the following plan to provision the Atlas Project and Cluster resources.
-
-``` bash
-$ terraform apply
-```
-
-**4\. Destroy the resources.**
-
-Once you finished your testing, ensure you destroy the resources to avoid unnecessary Atlas charges.
-
-``` bash
-$ terraform destroy
-```
-
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/main.tf b/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/main.tf
deleted file mode 100644
index 55851596dd..0000000000
--- a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/main.tf
+++ /dev/null
@@ -1,36 +0,0 @@
-provider "mongodbatlas" {
- public_key = var.public_key
- private_key = var.private_key
-}
-
-resource "mongodbatlas_advanced_cluster" "cluster" {
- project_id = var.project_id
- name = "cluster"
- cluster_type = "REPLICASET"
-
- mongo_db_major_version = "7.0"
-
- pinned_fcv = {
- expiration_date = var.fcv_expiration_date # e.g. format: "2024-11-22T10:50:00Z". Hashicorp time provider https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/offset can be used to compute this string value.
- }
-
- replication_specs = [
- {
- region_configs = [
- {
- electable_specs = {
- instance_size = "M10"
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- ]
- }
- ]
-}
-
-output "feature_compatibility_version" {
- value = mongodbatlas_advanced_cluster.cluster.pinned_fcv.version
-}
-
diff --git a/examples/mongodbatlas_advanced_cluster/README.md b/examples/mongodbatlas_advanced_cluster/README.md
new file mode 100644
index 0000000000..d0d27a7ed3
--- /dev/null
+++ b/examples/mongodbatlas_advanced_cluster/README.md
@@ -0,0 +1,5 @@
+# Examples for mongodbatlas_advanced_cluster
+
+This directory contains examples of using `mongodbatlas_advanced_cluster`.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster).
diff --git a/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/README.md b/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/README.md
index fb2d7af91c..eb897cfc36 100644
--- a/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/README.md
+++ b/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/README.md
@@ -1,15 +1,19 @@
-# MongoDB Atlas Provider -- Global Cluster
+# MongoDB Atlas Provider -- Asymmetric Sharded Cluster (Preview for MongoDB Atlas Provider 2.0.0)
+
This example creates a project and a Sharded Cluster with 4 independent shards with varying cluster tiers.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise, the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
## Dependencies
-* Terraform MongoDB Atlas Provider v1.18.0
+* Terraform MongoDB Atlas Provider v1.29.0
* A MongoDB Atlas account
```
Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.18.0
++ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
```
diff --git a/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/main.tf b/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/main.tf
index 36ecccbe96..1a4f7fbc26 100644
--- a/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/asymmetric-sharded-cluster/main.tf
@@ -9,67 +9,76 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "SHARDED"
backup_enabled = true
- replication_specs { # shard 1 - M30 instance size
- region_configs {
- electable_specs {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
+ replication_specs = [
+ { # shard 1 - M30 instance size
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ },
+ { # shard 2 - M30 instance size
- replication_specs { # shard 2 - M30 instance size
- region_configs {
- electable_specs {
- instance_size = "M30"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ },
+ { # shard 3 - M40 instance size
- replication_specs { # shard 3 - M40 instance size
- region_configs {
- electable_specs {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M40"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ },
+ { # shard 4 - M40 instance size
- replication_specs { # shard 4 - M40 instance size
- region_configs {
- electable_specs {
- instance_size = "M40"
- disk_iops = 3000
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M40"
+ disk_iops = 3000
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
}
- }
+ ]
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = true
oplog_size_mb = 999
sample_refresh_interval_bi_connector = 300
}
- tags {
- key = "environment"
- value = "dev"
+ tags = {
+ environment = "dev"
}
}
diff --git a/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/README.md b/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/README.md
index 7223995f99..58b25ea53f 100644
--- a/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/README.md
+++ b/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/README.md
@@ -1,16 +1,19 @@
-# MongoDB Atlas Provider -- Sharded Cluster with Independent Shard Auto-scaling
+# MongoDB Atlas Provider -- Sharded Cluster with Independent Shard Auto-scaling (Preview for MongoDB Atlas Provider 2.0.0)
This example creates a Sharded Cluster with 2 shards defining electable and analytics nodes. Compute auto-scaling is enabled for both `electable_specs` and `analytics_specs`, while also leveraging the [New Sharding Configuration](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema) by defining each shard with its individual `replication_specs`. This enables scaling of each shard to be independent. Please reference the [Use Auto-Scaling Per Shard](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/advanced-cluster-new-sharding-schema#use-auto-scaling-per-shard) section for more details.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise, the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
## Dependencies
-* Terraform MongoDB Atlas Provider v1.23.0
+* Terraform MongoDB Atlas Provider v1.29.0
* A MongoDB Atlas account
```
Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.23.0
++ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
```
diff --git a/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/main.tf b/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/main.tf
index a628d19e4c..d12e94453e 100644
--- a/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/auto-scaling-per-shard/main.tf
@@ -7,60 +7,67 @@ resource "mongodbatlas_advanced_cluster" "test" {
project_id = mongodbatlas_project.project.id
name = "AutoScalingCluster"
cluster_type = "SHARDED"
- replication_specs { # first shard
- region_configs {
- auto_scaling {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- analytics_auto_scaling {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- electable_specs {
- instance_size = "M40"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M40"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
+ replication_specs = [
+ { # first shard
+ region_configs = [
+ {
+ auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M60"
+ }
+ analytics_auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M60"
+ }
+ electable_specs = {
+ instance_size = "M40"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M40"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ zone_name = "Zone 1"
+ },
+ { # second shard
+ region_configs = [
+ {
+ auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M60"
+ }
+ analytics_auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M60"
+ }
+ electable_specs = {
+ instance_size = "M40"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M40"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
+ zone_name = "Zone 1"
}
- zone_name = "Zone 1"
- }
- replication_specs { # second shard
- region_configs {
- auto_scaling {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- analytics_auto_scaling {
- compute_enabled = true
- compute_max_instance_size = "M60"
- }
- electable_specs {
- instance_size = "M40"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M40"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- zone_name = "Zone 1"
- }
+ ]
+
lifecycle { # avoids non-empty plans as instance size start to scale from initial values
ignore_changes = [
- replication_specs[0].region_configs[0].electable_specs[0].instance_size,
- replication_specs[0].region_configs[0].analytics_specs[0].instance_size,
- replication_specs[1].region_configs[0].electable_specs[0].instance_size,
- replication_specs[1].region_configs[0].analytics_specs[0].instance_size
+ replication_specs[0].region_configs[0].electable_specs.instance_size,
+ replication_specs[0].region_configs[0].analytics_specs.instance_size,
+ replication_specs[1].region_configs[0].electable_specs.instance_size,
+ replication_specs[1].region_configs[0].analytics_specs.instance_size
]
}
}
diff --git a/examples/mongodbatlas_advanced_cluster/flex-upgrade/README.md b/examples/mongodbatlas_advanced_cluster/flex-upgrade/README.md
index 20acc5096a..b43738f271 100644
--- a/examples/mongodbatlas_advanced_cluster/flex-upgrade/README.md
+++ b/examples/mongodbatlas_advanced_cluster/flex-upgrade/README.md
@@ -1,6 +1,11 @@
-# MongoDB Atlas Provider -- Flex cluster
+# MongoDB Atlas Provider -- Flex cluster (Preview for MongoDB Atlas Provider 2.0.0)
+
This example creates a project and a Flex cluster using `mongodbatlas_advanced_cluster` resource. It is intended to show how to create a Flex cluster, upgrade an M0 cluster to Flex and upgrade a Flex cluster to a Dedicated cluster.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise, the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+
Variables Required:
- `atlas_org_id`: ID of the Atlas organization
- `public_key`: Atlas public key
@@ -32,20 +37,24 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
name = "ClusterToUpgrade"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M0"
- node_count = null # equivalent to not setting a value
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M0"
+ node_count = null # equivalent to not setting a value
+ }
+ provider_name = "TENANT"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
- tags {
+ tags = {
key = "environment"
value = "dev"
}
@@ -64,26 +73,31 @@ node_count = null
```
The configuration will be equivalent to:
+
```terraform
resource "mongodbatlas_advanced_cluster" "cluster" {
project_id = mongodbatlas_project.project.id
name = "ClusterToUpgrade"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = null # equivalent to not setting a value
- node_count = null # equivalent to not setting a value
- }
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = null # equivalent to not setting a value
+ node_count = null # equivalent to not setting a value
+ }
+ provider_name = "FLEX"
+ backing_provider_name = "AWS"
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
- tags {
+ tags = {
key = "environment"
value = "dev"
}
@@ -102,26 +116,31 @@ node_count = 3
```
The configuration will be equivalent to:
+
```terraform
resource "mongodbatlas_advanced_cluster" "cluster" {
project_id = mongodbatlas_project.project.id
name = "ClusterToUpgrade"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- provider_name = "AWS"
- backing_provider_name = null # equivalent to not setting a value
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ backing_provider_name = null # equivalent to not setting a value
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
- tags {
+ tags = {
key = "environment"
value = "dev"
}
diff --git a/examples/mongodbatlas_advanced_cluster/flex-upgrade/main.tf b/examples/mongodbatlas_advanced_cluster/flex-upgrade/main.tf
index 4a13d959bf..382a4a3c9d 100644
--- a/examples/mongodbatlas_advanced_cluster/flex-upgrade/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/flex-upgrade/main.tf
@@ -8,20 +8,24 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
name = "ClusterToUpgrade"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = var.provider_instance_size_name
- node_count = var.node_count
- }
- provider_name = var.provider_name
- backing_provider_name = var.backing_provider_name
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = var.provider_instance_size_name
+ node_count = var.node_count
+ }
+ provider_name = var.provider_name
+ backing_provider_name = var.backing_provider_name
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
- tags {
+ tags = {
key = "environment"
value = "dev"
}
diff --git a/examples/mongodbatlas_advanced_cluster/global-cluster/README.md b/examples/mongodbatlas_advanced_cluster/global-cluster/README.md
index 79b5b8026e..441fdc43d8 100644
--- a/examples/mongodbatlas_advanced_cluster/global-cluster/README.md
+++ b/examples/mongodbatlas_advanced_cluster/global-cluster/README.md
@@ -1,15 +1,19 @@
-# MongoDB Atlas Provider -- Global Cluster
+# MongoDB Atlas Provider -- Global Cluster (Preview for MongoDB Atlas Provider 2.0.0)
+
This example creates a project and a Global Cluster with 2 zones where each zone has two shards.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
## Dependencies
-* Terraform MongoDB Atlas Provider v1.10.0
+* Terraform MongoDB Atlas Provider v1.29.0
* A MongoDB Atlas account
```
Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.10.0
++ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
```
diff --git a/examples/mongodbatlas_advanced_cluster/global-cluster/main.tf b/examples/mongodbatlas_advanced_cluster/global-cluster/main.tf
index af3644402b..40af5e26c9 100644
--- a/examples/mongodbatlas_advanced_cluster/global-cluster/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/global-cluster/main.tf
@@ -13,111 +13,116 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
backup_enabled = true
- replication_specs { # shard 1 - zone n1
- zone_name = "zone n1"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # shard 2 - zone n1
- zone_name = "zone n1"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # shard 1 - zone n2
- zone_name = "zone n2"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
+ replication_specs = [
+ { # shard 1 - zone n1
+ zone_name = "zone n1"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
+ },
+ { # shard 2 - zone n1
+
+ zone_name = "zone n1"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
+ },
+ { # shard 1 - zone n2
+
+ zone_name = "zone n2"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "EUROPE_NORTH"
+ }
+ ]
+ },
+ { # shard 2 - zone n2
+
+ zone_name = "zone n2"
+
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "EUROPE_NORTH"
+ }
+ ]
}
- }
-
- replication_specs { # shard 2 - zone n2
- zone_name = "zone n2"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "EUROPE_NORTH"
- }
- }
+ ]
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = true
oplog_size_mb = 999
sample_refresh_interval_bi_connector = 300
}
- tags {
- key = "environment"
- value = "dev"
+ tags = {
+ environment = "dev"
}
}
diff --git a/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md b/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md
index 1713deedef..454faeefe6 100644
--- a/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md
+++ b/examples/mongodbatlas_advanced_cluster/multi-cloud/README.md
@@ -1,15 +1,20 @@
-# MongoDB Atlas Provider -- Multi-Cloud Advanced Cluster
+# MongoDB Atlas Provider -- Multi-Cloud Advanced Cluster (Preview for MongoDB Atlas Provider 2.0.0)
+
This example creates a project and a Multi Cloud Advanced Cluster with 2 shards.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+
## Dependencies
-* Terraform MongoDB Atlas Provider v1.10.0
+* Terraform MongoDB Atlas Provider v1.29.0
* A MongoDB Atlas account
```
Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.10.0
++ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
```
diff --git a/examples/mongodbatlas_advanced_cluster/multi-cloud/main.tf b/examples/mongodbatlas_advanced_cluster/multi-cloud/main.tf
index 464e73c641..f8fc6118c0 100644
--- a/examples/mongodbatlas_advanced_cluster/multi-cloud/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/multi-cloud/main.tf
@@ -9,75 +9,77 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "SHARDED"
backup_enabled = true
- replication_specs { # shard 1
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
+ replication_specs = [
+ { # shard 1
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M30"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ analytics_specs = {
+ instance_size = "M30"
+ node_count = 1
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
+ },
+ { # shard 2
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M30"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ analytics_specs = {
+ instance_size = "M30"
+ node_count = 1
+ }
+ provider_name = "AZURE"
+ priority = 6
+ region_name = "US_EAST_2"
+ }
+ ]
}
+ ]
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- replication_specs { # shard 2
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- analytics_specs {
- instance_size = "M30"
- node_count = 1
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
-
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = true
oplog_size_mb = 999
sample_refresh_interval_bi_connector = 300
}
- tags {
- key = "environment"
- value = "dev"
+ tags = {
+ "environment" = "dev"
}
}
diff --git a/examples/mongodbatlas_advanced_cluster/tenant-upgrade/README.md b/examples/mongodbatlas_advanced_cluster/tenant-upgrade/README.md
index 21a5ee78b1..81dc6b9c4e 100644
--- a/examples/mongodbatlas_advanced_cluster/tenant-upgrade/README.md
+++ b/examples/mongodbatlas_advanced_cluster/tenant-upgrade/README.md
@@ -1,6 +1,11 @@
-# MongoDB Atlas Provider -- Advanced Cluster Tenant Upgrade
+# MongoDB Atlas Provider -- Advanced Cluster Tenant Upgrade (Preview for MongoDB Atlas Provider 2.0.0)
+
This example creates a project and cluster. It is intended to show how to upgrade from shared, aka tenant, to dedicated tier.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+
Variables Required:
- `atlas_org_id`: ID of the Atlas organization
- `public_key`: Atlas public key
diff --git a/examples/mongodbatlas_advanced_cluster/tenant-upgrade/main.tf b/examples/mongodbatlas_advanced_cluster/tenant-upgrade/main.tf
index 863eb1b240..cd25a31ee8 100644
--- a/examples/mongodbatlas_advanced_cluster/tenant-upgrade/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/tenant-upgrade/main.tf
@@ -8,21 +8,24 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
name = "ClusterToUpgrade"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = var.provider_instance_size_name
- }
- provider_name = var.provider_name
- backing_provider_name = var.backing_provider_name
- region_name = "US_EAST_1"
- priority = 7
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = var.provider_instance_size_name
+ }
+ provider_name = var.provider_name
+ backing_provider_name = var.backing_provider_name
+ region_name = "US_EAST_1"
+ priority = 7
+ }
+ ]
}
- }
+ ]
- tags {
- key = "environment"
- value = "dev"
+ tags = {
+ environment = "dev"
}
}
diff --git a/examples/mongodbatlas_advanced_cluster/tenant-upgrade/variables.tf b/examples/mongodbatlas_advanced_cluster/tenant-upgrade/variables.tf
index 281c2f9f6d..6d12f18d42 100644
--- a/examples/mongodbatlas_advanced_cluster/tenant-upgrade/variables.tf
+++ b/examples/mongodbatlas_advanced_cluster/tenant-upgrade/variables.tf
@@ -17,6 +17,7 @@ variable "provider_name" {
}
variable "backing_provider_name" {
description = "Atlas cluster backing provider name"
+ default = null # so it's not set when upgrading
type = string
}
variable "provider_instance_size_name" {
diff --git a/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/README.md b/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/README.md
index 4acb9d2ec1..73defa2b3c 100644
--- a/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/README.md
+++ b/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/README.md
@@ -1,4 +1,4 @@
-# MongoDB Atlas Provider -- Cluster with pinned FCV
+# MongoDB Atlas Provider -- Cluster with pinned FCV (Preview for MongoDB Atlas Provider 2.0.0)
Example shows how to pin the FCV of a cluster making use of `pinned_fcv` block. This enables direct control to pin cluster’s FCV before performing an upgrade on the `mongo_db_major_version`. Users can then downgrade to the previous MongoDB version with minimal risk if desired, as the FCV is maintained.
@@ -6,14 +6,18 @@ The unpin operation can be performed by removing the `pinned_fcv` block. **Note*
The following [knowledge hub article](https://kb.corp.mongodb.com/article/000021785/) and [FCV documentation](https://www.mongodb.com/docs/atlas/tutorial/major-version-change/#manage-feature-compatibility--fcv--during-upgrades) can be referenced for more details.
+It uses the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. To enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`; otherwise the current version will be used.
+
+You can find more information in the [resource documentation page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
+
## Dependencies
-* Terraform MongoDB Atlas Provider v1.23.0
+* Terraform MongoDB Atlas Provider v1.29.0
* A MongoDB Atlas account
```
Terraform >= 0.13
-+ provider registry.terraform.io/terraform-providers/mongodbatlas v1.23.0
++ provider registry.terraform.io/terraform-providers/mongodbatlas v1.29.0
```
diff --git a/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/main.tf b/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/main.tf
index e9a66f972a..55851596dd 100644
--- a/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/main.tf
+++ b/examples/mongodbatlas_advanced_cluster/version-upgrade-with-pinned-fcv/main.tf
@@ -10,24 +10,27 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
mongo_db_major_version = "7.0"
- pinned_fcv {
+ pinned_fcv = {
expiration_date = var.fcv_expiration_date # e.g. format: "2024-11-22T10:50:00Z". Hashicorp time provider https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/offset can be used to compute this string value.
}
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
+ replication_specs = [
+ {
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = "M10"
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
+ ]
}
- }
+ ]
}
output "feature_compatibility_version" {
- value = mongodbatlas_advanced_cluster.cluster.pinned_fcv[0].version
+ value = mongodbatlas_advanced_cluster.cluster.pinned_fcv.version
}
diff --git a/examples/mongodbatlas_backup_compliance_policy/resource/README.md b/examples/mongodbatlas_backup_compliance_policy/resource/README.md
index 31d2b6855d..ff584e8c3c 100644
--- a/examples/mongodbatlas_backup_compliance_policy/resource/README.md
+++ b/examples/mongodbatlas_backup_compliance_policy/resource/README.md
@@ -34,8 +34,6 @@ By following the steps below you will see how to avoid this error.
## Usage
-**Note**: This directory contains an example of using the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`. In order to enable the Preview, you must set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true`, otherwise the current version will be used.
-
You can find more info in the [resource doc page](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/advanced_cluster%2520%2528preview%2520provider%25202.0.0%2529).
diff --git a/examples/mongodbatlas_cloud_backup_schedule/main.tf b/examples/mongodbatlas_cloud_backup_schedule/main.tf
index 4842cbfce3..cf7239354e 100644
--- a/examples/mongodbatlas_cloud_backup_schedule/main.tf
+++ b/examples/mongodbatlas_cloud_backup_schedule/main.tf
@@ -16,13 +16,13 @@ resource "mongodbatlas_advanced_cluster" "automated_backup_test_cluster" {
name = each.value.name
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
@@ -30,8 +30,8 @@ resource "mongodbatlas_advanced_cluster" "automated_backup_test_cluster" {
provider_name = "AWS"
region_name = each.value.region
priority = 7
- }
- }
+ }]
+ }]
backup_enabled = true # enable cloud backup snapshots
pit_enabled = true
@@ -55,7 +55,7 @@ resource "mongodbatlas_cloud_backup_schedule" "test" {
"YEARLY",
"ON_DEMAND"]
region_name = "US_WEST_1"
- zone_id = mongodbatlas_advanced_cluster.automated_backup_test_cluster[each.key].replication_specs[0].zone_id[0]
+ zone_id = mongodbatlas_advanced_cluster.automated_backup_test_cluster[each.key].replication_specs[0].zone_id
should_copy_oplogs = true
}
diff --git a/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf b/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf
index 952daf02b9..eba0bf95b1 100644
--- a/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf
+++ b/examples/mongodbatlas_cloud_backup_snapshot_export_job/main.tf
@@ -28,24 +28,28 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_snapshot" "test" {
- project_id = var.project_id
- cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
- description = "myDescription"
- retention_in_days = 1
+ project_id = var.project_id
+ cluster_name = mongodbatlas_advanced_cluster.my_cluster.name
+ description = "myDescription"
+ retention_in_days = 1
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ }
}
resource "mongodbatlas_cloud_backup_snapshot_export_bucket" "test" {
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/README.md b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/README.md
similarity index 100%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/README.md
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/README.md
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/main.tf b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/main.tf
similarity index 95%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/main.tf
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/main.tf
index 1eec516292..c8be8fe17a 100644
--- a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/main.tf
+++ b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/main.tf
@@ -10,11 +10,9 @@ resource "mongodbatlas_advanced_cluster" "advanced_cluster_test" {
name = var.cluster_name
cluster_type = "REPLICASET"
- replication_specs {
- num_shards = 1
-
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
@@ -22,8 +20,8 @@ resource "mongodbatlas_advanced_cluster" "advanced_cluster_test" {
provider_name = "AWS"
region_name = "US_EAST_1"
priority = 7
- }
- }
+ }]
+ }]
backup_enabled = true # enable cloud backup snapshots
pit_enabled = true # Flag that indicates whether the cluster uses continuous cloud backups
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/variables.tf b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/variables.tf
similarity index 100%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/variables.tf
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/variables.tf
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/versions.tf b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/multi-cloud/versions.tf
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time-advanced-cluster/versions.tf
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/README.md b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/README.md
similarity index 100%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/README.md
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/README.md
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/main.tf
similarity index 95%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/main.tf
index 1dd1453de7..b112100468 100644
--- a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/main.tf
+++ b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/main.tf
@@ -14,17 +14,17 @@ resource "mongodbatlas_advanced_cluster" "cluster_test" {
pit_enabled = true
retain_backups_enabled = true # keep the backup snapshopts once the cluster is deleted
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_snapshot" "test" {
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/variables.tf b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/variables.tf
similarity index 100%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/variables.tf
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/variables.tf
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/versions.tf b/examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/tenant-upgrade/versions.tf
rename to examples/mongodbatlas_cloud_backup_snapshot_restore_job/point-in-time/versions.tf
diff --git a/examples/mongodbatlas_cloud_user_org_assignment/README.md b/examples/mongodbatlas_cloud_user_org_assignment/README.md
new file mode 100644
index 0000000000..ed79edab98
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_org_assignment/README.md
@@ -0,0 +1,29 @@
+# Example: mongodbatlas_cloud_user_org_assignment
+
+This example demonstrates how to use the `mongodbatlas_cloud_user_org_assignment` resource to assign a user to an existing organization with specified roles in MongoDB Atlas.
+
+## Usage
+
+```hcl
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
+
+resource "mongodbatlas_cloud_user_org_assignment" "example" {
+ org_id = var.org_id
+ username = var.user_email
+ roles = {
+ org_roles = ["ORG_MEMBER"]
+ }
+}
+```
+
+You must set the following variables:
+
+- `public_key`: Your MongoDB Atlas API public key.
+- `private_key`: Your MongoDB Atlas API private key.
+- `org_id`: The ID of the organization to assign the user to.
+- `user_email`: The email address of the user to assign.
+
+To learn more, see the [MongoDB Cloud Users Documentation](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createorganizationuser).
\ No newline at end of file
diff --git a/examples/mongodbatlas_cloud_user_org_assignment/main.tf b/examples/mongodbatlas_cloud_user_org_assignment/main.tf
new file mode 100644
index 0000000000..cf107dc0c1
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_org_assignment/main.tf
@@ -0,0 +1,17 @@
+resource "mongodbatlas_cloud_user_org_assignment" "example" {
+ org_id = var.org_id
+ username = var.user_email
+ roles = {
+ org_roles = ["ORG_MEMBER"]
+ }
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "example_username" {
+ org_id = var.org_id
+ username = mongodbatlas_cloud_user_org_assignment.example.username
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "example_user_id" {
+ org_id = var.org_id
+ user_id = mongodbatlas_cloud_user_org_assignment.example.user_id
+}
diff --git a/examples/mongodbatlas_cloud_user_org_assignment/outputs.tf b/examples/mongodbatlas_cloud_user_org_assignment/outputs.tf
new file mode 100644
index 0000000000..b0e268b286
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_org_assignment/outputs.tf
@@ -0,0 +1,14 @@
+output "user_from_username" {
+ description = "User details retrieved by username"
+ value = data.mongodbatlas_cloud_user_org_assignment.example_username
+}
+
+output "user_from_user_id" {
+ description = "User details retrieved by user_id"
+ value = data.mongodbatlas_cloud_user_org_assignment.example_user_id
+}
+
+output "created_user" {
+ description = "Details of the created user"
+ value = mongodbatlas_cloud_user_org_assignment.example
+}
diff --git a/examples/mongodbatlas_cloud_user_org_assignment/provider.tf b/examples/mongodbatlas_cloud_user_org_assignment/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_org_assignment/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/mongodbatlas_cloud_user_org_assignment/variables.tf b/examples/mongodbatlas_cloud_user_org_assignment/variables.tf
new file mode 100644
index 0000000000..8b2468f08b
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_org_assignment/variables.tf
@@ -0,0 +1,21 @@
+variable "org_id" {
+ description = "The MongoDB Atlas organization ID"
+ type = string
+}
+
+variable "user_email" {
+ description = "The email address of the user"
+ type = string
+}
+
+variable "public_key" {
+ description = "Atlas API public key"
+ type = string
+ default = ""
+}
+
+variable "private_key" {
+ description = "Atlas API private key"
+ type = string
+ default = ""
+}
diff --git a/examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/versions.tf b/examples/mongodbatlas_cloud_user_org_assignment/versions.tf
similarity index 100%
rename from examples/mongodbatlas_advanced_cluster (preview provider 2.0.0)/version-upgrade-with-pinned-fcv/versions.tf
rename to examples/mongodbatlas_cloud_user_org_assignment/versions.tf
diff --git a/examples/mongodbatlas_cloud_user_project_assignment/README.md b/examples/mongodbatlas_cloud_user_project_assignment/README.md
new file mode 100644
index 0000000000..b842499236
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_project_assignment/README.md
@@ -0,0 +1,27 @@
+# Example: mongodbatlas_cloud_user_project_assignment
+
+This example demonstrates how to use the `mongodbatlas_cloud_user_project_assignment` resource to assign a user to a MongoDB Atlas project with specified roles.
+
+## Usage
+
+```hcl
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
+
+resource "mongodbatlas_cloud_user_project_assignment" "example" {
+ project_id = var.project_id
+ username = var.user_email
+ roles = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+```
+
+You must set the following variables:
+
+- `public_key`: Your MongoDB Atlas API public key.
+- `private_key`: Your MongoDB Atlas API private key.
+- `project_id`: The ID of the MongoDB Atlas project to assign the user to.
+- `user_email`: The email address of the user to assign to the project.
+
+To learn more, see the [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addprojectuser) documentation.
diff --git a/examples/mongodbatlas_cloud_user_project_assignment/main.tf b/examples/mongodbatlas_cloud_user_project_assignment/main.tf
new file mode 100644
index 0000000000..4f4911d61b
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_project_assignment/main.tf
@@ -0,0 +1,15 @@
+resource "mongodbatlas_cloud_user_project_assignment" "example" {
+ project_id = var.project_id
+ username = var.user_email
+ roles = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_cloud_user_project_assignment" "example_username" {
+ project_id = var.project_id
+ username = mongodbatlas_cloud_user_project_assignment.example.username
+}
+
+data "mongodbatlas_cloud_user_project_assignment" "example_user_id" {
+ project_id = var.project_id
+ user_id = mongodbatlas_cloud_user_project_assignment.example.user_id
+}
diff --git a/examples/mongodbatlas_cloud_user_project_assignment/outputs.tf b/examples/mongodbatlas_cloud_user_project_assignment/outputs.tf
new file mode 100644
index 0000000000..7764af11b8
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_project_assignment/outputs.tf
@@ -0,0 +1,14 @@
+output "user_from_username" {
+ description = "Project assignment details for the user retrieved by username"
+ value = data.mongodbatlas_cloud_user_project_assignment.example_username
+}
+
+output "user_from_user_id" {
+ description = "Project assignment details for the user retrieved by user_id"
+ value = data.mongodbatlas_cloud_user_project_assignment.example_user_id
+}
+
+output "assigned_user" {
+ description = "Details of the assigned user"
+ value = mongodbatlas_cloud_user_project_assignment.example
+}
diff --git a/examples/mongodbatlas_cloud_user_project_assignment/provider.tf b/examples/mongodbatlas_cloud_user_project_assignment/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_project_assignment/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/mongodbatlas_cloud_user_project_assignment/variables.tf b/examples/mongodbatlas_cloud_user_project_assignment/variables.tf
new file mode 100644
index 0000000000..4d2052131f
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_project_assignment/variables.tf
@@ -0,0 +1,21 @@
+variable "project_id" {
+ description = "The MongoDB Atlas project ID"
+ type = string
+}
+
+variable "user_email" {
+ description = "The email address of the user"
+ type = string
+}
+
+variable "public_key" {
+ description = "Atlas API public key"
+ type = string
+ default = ""
+}
+
+variable "private_key" {
+ description = "Atlas API private key"
+ type = string
+ default = ""
+}
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/versions.tf b/examples/mongodbatlas_cloud_user_project_assignment/versions.tf
similarity index 100%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time-advanced-cluster/versions.tf
rename to examples/mongodbatlas_cloud_user_project_assignment/versions.tf
diff --git a/examples/mongodbatlas_cloud_user_team_assignment/README.md b/examples/mongodbatlas_cloud_user_team_assignment/README.md
new file mode 100644
index 0000000000..0d23e276ad
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_team_assignment/README.md
@@ -0,0 +1,28 @@
+# Example: mongodbatlas_cloud_user_team_assignment
+
+This example demonstrates how to use the `mongodbatlas_cloud_user_team_assignment` resource to assign a user to a team within a MongoDB Atlas organization.
+
+## Usage
+
+```hcl
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
+
+resource "mongodbatlas_cloud_user_team_assignment" "example" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = var.user_id
+}
+```
+
+You must set the following variables:
+- `public_key`: Your MongoDB Atlas API public key.
+- `private_key`: Your MongoDB Atlas API private key.
+- `org_id`: The ID of the MongoDB Atlas organization.
+- `team_id`: The ID of the team to assign the user to.
+- `user_id`: The ID of the user to assign to the team.
+
+
+To learn more, see the [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addusertoteam) documentation.
diff --git a/examples/mongodbatlas_cloud_user_team_assignment/main.tf b/examples/mongodbatlas_cloud_user_team_assignment/main.tf
new file mode 100644
index 0000000000..7085f437e9
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_team_assignment/main.tf
@@ -0,0 +1,17 @@
+resource "mongodbatlas_cloud_user_team_assignment" "example" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = var.user_id
+}
+
+data "mongodbatlas_cloud_user_team_assignment" "example_user_id" {
+ org_id = var.org_id
+ team_id = var.team_id
+ user_id = mongodbatlas_cloud_user_team_assignment.example.user_id
+}
+
+data "mongodbatlas_cloud_user_team_assignment" "example_username" {
+ org_id = var.org_id
+ team_id = var.team_id
+ username = mongodbatlas_cloud_user_team_assignment.example.username
+}
diff --git a/examples/mongodbatlas_cloud_user_team_assignment/outputs.tf b/examples/mongodbatlas_cloud_user_team_assignment/outputs.tf
new file mode 100644
index 0000000000..27c97a3703
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_team_assignment/outputs.tf
@@ -0,0 +1,14 @@
+output "user_from_username" {
+ description = "User details retrieved by username"
+ value = data.mongodbatlas_cloud_user_team_assignment.example_username
+}
+
+output "user_from_user_id" {
+ description = "User details retrieved by user_id"
+ value = data.mongodbatlas_cloud_user_team_assignment.example_user_id
+}
+
+output "assigned_user" {
+ description = "Details of the user assigned to the team"
+ value = mongodbatlas_cloud_user_team_assignment.example
+}
diff --git a/examples/mongodbatlas_cloud_user_team_assignment/provider.tf b/examples/mongodbatlas_cloud_user_team_assignment/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_team_assignment/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/mongodbatlas_cloud_user_team_assignment/variables.tf b/examples/mongodbatlas_cloud_user_team_assignment/variables.tf
new file mode 100644
index 0000000000..1be39dcea6
--- /dev/null
+++ b/examples/mongodbatlas_cloud_user_team_assignment/variables.tf
@@ -0,0 +1,26 @@
+variable "org_id" {
+ description = "The MongoDB Atlas organization ID"
+ type = string
+}
+
+variable "team_id" {
+ description = "The team ID"
+ type = string
+}
+
+variable "user_id" {
+ description = "The user ID"
+ type = string
+}
+
+variable "public_key" {
+ description = "Atlas API public key"
+ type = string
+ default = ""
+}
+
+variable "private_key" {
+ description = "Atlas API private key"
+ type = string
+ default = ""
+}
diff --git a/examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/versions.tf b/examples/mongodbatlas_cloud_user_team_assignment/versions.tf
similarity index 100%
rename from examples/mongodbatlas_cloud_provider_snapshot_restore_job/point-in-time/versions.tf
rename to examples/mongodbatlas_cloud_user_team_assignment/versions.tf
diff --git a/examples/mongodbatlas_cluster_outage_simulation/main.tf b/examples/mongodbatlas_cluster_outage_simulation/main.tf
index 84bacf43d4..3d9beaf547 100644
--- a/examples/mongodbatlas_cluster_outage_simulation/main.tf
+++ b/examples/mongodbatlas_cluster_outage_simulation/main.tf
@@ -3,41 +3,39 @@ resource "mongodbatlas_advanced_cluster" "atlas_cluster" {
name = var.atlas_cluster_name
cluster_type = var.atlas_cluster_type
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = var.provider_instance_size_name
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = var.provider_instance_size_name
node_count = 1
}
provider_name = var.provider_name
priority = 7
region_name = "US_EAST_1"
- }
-
- region_configs {
- electable_specs {
- instance_size = var.provider_instance_size_name
- node_count = 2
- }
- provider_name = var.provider_name
- priority = 6
- region_name = "US_EAST_2"
- }
-
- region_configs {
- electable_specs {
- instance_size = var.provider_instance_size_name
- node_count = 2
- }
- provider_name = var.provider_name
- priority = 5
- region_name = "US_WEST_1"
- }
- }
+ },
+ {
+ electable_specs = {
+ instance_size = var.provider_instance_size_name
+ node_count = 2
+ }
+ provider_name = var.provider_name
+ priority = 6
+ region_name = "US_EAST_2"
+ },
+ {
+ electable_specs = {
+ instance_size = var.provider_instance_size_name
+ node_count = 2
+ }
+ provider_name = var.provider_name
+ priority = 5
+ region_name = "US_WEST_1"
+ }]
+ }]
}
resource "mongodbatlas_cluster_outage_simulation" "outage_simulation" {
diff --git a/examples/mongodbatlas_data_lake_pipeline/main.tf b/examples/mongodbatlas_data_lake_pipeline/main.tf
index 1edec72001..cc4a8ad0cd 100644
--- a/examples/mongodbatlas_data_lake_pipeline/main.tf
+++ b/examples/mongodbatlas_data_lake_pipeline/main.tf
@@ -8,19 +8,17 @@ resource "mongodbatlas_advanced_cluster" "automated_backup_test" {
name = var.cluster_name
cluster_type = "REPLICASET"
- replication_specs {
- num_shards = 1
-
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
}
provider_name = "GCP"
region_name = "US_EAST_1"
priority = 7
- }
- }
+ }]
+ }]
backup_enabled = true # enable cloud backup snapshots
}
@@ -54,4 +52,3 @@ resource "mongodbatlas_data_lake_pipeline" "test" {
}
}
-
diff --git a/examples/mongodbatlas_database_user/atlas_cluster.tf b/examples/mongodbatlas_database_user/atlas_cluster.tf
index 0c19072a80..8b3da00eb7 100644
--- a/examples/mongodbatlas_database_user/atlas_cluster.tf
+++ b/examples/mongodbatlas_database_user/atlas_cluster.tf
@@ -4,17 +4,17 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = var.region
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
output "atlasclusterstring" {
diff --git a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf
index 0c42af7f6b..4ccb01afe3 100644
--- a/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf
+++ b/examples/mongodbatlas_encryption_at_rest/aws/atlas-cluster/main.tf
@@ -32,17 +32,17 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
backup_enabled = true
encryption_at_rest_provider = "AWS"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
data "mongodbatlas_encryption_at_rest" "test" {
diff --git a/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/modules/multi-region-cluster/main.tf b/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/modules/multi-region-cluster/main.tf
index b6b0f81f50..f233508394 100644
--- a/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/modules/multi-region-cluster/main.tf
+++ b/examples/mongodbatlas_encryption_at_rest/aws/multi-region-cluster/modules/multi-region-cluster/main.tf
@@ -5,39 +5,69 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
backup_enabled = true
encryption_at_rest_provider = var.provider_name
- replication_specs {
- num_shards = 2 # 2-shard Multi-Region Cluster
-
- region_configs { # shard n1
- electable_specs {
- instance_size = var.instance_size
- node_count = 3
- }
- analytics_specs {
- instance_size = var.instance_size
- node_count = 1
- }
- provider_name = var.provider_name
- priority = 7
- region_name = var.aws_region_shard_1
- }
-
- region_configs { # shard n2
- electable_specs {
- instance_size = var.instance_size
- node_count = 2
+ replication_specs = [{
+ # shard 1
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = var.instance_size
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = var.instance_size
+ node_count = 1
+ }
+ provider_name = var.provider_name
+ priority = 7
+ region_name = var.aws_region_shard_1
+ },
+ {
+ electable_specs = {
+ instance_size = var.instance_size
+ node_count = 2
+ }
+ analytics_specs = {
+ instance_size = var.instance_size
+ node_count = 1
+ }
+ provider_name = var.provider_name
+ priority = 6
+ region_name = var.aws_region_shard_2
}
- analytics_specs {
- instance_size = var.instance_size
- node_count = 1
+ ]
+ }, { # shard 2
+ region_configs = [
+ {
+ electable_specs = {
+ instance_size = var.instance_size
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = var.instance_size
+ node_count = 1
+ }
+ provider_name = var.provider_name
+ priority = 7
+ region_name = var.aws_region_shard_1
+ },
+ {
+ electable_specs = {
+ instance_size = var.instance_size
+ node_count = 2
+ }
+ analytics_specs = {
+ instance_size = var.instance_size
+ node_count = 1
+ }
+ provider_name = var.provider_name
+ priority = 6
+ region_name = var.aws_region_shard_2
}
- provider_name = var.provider_name
- priority = 6
- region_name = var.aws_region_shard_2
+ ]
}
- }
+ ]
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = true
oplog_size_mb = 999
sample_refresh_interval_bi_connector = 300
diff --git a/examples/mongodbatlas_federated_query_limit/main.tf b/examples/mongodbatlas_federated_query_limit/main.tf
index 133f7c55e0..5871bbfcb4 100644
--- a/examples/mongodbatlas_federated_query_limit/main.tf
+++ b/examples/mongodbatlas_federated_query_limit/main.tf
@@ -3,17 +3,17 @@ resource "mongodbatlas_advanced_cluster" "atlas_cluster_1" {
name = var.atlas_cluster_name_1
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = var.provider_instance_size_name
}
provider_name = var.provider_name
backing_provider_name = var.backing_provider_name
region_name = var.provider_region_name
priority = 7
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_advanced_cluster" "atlas_cluster_2" {
@@ -21,17 +21,17 @@ resource "mongodbatlas_advanced_cluster" "atlas_cluster_2" {
name = var.atlas_cluster_name_2
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = var.provider_instance_size_name
}
provider_name = var.provider_name
backing_provider_name = var.backing_provider_name
region_name = var.provider_region_name
priority = 7
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_federated_database_instance" "test-instance" {
@@ -81,4 +81,4 @@ resource "mongodbatlas_federated_query_limit" "query_limit" {
limit_name = var.federated_query_limit
overrun_policy = var.overrun_policy
value = var.limit_value
-}
\ No newline at end of file
+}
diff --git a/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf b/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf
index 42a890c75e..bcbd589ba2 100644
--- a/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf
+++ b/examples/mongodbatlas_federated_settings_identity_provider/azure/atlas.tf
@@ -1,5 +1,5 @@
locals {
- mongodb_uri = mongodbatlas_advanced_cluster.this.connection_strings[0].standard
+ mongodb_uri = mongodbatlas_advanced_cluster.this.connection_strings.standard
}
data "mongodbatlas_federated_settings" "this" {
@@ -21,17 +21,17 @@ resource "mongodbatlas_advanced_cluster" "this" {
name = var.project_name
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = var.region
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_federated_settings_identity_provider" "oidc" {
diff --git a/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf b/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf
index 04d4a84209..da09304096 100644
--- a/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf
+++ b/examples/mongodbatlas_federated_settings_identity_provider/azure/outputs.tf
@@ -9,7 +9,7 @@ output "ssh_connection_string" {
}
output "user_test_conn_string" {
- value = "mongodb+srv://${local.test_user_username}:${local.test_user_password}@${replace(mongodbatlas_advanced_cluster.this.connection_strings[0].standard_srv, "mongodb+srv://", "")}/?retryWrites=true"
+ value = "mongodb+srv://${local.test_user_username}:${local.test_user_password}@${replace(mongodbatlas_advanced_cluster.this.connection_strings.standard_srv, "mongodb+srv://", "")}/?retryWrites=true"
sensitive = true
description = "Useful for connecting to the database from Compass or other tool to validate data"
}
diff --git a/examples/mongodbatlas_network_peering/aws/main.tf b/examples/mongodbatlas_network_peering/aws/main.tf
index 28da1d5cda..e203bfe37a 100644
--- a/examples/mongodbatlas_network_peering/aws/main.tf
+++ b/examples/mongodbatlas_network_peering/aws/main.tf
@@ -14,17 +14,17 @@ resource "mongodbatlas_advanced_cluster" "cluster-atlas" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = var.atlas_region
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_database_user" "db-user" {
@@ -40,13 +40,19 @@ resource "mongodbatlas_database_user" "db-user" {
}
resource "mongodbatlas_network_peering" "aws-atlas" {
- accepter_region_name = var.aws_region
- project_id = mongodbatlas_project.aws_atlas.id
- container_id = one(values(mongodbatlas_advanced_cluster.cluster-atlas.replication_specs[0].container_id))
- provider_name = "AWS"
- route_table_cidr_block = aws_vpc.primary.cidr_block
- vpc_id = aws_vpc.primary.id
- aws_account_id = var.aws_account_id
+ accepter_region_name = var.aws_region
+ project_id = mongodbatlas_project.aws_atlas.id
+ container_id = one(values(mongodbatlas_advanced_cluster.cluster-atlas.replication_specs[0].container_id))
+ provider_name = "AWS"
+ route_table_cidr_block = aws_vpc.primary.cidr_block
+ vpc_id = aws_vpc.primary.id
+ aws_account_id = var.aws_account_id
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ update = "10m"
+ delete = "10m"
+ }
}
resource "mongodbatlas_project_ip_access_list" "test" {
diff --git a/examples/mongodbatlas_network_peering/azure/atlas.tf b/examples/mongodbatlas_network_peering/azure/atlas.tf
index 5485899bcf..1bb0ebf0c4 100644
--- a/examples/mongodbatlas_network_peering/azure/atlas.tf
+++ b/examples/mongodbatlas_network_peering/azure/atlas.tf
@@ -11,17 +11,17 @@ resource "mongodbatlas_advanced_cluster" "azure-cluster" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AZURE"
region_name = var.provider_region_name
- electable_specs {
+ electable_specs = {
instance_size = var.provider_instance_size_name
node_count = 3
}
- }
- }
+ }]
+ }]
}
# Create the peering connection request
diff --git a/examples/mongodbatlas_network_peering/gcp/cluster.tf b/examples/mongodbatlas_network_peering/gcp/cluster.tf
index b8e1d9ebe1..f196d27943 100644
--- a/examples/mongodbatlas_network_peering/gcp/cluster.tf
+++ b/examples/mongodbatlas_network_peering/gcp/cluster.tf
@@ -6,35 +6,34 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "REPLICASET"
backup_enabled = true # enable cloud provider snapshots
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "GCP"
region_name = var.atlas_region
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- auto_scaling {
+ auto_scaling = {
compute_enabled = true
compute_scale_down_enabled = true
compute_min_instance_size = "M10"
compute_max_instance_size = "M20"
disk_gb_enabled = true
}
- }
+ }]
+ }]
+ tags = {
+ environment = "prod"
}
- tags {
- key = "environment"
- value = "prod"
- }
- advanced_configuration {
+ advanced_configuration = {
minimum_enabled_tls_protocol = "TLS1_2"
}
lifecycle {
ignore_changes = [
- replication_specs[0].region_configs[0].electable_specs[0].instance_size,
+ replication_specs[0].region_configs[0].electable_specs.instance_size,
]
}
}
diff --git a/examples/mongodbatlas_online_archive/main.tf b/examples/mongodbatlas_online_archive/main.tf
index ebb9eb8cdc..21704e068e 100644
--- a/examples/mongodbatlas_online_archive/main.tf
+++ b/examples/mongodbatlas_online_archive/main.tf
@@ -29,6 +29,11 @@ resource "mongodbatlas_online_archive" "users_archive" {
field_name = var.partition_field_two
order = 2
}
+
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ }
}
data "mongodbatlas_online_archive" "read_archive" {
diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded/atlas-cluster.tf b/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded/atlas-cluster.tf
index f5c12d9d99..366d19d83a 100644
--- a/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded/atlas-cluster.tf
+++ b/examples/mongodbatlas_privatelink_endpoint/aws/cluster-geosharded/atlas-cluster.tf
@@ -4,53 +4,52 @@ resource "mongodbatlas_advanced_cluster" "geosharded" {
cluster_type = "GEOSHARDED"
backup_enabled = true
- replication_specs { # Shard 1
- zone_name = "Zone 1"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = var.atlas_region_east
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AWS"
- priority = 6
- region_name = var.atlas_region_west
- }
- }
-
- replication_specs { # Shard 2
- zone_name = "Zone 1"
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = var.atlas_region_east
- }
-
- region_configs {
- electable_specs {
- instance_size = "M30"
- node_count = 2
- }
- provider_name = "AWS"
- priority = 6
- region_name = var.atlas_region_west
+ replication_specs = [
+ { # Shard 1
+ zone_name = "Zone 1"
+
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = var.atlas_region_east
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AWS"
+ priority = 6
+ region_name = var.atlas_region_west
+ }]
+ },
+ { # Shard 2
+ zone_name = "Zone 1"
+
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 3
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = var.atlas_region_east
+ },
+ {
+ electable_specs = {
+ instance_size = "M30"
+ node_count = 2
+ }
+ provider_name = "AWS"
+ priority = 6
+ region_name = var.atlas_region_west
+ }]
}
- }
+ ]
depends_on = [
mongodbatlas_privatelink_endpoint_service.pe_east_service,
@@ -58,4 +57,3 @@ resource "mongodbatlas_advanced_cluster" "geosharded" {
mongodbatlas_private_endpoint_regional_mode.test
]
}
-
diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf
index 38e08232b1..53bd9e3443 100644
--- a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf
+++ b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/atlas-cluster.tf
@@ -4,16 +4,16 @@ resource "mongodbatlas_advanced_cluster" "aws_private_connection" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
depends_on = [mongodbatlas_privatelink_endpoint_service.pe_east_service]
}
diff --git a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/main.tf b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/main.tf
index a3e7f12f55..00bb65c3e0 100644
--- a/examples/mongodbatlas_privatelink_endpoint/aws/cluster/main.tf
+++ b/examples/mongodbatlas_privatelink_endpoint/aws/cluster/main.tf
@@ -1,12 +1,22 @@
resource "mongodbatlas_privatelink_endpoint" "pe_east" {
- project_id = var.project_id
- provider_name = "AWS"
- region = "us-east-1"
+ project_id = var.project_id
+ provider_name = "AWS"
+ region = "us-east-1"
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ delete = "10m"
+ }
}
resource "mongodbatlas_privatelink_endpoint_service" "pe_east_service" {
- project_id = mongodbatlas_privatelink_endpoint.pe_east.project_id
- private_link_id = mongodbatlas_privatelink_endpoint.pe_east.id
- endpoint_service_id = aws_vpc_endpoint.vpce_east.id
- provider_name = "AWS"
+ project_id = mongodbatlas_privatelink_endpoint.pe_east.project_id
+ private_link_id = mongodbatlas_privatelink_endpoint.pe_east.id
+ endpoint_service_id = aws_vpc_endpoint.vpce_east.id
+ provider_name = "AWS"
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ delete = "10m"
+ }
}
diff --git a/examples/mongodbatlas_privatelink_endpoint/azure/main.tf b/examples/mongodbatlas_privatelink_endpoint/azure/main.tf
index 09ea2b5fab..20be3a4863 100644
--- a/examples/mongodbatlas_privatelink_endpoint/azure/main.tf
+++ b/examples/mongodbatlas_privatelink_endpoint/azure/main.tf
@@ -28,9 +28,14 @@ resource "azurerm_subnet" "test" {
}
resource "mongodbatlas_privatelink_endpoint" "test" {
- project_id = var.project_id
- provider_name = "AZURE"
- region = "eastus2"
+ project_id = var.project_id
+ provider_name = "AZURE"
+ region = "eastus2"
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ delete = "10m"
+ }
}
resource "azurerm_private_endpoint" "test" {
@@ -53,6 +58,11 @@ resource "mongodbatlas_privatelink_endpoint_service" "test" {
endpoint_service_id = azurerm_private_endpoint.test.id
private_endpoint_ip_address = azurerm_private_endpoint.test.private_service_connection[0].private_ip_address
provider_name = "AZURE"
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ delete = "10m"
+ }
}
data "mongodbatlas_advanced_cluster" "cluster" {
diff --git a/examples/mongodbatlas_privatelink_endpoint/gcp/main.tf b/examples/mongodbatlas_privatelink_endpoint/gcp/main.tf
index 71ebc32887..ed8584d52b 100644
--- a/examples/mongodbatlas_privatelink_endpoint/gcp/main.tf
+++ b/examples/mongodbatlas_privatelink_endpoint/gcp/main.tf
@@ -1,7 +1,12 @@
resource "mongodbatlas_privatelink_endpoint" "test" {
- project_id = var.project_id
- provider_name = "GCP"
- region = var.gcp_region
+ project_id = var.project_id
+ provider_name = "GCP"
+ region = var.gcp_region
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ delete = "10m"
+ }
}
# Create a Google Network
@@ -45,12 +50,16 @@ resource "google_compute_forwarding_rule" "default" {
}
resource "mongodbatlas_privatelink_endpoint_service" "test" {
- project_id = mongodbatlas_privatelink_endpoint.test.project_id
- private_link_id = mongodbatlas_privatelink_endpoint.test.private_link_id
- provider_name = "GCP"
- endpoint_service_id = google_compute_network.default.name
- gcp_project_id = var.gcp_project_id
-
+ project_id = mongodbatlas_privatelink_endpoint.test.project_id
+ private_link_id = mongodbatlas_privatelink_endpoint.test.private_link_id
+ provider_name = "GCP"
+ endpoint_service_id = google_compute_network.default.name
+ gcp_project_id = var.gcp_project_id
+ delete_on_create_timeout = true
+ timeouts {
+ create = "10m"
+ delete = "10m"
+ }
dynamic "endpoints" {
for_each = google_compute_address.default
diff --git a/examples/mongodbatlas_search_deployment/main.tf b/examples/mongodbatlas_search_deployment/main.tf
index 3d2f08d2ba..a4de7ecd61 100644
--- a/examples/mongodbatlas_search_deployment/main.tf
+++ b/examples/mongodbatlas_search_deployment/main.tf
@@ -8,17 +8,17 @@ resource "mongodbatlas_advanced_cluster" "example" {
name = "ClusterExample"
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ }]
}
resource "mongodbatlas_search_deployment" "example" {
diff --git a/examples/mongodbatlas_team_project_assignment/README.md b/examples/mongodbatlas_team_project_assignment/README.md
new file mode 100644
index 0000000000..7aa41cad00
--- /dev/null
+++ b/examples/mongodbatlas_team_project_assignment/README.md
@@ -0,0 +1,33 @@
+# Example: mongodbatlas_team_project_assignment
+
+This example demonstrates how to use the `mongodbatlas_team_project_assignment` resource to assign a team to an existing MongoDB Atlas project with the specified roles.
+
+## Usage
+
+```hcl
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
+
+resource "mongodbatlas_team_project_assignment" "this" {
+ project_id = var.project_id
+ team_id = var.team_id
+ role_names = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_team_project_assignment" "this" {
+ project_id = mongodbatlas_team_project_assignment.this.project_id
+ team_id = mongodbatlas_team_project_assignment.this.team_id
+}
+```
+
+You must set the following variables:
+
+- `public_key`: Your MongoDB Atlas API public key.
+- `private_key`: Your MongoDB Atlas API private key.
+- `project_id`: The ID of the project to assign the team to.
+- `team_id`: The ID of the team to assign to the project.
+
+To learn more, see the [MongoDB Atlas API - Teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addallteamstoproject) documentation.
+
diff --git a/examples/mongodbatlas_team_project_assignment/main.tf b/examples/mongodbatlas_team_project_assignment/main.tf
new file mode 100644
index 0000000000..d5857cc09a
--- /dev/null
+++ b/examples/mongodbatlas_team_project_assignment/main.tf
@@ -0,0 +1,10 @@
+resource "mongodbatlas_team_project_assignment" "this" {
+ project_id = var.project_id
+ team_id = var.team_id
+ role_names = ["GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"]
+}
+
+data "mongodbatlas_team_project_assignment" "this" {
+ project_id = mongodbatlas_team_project_assignment.this.project_id
+ team_id = mongodbatlas_team_project_assignment.this.team_id
+}
diff --git a/examples/mongodbatlas_team_project_assignment/outputs.tf b/examples/mongodbatlas_team_project_assignment/outputs.tf
new file mode 100644
index 0000000000..e64544f041
--- /dev/null
+++ b/examples/mongodbatlas_team_project_assignment/outputs.tf
@@ -0,0 +1,8 @@
+output "assigned_team" {
+ description = "Details of the assigned team"
+ value = mongodbatlas_team_project_assignment.this
+}
+output "team_from_team_id" {
+ description = "Project assignment details for the team retrieved by team_id"
+ value = data.mongodbatlas_team_project_assignment.this
+}
diff --git a/examples/mongodbatlas_team_project_assignment/provider.tf b/examples/mongodbatlas_team_project_assignment/provider.tf
new file mode 100644
index 0000000000..18c430e061
--- /dev/null
+++ b/examples/mongodbatlas_team_project_assignment/provider.tf
@@ -0,0 +1,4 @@
+provider "mongodbatlas" {
+ public_key = var.public_key
+ private_key = var.private_key
+}
diff --git a/examples/mongodbatlas_team_project_assignment/variables.tf b/examples/mongodbatlas_team_project_assignment/variables.tf
new file mode 100644
index 0000000000..e439259fea
--- /dev/null
+++ b/examples/mongodbatlas_team_project_assignment/variables.tf
@@ -0,0 +1,21 @@
+variable "project_id" {
+ description = "The MongoDB Atlas project ID"
+ type = string
+}
+
+variable "team_id" {
+ description = "The MongoDB Atlas team ID"
+ type = string
+}
+
+variable "public_key" {
+ description = "Atlas API public key"
+ type = string
+ default = ""
+}
+
+variable "private_key" {
+ description = "Atlas API private key"
+ type = string
+ default = ""
+}
diff --git a/examples/mongodbatlas_team_project_assignment/versions.tf b/examples/mongodbatlas_team_project_assignment/versions.tf
new file mode 100644
index 0000000000..0fe79cfac9
--- /dev/null
+++ b/examples/mongodbatlas_team_project_assignment/versions.tf
@@ -0,0 +1,8 @@
+terraform {
+ required_providers {
+ mongodbatlas = {
+ source = "mongodb/mongodbatlas"
+ }
+ }
+ required_version = ">= 1.0"
+}
diff --git a/examples/mongodbatlas_third_party_integration/datadog/instance.tf b/examples/mongodbatlas_third_party_integration/datadog/instance.tf
index da9a02c555..042be6cf65 100644
--- a/examples/mongodbatlas_third_party_integration/datadog/instance.tf
+++ b/examples/mongodbatlas_third_party_integration/datadog/instance.tf
@@ -3,19 +3,17 @@ resource "mongodbatlas_advanced_cluster" "my_cluster" {
name = var.cluster_name
cluster_type = "REPLICASET"
- replication_specs {
- zone_name = "Zone 1"
- num_shards = 1
-
- region_configs {
+ replication_specs = [{
+ zone_name = "Zone 1"
+ region_configs = [{
provider_name = "AWS"
region_name = "US_EAST_1"
priority = 7
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
-}
\ No newline at end of file
+ }]
+ }]
+}
diff --git a/examples/starter/atlas_cluster.tf b/examples/starter/atlas_cluster.tf
index 18fff374e6..3946b01962 100644
--- a/examples/starter/atlas_cluster.tf
+++ b/examples/starter/atlas_cluster.tf
@@ -4,20 +4,19 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
cluster_type = "REPLICASET"
backup_enabled = true
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = var.cloud_provider
region_name = var.region
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ }]
+ }]
}
output "connection_strings" {
- value = mongodbatlas_advanced_cluster.cluster.connection_strings[0].standard_srv
+ value = mongodbatlas_advanced_cluster.cluster.connection_strings.standard_srv
}
-
diff --git a/internal/common/cleanup/handle_timeout.go b/internal/common/cleanup/handle_timeout.go
new file mode 100644
index 0000000000..3c51c9bb5d
--- /dev/null
+++ b/internal/common/cleanup/handle_timeout.go
@@ -0,0 +1,113 @@
+package cleanup
+
+import (
+ "context"
+ "errors"
+ "strings"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
+ "github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
+)
+
+const (
+ CleanupWarning = "Failed to create resource. Will run cleanup due to the operation timing out"
+)
+
+// HandleCreateTimeout helps to implement Create in long-running operations.
+// It deletes the resource if the creation times out and `delete_on_create_timeout` is enabled.
+// It returns an error with additional information which should be used instead of the original error.
+func HandleCreateTimeout(deleteOnCreateTimeout bool, errWait error, cleanup func(context.Context) error) error {
+ if _, isTimeoutErr := errWait.(*retry.TimeoutError); !isTimeoutErr {
+ return errWait
+ }
+ if !deleteOnCreateTimeout {
+ return errors.Join(errWait, errors.New("cleanup won't be run because delete_on_create_timeout is false"))
+ }
+ errWait = errors.Join(errWait, errors.New("will run cleanup because delete_on_create_timeout is true. If you suspect a transient error, wait before retrying to allow resource deletion to finish"))
+ // cleanup uses a new context as existing one is expired.
+ if errCleanup := cleanup(context.Background()); errCleanup != nil {
+ errWait = errors.Join(errWait, errors.New("cleanup failed: "+errCleanup.Error()))
+ }
+ return errWait
+}
+
+// OnTimeout creates a new context with a timeout and a deferred function that will run `cleanup` when the context hits the timeout (no timeout=no-op).
+// Remember to always call the returned `deferCall` function: `defer deferCall()`.
+// `warningDetail` should have resource identifiable information, for example cluster name and project ID.
+// warnDiags(summary, detail) are called:
+// 1. Before the cleanup call.
+// 2. (Only if the cleanup fails) Details of the cleanup error.
+func OnTimeout(ctx context.Context, timeout time.Duration, warnDiags func(string, string), warningDetail string, cleanup func(context.Context) error) (outCtx context.Context, deferCall func()) {
+ outCtx, cancel := context.WithTimeout(ctx, timeout)
+ return outCtx, func() {
+ cancel()
+ if !errors.Is(outCtx.Err(), context.DeadlineExceeded) {
+ return
+ }
+ warnDiags(CleanupWarning, warningDetail)
+ newContext := context.Background() // Create a new context for cleanup as the old context is expired
+ if err := cleanup(newContext); err != nil {
+ warnDiags("Error during cleanup", warningDetail+" error="+err.Error())
+ }
+ }
+}
+
+const (
+ contextDeadlineExceeded = "context deadline exceeded"
+ TimeoutReachedPrefix = "Timeout reached after "
+)
+
+func ReplaceContextDeadlineExceededDiags(diags *diag.Diagnostics, duration time.Duration) {
+ for i := range len(*diags) {
+ d := (*diags)[i]
+ if strings.Contains(d.Detail(), contextDeadlineExceeded) {
+ (*diags)[i] = diag.NewErrorDiagnostic(
+ d.Summary(),
+ strings.ReplaceAll(d.Detail(), contextDeadlineExceeded, TimeoutReachedPrefix+duration.String()),
+ )
+ }
+ }
+}
+
+const (
+ OperationCreate = "create"
+ OperationUpdate = "update"
+ OperationDelete = "delete"
+)
+
+// ResolveTimeout extracts the appropriate timeout duration from the model for the given operation
+func ResolveTimeout(ctx context.Context, t *timeouts.Value, operationName string, diags *diag.Diagnostics) time.Duration {
+ var (
+ timeoutDuration time.Duration
+ localDiags diag.Diagnostics
+ )
+ switch operationName {
+ case OperationCreate:
+ timeoutDuration, localDiags = t.Create(ctx, constant.DefaultTimeout)
+ diags.Append(localDiags...)
+ case OperationUpdate:
+ timeoutDuration, localDiags = t.Update(ctx, constant.DefaultTimeout)
+ diags.Append(localDiags...)
+ case OperationDelete:
+ timeoutDuration, localDiags = t.Delete(ctx, constant.DefaultTimeout)
+ diags.Append(localDiags...)
+ default:
+ timeoutDuration = constant.DefaultTimeout
+ }
+ return timeoutDuration
+}
+
+// ResolveDeleteOnCreateTimeout returns true if delete_on_create_timeout should be enabled.
+// Default behavior is true when not explicitly set to false.
+func ResolveDeleteOnCreateTimeout(deleteOnCreateTimeout types.Bool) bool {
+ // If null or unknown, default to true
+ if deleteOnCreateTimeout.IsNull() || deleteOnCreateTimeout.IsUnknown() {
+ return true
+ }
+ // Otherwise use the explicit value
+ return deleteOnCreateTimeout.ValueBool()
+}
diff --git a/internal/common/cleanup/on_timeout_test.go b/internal/common/cleanup/handle_timeout_test.go
similarity index 100%
rename from internal/common/cleanup/on_timeout_test.go
rename to internal/common/cleanup/handle_timeout_test.go
diff --git a/internal/common/cleanup/on_timeout.go b/internal/common/cleanup/on_timeout.go
deleted file mode 100644
index 4d505d0aa9..0000000000
--- a/internal/common/cleanup/on_timeout.go
+++ /dev/null
@@ -1,52 +0,0 @@
-package cleanup
-
-import (
- "context"
- "errors"
- "strings"
- "time"
-
- "github.com/hashicorp/terraform-plugin-framework/diag"
-)
-
-const (
- CleanupWarning = "Failed to create resource. Will run cleanup due to the operation timing out"
-)
-
-// OnTimeout creates a new context with a timeout and a deferred function that will run `cleanup` when the context hit the timeout (no timeout=no-op).
-// Remember to always call the returned `deferCall` function: `defer deferCall()`.
-// `warningDetail` should have resource identifiable information, for example cluster name and project ID.
-// warnDiags(summary, detail) are called:
-// 1. Before the cleanup call.
-// 2. (Only if the cleanup fails) Details of the cleanup error.
-func OnTimeout(ctx context.Context, timeout time.Duration, warnDiags func(string, string), warningDetail string, cleanup func(context.Context) error) (outCtx context.Context, deferCall func()) {
- outCtx, cancel := context.WithTimeout(ctx, timeout)
- return outCtx, func() {
- cancel()
- if !errors.Is(outCtx.Err(), context.DeadlineExceeded) {
- return
- }
- warnDiags(CleanupWarning, warningDetail)
- newContext := context.Background() // Create a new context for cleanup as the old context is expired
- if err := cleanup(newContext); err != nil {
- warnDiags("Error during cleanup", warningDetail+" error="+err.Error())
- }
- }
-}
-
-const (
- contextDeadlineExceeded = "context deadline exceeded"
- TimeoutReachedPrefix = "Timeout reached after "
-)
-
-func ReplaceContextDeadlineExceededDiags(diags *diag.Diagnostics, duration time.Duration) {
- for i := range len(*diags) {
- d := (*diags)[i]
- if strings.Contains(d.Detail(), contextDeadlineExceeded) {
- (*diags)[i] = diag.NewErrorDiagnostic(
- d.Summary(),
- strings.ReplaceAll(d.Detail(), contextDeadlineExceeded, TimeoutReachedPrefix+duration.String()),
- )
- }
- }
-}
diff --git a/internal/common/constant/deprecation.go b/internal/common/constant/deprecation.go
index 79b5cdde3a..8702aba932 100644
--- a/internal/common/constant/deprecation.go
+++ b/internal/common/constant/deprecation.go
@@ -13,4 +13,5 @@ const (
DeprecationParamByDateWithExternalLink = "This parameter is deprecated and will be removed in %s. For more details see %s."
DeprecationSharedTier = "Shared-tier instance sizes are deprecated and will reach End of Life in %s. For more details see %s"
ServerlessSharedEOLDate = "January 2026"
+ DeprecationNextMajorWithReplacementGuide = "This %s is deprecated and will be removed in the next major release. Please transition to `%s`. For more details, see %s."
)
diff --git a/internal/common/conversion/collections.go b/internal/common/conversion/collections.go
index c4c14b07e8..f79d3ff453 100644
--- a/internal/common/conversion/collections.go
+++ b/internal/common/conversion/collections.go
@@ -1,6 +1,12 @@
package conversion
-import "reflect"
+import (
+ "context"
+ "reflect"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+)
// HasElementsSliceOrMap checks if param is a non-empty slice or map
func HasElementsSliceOrMap(value any) bool {
@@ -22,3 +28,11 @@ func ToAnySlicePointer(value *[]map[string]any) *[]any {
}
return &ret
}
+
+func TFSetValueOrNull[T any](ctx context.Context, ptr *[]T, elemType attr.Type) types.Set {
+ if ptr == nil || len(*ptr) == 0 {
+ return types.SetNull(elemType)
+ }
+ set, _ := types.SetValueFrom(ctx, elemType, *ptr)
+ return set
+}
diff --git a/internal/common/conversion/collections_test.go b/internal/common/conversion/collections_test.go
index b271af92a6..ba2ff53897 100644
--- a/internal/common/conversion/collections_test.go
+++ b/internal/common/conversion/collections_test.go
@@ -3,8 +3,10 @@ package conversion_test
import (
"testing"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/hashicorp/terraform-plugin-framework/types"
"github.com/stretchr/testify/assert"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
)
func TestHasElementsSliceOrMap(t *testing.T) {
@@ -61,3 +63,25 @@ func TestToAnySlicePointer(t *testing.T) {
})
}
}
+
+func TestTFSetValueOrNull(t *testing.T) {
+ ctx := t.Context()
+
+ testCases := map[string]*[]string{
+ "nil": nil,
+ "empty": {},
+ "populated": {"a", "b", "c"},
+ }
+
+ for name, value := range testCases {
+ t.Run(name, func(t *testing.T) {
+ result := conversion.TFSetValueOrNull(ctx, value, types.StringType)
+ if value == nil || len(*value) == 0 {
+ assert.True(t, result.IsNull())
+ } else {
+ assert.False(t, result.IsNull())
+ assert.Len(t, result.Elements(), len(*value))
+ }
+ })
+ }
+}
diff --git a/internal/common/conversion/flatten_expand.go b/internal/common/conversion/flatten_expand.go
index 5d346af843..ff3beacdab 100644
--- a/internal/common/conversion/flatten_expand.go
+++ b/internal/common/conversion/flatten_expand.go
@@ -1,6 +1,8 @@
package conversion
import (
+ "time"
+
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
@@ -28,6 +30,57 @@ func FlattenTags(tags []admin.ResourceTag) []map[string]string {
return ret
}
+func FlattenUsers(users []admin.OrgUserResponse) []map[string]any {
+ ret := make([]map[string]any, len(users))
+ for i := range users {
+ user := &users[i]
+ ret[i] = map[string]any{
+ "id": user.GetId(),
+ "org_membership_status": user.GetOrgMembershipStatus(),
+ "roles": flattenUserRoles(user.GetRoles()),
+ "team_ids": user.GetTeamIds(),
+ "username": user.GetUsername(),
+ "invitation_created_at": user.GetInvitationCreatedAt().Format(time.RFC3339),
+ "invitation_expires_at": user.GetInvitationExpiresAt().Format(time.RFC3339),
+ "inviter_username": user.GetInviterUsername(),
+ "country": user.GetCountry(),
+ "created_at": user.GetCreatedAt().Format(time.RFC3339),
+ "first_name": user.GetFirstName(),
+ "last_auth": user.GetLastAuth().Format(time.RFC3339),
+ "last_name": user.GetLastName(),
+ "mobile_number": user.GetMobileNumber(),
+ }
+ }
+ return ret
+}
+
+func flattenUserRoles(roles admin.OrgUserRolesResponse) []map[string]any {
+ ret := make([]map[string]any, 0)
+ roleMap := map[string]any{
+ "org_roles": []string{},
+ "project_role_assignments": []map[string]any{},
+ }
+ if roles.HasOrgRoles() {
+ roleMap["org_roles"] = roles.GetOrgRoles()
+ }
+ if roles.HasGroupRoleAssignments() {
+ roleMap["project_role_assignments"] = flattenProjectRolesAssignments(roles.GetGroupRoleAssignments())
+ }
+ ret = append(ret, roleMap)
+ return ret
+}
+
+func flattenProjectRolesAssignments(assignments []admin.GroupRoleAssignment) []map[string]any {
+ ret := make([]map[string]any, 0, len(assignments))
+ for _, assignment := range assignments {
+ ret = append(ret, map[string]any{
+ "project_id": assignment.GetGroupId(),
+ "project_roles": assignment.GetGroupRoles(),
+ })
+ }
+ return ret
+}
+
func ExpandTagsFromSetSchema(d *schema.ResourceData) *[]admin.ResourceTag {
list := d.Get("tags").(*schema.Set)
ret := make([]admin.ResourceTag, list.Len())
diff --git a/internal/common/conversion/schema_generation.go b/internal/common/conversion/schema_generation.go
index c5a3998bc6..133748c56f 100644
--- a/internal/common/conversion/schema_generation.go
+++ b/internal/common/conversion/schema_generation.go
@@ -91,10 +91,10 @@ var convertNestedMappings = map[string]reflect.Type{
}
func convertAttrs(rsAttrs map[string]schema.Attribute, requiredFields []string) map[string]dsschema.Attribute {
- const ignoreField = "timeouts"
+ ignoreFields := []string{"timeouts", "delete_on_create_timeout"}
dsAttrs := make(map[string]dsschema.Attribute, len(rsAttrs))
for name, attr := range rsAttrs {
- if name == ignoreField {
+ if slices.Contains(ignoreFields, name) {
continue
}
dsAttrs[name] = convertElement(name, attr, requiredFields).(dsschema.Attribute)
diff --git a/internal/common/customplanmodifier/create_only.go b/internal/common/customplanmodifier/create_only.go
index 5754a6cbc7..c7de0750ef 100644
--- a/internal/common/customplanmodifier/create_only.go
+++ b/internal/common/customplanmodifier/create_only.go
@@ -12,6 +12,46 @@ import (
"github.com/hashicorp/terraform-plugin-framework/types"
)
+// CreateOnlyStringPlanModifier creates a plan modifier that prevents updates to string attributes.
+func CreateOnlyStringPlanModifier() planmodifier.String {
+ return &createOnlyAttributePlanModifier{}
+}
+
+// CreateOnlyBoolPlanModifier creates a plan modifier that prevents updates to boolean attributes.
+func CreateOnlyBoolPlanModifier() planmodifier.Bool {
+ return &createOnlyAttributePlanModifier{}
+}
+
+// Plan modifier that implements create-only behavior for multiple attribute types
+type createOnlyAttributePlanModifier struct{}
+
+func (d *createOnlyAttributePlanModifier) Description(ctx context.Context) string {
+ return d.MarkdownDescription(ctx)
+}
+
+func (d *createOnlyAttributePlanModifier) MarkdownDescription(ctx context.Context) string {
+ return "Ensures that update operations fail when attempting to modify a create-only attribute."
+}
+
+func (d *createOnlyAttributePlanModifier) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) {
+ validateCreateOnly(req.PlanValue, req.StateValue, req.Path, &resp.Diagnostics)
+}
+
+func (d *createOnlyAttributePlanModifier) PlanModifyBool(ctx context.Context, req planmodifier.BoolRequest, resp *planmodifier.BoolResponse) {
+ validateCreateOnly(req.PlanValue, req.StateValue, req.Path, &resp.Diagnostics)
+}
+
+// validateCreateOnly checks if an attribute value has changed and adds an error if it has
+func validateCreateOnly(planValue, stateValue attr.Value, attrPath path.Path, diagnostics *diag.Diagnostics,
+) {
+ if !stateValue.IsNull() && !stateValue.Equal(planValue) {
+ diagnostics.AddError(
+ fmt.Sprintf("%s cannot be updated", attrPath),
+ fmt.Sprintf("%s cannot be updated", attrPath),
+ )
+ }
+}
+
type CreateOnlyModifier interface {
planmodifier.String
planmodifier.Bool
@@ -31,18 +71,18 @@ func CreateOnlyAttributePlanModifier() CreateOnlyModifier {
// On update the default has no impact and the UseStateForUnknown behavior is observed instead.
// Always use Optional+Computed when using a default value.
func CreateOnlyAttributePlanModifierWithBoolDefault(b bool) CreateOnlyModifier {
- return &createOnlyAttributePlanModifier{defaultBool: &b}
+ return &createOnlyAttributePlanModifierWithBoolDefault{defaultBool: &b}
}
-type createOnlyAttributePlanModifier struct {
+type createOnlyAttributePlanModifierWithBoolDefault struct {
defaultBool *bool
}
-func (d *createOnlyAttributePlanModifier) Description(ctx context.Context) string {
+func (d *createOnlyAttributePlanModifierWithBoolDefault) Description(ctx context.Context) string {
return d.MarkdownDescription(ctx)
}
-func (d *createOnlyAttributePlanModifier) MarkdownDescription(ctx context.Context) string {
+func (d *createOnlyAttributePlanModifierWithBoolDefault) MarkdownDescription(ctx context.Context) string {
return "Ensures the update operation fails when updating an attribute. If the read after import don't equal the configuration value it will also raise an error."
}
@@ -50,11 +90,11 @@ func isCreate(t *tfsdk.State) bool {
return t.Raw.IsNull()
}
-func (d *createOnlyAttributePlanModifier) UseDefault() bool {
+func (d *createOnlyAttributePlanModifierWithBoolDefault) UseDefault() bool {
return d.defaultBool != nil
}
-func (d *createOnlyAttributePlanModifier) PlanModifyBool(ctx context.Context, req planmodifier.BoolRequest, resp *planmodifier.BoolResponse) {
+func (d *createOnlyAttributePlanModifierWithBoolDefault) PlanModifyBool(ctx context.Context, req planmodifier.BoolRequest, resp *planmodifier.BoolResponse) {
if isCreate(&req.State) {
if !IsKnown(req.PlanValue) && d.UseDefault() {
resp.PlanValue = types.BoolPointerValue(d.defaultBool)
@@ -69,7 +109,7 @@ func (d *createOnlyAttributePlanModifier) PlanModifyBool(ctx context.Context, re
}
}
-func (d *createOnlyAttributePlanModifier) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) {
+func (d *createOnlyAttributePlanModifierWithBoolDefault) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) {
if isCreate(&req.State) {
return
}
@@ -88,7 +128,7 @@ func isUpdated(state, plan attr.Value) bool {
return !state.Equal(plan)
}
-func (d *createOnlyAttributePlanModifier) addDiags(diags *diag.Diagnostics, attrPath path.Path, stateValue attr.Value) {
+func (d *createOnlyAttributePlanModifierWithBoolDefault) addDiags(diags *diag.Diagnostics, attrPath path.Path, stateValue attr.Value) {
message := fmt.Sprintf("%s cannot be updated or set after import, remove it from the configuration or use the state value (see below).", attrPath)
detail := fmt.Sprintf("The current state value is %s", stateValue)
diags.AddError(message, detail)
diff --git a/internal/common/dsschema/users_schema.go b/internal/common/dsschema/users_schema.go
new file mode 100644
index 0000000000..a2f7f6e506
--- /dev/null
+++ b/internal/common/dsschema/users_schema.go
@@ -0,0 +1,99 @@
+package dsschema
+
+import (
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+)
+
+func DSOrgUsersSchema() *schema.Schema {
+ return &schema.Schema{
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "id": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "org_membership_status": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "roles": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "org_roles": {
+ Type: schema.TypeSet,
+ Computed: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ },
+ "project_role_assignments": {
+ Type: schema.TypeSet,
+ Computed: true,
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "project_id": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "project_roles": {
+ Type: schema.TypeSet,
+ Computed: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ "team_ids": {
+ Type: schema.TypeList,
+ Computed: true,
+ Elem: &schema.Schema{Type: schema.TypeString},
+ },
+ "username": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "invitation_created_at": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "invitation_expires_at": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "inviter_username": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "country": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "created_at": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "first_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "last_auth": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "last_name": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ "mobile_number": {
+ Type: schema.TypeString,
+ Computed: true,
+ },
+ },
+ },
+ }
+}
diff --git a/internal/common/schemafunc/attr.go b/internal/common/schemafunc/attr.go
new file mode 100644
index 0000000000..0ff14ce816
--- /dev/null
+++ b/internal/common/schemafunc/attr.go
@@ -0,0 +1,11 @@
+package schemafunc
+
+import "github.com/hashicorp/terraform-plugin-go/tftypes"
+
+func GetAttrFromStateObj[T any](rawState map[string]tftypes.Value, attrName string) *T {
+ var ret *T
+ if err := rawState[attrName].As(&ret); err != nil {
+ return nil
+ }
+ return ret
+}
diff --git a/internal/config/preview_provider_v2.go b/internal/config/preview_provider_v2.go
deleted file mode 100644
index a627d5fb12..0000000000
--- a/internal/config/preview_provider_v2.go
+++ /dev/null
@@ -1,15 +0,0 @@
-package config
-
-import (
- "os"
- "strconv"
-)
-
-const PreviewProviderV2AdvancedClusterEnvVar = "MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER"
-
-// Environment variable is read only once to avoid possible changes during runtime
-var previewProviderV2AdvancedCluster, _ = strconv.ParseBool(os.Getenv(PreviewProviderV2AdvancedClusterEnvVar))
-
-func PreviewProviderV2AdvancedCluster() bool {
- return previewProviderV2AdvancedCluster
-}
diff --git a/internal/provider/provider.go b/internal/provider/provider.go
index d55969648c..300afddbaf 100644
--- a/internal/provider/provider.go
+++ b/internal/provider/provider.go
@@ -30,6 +30,9 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/alertconfiguration"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/apikeyprojectassignment"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/atlasuser"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/clouduserorgassignment"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/clouduserprojectassignment"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/clouduserteamassignment"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/controlplaneipaddresses"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/databaseuser"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/encryptionatrest"
@@ -49,6 +52,7 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streaminstance"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprivatelinkendpoint"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/teamprojectassignment"
"github.com/mongodb/terraform-provider-mongodbatlas/version"
)
@@ -267,12 +271,11 @@ func (p *MongodbtlasProvider) Configure(ctx context.Context, req provider.Config
}
cfg := config.Config{
- PublicKey: data.PublicKey.ValueString(),
- PrivateKey: data.PrivateKey.ValueString(),
- BaseURL: data.BaseURL.ValueString(),
- RealmBaseURL: data.RealmBaseURL.ValueString(),
- TerraformVersion: req.TerraformVersion,
- PreviewV2AdvancedClusterEnabled: config.PreviewProviderV2AdvancedCluster(),
+ PublicKey: data.PublicKey.ValueString(),
+ PrivateKey: data.PrivateKey.ValueString(),
+ BaseURL: data.BaseURL.ValueString(),
+ RealmBaseURL: data.RealmBaseURL.ValueString(),
+ TerraformVersion: req.TerraformVersion,
}
var assumeRoles []tfAssumeRoleModel
@@ -486,11 +489,14 @@ func (p *MongodbtlasProvider) DataSources(context.Context) []func() datasource.D
flexrestorejob.PluralDataSource,
resourcepolicy.DataSource,
resourcepolicy.PluralDataSource,
+ clouduserorgassignment.DataSource,
+ clouduserprojectassignment.DataSource,
+ clouduserteamassignment.DataSource,
+ teamprojectassignment.DataSource,
apikeyprojectassignment.DataSource,
apikeyprojectassignment.PluralDataSource,
- }
- if config.PreviewProviderV2AdvancedCluster() {
- dataSources = append(dataSources, advancedclustertpf.DataSource, advancedclustertpf.PluralDataSource)
+ advancedclustertpf.DataSource,
+ advancedclustertpf.PluralDataSource,
}
return dataSources
}
@@ -512,10 +518,12 @@ func (p *MongodbtlasProvider) Resources(context.Context) []func() resource.Resou
streamprivatelinkendpoint.Resource,
flexcluster.Resource,
resourcepolicy.Resource,
+ clouduserorgassignment.Resource,
apikeyprojectassignment.Resource,
- }
- if config.PreviewProviderV2AdvancedCluster() {
- resources = append(resources, advancedclustertpf.Resource)
+ clouduserprojectassignment.Resource,
+ teamprojectassignment.Resource,
+ clouduserteamassignment.Resource,
+ advancedclustertpf.Resource,
}
return resources
}
diff --git a/internal/provider/provider_sdk2.go b/internal/provider/provider_sdk2.go
index 5d31fe9492..47dd34e225 100644
--- a/internal/provider/provider_sdk2.go
+++ b/internal/provider/provider_sdk2.go
@@ -13,7 +13,6 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/accesslistapikey"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/apikey"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/auditing"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/backupcompliancepolicy"
@@ -45,10 +44,8 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/orginvitation"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privateendpointregionalmode"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpoint"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointserverless"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointservice"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointservicedatafederationonlinearchive"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointserviceserverless"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/projectapikey"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/projectinvitation"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/rolesorgid"
@@ -172,7 +169,6 @@ func getDataSourcesMap() map[string]*schema.Resource {
"mongodbatlas_maintenance_window": maintenancewindow.DataSource(),
"mongodbatlas_auditing": auditing.DataSource(),
"mongodbatlas_team": team.DataSource(),
- "mongodbatlas_teams": team.LegacyTeamsDataSource(),
"mongodbatlas_global_cluster_config": globalclusterconfig.DataSource(),
"mongodbatlas_x509_authentication_database_user": x509authenticationdatabaseuser.DataSource(),
"mongodbatlas_private_endpoint_regional_mode": privateendpointregionalmode.DataSource(),
@@ -180,8 +176,6 @@ func getDataSourcesMap() map[string]*schema.Resource {
"mongodbatlas_privatelink_endpoint_service_data_federation_online_archives": privatelinkendpointservicedatafederationonlinearchive.PluralDataSource(),
"mongodbatlas_privatelink_endpoint": privatelinkendpoint.DataSource(),
"mongodbatlas_privatelink_endpoint_service": privatelinkendpointservice.DataSource(),
- "mongodbatlas_privatelink_endpoint_service_serverless": privatelinkendpointserviceserverless.DataSource(),
- "mongodbatlas_privatelink_endpoints_service_serverless": privatelinkendpointserviceserverless.PluralDataSource(),
"mongodbatlas_third_party_integration": thirdpartyintegration.DataSource(),
"mongodbatlas_third_party_integrations": thirdpartyintegration.PluralDataSource(),
"mongodbatlas_cloud_provider_access_setup": cloudprovideraccess.DataSourceSetup(),
@@ -231,34 +225,27 @@ func getDataSourcesMap() map[string]*schema.Resource {
"mongodbatlas_shared_tier_snapshot": sharedtier.DataSourceSnapshot(),
"mongodbatlas_shared_tier_snapshots": sharedtier.PluralDataSourceSnapshot(),
}
- if !config.PreviewProviderV2AdvancedCluster() {
- dataSourcesMap["mongodbatlas_advanced_cluster"] = advancedcluster.DataSource()
- dataSourcesMap["mongodbatlas_advanced_clusters"] = advancedcluster.PluralDataSource()
- }
return dataSourcesMap
}
func getResourcesMap() map[string]*schema.Resource {
resourcesMap := map[string]*schema.Resource{
- "mongodbatlas_api_key": apikey.Resource(),
- "mongodbatlas_access_list_api_key": accesslistapikey.Resource(),
- "mongodbatlas_project_api_key": projectapikey.Resource(),
- "mongodbatlas_custom_db_role": customdbrole.Resource(),
- "mongodbatlas_cluster": cluster.Resource(),
- "mongodbatlas_network_container": networkcontainer.Resource(),
- "mongodbatlas_network_peering": networkpeering.Resource(),
- "mongodbatlas_maintenance_window": maintenancewindow.Resource(),
- "mongodbatlas_auditing": auditing.Resource(),
- "mongodbatlas_team": team.Resource(),
- "mongodbatlas_teams": team.LegacyTeamsResource(),
- "mongodbatlas_global_cluster_config": globalclusterconfig.Resource(),
- "mongodbatlas_x509_authentication_database_user": x509authenticationdatabaseuser.Resource(),
- "mongodbatlas_private_endpoint_regional_mode": privateendpointregionalmode.Resource(),
+ "mongodbatlas_api_key": apikey.Resource(),
+ "mongodbatlas_access_list_api_key": accesslistapikey.Resource(),
+ "mongodbatlas_project_api_key": projectapikey.Resource(),
+ "mongodbatlas_custom_db_role": customdbrole.Resource(),
+ "mongodbatlas_cluster": cluster.Resource(),
+ "mongodbatlas_network_container": networkcontainer.Resource(),
+ "mongodbatlas_network_peering": networkpeering.Resource(),
+ "mongodbatlas_maintenance_window": maintenancewindow.Resource(),
+ "mongodbatlas_auditing": auditing.Resource(),
+ "mongodbatlas_team": team.Resource(),
+ "mongodbatlas_global_cluster_config": globalclusterconfig.Resource(),
+ "mongodbatlas_x509_authentication_database_user": x509authenticationdatabaseuser.Resource(),
+ "mongodbatlas_private_endpoint_regional_mode": privateendpointregionalmode.Resource(),
"mongodbatlas_privatelink_endpoint_service_data_federation_online_archive": privatelinkendpointservicedatafederationonlinearchive.Resource(),
"mongodbatlas_privatelink_endpoint": privatelinkendpoint.Resource(),
- "mongodbatlas_privatelink_endpoint_serverless": privatelinkendpointserverless.Resource(),
"mongodbatlas_privatelink_endpoint_service": privatelinkendpointservice.Resource(),
- "mongodbatlas_privatelink_endpoint_service_serverless": privatelinkendpointserviceserverless.Resource(),
"mongodbatlas_third_party_integration": thirdpartyintegration.Resource(),
"mongodbatlas_online_archive": onlinearchive.Resource(),
"mongodbatlas_custom_dns_configuration_cluster_aws": customdnsconfigurationclusteraws.Resource(),
@@ -286,9 +273,6 @@ func getResourcesMap() map[string]*schema.Resource {
"mongodbatlas_serverless_instance": serverlessinstance.Resource(),
"mongodbatlas_cluster_outage_simulation": clusteroutagesimulation.Resource(),
}
- if !config.PreviewProviderV2AdvancedCluster() {
- resourcesMap["mongodbatlas_advanced_cluster"] = advancedcluster.Resource()
- }
return resourcesMap
}
diff --git a/internal/service/advancedcluster/move_state_test.go b/internal/service/advancedcluster/move_state_test.go
deleted file mode 100644
index 179e566a09..0000000000
--- a/internal/service/advancedcluster/move_state_test.go
+++ /dev/null
@@ -1,84 +0,0 @@
-package advancedcluster_test
-
-import (
- "fmt"
- "regexp"
- "testing"
-
- "github.com/hashicorp/terraform-plugin-testing/helper/resource"
- "github.com/hashicorp/terraform-plugin-testing/tfversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
-)
-
-func TestAccAdvancedCluster_moveNotSupportedLegacySchema(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- var (
- projectID = acc.ProjectIDExecution(t)
- clusterName = acc.RandomClusterName()
- )
- resource.ParallelTest(t, resource.TestCase{
- TerraformVersionChecks: []tfversion.TerraformVersionCheck{
- tfversion.SkipBelow(tfversion.Version1_8_0),
- },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configMoveFirst(projectID, clusterName),
- },
- {
- Config: configMoveSecond(projectID, clusterName),
- ExpectError: regexp.MustCompile("Move Resource State Not Supported"),
- },
- {
- Config: configMoveFirst(projectID, clusterName),
- },
- },
- })
-}
-
-func configMoveFirst(projectID, clusterName string) string {
- return fmt.Sprintf(`
- resource "mongodbatlas_cluster" "old" {
- project_id = %[1]q
- name = %[2]q
- cluster_type = "REPLICASET"
- provider_name = "AWS"
- provider_instance_size_name = "M10"
- replication_specs {
- num_shards = 1
- regions_config {
- region_name = "US_WEST_2"
- electable_nodes = 3
- priority = 7
- }
- }
- }
- `, projectID, clusterName)
-}
-
-func configMoveSecond(projectID, clusterName string) string {
- return fmt.Sprintf(`
- moved {
- from = mongodbatlas_cluster.old
- to = mongodbatlas_advanced_cluster.test
- }
-
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_2"
- }
- }
- }
- `, projectID, clusterName)
-}
diff --git a/internal/service/advancedcluster/resource_advanced_cluster.go b/internal/service/advancedcluster/resource_advanced_cluster.go
index d734e29c9d..be11080263 100644
--- a/internal/service/advancedcluster/resource_advanced_cluster.go
+++ b/internal/service/advancedcluster/resource_advanced_cluster.go
@@ -92,7 +92,7 @@ func Resource() *schema.Resource {
"delete_on_create_timeout": {
Type: schema.TypeBool,
Optional: true,
- Description: "Flag that indicates whether to delete the cluster if the cluster creation times out. Default is false.",
+ Description: "Indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, the provider triggers deletion and returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
},
"bi_connector_config": {
Type: schema.TypeList,
@@ -462,7 +462,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
if isFlex {
flexClusterReq := advancedclustertpf.NewFlexCreateReq(clusterName, d.Get("termination_protection_enabled").(bool), conversion.ExpandTagsFromSetSchema(d), replicationSpecs)
- flexClusterResp, err := flexcluster.CreateFlexCluster(ctx, projectID, clusterName, flexClusterReq, connV2.FlexClustersApi)
+ flexClusterResp, err := flexcluster.CreateFlexCluster(ctx, projectID, clusterName, flexClusterReq, connV2.FlexClustersApi, &timeout)
if err != nil {
return diag.FromErr(fmt.Errorf(flexcluster.ErrorCreateFlex, err))
}
@@ -1326,9 +1326,10 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
}
replicationSpecs := expandAdvancedReplicationSpecs(d.Get("replication_specs").([]any), nil)
+ timeout := d.Timeout(schema.TimeoutDelete)
if advancedclustertpf.IsFlex(replicationSpecs) {
- err := flexcluster.DeleteFlexCluster(ctx, projectID, clusterName, connV2.FlexClustersApi)
+ err := flexcluster.DeleteFlexCluster(ctx, projectID, clusterName, connV2.FlexClustersApi, timeout)
if err != nil {
return diag.FromErr(fmt.Errorf(flexcluster.ErrorDeleteFlex, clusterName, err))
}
@@ -1433,7 +1434,7 @@ func waitStateTransitionFlexUpgrade(ctx context.Context, client admin.FlexCluste
GroupId: projectID,
Name: name,
}
- flexClusterResp, err := flexcluster.WaitStateTransition(ctx, flexClusterParams, client, []string{retrystrategy.RetryStrategyUpdatingState}, []string{retrystrategy.RetryStrategyIdleState}, true, &timeout)
+ flexClusterResp, err := flexcluster.WaitStateTransition(ctx, flexClusterParams, client, []string{retrystrategy.RetryStrategyUpdatingState}, []string{retrystrategy.RetryStrategyIdleState}, true, timeout)
if err != nil {
return nil, err
}
@@ -1539,8 +1540,9 @@ func resourceUpdateFlexCluster(ctx context.Context, flexUpdateRequest *admin.Fle
ids := conversion.DecodeStateID(d.Id())
projectID := ids["project_id"]
clusterName := ids["cluster_name"]
+ timeout := d.Timeout(schema.TimeoutUpdate)
- _, err := flexcluster.UpdateFlexCluster(ctx, projectID, clusterName, flexUpdateRequest, connV2.FlexClustersApi)
+ _, err := flexcluster.UpdateFlexCluster(ctx, projectID, clusterName, flexUpdateRequest, connV2.FlexClustersApi, timeout)
if err != nil {
return diag.FromErr(fmt.Errorf(flexcluster.ErrorUpdateFlex, err))
}
diff --git a/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go b/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go
deleted file mode 100644
index 27e6799141..0000000000
--- a/internal/service/advancedcluster/resource_advanced_cluster_migration_test.go
+++ /dev/null
@@ -1,292 +0,0 @@
-package advancedcluster_test
-
-import (
- "fmt"
- "testing"
-
- "github.com/hashicorp/terraform-plugin-testing/helper/resource"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
-)
-
-// last version that did not support new sharding schema or attributes
-const versionBeforeISSRelease = "1.17.6"
-
-func TestMigAdvancedCluster_replicaSetAWSProvider(t *testing.T) {
- migTest(t, replicaSetAWSProviderTestCase)
-}
-
-func TestMigAdvancedCluster_replicaSetMultiCloud(t *testing.T) {
- migTest(t, replicaSetMultiCloudTestCase)
-}
-
-func TestMigAdvancedCluster_singleShardedMultiCloud(t *testing.T) {
- migTest(t, singleShardedMultiCloudTestCase)
-}
-
-func TestMigAdvancedCluster_symmetricGeoShardedOldSchema(t *testing.T) {
- migTest(t, symmetricGeoShardedOldSchemaTestCase)
-}
-
-func TestMigAdvancedCluster_asymmetricShardedNewSchema(t *testing.T) {
- mig.SkipIfVersionBelow(t, "1.23.0") // version where sharded cluster tier auto-scaling was introduced
- migTest(t, asymmetricShardedNewSchemaTestCase)
-}
-
-func TestMigAdvancedCluster_replicaSetAWSProviderUpdate(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- var (
- projectID = acc.ProjectIDExecution(t)
- clusterName = acc.RandomClusterName()
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- ExternalProviders: acc.ExternalProviders(versionBeforeISSRelease),
- Config: configAWSProvider(t, false, ReplicaSetAWSConfig{
- ProjectID: projectID,
- ClusterName: clusterName,
- ClusterType: "REPLICASET",
- DiskSizeGB: 60,
- NodeCountElectable: 3,
- WithAnalyticsSpecs: true,
- }),
- Check: checkReplicaSetAWSProvider(false, projectID, clusterName, 60, 3, false, false),
- },
- {
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- Config: configAWSProvider(t, false, ReplicaSetAWSConfig{
- ProjectID: projectID,
- ClusterName: clusterName,
- ClusterType: "REPLICASET",
- DiskSizeGB: 60,
- NodeCountElectable: 5,
- WithAnalyticsSpecs: true,
- }),
- Check: checkReplicaSetAWSProvider(false, projectID, clusterName, 60, 5, true, true),
- },
- },
- })
-}
-
-func TestMigAdvancedCluster_geoShardedOldSchemaUpdate(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 12)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { mig.PreCheckBasic(t) },
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- ExternalProviders: acc.ExternalProviders(versionBeforeISSRelease),
- Config: configGeoShardedOldSchema(t, false, projectID, clusterName, 2, 2, false),
- Check: checkGeoShardedOldSchema(false, clusterName, 2, 2, false, false),
- },
- {
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- Config: configGeoShardedOldSchema(t, false, projectID, clusterName, 2, 1, false),
- Check: checkGeoShardedOldSchema(false, clusterName, 2, 1, true, false),
- },
- },
- })
-}
-
-func TestMigAdvancedCluster_shardedMigrationFromOldToNewSchema(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
-
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { mig.PreCheckBasic(t) },
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- ExternalProviders: acc.ExternalProviders(versionBeforeISSRelease),
- Config: configShardedTransitionOldToNewSchema(t, false, projectID, clusterName, false, false),
- Check: checkShardedTransitionOldToNewSchema(false, false),
- },
- {
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- Config: configShardedTransitionOldToNewSchema(t, false, projectID, clusterName, true, false),
- Check: checkShardedTransitionOldToNewSchema(false, true),
- },
- },
- })
-}
-
-func TestMigAdvancedCluster_geoShardedMigrationFromOldToNewSchema(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { mig.PreCheckBasic(t) },
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- ExternalProviders: acc.ExternalProviders(versionBeforeISSRelease),
- Config: configGeoShardedTransitionOldToNewSchema(t, false, projectID, clusterName, false),
- Check: checkGeoShardedTransitionOldToNewSchema(false, false),
- },
- {
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- Config: configGeoShardedTransitionOldToNewSchema(t, false, projectID, clusterName, true),
- Check: checkGeoShardedTransitionOldToNewSchema(false, true),
- },
- },
- })
-}
-
-func TestMigAdvancedCluster_partialAdvancedConf(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- mig.SkipIfVersionBelow(t, "1.24.0") // version where tls_cipher_config_mode was introduced
- var (
- projectID = acc.ProjectIDExecution(t)
- clusterName = acc.RandomClusterName()
- // necessary to test oplog_min_retention_hours
- autoScalingConfigured = `
- auto_scaling {
- disk_gb_enabled = true
- }`
- extraArgs = `
- advanced_configuration {
- fail_index_key_too_long = false
- javascript_enabled = true
- minimum_enabled_tls_protocol = "TLS1_2"
- no_table_scan = false
- oplog_min_retention_hours = 4
- }
-
- bi_connector_config {
- enabled = true
- }`
-
- extraArgsUpdated = `
- advanced_configuration {
- fail_index_key_too_long = false
- javascript_enabled = true
- minimum_enabled_tls_protocol = "TLS1_2"
- no_table_scan = false
- default_read_concern = "available"
- sample_size_bi_connector = 110
- sample_refresh_interval_bi_connector = 310
- default_max_time_ms = 65
- tls_cipher_config_mode = "CUSTOM"
- custom_openssl_cipher_config_tls12 = ["TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"]
- }
-
- bi_connector_config {
- enabled = false
- read_preference = "secondary"
- }`
- configInitial = configPartialAdvancedConfig(projectID, clusterName, extraArgs, autoScalingConfigured)
- configUpdated = configPartialAdvancedConfig(projectID, clusterName, extraArgsUpdated, "")
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- ExternalProviders: mig.ExternalProviders(),
- Config: configInitial,
- Check: resource.ComposeAggregateTestCheckFunc(
- acc.CheckExistsCluster(resourceName),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.fail_index_key_too_long", "false"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.javascript_enabled", "true"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.minimum_enabled_tls_protocol", "TLS1_2"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.no_table_scan", "false"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.oplog_min_retention_hours", "4"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.tls_cipher_config_mode", "DEFAULT"),
- resource.TestCheckResourceAttr(resourceName, "bi_connector_config.0.enabled", "true"),
- ),
- },
- mig.TestStepCheckEmptyPlan(configInitial),
- {
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- Config: configUpdated,
- Check: resource.ComposeAggregateTestCheckFunc(
- acc.CheckExistsCluster(resourceName),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.fail_index_key_too_long", "false"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.javascript_enabled", "true"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.minimum_enabled_tls_protocol", "TLS1_2"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.no_table_scan", "false"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.sample_refresh_interval_bi_connector", "310"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.sample_size_bi_connector", "110"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.default_max_time_ms", "65"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.tls_cipher_config_mode", "CUSTOM"),
- resource.TestCheckResourceAttr(resourceName, "advanced_configuration.0.custom_openssl_cipher_config_tls12.#", "1"),
- resource.TestCheckResourceAttr(resourceName, "bi_connector_config.0.enabled", "false"),
- resource.TestCheckResourceAttr(resourceName, "bi_connector_config.0.read_preference", "secondary"),
- ),
- },
- mig.TestStepCheckEmptyPlan(configUpdated),
- },
- })
-}
-
-func TestMigAdvancedCluster_newSchemaFromAutoscalingDisabledToEnabled(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: acc.PreCheckBasicSleep(t, nil, projectID, clusterName),
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- ExternalProviders: acc.ExternalProviders("1.22.0"), // last version before cluster tier auto-scaling per shard was introduced
- Config: configShardedTransitionOldToNewSchema(t, false, projectID, clusterName, true, false),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER"),
- },
- {
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- Config: configShardedTransitionOldToNewSchema(t, false, projectID, clusterName, true, true),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "SHARD"),
- },
- },
- })
-}
-
-func configPartialAdvancedConfig(projectID, clusterName, extraArgs, autoScaling string) string {
- return fmt.Sprintf(`
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- cluster_type = "REPLICASET"
-
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_WEST_2"
- %[4]s
- }
- }
- %[3]s
- }
- `, projectID, clusterName, extraArgs, autoScaling)
-}
-
-// migTest is a helper function to run migration tests in normal case (SDKv2 -> SDKv2, TPF -> TPF), or in mixed case (SDKv2 -> TPF).
-func migTest(t *testing.T, testCaseFunc func(t *testing.T, usePreviewProvider bool) resource.TestCase) {
- t.Helper()
- usePreviewProvider := config.PreviewProviderV2AdvancedCluster()
- if acc.IsTestSDKv2ToTPF() {
- usePreviewProvider = false
- t.Log("Running test SDKv2 to TPF")
- }
- testCase := testCaseFunc(t, usePreviewProvider)
- mig.CreateAndRunTest(t, &testCase)
-}
diff --git a/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go b/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go
deleted file mode 100644
index 8bc208c61a..0000000000
--- a/internal/service/advancedcluster/resource_advanced_cluster_state_upgrader_test.go
+++ /dev/null
@@ -1,138 +0,0 @@
-package advancedcluster_test
-
-import (
- "fmt"
- "testing"
-
- "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
-)
-
-func TestMigAdvancedCluster_empty_advancedConfig(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- acc.SkipInUnitTest(t) // needed because TF test infra is not used
- v0State := map[string]any{
- "project_id": "test-id",
- "name": "test-cluster",
- "cluster_type": "REPLICASET",
- "replication_specs": []any{
- map[string]any{
- "region_configs": []any{
- map[string]any{
- "electable_specs": []any{
- map[string]any{
- "instance_size": "M30",
- "node_count": 3,
- },
- },
- "provider_name": "AWS",
- "region_name": "US_WEST_2",
- "priority": 7,
- },
- },
- },
- },
- "bi_connector": []any{
- map[string]any{
- "enabled": 1,
- "read_preference": "secondary",
- },
- },
- }
-
- v0Config := terraform.NewResourceConfigRaw(v0State)
- diags := advancedcluster.ResourceV0().Validate(v0Config)
-
- if len(diags) > 0 {
- t.Error("test precondition failed - invalid mongodb cluster v0 config")
-
- return
- }
-
- // test migrate function
- v1State := advancedcluster.MigrateBIConnectorConfig(v0State)
-
- v1Config := terraform.NewResourceConfigRaw(v1State)
- diags = advancedcluster.Resource().Validate(v1Config)
- if len(diags) > 0 {
- fmt.Println(diags)
- t.Error("migrated cluster advanced config is invalid")
-
- return
- }
-}
-
-func TestMigAdvancedCluster_v0StateUpgrade_ReplicationSpecs(t *testing.T) {
- acc.SkipIfAdvancedClusterV2Schema(t) // This test is specific to the legacy schema
- acc.SkipInUnitTest(t) // needed because TF test infra is not used
- v0State := map[string]any{
- "project_id": "test-id",
- "name": "test-cluster",
- "cluster_type": "REPLICASET",
- "backup_enabled": true,
- "disk_size_gb": 256,
- "replication_specs": []any{
- map[string]any{
- "zone_name": "Test Zone",
- "region_configs": []any{
- map[string]any{
- "priority": 7,
- "provider_name": "AWS",
- "region_name": "US_WEST_2",
- "electable_specs": []any{
- map[string]any{
- "instance_size": "M30",
- "node_count": 3,
- },
- },
- "read_only_specs": []any{
- map[string]any{
- "disk_iops": 0,
- "instance_size": "M30",
- "node_count": 0,
- },
- },
- "auto_scaling": []any{
- map[string]any{
- "compute_enabled": true,
- "compute_max_instance_size": "M60",
- "compute_min_instance_size": "M30",
- "compute_scale_down_enabled": true,
- "disk_gb_enabled": false,
- },
- },
- },
- },
- },
- },
- }
-
- v0Config := terraform.NewResourceConfigRaw(v0State)
- diags := advancedcluster.ResourceV0().Validate(v0Config)
-
- if diags.HasError() {
- fmt.Println(diags)
- t.Error("test precondition failed - invalid mongodb cluster v0 config")
-
- return
- }
-
- // test migrate function
- v1State := advancedcluster.MigrateBIConnectorConfig(v0State)
-
- v1Config := terraform.NewResourceConfigRaw(v1State)
- diags = advancedcluster.Resource().Validate(v1Config)
- if diags.HasError() {
- fmt.Println(diags)
- t.Error("migrated advanced cluster replication_specs invalid")
-
- return
- }
-
- if len(v1State["replication_specs"].([]any)) != len(v0State["replication_specs"].([]any)) {
- t.Error("migrated replication specs did not contain the same number of elements")
-
- return
- }
-}
diff --git a/internal/service/advancedclustertpf/common_admin_sdk.go b/internal/service/advancedclustertpf/common_admin_sdk.go
index b25727f984..85a6c0fd87 100644
--- a/internal/service/advancedclustertpf/common_admin_sdk.go
+++ b/internal/service/advancedclustertpf/common_admin_sdk.go
@@ -5,8 +5,6 @@ import (
"fmt"
"net/http"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
- admin20240805 "go.mongodb.org/atlas-sdk/v20240805005/admin"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
"github.com/hashicorp/terraform-plugin-framework/diag"
@@ -18,7 +16,7 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster"
)
-func CreateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin.ClusterDescription20240805, waitParams *ClusterWaitParams, usingNewShardingConfig bool) *admin.ClusterDescription20240805 {
+func CreateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin.ClusterDescription20240805, waitParams *ClusterWaitParams) *admin.ClusterDescription20240805 {
var (
pauseAfter = req.GetPaused()
clusterResp *admin.ClusterDescription20240805
@@ -26,12 +24,7 @@ func CreateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.
if pauseAfter {
req.Paused = nil
}
- if usingNewShardingConfig {
- clusterResp = createClusterLatest(ctx, diags, client, req, waitParams)
- } else {
- oldReq := ConvertClusterDescription20241023to20240805(req)
- clusterResp = createCluster20240805(ctx, diags, client, oldReq, waitParams)
- }
+ clusterResp = createClusterLatest(ctx, diags, client, req, waitParams)
if diags.HasError() {
return nil
}
@@ -41,15 +34,6 @@ func CreateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.
return clusterResp
}
-func createCluster20240805(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin20240805.ClusterDescription20240805, waitParams *ClusterWaitParams) *admin.ClusterDescription20240805 {
- _, _, err := client.AtlasV220240805.ClustersApi.CreateCluster(ctx, waitParams.ProjectID, req).Execute()
- if err != nil {
- addErrorDiag(diags, operationCreate20240805, defaultAPIErrorDetails(waitParams.ClusterName, err))
- return nil
- }
- return AwaitChanges(ctx, client, waitParams, operationCreate20240805, diags)
-}
-
func createClusterLatest(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin.ClusterDescription20240805, waitParams *ClusterWaitParams) *admin.ClusterDescription20240805 {
_, _, err := client.AtlasV2.ClustersApi.CreateCluster(ctx, waitParams.ProjectID, req).Execute()
if err != nil {
@@ -70,54 +54,33 @@ func updateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.
// ProcessArgs.ClusterAdvancedConfig is managed through create/updateCluster APIs instead of /processArgs APIs but since corresponding TF attributes
// belong in the advanced_configuration attribute we still need to check for any changes
-func UpdateAdvancedConfiguration(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, p *ProcessArgs, waitParams *ClusterWaitParams) (legacy *admin20240530.ClusterDescriptionProcessArgs, latest *admin.ClusterDescriptionProcessArgs20240805, changed bool) {
+func UpdateAdvancedConfiguration(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, p *ProcessArgs, waitParams *ClusterWaitParams) (latest *admin.ClusterDescriptionProcessArgs20240805, changed bool) {
var (
- err error
- advConfig *admin.ClusterDescriptionProcessArgs20240805
- legacyAdvConfig *admin20240530.ClusterDescriptionProcessArgs
- projectID = waitParams.ProjectID
- clusterName = waitParams.ClusterName
+ err error
+ advConfig *admin.ClusterDescriptionProcessArgs20240805
+ projectID = waitParams.ProjectID
+ clusterName = waitParams.ClusterName
)
if !update.IsZeroValues(p.ArgsDefault) {
changed = true
advConfig, _, err = client.AtlasV2.ClustersApi.UpdateProcessArgs(ctx, projectID, clusterName, p.ArgsDefault).Execute()
if err != nil {
addErrorDiag(diags, operationAdvancedConfigurationUpdate, defaultAPIErrorDetails(clusterName, err))
- return nil, nil, false
+ return nil, false
}
_ = AwaitChanges(ctx, client, waitParams, operationAdvancedConfigurationUpdate, diags)
if diags.HasError() {
- return nil, nil, false
- }
- }
- if !update.IsZeroValues(p.ArgsLegacy) {
- changed = true
- legacyAdvConfig, _, err = client.AtlasV220240530.ClustersApi.UpdateClusterAdvancedConfiguration(ctx, projectID, clusterName, p.ArgsLegacy).Execute()
- if err != nil {
- addErrorDiag(diags, operationAdvancedConfigurationUpdate20240530, defaultAPIErrorDetails(clusterName, err))
- diags.AddError(errorAdvancedConfUpdateLegacy, defaultAPIErrorDetails(clusterName, err))
- return nil, nil, false
- }
- _ = AwaitChanges(ctx, client, waitParams, operationAdvancedConfigurationUpdate20240530, diags)
- if diags.HasError() {
- return nil, nil, false
+ return nil, false
}
}
if !update.IsZeroValues(p.ClusterAdvancedConfig) {
changed = true
}
- return legacyAdvConfig, advConfig, changed
+ return advConfig, changed
}
-func ReadIfUnsetAdvancedConfiguration(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, projectID, clusterName string, configLegacy *admin20240530.ClusterDescriptionProcessArgs, configNew *admin.ClusterDescriptionProcessArgs20240805) (legacy *admin20240530.ClusterDescriptionProcessArgs, latest *admin.ClusterDescriptionProcessArgs20240805) {
+func ReadIfUnsetAdvancedConfiguration(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, projectID, clusterName string, configNew *admin.ClusterDescriptionProcessArgs20240805) (latest *admin.ClusterDescriptionProcessArgs20240805) {
var err error
- if configLegacy == nil {
- configLegacy, _, err = client.AtlasV220240530.ClustersApi.GetClusterAdvancedConfiguration(ctx, projectID, clusterName).Execute()
- if err != nil {
- diags.AddError(errorAdvancedConfReadLegacy, defaultAPIErrorDetails(clusterName, err))
- return
- }
- }
if configNew == nil {
configNew, _, err = client.AtlasV2.ClustersApi.GetProcessArgs(ctx, projectID, clusterName).Execute()
if err != nil {
@@ -125,7 +88,7 @@ func ReadIfUnsetAdvancedConfiguration(ctx context.Context, diags *diag.Diagnosti
return
}
}
- return configLegacy, configNew
+ return configNew
}
func UpgradeTenant(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, waitParams *ClusterWaitParams, req *admin.LegacyAtlasTenantClusterUpgradeRequest) *admin.ClusterDescription20240805 {
@@ -172,7 +135,7 @@ func DeleteCluster(ctx context.Context, diags *diag.Diagnostics, client *config.
addErrorDiag(diags, operationDelete, defaultAPIErrorDetails(waitParams.ClusterName, err))
return
}
- err := flexcluster.DeleteFlexCluster(ctx, waitParams.ProjectID, waitParams.ClusterName, client.AtlasV2.FlexClustersApi)
+ err := flexcluster.DeleteFlexCluster(ctx, waitParams.ProjectID, waitParams.ClusterName, client.AtlasV2.FlexClustersApi, waitParams.Timeout)
if err != nil {
addErrorDiag(diags, operationDeleteFlex, defaultAPIErrorDetails(waitParams.ClusterName, err))
return
diff --git a/internal/service/advancedclustertpf/data_source.go b/internal/service/advancedclustertpf/data_source.go
index 52d48423f2..4f4e9fd32d 100644
--- a/internal/service/advancedclustertpf/data_source.go
+++ b/internal/service/advancedclustertpf/data_source.go
@@ -2,7 +2,6 @@ package advancedclustertpf
import (
"context"
- "fmt"
"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/hashicorp/terraform-plugin-framework/diag"
@@ -14,12 +13,6 @@ import (
var _ datasource.DataSource = &ds{}
var _ datasource.DataSourceWithConfigure = &ds{}
-const (
- errorReadDatasource = "Error reading advanced cluster datasource"
- errorReadDatasourceForceAsymmetric = "Error reading advanced cluster datasource, was expecting symmetric shards but found asymmetric shards"
- errorReadDatasourceForceAsymmetricDetail = "Cluster name %s. Please add `use_replication_spec_per_shard = true` to your data source configuration to enable asymmetric shard support. %s"
-)
-
func DataSource() datasource.DataSource {
return &ds{
DSCommon: config.DSCommon{
@@ -52,7 +45,6 @@ func (d *ds) Read(ctx context.Context, req datasource.ReadRequest, resp *datasou
func (d *ds) readCluster(ctx context.Context, diags *diag.Diagnostics, modelDS *TFModelDS) *TFModelDS {
clusterName := modelDS.Name.ValueString()
projectID := modelDS.ProjectID.ValueString()
- useReplicationSpecPerShard := modelDS.UseReplicationSpecPerShard.ValueBool()
clusterResp, flexClusterResp := GetClusterDetails(ctx, diags, projectID, clusterName, d.Client, false)
if diags.HasError() {
return nil
@@ -67,16 +59,11 @@ func (d *ds) readCluster(ctx context.Context, diags *diag.Diagnostics, modelDS *
}
return conversion.CopyModel[TFModelDS](modelOut)
}
- modelOut, extraInfo := getBasicClusterModel(ctx, diags, d.Client, clusterResp, useReplicationSpecPerShard)
+ modelOut := getBasicClusterModel(ctx, diags, d.Client, clusterResp)
if diags.HasError() {
return nil
}
- if extraInfo.UseOldShardingConfigFailed {
- diags.AddError(errorReadDatasourceForceAsymmetric, fmt.Sprintf(errorReadDatasourceForceAsymmetricDetail, clusterName, DeprecationOldSchemaAction))
- return nil
- }
updateModelAdvancedConfig(ctx, diags, d.Client, modelOut, &ProcessArgs{
- ArgsLegacy: nil,
ArgsDefault: nil,
ClusterAdvancedConfig: clusterResp.AdvancedConfiguration,
})
@@ -84,6 +71,5 @@ func (d *ds) readCluster(ctx context.Context, diags *diag.Diagnostics, modelDS *
return nil
}
modelOutDS := conversion.CopyModel[TFModelDS](modelOut)
- modelOutDS.UseReplicationSpecPerShard = modelDS.UseReplicationSpecPerShard // attrs not in resource model
return modelOutDS
}
diff --git a/internal/service/advancedclustertpf/model_ClusterDescription20240805.go b/internal/service/advancedclustertpf/model_ClusterDescription20240805.go
index 380cfad2a7..d6b0180f2e 100644
--- a/internal/service/advancedclustertpf/model_ClusterDescription20240805.go
+++ b/internal/service/advancedclustertpf/model_ClusterDescription20240805.go
@@ -2,14 +2,13 @@ package advancedclustertpf
import (
"context"
- "fmt"
- "slices"
- "strings"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
"github.com/hashicorp/terraform-plugin-framework/diag"
"github.com/hashicorp/terraform-plugin-framework/types"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
const (
@@ -18,19 +17,11 @@ const (
errorReplicationSpecIDNotSet = "replicationSpecID not set for zoneName %s"
)
-type ExtraAPIInfo struct {
- ZoneNameNumShards map[string]int64
- ZoneNameReplicationSpecIDs map[string]string
- ContainerIDs map[string]string
- UseNewShardingConfig bool
- UseOldShardingConfigFailed bool
-}
-
-func NewTFModel(ctx context.Context, input *admin.ClusterDescription20240805, diags *diag.Diagnostics, apiInfo ExtraAPIInfo) *TFModel {
+func NewTFModel(ctx context.Context, input *admin.ClusterDescription20240805, diags *diag.Diagnostics, containerIDs map[string]string) *TFModel {
biConnector := NewBiConnectorConfigObjType(ctx, input.BiConnector, diags)
connectionStrings := NewConnectionStringsObjType(ctx, input.ConnectionStrings, diags)
labels := NewLabelsObjType(ctx, diags, input.Labels)
- replicationSpecs := NewReplicationSpecsObjType(ctx, input.ReplicationSpecs, diags, &apiInfo)
+ replicationSpecs := NewReplicationSpecsObjType(ctx, input.ReplicationSpecs, diags, containerIDs)
tags := NewTagsObjType(ctx, diags, input.Tags)
pinnedFCV := NewPinnedFCVObjType(ctx, input, diags)
if diags.HasError() {
@@ -45,7 +36,6 @@ func NewTFModel(ctx context.Context, input *admin.ClusterDescription20240805, di
ConfigServerType: types.StringValue(conversion.SafeValue(input.ConfigServerType)),
ConnectionStrings: connectionStrings,
CreateDate: types.StringValue(conversion.SafeValue(conversion.TimePtrToStringPtr(input.CreateDate))),
- DiskSizeGB: types.Float64PointerValue(findFirstRegionDiskSizeGB(input.ReplicationSpecs)),
EncryptionAtRestProvider: types.StringValue(conversion.SafeValue(input.EncryptionAtRestProvider)),
GlobalClusterSelfManagedSharding: types.BoolValue(conversion.SafeValue(input.GlobalClusterSelfManagedSharding)),
ProjectID: types.StringValue(conversion.SafeValue(input.GroupId)),
@@ -113,16 +103,11 @@ func NewLabelsObjType(ctx context.Context, diags *diag.Diagnostics, input *[]adm
return conversion.ToTFMapOfString(ctx, diags, elms)
}
-func NewReplicationSpecsObjType(ctx context.Context, input *[]admin.ReplicationSpec20240805, diags *diag.Diagnostics, apiInfo *ExtraAPIInfo) types.List {
+func NewReplicationSpecsObjType(ctx context.Context, input *[]admin.ReplicationSpec20240805, diags *diag.Diagnostics, containerIDs map[string]string) types.List {
if input == nil {
return types.ListNull(ReplicationSpecsObjType)
}
- var tfModels *[]TFReplicationSpecsModel
- if apiInfo.UseNewShardingConfig {
- tfModels = convertReplicationSpecs(ctx, input, diags, apiInfo)
- } else {
- tfModels = convertReplicationSpecsLegacy(ctx, input, diags, apiInfo)
- }
+ tfModels := convertReplicationSpecs(ctx, input, diags, containerIDs)
if diags.HasError() {
return types.ListNull(ReplicationSpecsObjType)
}
@@ -144,7 +129,7 @@ func NewPinnedFCVObjType(ctx context.Context, cluster *admin.ClusterDescription2
return objType
}
-func convertReplicationSpecs(ctx context.Context, input *[]admin.ReplicationSpec20240805, diags *diag.Diagnostics, apiInfo *ExtraAPIInfo) *[]TFReplicationSpecsModel {
+func convertReplicationSpecs(ctx context.Context, input *[]admin.ReplicationSpec20240805, diags *diag.Diagnostics, containerIDs map[string]string) *[]TFReplicationSpecsModel {
tfModels := make([]TFReplicationSpecsModel, len(*input))
for i, item := range *input {
regionConfigs := NewRegionConfigsObjType(ctx, item.RegionConfigs, diags)
@@ -153,12 +138,9 @@ func convertReplicationSpecs(ctx context.Context, input *[]admin.ReplicationSpec
diags.AddError(errorZoneNameNotSet, errorZoneNameNotSet)
return &tfModels
}
- legacyID := apiInfo.ZoneNameReplicationSpecIDs[zoneName]
- containerIDs := selectContainerIDs(&item, apiInfo.ContainerIDs)
+ containerIDs := selectContainerIDs(&item, containerIDs)
tfModels[i] = TFReplicationSpecsModel{
- Id: types.StringValue(legacyID),
ExternalId: types.StringValue(conversion.SafeValue(item.Id)),
- NumShards: types.Int64Value(1),
ContainerId: conversion.ToTFMapOfString(ctx, diags, containerIDs),
RegionConfigs: regionConfigs,
ZoneId: types.StringValue(conversion.SafeValue(item.ZoneId)),
@@ -170,6 +152,10 @@ func convertReplicationSpecs(ctx context.Context, input *[]admin.ReplicationSpec
func selectContainerIDs(spec *admin.ReplicationSpec20240805, allIDs map[string]string) map[string]string {
containerIDs := map[string]string{}
+ if allIDs == nil {
+ return containerIDs
+ }
+
regions := spec.GetRegionConfigs()
for i := range regions {
regionConfig := regions[i]
@@ -185,51 +171,6 @@ func selectContainerIDs(spec *admin.ReplicationSpec20240805, allIDs map[string]s
return containerIDs
}
-func convertReplicationSpecsLegacy(ctx context.Context, input *[]admin.ReplicationSpec20240805, diags *diag.Diagnostics, apiInfo *ExtraAPIInfo) *[]TFReplicationSpecsModel {
- tfModels := []TFReplicationSpecsModel{}
- tfModelsSkipIndexes := []int{}
- for i, item := range *input {
- if slices.Contains(tfModelsSkipIndexes, i) {
- continue
- }
- regionConfigs := NewRegionConfigsObjType(ctx, item.RegionConfigs, diags)
- zoneName := item.GetZoneName()
- if zoneName == "" {
- diags.AddError(errorZoneNameNotSet, errorZoneNameNotSet)
- return &tfModels
- }
- numShards, ok := apiInfo.ZoneNameNumShards[zoneName]
- errMsg := []string{}
- if !ok {
- errMsg = append(errMsg, fmt.Sprintf(errorNumShardsNotSet, zoneName))
- }
- legacyID, ok := apiInfo.ZoneNameReplicationSpecIDs[zoneName]
- if !ok {
- errMsg = append(errMsg, fmt.Sprintf(errorReplicationSpecIDNotSet, zoneName))
- }
- if len(errMsg) > 0 {
- diags.AddError("replicationSpecsLegacySchema", strings.Join(errMsg, ", "))
- return &tfModels
- }
- if numShards > 1 {
- for j := 1; j < int(numShards); j++ {
- tfModelsSkipIndexes = append(tfModelsSkipIndexes, i+j)
- }
- }
- containerIDs := selectContainerIDs(&item, apiInfo.ContainerIDs)
- tfModels = append(tfModels, TFReplicationSpecsModel{
- ContainerId: conversion.ToTFMapOfString(ctx, diags, containerIDs),
- ExternalId: types.StringValue(""), // Not meaningful with legacy schema
- Id: types.StringValue(legacyID),
- RegionConfigs: regionConfigs,
- NumShards: types.Int64Value(numShards),
- ZoneId: types.StringValue(conversion.SafeValue(item.ZoneId)),
- ZoneName: types.StringValue(conversion.SafeValue(item.ZoneName)),
- })
- }
- return &tfModels
-}
-
func NewTagsObjType(ctx context.Context, diags *diag.Diagnostics, input *[]admin.ResourceTag) types.Map {
elms := make(map[string]string)
if input != nil {
diff --git a/internal/service/advancedclustertpf/model_ClusterDescriptionProcessArgs20240805.go b/internal/service/advancedclustertpf/model_ClusterDescriptionProcessArgs20240805.go
index 76ed6ef3e6..c144bb1983 100644
--- a/internal/service/advancedclustertpf/model_ClusterDescriptionProcessArgs20240805.go
+++ b/internal/service/advancedclustertpf/model_ClusterDescriptionProcessArgs20240805.go
@@ -15,30 +15,17 @@ func AddAdvancedConfig(ctx context.Context, tfModel *TFModel, input *ProcessArgs
var advancedConfig TFAdvancedConfigurationModel
var customCipherConfig *[]string
- if input.ArgsDefault != nil && input.ArgsLegacy != nil {
+ if input.ArgsDefault != nil {
// Using the new API as the source of truth
changeStreamOptionsPreAndPostImagesExpireAfterSeconds := input.ArgsDefault.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds
if changeStreamOptionsPreAndPostImagesExpireAfterSeconds == nil {
// special behavior using -1 when it is unset by the user
changeStreamOptionsPreAndPostImagesExpireAfterSeconds = conversion.Pointer(-1)
}
- // When MongoDBMajorVersion is not 4.4 or lower, the API response for fail_index_key_too_long will always be null, to ensure no consistency issues, we need to match the config
- failIndexKeyTooLong := input.ArgsLegacy.GetFailIndexKeyTooLong()
- if tfModel != nil {
- stateConfig := tfModel.AdvancedConfiguration
- stateConfigSDK := NewAtlasReqAdvancedConfigurationLegacy(ctx, &stateConfig, diags)
- if diags.HasError() {
- return
- }
- if stateConfigSDK != nil && stateConfigSDK.GetFailIndexKeyTooLong() != failIndexKeyTooLong {
- failIndexKeyTooLong = stateConfigSDK.GetFailIndexKeyTooLong()
- }
- }
+
advancedConfig = TFAdvancedConfigurationModel{
ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds: types.Int64PointerValue(conversion.IntPtrToInt64Ptr(changeStreamOptionsPreAndPostImagesExpireAfterSeconds)),
DefaultWriteConcern: types.StringValue(conversion.SafeValue(input.ArgsDefault.DefaultWriteConcern)),
- DefaultReadConcern: types.StringValue(conversion.SafeValue(input.ArgsLegacy.DefaultReadConcern)),
- FailIndexKeyTooLong: types.BoolValue(failIndexKeyTooLong),
JavascriptEnabled: types.BoolValue(conversion.SafeValue(input.ArgsDefault.JavascriptEnabled)),
NoTableScan: types.BoolValue(conversion.SafeValue(input.ArgsDefault.NoTableScan)),
OplogMinRetentionHours: types.Float64Value(conversion.SafeValue(input.ArgsDefault.OplogMinRetentionHours)),
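The `-1` sentinel that survives this simplification is easy to miss: when the user never set `change_stream_options_pre_and_post_images_expire_after_seconds`, the API returns nil and the provider writes `-1` to state. A minimal sketch of that normalization (function name is illustrative):

```go
package main

import "fmt"

// normalizeExpireSeconds mirrors the sentinel kept in AddAdvancedConfig:
// a nil value from the API is mapped to -1 before it is written to state,
// matching the special "unset by the user" behavior.
func normalizeExpireSeconds(v *int) int {
	if v == nil {
		return -1 // special behavior using -1 when unset by the user
	}
	return *v
}

func main() {
	set := 3600
	fmt.Println(normalizeExpireSeconds(nil))  // -1
	fmt.Println(normalizeExpireSeconds(&set)) // 3600
}
```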
diff --git a/internal/service/advancedclustertpf/model_flex.go b/internal/service/advancedclustertpf/model_flex.go
index 9ee89a321d..87b2c406ee 100644
--- a/internal/service/advancedclustertpf/model_flex.go
+++ b/internal/service/advancedclustertpf/model_flex.go
@@ -133,7 +133,7 @@ func NewTFModelFlex(ctx context.Context, diags *diag.Diagnostics, flexCluster *a
if priority == nil {
priority = conversion.Pointer(defaultPriority)
}
- modelOut := NewTFModel(ctx, FlexDescriptionToClusterDescription(flexCluster, priority), diags, ExtraAPIInfo{UseNewShardingConfig: true})
+ modelOut := NewTFModel(ctx, FlexDescriptionToClusterDescription(flexCluster, priority), diags, nil)
if diags.HasError() {
return nil
}
@@ -152,7 +152,7 @@ func FlexUpgrade(ctx context.Context, diags *diag.Diagnostics, client *config.Mo
Name: waitParams.ClusterName,
}
- flexClusterResp, err := flexcluster.WaitStateTransition(ctx, flexClusterParams, client.AtlasV2.FlexClustersApi, []string{retrystrategy.RetryStrategyUpdatingState}, []string{retrystrategy.RetryStrategyIdleState}, true, &waitParams.Timeout)
+ flexClusterResp, err := flexcluster.WaitStateTransition(ctx, flexClusterParams, client.AtlasV2.FlexClustersApi, []string{retrystrategy.RetryStrategyUpdatingState}, []string{retrystrategy.RetryStrategyIdleState}, true, waitParams.Timeout)
if err != nil {
diags.AddError(fmt.Sprintf(flexcluster.ErrorUpgradeFlex, req.Name), err.Error())
return nil
diff --git a/internal/service/advancedclustertpf/model_to_AdvancedClusterDescription.go b/internal/service/advancedclustertpf/model_to_AdvancedClusterDescription.go
deleted file mode 100644
index 89656b377e..0000000000
--- a/internal/service/advancedclustertpf/model_to_AdvancedClusterDescription.go
+++ /dev/null
@@ -1,39 +0,0 @@
-package advancedclustertpf
-
-import (
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-)
-
-func newLegacyModel20240530ReplicationSpecsAndDiskGBOnly(specs *[]admin.ReplicationSpec20240805, zoneNameNumShards map[string]int64, oldDiskGB *float64, externalIDToLegacyID map[string]string) *admin20240530.AdvancedClusterDescription {
- newDiskGB := findFirstRegionDiskSizeGB(specs)
- if oldDiskGB != nil && newDiskGB != nil && (*newDiskGB-*oldDiskGB) < 0.01 {
- newDiskGB = nil
- }
- return &admin20240530.AdvancedClusterDescription{
- DiskSizeGB: newDiskGB,
- ReplicationSpecs: convertReplicationSpecs20240805to20240530(specs, zoneNameNumShards, externalIDToLegacyID),
- }
-}
-
-func convertReplicationSpecs20240805to20240530(replicationSpecs *[]admin.ReplicationSpec20240805, zoneNameNumShards map[string]int64, externalIDToLegacyID map[string]string) *[]admin20240530.ReplicationSpec {
- if replicationSpecs == nil {
- return nil
- }
- result := make([]admin20240530.ReplicationSpec, len(*replicationSpecs))
- for i, replicationSpec := range *replicationSpecs {
- numShards, ok := zoneNameNumShards[replicationSpec.GetZoneName()]
- if !ok {
- numShards = 1
- }
- legacyID := externalIDToLegacyID[replicationSpec.GetId()]
- result[i] = admin20240530.ReplicationSpec{
- NumShards: conversion.Int64PtrToIntPtr(&numShards),
- Id: conversion.StringPtr(legacyID),
- ZoneName: replicationSpec.ZoneName,
- RegionConfigs: ConvertRegionConfigSlice20241023to20240530(replicationSpec.RegionConfigs),
- }
- }
- return &result
-}
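The deleted `model_to_AdvancedClusterDescription.go` bridged two shard representations: the 2024-08-05 API returns one replication spec per shard, while the legacy 2024-05-30 schema stored one spec per zone with a `num_shards` counter. A simplified sketch of that collapse, under the assumption that per-shard specs for a zone share a zone name (names illustrative):

```go
package main

import "fmt"

// collapseByZone mirrors the removed legacy conversion: count how many
// per-shard spec entries belong to each zone, producing the zone-level
// num_shards values the legacy schema expected.
func collapseByZone(zoneNames []string) map[string]int {
	counts := map[string]int{}
	for _, z := range zoneNames {
		counts[z]++
	}
	return counts
}

func main() {
	// three per-shard entries collapse into two zone-level specs
	counts := collapseByZone([]string{"Zone 1", "Zone 1", "Zone 2"})
	fmt.Println(counts["Zone 1"], counts["Zone 2"]) // 2 1
}
```

With `num_shards` gone from the schema, this bookkeeping (and the skip-index logic in the removed `convertReplicationSpecsLegacy`) has no remaining caller.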
diff --git a/internal/service/advancedclustertpf/model_to_ClusterDescription20240805.go b/internal/service/advancedclustertpf/model_to_ClusterDescription20240805.go
index 08e011c363..a1c48489ea 100644
--- a/internal/service/advancedclustertpf/model_to_ClusterDescription20240805.go
+++ b/internal/service/advancedclustertpf/model_to_ClusterDescription20240805.go
@@ -15,7 +15,7 @@ import (
const defaultZoneName = "ZoneName managed by Terraform"
-func NewAtlasReq(ctx context.Context, input *TFModel, diags *diag.Diagnostics) *admin.ClusterDescription20240805 {
+func newAtlasReq(ctx context.Context, input *TFModel, diags *diag.Diagnostics) *admin.ClusterDescription20240805 {
acceptDataRisksAndForceReplicaSetReconfig, ok := conversion.StringPtrToTimePtr(input.AcceptDataRisksAndForceReplicaSetReconfig.ValueStringPointer())
if !ok {
diags.AddError("error converting AcceptDataRisksAndForceReplicaSetReconfig", fmt.Sprintf("not a valid time: %s", input.AcceptDataRisksAndForceReplicaSetReconfig.ValueString()))
diff --git a/internal/service/advancedclustertpf/model_to_ClusterDescriptionProcessArgsLegacy.go b/internal/service/advancedclustertpf/model_to_ClusterDescriptionProcessArgsLegacy.go
deleted file mode 100644
index cd8f3f60c8..0000000000
--- a/internal/service/advancedclustertpf/model_to_ClusterDescriptionProcessArgsLegacy.go
+++ /dev/null
@@ -1,28 +0,0 @@
-package advancedclustertpf
-
-import (
- "context"
-
- "github.com/hashicorp/terraform-plugin-framework/diag"
- "github.com/hashicorp/terraform-plugin-framework/types"
- "github.com/hashicorp/terraform-plugin-framework/types/basetypes"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
-)
-
-func NewAtlasReqAdvancedConfigurationLegacy(ctx context.Context, objInput *types.Object, diags *diag.Diagnostics) *admin20240530.ClusterDescriptionProcessArgs {
- var resp *admin20240530.ClusterDescriptionProcessArgs
- if objInput == nil || objInput.IsUnknown() || objInput.IsNull() {
- return resp
- }
- input := &TFAdvancedConfigurationModel{}
- if localDiags := objInput.As(ctx, input, basetypes.ObjectAsOptions{}); len(localDiags) > 0 {
- diags.Append(localDiags...)
- return resp
- }
- // Choosing to only handle legacy fields in the old API
- return &admin20240530.ClusterDescriptionProcessArgs{
- DefaultReadConcern: conversion.NilForUnknown(input.DefaultReadConcern, input.DefaultReadConcern.ValueStringPointer()),
- FailIndexKeyTooLong: conversion.NilForUnknown(input.FailIndexKeyTooLong, input.FailIndexKeyTooLong.ValueBoolPointer()),
- }
-}
diff --git a/internal/service/advancedclustertpf/move_upgrade_state.go b/internal/service/advancedclustertpf/move_upgrade_state.go
index f2d3acbd4c..bc5f449fa7 100644
--- a/internal/service/advancedclustertpf/move_upgrade_state.go
+++ b/internal/service/advancedclustertpf/move_upgrade_state.go
@@ -18,6 +18,7 @@ import (
"github.com/hashicorp/terraform-plugin-go/tftypes"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc"
)
// MoveState is used with moved block to upgrade from cluster to adv_cluster
@@ -89,13 +90,12 @@ func setStateResponse(ctx context.Context, diags *diag.Diagnostics, stateIn *tfp
model := NewTFModel(ctx, &admin.ClusterDescription20240805{
GroupId: projectID,
Name: name,
- }, diags, ExtraAPIInfo{})
+ }, diags, nil)
if diags.HasError() {
return
}
AddAdvancedConfig(ctx, model, &ProcessArgs{
ArgsDefault: nil,
- ArgsLegacy: nil,
ClusterAdvancedConfig: nil,
}, diags)
model.Timeouts = getTimeoutFromStateObj(stateObj)
@@ -112,17 +112,9 @@ func setStateResponse(ctx context.Context, diags *diag.Diagnostics, stateIn *tfp
diags.Append(stateOut.Set(ctx, model)...)
}
-func getAttrFromStateObj[T any](rawState map[string]tftypes.Value, attrName string) *T {
- var ret *T
- if err := rawState[attrName].As(&ret); err != nil {
- return nil
- }
- return ret
-}
-
func getProjectIDNameFromStateObj(diags *diag.Diagnostics, stateObj map[string]tftypes.Value) (projectID, name *string) {
- projectID = getAttrFromStateObj[string](stateObj, "project_id")
- name = getAttrFromStateObj[string](stateObj, "name")
+ projectID = schemafunc.GetAttrFromStateObj[string](stateObj, "project_id")
+ name = schemafunc.GetAttrFromStateObj[string](stateObj, "name")
if !conversion.IsStringPresent(projectID) || !conversion.IsStringPresent(name) {
diags.AddError("Unable to read project_id or name from state", fmt.Sprintf("project_id: %s, name: %s",
conversion.SafeString(projectID), conversion.SafeString(name)))
@@ -138,13 +130,13 @@ func getTimeoutFromStateObj(stateObj map[string]tftypes.Value) timeouts.Value {
"delete": types.StringType,
}
nullObj := timeouts.Value{Object: types.ObjectNull(attrTypes)}
- timeoutState := getAttrFromStateObj[map[string]tftypes.Value](stateObj, "timeouts")
+ timeoutState := schemafunc.GetAttrFromStateObj[map[string]tftypes.Value](stateObj, "timeouts")
if timeoutState == nil {
return nullObj
}
timeoutMap := make(map[string]attr.Value)
for action := range attrTypes {
- actionTimeout := getAttrFromStateObj[string](*timeoutState, action)
+ actionTimeout := schemafunc.GetAttrFromStateObj[string](*timeoutState, action)
if actionTimeout == nil {
timeoutMap[action] = types.StringNull()
} else {
@@ -159,16 +151,16 @@ func getTimeoutFromStateObj(stateObj map[string]tftypes.Value) timeouts.Value {
}
func setOptionalModelAttrs(stateObj map[string]tftypes.Value, model *TFModel) {
- if retainBackupsEnabled := getAttrFromStateObj[bool](stateObj, "retain_backups_enabled"); retainBackupsEnabled != nil {
+ if retainBackupsEnabled := schemafunc.GetAttrFromStateObj[bool](stateObj, "retain_backups_enabled"); retainBackupsEnabled != nil {
model.RetainBackupsEnabled = types.BoolPointerValue(retainBackupsEnabled)
}
- if mongoDBMajorVersion := getAttrFromStateObj[string](stateObj, "mongo_db_major_version"); mongoDBMajorVersion != nil {
+ if mongoDBMajorVersion := schemafunc.GetAttrFromStateObj[string](stateObj, "mongo_db_major_version"); mongoDBMajorVersion != nil {
model.MongoDBMajorVersion = types.StringPointerValue(mongoDBMajorVersion)
}
}
func setReplicationSpecNumShardsAttr(ctx context.Context, stateObj map[string]tftypes.Value, model *TFModel) {
- specsVal := getAttrFromStateObj[[]tftypes.Value](stateObj, "replication_specs")
+ specsVal := schemafunc.GetAttrFromStateObj[[]tftypes.Value](stateObj, "replication_specs")
if specsVal == nil {
return
}
@@ -192,12 +184,9 @@ func replicationSpecModelWithNumShards(numShardsVal tftypes.Value) *TFReplicatio
if err := numShardsVal.As(&numShardsFloat); err != nil || numShardsFloat == nil {
return nil
}
- numShards, _ := numShardsFloat.Int64()
return &TFReplicationSpecsModel{
- NumShards: types.Int64Value(numShards),
RegionConfigs: types.ListNull(RegionConfigsObjType),
ContainerId: types.MapNull(types.StringType),
- Id: types.StringNull(),
ExternalId: types.StringNull(),
ZoneId: types.StringNull(),
ZoneName: types.StringNull(),
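`getAttrFromStateObj` was a small generic accessor before it moved into the shared `schemafunc` package. A simplified sketch of the pattern, with the raw state reduced to `map[string]any` instead of `tftypes.Value` (the real helper decodes via `Value.As`):

```go
package main

import "fmt"

// getAttr sketches the generic state-attribute accessor: look up a raw
// attribute by name and decode it into *T, returning nil when the
// attribute is absent or has a different type.
func getAttr[T any](state map[string]any, name string) *T {
	v, ok := state[name]
	if !ok {
		return nil
	}
	t, ok := v.(T)
	if !ok {
		return nil
	}
	return &t
}

func main() {
	state := map[string]any{"project_id": "p1", "retain_backups_enabled": true}
	fmt.Println(*getAttr[string](state, "project_id"))           // p1
	fmt.Println(getAttr[string](state, "name") == nil)           // true
	fmt.Println(*getAttr[bool](state, "retain_backups_enabled")) // true
}
```

Returning `*T` lets callers distinguish "attribute missing" from a legitimate zero value, which is why the move-state code checks the pointer before assigning.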
diff --git a/internal/service/advancedclustertpf/plan_modifier.go b/internal/service/advancedclustertpf/plan_modifier.go
index 183ac46efa..508ed9fa95 100644
--- a/internal/service/advancedclustertpf/plan_modifier.go
+++ b/internal/service/advancedclustertpf/plan_modifier.go
@@ -9,13 +9,13 @@ import (
"github.com/hashicorp/terraform-plugin-framework/diag"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/hashicorp/terraform-plugin-framework/types/basetypes"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc"
)
var (
// Change mappings uses `attribute_name`, it doesn't care about the nested level.
attributeRootChangeMapping = map[string][]string{
- "disk_size_gb": {}, // disk_size_gb can be change at any level/spec
"replication_specs": {},
"tls_cipher_config_mode": {"custom_openssl_cipher_config_tls12"},
"cluster_type": {"config_server_management_mode", "config_server_type"}, // computed values of config server change when REPLICA_SET changes to SHARDED
@@ -30,13 +30,9 @@ var (
"region_name": {"container_id"}, // container_id changes based on region_name changes
"zone_name": {"zone_id"}, // zone_id copy from state is not safe when
}
- keepUnknownsCalls = schemafunc.KeepUnknownFuncOr(keepUnkownFuncWithNodeCount, keepUnkownFuncWithNonEmptyAutoScaling)
+ keepUnknownsCalls = schemafunc.KeepUnknownFuncOr(keepUnkownFuncWithNonEmptyAutoScaling)
)
-func keepUnkownFuncWithNodeCount(name string, replacement attr.Value) bool {
- return name == "node_count" && !replacement.Equal(types.Int64Value(0))
-}
-
func keepUnkownFuncWithNonEmptyAutoScaling(name string, replacement attr.Value) bool {
autoScalingBoolValues := []string{"compute_enabled", "disk_gb_enabled", "compute_scale_down_enabled"}
autoScalingStringValues := []string{"compute_min_instance_size", "compute_max_instance_size"}
@@ -47,15 +43,8 @@ func keepUnkownFuncWithNonEmptyAutoScaling(name string, replacement attr.Value)
// useStateForUnknowns should be called only in Update, because of findClusterDiff
func useStateForUnknowns(ctx context.Context, diags *diag.Diagnostics, state, plan *TFModel) {
- shardingConfigUpgrade := isShardingConfigUpgrade(ctx, state, plan, diags)
- if diags.HasError() {
- return
- }
- // Don't adjust region_configs upgrades if it's a sharding config upgrade because it will be done only in the first shard, because state only has the first shard with num_shards > 1.
- // This avoid errors like AUTO_SCALINGS_MUST_BE_IN_EVERY_REGION_CONFIG.
- if !shardingConfigUpgrade {
- AdjustRegionConfigsChildren(ctx, diags, state, plan)
- }
+ AdjustRegionConfigsChildren(ctx, diags, state, plan)
+
diff := findClusterDiff(ctx, state, plan, diags)
if diags.HasError() || diff.isAnyUpgrade() { // Don't do anything in upgrades
return
@@ -65,11 +54,6 @@ func useStateForUnknowns(ctx context.Context, diags *diag.Diagnostics, state, pl
keepUnknown = append(keepUnknown, attributeChanges.KeepUnknown(attributeRootChangeMapping)...)
keepUnknown = append(keepUnknown, determineKeepUnknownsAutoScaling(ctx, diags, state, plan)...)
schemafunc.CopyUnknowns(ctx, state, plan, keepUnknown, nil)
- /* pending revision if logic can be reincorporated safely:
- if slices.Contains(keepUnknown, "replication_specs") {
- useStateForUnknownsReplicationSpecs(ctx, diags, state, plan, &attributeChanges)
- }
- */
}
func UseStateForUnknownsReplicationSpecs(ctx context.Context, diags *diag.Diagnostics, state, plan *TFModel, attrChanges *schemafunc.AttributeChanges) {
@@ -79,7 +63,7 @@ func UseStateForUnknownsReplicationSpecs(ctx context.Context, diags *diag.Diagno
return
}
planWithUnknowns := []TFReplicationSpecsModel{}
- keepUnknownsUnchangedSpec := determineKeepUnknownsUnchangedReplicationSpecs(ctx, diags, state, plan, attrChanges)
+ keepUnknownsUnchangedSpec := determineKeepUnknownsUnchangedReplicationSpecs(attrChanges)
keepUnknownsUnchangedSpec = append(keepUnknownsUnchangedSpec, determineKeepUnknownsAutoScaling(ctx, diags, state, plan)...)
if diags.HasError() {
return
@@ -160,11 +144,6 @@ func AdjustRegionConfigsChildren(ctx context.Context, diags *diag.Diagnostics, s
// don't get analytics_specs from state if node_count is 0 to avoid possible ANALYTICS_INSTANCE_SIZE_MUST_MATCH errors
if planAnalyticsSpecs == nil && stateAnalyticsSpecs != nil && stateAnalyticsSpecs.NodeCount.ValueInt64() > 0 {
newPlanAnalyticsSpecs := TFModelObject[TFSpecsModel](ctx, stateRegionConfigsTF[j].AnalyticsSpecs)
- // if disk_size_gb is defined at root level we cannot use analytics_specs.disk_size_gb from state as it can be outdated
- // read_only_specs implicitly covers this as it uses value from electable_specs which is unknown if not defined.
- if plan.DiskSizeGB.ValueFloat64() > 0 { // has known value in config
- newPlanAnalyticsSpecs.DiskSizeGb = types.Float64Unknown()
- }
objType, diagsLocal := types.ObjectValueFrom(ctx, SpecsObjType.AttrTypes, newPlanAnalyticsSpecs)
diags.Append(diagsLocal...)
if diags.HasError() {
@@ -219,15 +198,11 @@ func determineKeepUnknownsChangedReplicationSpec(keepUnknownsAlways []string, at
return append(keepUnknowns, attributeChanges.KeepUnknown(attributeReplicationSpecChangeMapping)...)
}
-func determineKeepUnknownsUnchangedReplicationSpecs(ctx context.Context, diags *diag.Diagnostics, state, plan *TFModel, attributeChanges *schemafunc.AttributeChanges) []string {
+func determineKeepUnknownsUnchangedReplicationSpecs(attributeChanges *schemafunc.AttributeChanges) []string {
keepUnknowns := []string{}
- // Could be set to "" if we are using an ISS cluster
- if usingNewShardingConfig(ctx, plan.ReplicationSpecs, diags) { // When using new sharding config, the legacy id must never be copied
- keepUnknowns = append(keepUnknowns, "id")
- }
- // for isShardingConfigUpgrade, it will be empty in the plan, so we need to keep it unknown
- // for listLenChanges, it might be an insertion in the middle of replication spec leading to wrong value from state copied
- if isShardingConfigUpgrade(ctx, state, plan, diags) || attributeChanges.ListLenChanges("replication_specs") {
+
+ // it might be an insertion in the middle of replication spec leading to wrong value from state copied
+ if attributeChanges.ListLenChanges("replication_specs") {
keepUnknowns = append(keepUnknowns, "external_id")
}
return keepUnknowns
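After dropping the `node_count` predicate, `keepUnknownsCalls` is still built with `schemafunc.KeepUnknownFuncOr`, a predicate-OR combinator. A minimal sketch of that combinator pattern (types simplified; the real signature takes an `attr.Value` replacement):

```go
package main

import "fmt"

// keepFunc decides, per attribute name and replacement value, whether the
// planned value should stay unknown instead of being copied from state.
type keepFunc func(name string, replacement any) bool

// orKeep combines predicates the way KeepUnknownFuncOr is used in the plan
// modifier: keep the value unknown if any predicate fires.
func orKeep(fns ...keepFunc) keepFunc {
	return func(name string, replacement any) bool {
		for _, f := range fns {
			if f(name, replacement) {
				return true
			}
		}
		return false
	}
}

func main() {
	isAutoScaling := func(name string, _ any) bool { return name == "compute_enabled" }
	isNonZeroCount := func(name string, v any) bool {
		n, ok := v.(int)
		return name == "node_count" && ok && n != 0
	}
	combined := orKeep(isAutoScaling, isNonZeroCount)
	fmt.Println(combined("compute_enabled", nil)) // true
	fmt.Println(combined("node_count", 0))        // false
	fmt.Println(combined("node_count", 3))        // true
}
```

Composing predicates this way means removing one behavior, as this diff does, is a one-line change to the combinator call rather than a rewrite of the copy-unknowns loop.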
diff --git a/internal/service/advancedclustertpf/plan_modifier_test.go b/internal/service/advancedclustertpf/plan_modifier_test.go
index c0ce795bd9..8a7613cd7a 100644
--- a/internal/service/advancedclustertpf/plan_modifier_test.go
+++ b/internal/service/advancedclustertpf/plan_modifier_test.go
@@ -6,6 +6,7 @@ import (
"github.com/hashicorp/terraform-plugin-testing/knownvalue"
"github.com/hashicorp/terraform-plugin-testing/plancheck"
"github.com/hashicorp/terraform-plugin-testing/tfjsonpath"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/unit"
)
@@ -14,7 +15,6 @@ var (
repSpec1 = tfjsonpath.New("replication_specs").AtSliceIndex(1)
regionConfig0 = repSpec0.AtMapKey("region_configs").AtSliceIndex(0)
regionConfig1 = repSpec1.AtMapKey("region_configs").AtSliceIndex(0)
- mockConfig = unit.MockConfigAdvancedClusterTPF
)
func autoScalingKnownValue(computeEnabled, diskEnabled, scaleDown bool, minInstanceSize, maxInstanceSize string) knownvalue.Check {
@@ -46,6 +46,13 @@ func TestPlanChecksClusterTwoRepSpecsWithAutoScalingAndSpecs(t *testing.T) {
plancheck.ExpectResourceAction(resourceName, plancheck.ResourceActionNoop),
},
},
+ {
+ ConfigFilename: "main_node_count_unknown.tf",
+ Checks: []plancheck.PlanCheck{
+ plancheck.ExpectResourceAction(resourceName, plancheck.ResourceActionUpdate),
+ plancheck.ExpectKnownValue(resourceName, regionConfig0.AtMapKey("read_only_specs").AtMapKey("node_count"), knownvalue.Int64Exact(2)),
+ },
+ },
{
ConfigFilename: "main_removed_blocks_from_config_and_instance_change.tf",
Checks: []plancheck.PlanCheck{
@@ -56,7 +63,6 @@ func TestPlanChecksClusterTwoRepSpecsWithAutoScalingAndSpecs(t *testing.T) {
plancheck.ExpectKnownValue(resourceName, regionConfig0.AtMapKey("auto_scaling"), autoScalingEnabled),
plancheck.ExpectKnownValue(resourceName, regionConfig0.AtMapKey("analytics_auto_scaling"), autoScalingEnabled),
plancheck.ExpectUnknownValue(resourceName, regionConfig0.AtMapKey("analytics_specs")), // analytics specs was defined in region_configs.0 but not in region_configs.1
- plancheck.ExpectUnknownValue(resourceName, repSpec0.AtMapKey("id")),
// checks regionConfig1
plancheck.ExpectKnownValue(resourceName, regionConfig1.AtMapKey("read_only_specs"), specInstanceSizeNodeCount("M20", 1)),
@@ -64,7 +70,6 @@ func TestPlanChecksClusterTwoRepSpecsWithAutoScalingAndSpecs(t *testing.T) {
plancheck.ExpectKnownValue(resourceName, regionConfig1.AtMapKey("auto_scaling"), autoScalingEnabled),
plancheck.ExpectKnownValue(resourceName, regionConfig1.AtMapKey("analytics_auto_scaling"), autoScalingEnabled),
plancheck.ExpectKnownValue(resourceName, regionConfig1.AtMapKey("analytics_specs"), knownvalue.NotNull()),
- plancheck.ExpectUnknownValue(resourceName, repSpec1.AtMapKey("id")),
},
},
}
diff --git a/internal/service/advancedclustertpf/plural_data_source.go b/internal/service/advancedclustertpf/plural_data_source.go
index 558fd66d2d..b636f06dd9 100644
--- a/internal/service/advancedclustertpf/plural_data_source.go
+++ b/internal/service/advancedclustertpf/plural_data_source.go
@@ -52,7 +52,6 @@ func (d *pluralDS) Read(ctx context.Context, req datasource.ReadRequest, resp *d
func (d *pluralDS) readClusters(ctx context.Context, diags *diag.Diagnostics, pluralModel *TFModelPluralDS) (*TFModelPluralDS, *diag.Diagnostics) {
projectID := pluralModel.ProjectID.ValueString()
- useReplicationSpecPerShard := pluralModel.UseReplicationSpecPerShard.ValueBool()
api := d.Client.AtlasV2.ClustersApi
params := admin.ListClustersApiParams{
GroupId: projectID,
@@ -67,12 +66,11 @@ func (d *pluralDS) readClusters(ctx context.Context, diags *diag.Diagnostics, pl
return nil, diags
}
outs := &TFModelPluralDS{
- ProjectID: pluralModel.ProjectID,
- UseReplicationSpecPerShard: pluralModel.UseReplicationSpecPerShard,
+ ProjectID: pluralModel.ProjectID,
}
for i := range list {
clusterResp := &list[i]
- modelOut, extraInfo := getBasicClusterModel(ctx, diags, d.Client, clusterResp, useReplicationSpecPerShard)
+ modelOut := getBasicClusterModel(ctx, diags, d.Client, clusterResp)
if diags.HasError() {
if DiagsHasOnlyClusterNotFoundErrors(diags) {
diags = ResetClusterNotFoundErrors(diags)
@@ -80,11 +78,7 @@ func (d *pluralDS) readClusters(ctx context.Context, diags *diag.Diagnostics, pl
}
return nil, diags
}
- if extraInfo.UseOldShardingConfigFailed {
- continue
- }
updateModelAdvancedConfig(ctx, diags, d.Client, modelOut, &ProcessArgs{
- ArgsLegacy: nil,
ArgsDefault: nil,
ClusterAdvancedConfig: clusterResp.AdvancedConfiguration,
})
@@ -96,7 +90,6 @@ func (d *pluralDS) readClusters(ctx context.Context, diags *diag.Diagnostics, pl
return nil, diags
}
modelOutDS := conversion.CopyModel[TFModelDS](modelOut)
- modelOutDS.UseReplicationSpecPerShard = pluralModel.UseReplicationSpecPerShard // attrs not in resource model
outs.Results = append(outs.Results, modelOutDS)
}
flexModels := d.getFlexClustersModels(ctx, diags, projectID)
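With the `UseReplicationSpecPerShard` copy-back removed above, the data-source model is now produced solely by the generic `conversion.CopyModel[TFModelDS](modelOut)` call. The provider's real helper isn't shown in this diff; the sketch below is an assumed, dependency-free illustration of what such a generic copy typically does: allocate the destination type and copy every source field whose name and type match, leaving destination-only fields at their zero value. All type and field names here are hypothetical.

```go
package main

import (
	"fmt"
	"reflect"
)

// copyModel is an illustrative stand-in for a helper like conversion.CopyModel:
// it copies fields with matching name and type from src into a new *T.
// The provider's actual implementation may differ.
func copyModel[T any](src any) *T {
	dest := new(T)
	dv := reflect.ValueOf(dest).Elem()
	sv := reflect.ValueOf(src)
	if sv.Kind() == reflect.Ptr {
		sv = sv.Elem()
	}
	for i := 0; i < dv.NumField(); i++ {
		f := dv.Type().Field(i)
		if s := sv.FieldByName(f.Name); s.IsValid() && s.Type() == f.Type {
			dv.Field(i).Set(s)
		}
	}
	return dest
}

// Hypothetical resource and data-source models sharing most fields.
type resourceModel struct {
	ProjectID string
	Name      string
	Paused    bool
}

type dsModel struct {
	ProjectID string
	Name      string
	// Data-source-only fields stay at their zero value after the copy.
	ResultCount int
}

func main() {
	out := copyModel[dsModel](&resourceModel{ProjectID: "p1", Name: "c1", Paused: true})
	fmt.Println(out.ProjectID, out.Name, out.ResultCount)
}
```

This is why the deleted line that manually re-set `UseReplicationSpecPerShard` was needed before: a name-matching copy cannot populate attributes that exist only in the data-source model, so any such attribute either had to be set by hand or, as in this change, removed entirely.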
diff --git a/internal/service/advancedclustertpf/resource.go b/internal/service/advancedclustertpf/resource.go
index d2c6692a83..99ebddae51 100644
--- a/internal/service/advancedclustertpf/resource.go
+++ b/internal/service/advancedclustertpf/resource.go
@@ -3,18 +3,15 @@ package advancedclustertpf
import (
"context"
"fmt"
- "time"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
- "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework/diag"
"github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/hashicorp/terraform-plugin-framework/types/basetypes"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/update"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -28,23 +25,21 @@ var _ resource.ResourceWithUpgradeState = &rs{}
var _ resource.ResourceWithModifyPlan = &rs{}
const (
- resourceName = "advanced_cluster"
- errorSchemaDowngrade = "error operation not permitted, nums_shards from 1 -> > 1"
- errorPatchPayload = "error creating patch payload"
- errorDetailDefault = "cluster name: %s, API error details: %s"
- errorSchemaUpgradeReadIDs = "error reading IDs from API when upgrading schema"
- errorReadResource = "error reading advanced cluster"
- errorAdvancedConfRead = "error reading Advanced Configuration"
- errorAdvancedConfReadLegacy = "error reading Advanced Configuration from legacy API"
- errorUpdateLegacy20240530 = "error updating advanced cluster legacy API 20240530"
- errorList = "error reading advanced cluster list"
- errorListDetail = "project ID %s. Error %s"
- errorReadLegacy20240530 = "error reading cluster with legacy API 20240530"
- errorResolveContainerIDs = "error resolving container IDs"
- errorRegionPriorities = "priority values in region_configs must be in descending order"
- errorAdvancedConfUpdateLegacy = "error updating Advanced Configuration from legacy API"
-
- DeprecationOldSchemaAction = "Please refer to our examples, documentation, and 1.18.0 migration guide for more details at https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/1.18.0-upgrade-guide"
+ resourceName = "advanced_cluster"
+ errorSchemaDowngrade = "error operation not permitted, nums_shards from 1 -> > 1"
+ errorPatchPayload = "error creating patch payload"
+ errorDetailDefault = "cluster name: %s, API error details: %s"
+ errorSchemaUpgradeReadIDs = "error reading IDs from API when upgrading schema"
+ errorReadResource = "error reading advanced cluster"
+ errorAdvancedConfRead = "error reading Advanced Configuration"
+ errorAdvancedConfReadLegacy = "error reading Advanced Configuration from legacy API"
+ errorUpdateLegacy20240530 = "error updating advanced cluster legacy API 20240530"
+ errorList = "error reading advanced cluster list"
+ errorListDetail = "project ID %s. Error %s"
+ errorReadLegacy20240530 = "error reading cluster with legacy API 20240530"
+ errorResolveContainerIDs = "error resolving container IDs"
+ errorRegionPriorities = "priority values in region_configs must be in descending order"
+
ErrorCodeClusterNotFound = "CLUSTER_NOT_FOUND"
operationUpdate = "update"
operationCreate = "create"
@@ -71,14 +66,9 @@ func defaultAPIErrorDetails(clusterName string, err error) string {
return fmt.Sprintf(errorDetailDefault, clusterName, err.Error())
}
-func deprecationMsgOldSchema(name string) string {
- return fmt.Sprintf("%s Name=%s. %s", constant.DeprecationParam, name, DeprecationOldSchemaAction)
-}
-
var (
- resumeRequest = admin.ClusterDescription20240805{Paused: conversion.Pointer(false)}
- pauseRequest = admin.ClusterDescription20240805{Paused: conversion.Pointer(true)}
- errorSchemaDowngradeDetail = "Cluster name %s. " + fmt.Sprintf("cannot increase num_shards to > 1 under the current configuration. New shards can be defined by adding new replication spec objects; %s", DeprecationOldSchemaAction)
+ resumeRequest = admin.ClusterDescription20240805{Paused: conversion.Pointer(false)}
+ pauseRequest = admin.ClusterDescription20240805{Paused: conversion.Pointer(true)}
)
func Resource() resource.Resource {
@@ -134,7 +124,7 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
if diags.HasError() {
return
}
- latestReq := normalizeFromTFModel(ctx, &plan, diags, true)
+ latestReq := newAtlasReq(ctx, &plan, diags)
if diags.HasError() {
return
}
@@ -145,7 +135,7 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
isFlex := IsFlex(latestReq.ReplicationSpecs)
projectID, clusterName := waitParams.ProjectID, waitParams.ClusterName
clusterDetailStr := fmt.Sprintf("Cluster name %s (project_id=%s).", clusterName, projectID)
- if plan.DeleteOnCreateTimeout.ValueBool() {
+ if cleanup.ResolveDeleteOnCreateTimeout(plan.DeleteOnCreateTimeout) {
var deferCall func()
ctx, deferCall = cleanup.OnTimeout(
ctx, waitParams.Timeout, diags.AddWarning, clusterDetailStr, DeleteClusterNoWait(r.Client, projectID, clusterName, isFlex),
@@ -154,7 +144,7 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
}
if isFlex {
flexClusterReq := NewFlexCreateReq(latestReq.GetName(), latestReq.GetTerminationProtectionEnabled(), latestReq.Tags, latestReq.ReplicationSpecs)
- flexClusterResp, err := flexcluster.CreateFlexCluster(ctx, plan.ProjectID.ValueString(), latestReq.GetName(), flexClusterReq, r.Client.AtlasV2.FlexClustersApi)
+ flexClusterResp, err := flexcluster.CreateFlexCluster(ctx, plan.ProjectID.ValueString(), latestReq.GetName(), flexClusterReq, r.Client.AtlasV2.FlexClustersApi, &waitParams.Timeout)
if err != nil {
diags.AddError(fmt.Sprintf(flexcluster.ErrorCreateFlex, clusterDetailStr), err.Error())
return
@@ -166,19 +156,18 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
diags.Append(resp.State.Set(ctx, newFlexClusterModel)...)
return
}
- clusterResp := CreateCluster(ctx, diags, r.Client, latestReq, waitParams, usingNewShardingConfig(ctx, plan.ReplicationSpecs, diags))
+ clusterResp := CreateCluster(ctx, diags, r.Client, latestReq, waitParams)
+
emptyAdvancedConfiguration := types.ObjectNull(AdvancedConfigurationObjType.AttrTypes)
patchReqProcessArgs := update.PatchPayloadTpf(ctx, diags, &emptyAdvancedConfiguration, &plan.AdvancedConfiguration, NewAtlasReqAdvancedConfiguration)
- patchReqProcessArgsLegacy := update.PatchPayloadTpf(ctx, diags, &emptyAdvancedConfiguration, &plan.AdvancedConfiguration, NewAtlasReqAdvancedConfigurationLegacy)
if diags.HasError() {
return
}
p := &ProcessArgs{
- ArgsLegacy: patchReqProcessArgsLegacy,
ArgsDefault: patchReqProcessArgs,
ClusterAdvancedConfig: clusterResp.AdvancedConfiguration,
}
- legacyAdvConfig, advConfig, _ := UpdateAdvancedConfiguration(ctx, diags, r.Client, p, waitParams)
+ advConfig, _ := UpdateAdvancedConfiguration(ctx, diags, r.Client, p, waitParams)
if diags.HasError() {
return
}
@@ -189,14 +178,13 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
return
}
- modelOut, _ := getBasicClusterModelResource(ctx, diags, r.Client, clusterResp, &plan)
+ modelOut := getBasicClusterModelResource(ctx, diags, r.Client, clusterResp, &plan)
if diags.HasError() {
return
}
- legacyAdvConfig, advConfig = ReadIfUnsetAdvancedConfiguration(ctx, diags, r.Client, waitParams.ProjectID, waitParams.ClusterName, legacyAdvConfig, advConfig)
+ advConfig = ReadIfUnsetAdvancedConfiguration(ctx, diags, r.Client, waitParams.ProjectID, waitParams.ClusterName, advConfig)
updateModelAdvancedConfig(ctx, diags, r.Client, modelOut, &ProcessArgs{
- ArgsLegacy: legacyAdvConfig,
ArgsDefault: advConfig,
ClusterAdvancedConfig: clusterResp.AdvancedConfiguration,
})
@@ -224,19 +212,18 @@ func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.
return
}
if flexCluster != nil {
- newFlexClusterModel := NewTFModelFlexResource(ctx, diags, flexCluster, GetPriorityOfFlexReplicationSpecs(normalizeFromTFModel(ctx, &state, diags, false).ReplicationSpecs), &state)
+ newFlexClusterModel := NewTFModelFlexResource(ctx, diags, flexCluster, GetPriorityOfFlexReplicationSpecs(newAtlasReq(ctx, &state, diags).ReplicationSpecs), &state)
if diags.HasError() {
return
}
diags.Append(resp.State.Set(ctx, newFlexClusterModel)...)
return
}
- modelOut, _ := getBasicClusterModelResource(ctx, diags, r.Client, cluster, &state)
+ modelOut := getBasicClusterModelResource(ctx, diags, r.Client, cluster, &state)
if diags.HasError() {
return
}
updateModelAdvancedConfig(ctx, diags, r.Client, modelOut, &ProcessArgs{
- ArgsLegacy: nil,
ArgsDefault: nil,
ClusterAdvancedConfig: cluster.AdvancedConfiguration,
})
@@ -277,7 +264,7 @@ func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resou
}
return
case diff.isUpdateOfFlex:
- if flexOut := handleFlexUpdate(ctx, diags, r.Client, &plan); flexOut != nil {
+ if flexOut := handleFlexUpdate(ctx, diags, r.Client, waitParams, &plan); flexOut != nil {
diags.Append(resp.State.Set(ctx, flexOut)...)
}
return
@@ -298,34 +285,31 @@ func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resou
clusterResp, flexResp = GetClusterDetails(ctx, diags, waitParams.ProjectID, waitParams.ClusterName, r.Client, false)
// This should never happen since the switch case should handle the two flex cases (update/upgrade) and return, but keeping it here for safety.
if flexResp != nil {
- flexPriority := GetPriorityOfFlexReplicationSpecs(normalizeFromTFModel(ctx, &plan, diags, false).ReplicationSpecs)
+ flexPriority := GetPriorityOfFlexReplicationSpecs(newAtlasReq(ctx, &plan, diags).ReplicationSpecs)
if flexOut := NewTFModelFlexResource(ctx, diags, flexResp, flexPriority, &plan); flexOut != nil {
diags.Append(resp.State.Set(ctx, flexOut)...)
}
return
}
}
- modelOut, _ := getBasicClusterModelResource(ctx, diags, r.Client, clusterResp, &plan)
+ modelOut := getBasicClusterModelResource(ctx, diags, r.Client, clusterResp, &plan)
if diags.HasError() {
return
}
patchReqProcessArgs := update.PatchPayloadTpf(ctx, diags, &state.AdvancedConfiguration, &plan.AdvancedConfiguration, NewAtlasReqAdvancedConfiguration)
- patchReqProcessArgsLegacy := update.PatchPayloadTpf(ctx, diags, &state.AdvancedConfiguration, &plan.AdvancedConfiguration, NewAtlasReqAdvancedConfigurationLegacy)
if diags.HasError() {
return
}
p := &ProcessArgs{
- ArgsLegacy: patchReqProcessArgsLegacy,
ArgsDefault: patchReqProcessArgs,
ClusterAdvancedConfig: clusterResp.AdvancedConfiguration,
}
- legacyAdvConfig, advConfig, advConfigChanged := UpdateAdvancedConfiguration(ctx, diags, r.Client, p, waitParams)
+ advConfig, advConfigChanged := UpdateAdvancedConfiguration(ctx, diags, r.Client, p, waitParams)
if diags.HasError() {
return
}
if advConfigChanged {
updateModelAdvancedConfig(ctx, diags, r.Client, modelOut, &ProcessArgs{
- ArgsLegacy: legacyAdvConfig,
ArgsDefault: advConfig,
ClusterAdvancedConfig: clusterResp.AdvancedConfiguration,
})
@@ -403,19 +387,6 @@ func (r *rs) applyClusterChanges(ctx context.Context, diags *diag.Diagnostics, s
pauseAfterOtherChanges = true
}
- if !usingNewShardingConfig(ctx, plan.ReplicationSpecs, diags) {
- // With old sharding config we call older API (2023-02-01) for updating replication specs to avoid cluster having asymmetric autoscaling mode. Old sharding config can only represent symmetric clusters.
- r.updateLegacyReplicationSpecs(ctx, state, plan, diags, patchReq.ReplicationSpecs)
- if diags.HasError() {
- return nil
- }
- patchReq.ReplicationSpecs = nil // Already updated by 2023-02-01 API
- if update.IsZeroValues(patchReq) && !pauseAfterOtherChanges {
- return AwaitChanges(ctx, r.Client, waitParams, operationReplicationSpecsUpdateLegacy, diags)
- }
- }
-
- // latest API can be used safely because if old sharding config is used replication specs will not be included in this request
result = updateCluster(ctx, diags, r.Client, patchReq, waitParams, operationUpdate)
if pauseAfterOtherChanges {
@@ -424,75 +395,45 @@ func (r *rs) applyClusterChanges(ctx context.Context, diags *diag.Diagnostics, s
return result
}
-func (r *rs) updateLegacyReplicationSpecs(ctx context.Context, state, plan *TFModel, diags *diag.Diagnostics, specChanges *[]admin.ReplicationSpec20240805) {
- numShardsUpdates := findNumShardsUpdates(ctx, state, plan, diags)
+func getBasicClusterModelResource(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, clusterResp *admin.ClusterDescription20240805, modelIn *TFModel) *TFModel {
if diags.HasError() {
- return
- }
- if specChanges == nil && numShardsUpdates == nil { // No changes to replication specs
- return
- }
- if specChanges == nil {
- // Use state replication specs as there are no changes in plan except for numShards updates
- specChanges = newReplicationSpec20240805(ctx, state.ReplicationSpecs, diags)
- if diags.HasError() {
- return
- }
- }
- numShardsPlan := numShardsMap(ctx, plan.ReplicationSpecs, diags)
- legacyIDs := externalIDToLegacyID(ctx, state.ReplicationSpecs, diags)
- if diags.HasError() {
- return
- }
- legacyPatch := newLegacyModel20240530ReplicationSpecsAndDiskGBOnly(specChanges, numShardsPlan, state.DiskSizeGB.ValueFloat64Pointer(), legacyIDs)
- if diags.HasError() {
- return
- }
- api20240530 := r.Client.AtlasV220240530.ClustersApi
- _, _, err := api20240530.UpdateCluster(ctx, plan.ProjectID.ValueString(), plan.Name.ValueString(), legacyPatch).Execute()
- if err != nil {
- diags.AddError(errorUpdateLegacy20240530, defaultAPIErrorDetails(plan.Name.ValueString(), err))
- }
-}
-
-func getBasicClusterModelResource(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, clusterResp *admin.ClusterDescription20240805, modelIn *TFModel) (*TFModel, *ExtraAPIInfo) {
- useReplicationSpecPerShard := usingNewShardingConfig(ctx, modelIn.ReplicationSpecs, diags)
- if diags.HasError() {
- return nil, nil
+ return nil
}
- modelOut, apiInfo := getBasicClusterModel(ctx, diags, client, clusterResp, useReplicationSpecPerShard)
+ modelOut := getBasicClusterModel(ctx, diags, client, clusterResp)
if modelOut != nil {
modelOut.Timeouts = modelIn.Timeouts
overrideAttributesWithPrevStateValue(modelIn, modelOut)
}
- return modelOut, apiInfo
+ return modelOut
}
-func getBasicClusterModel(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, clusterResp *admin.ClusterDescription20240805, useReplicationSpecPerShard bool) (*TFModel, *ExtraAPIInfo) {
- extraInfo := resolveAPIInfo(ctx, diags, client, clusterResp, useReplicationSpecPerShard)
- if diags.HasError() {
- return nil, nil
- }
- if extraInfo.UseOldShardingConfigFailed { // can't create a model if the cluster does not support old sharding config
- return nil, extraInfo
+func getBasicClusterModel(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, clusterResp *admin.ClusterDescription20240805) *TFModel {
+ var (
+ projectID = clusterResp.GetGroupId()
+ clusterName = clusterResp.GetName()
+ )
+ containerIDs, err := resolveContainerIDs(ctx, projectID, clusterResp, client.AtlasV2.NetworkPeeringApi)
+ if err != nil {
+ diags.AddError(errorResolveContainerIDs, fmt.Sprintf("cluster name = %s, error details: %s", clusterName, err.Error()))
+ return nil
}
- modelOut := NewTFModel(ctx, clusterResp, diags, *extraInfo)
+
+ modelOut := NewTFModel(ctx, clusterResp, diags, containerIDs)
if diags.HasError() {
- return nil, nil
+ return nil
}
- return modelOut, extraInfo
+ return modelOut
}
func updateModelAdvancedConfig(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, model *TFModel,
p *ProcessArgs) {
projectID := model.ProjectID.ValueString()
clusterName := model.Name.ValueString()
- legacyAdvConfig, advConfig := ReadIfUnsetAdvancedConfiguration(ctx, diags, client, projectID, clusterName, p.ArgsLegacy, p.ArgsDefault)
+ advConfig := ReadIfUnsetAdvancedConfiguration(ctx, diags, client, projectID, clusterName, p.ArgsDefault)
if diags.HasError() {
return
}
p.ArgsDefault = advConfig
- p.ArgsLegacy = legacyAdvConfig
AddAdvancedConfig(ctx, model, p, diags)
}
@@ -500,7 +441,7 @@ func updateModelAdvancedConfig(ctx context.Context, diags *diag.Diagnostics, cli
func resolveClusterWaitParams(ctx context.Context, model *TFModel, diags *diag.Diagnostics, operation string) *ClusterWaitParams {
projectID := model.ProjectID.ValueString()
clusterName := model.Name.ValueString()
- operationTimeout := resolveTimeout(ctx, &model.Timeouts, operation, diags)
+ operationTimeout := cleanup.ResolveTimeout(ctx, &model.Timeouts, operation, diags)
if diags.HasError() {
return nil
}
@@ -512,27 +453,6 @@ func resolveClusterWaitParams(ctx context.Context, model *TFModel, diags *diag.D
}
}
-func resolveTimeout(ctx context.Context, t *timeouts.Value, operationName string, diags *diag.Diagnostics) time.Duration {
- var (
- timeoutDuration time.Duration
- localDiags diag.Diagnostics
- )
- switch operationName {
- case operationCreate:
- timeoutDuration, localDiags = t.Create(ctx, constant.DefaultTimeout)
- diags.Append(localDiags...)
- case operationUpdate:
- timeoutDuration, localDiags = t.Update(ctx, constant.DefaultTimeout)
- diags.Append(localDiags...)
- case operationDelete:
- timeoutDuration, localDiags = t.Delete(ctx, constant.DefaultTimeout)
- diags.Append(localDiags...)
- default:
- timeoutDuration = constant.DefaultTimeout
- }
- return timeoutDuration
-}
-
type clusterDiff struct {
clusterPatchOnlyReq *admin.ClusterDescription20240805
upgradeTenantReq *admin.LegacyAtlasTenantClusterUpgradeRequest
@@ -559,11 +479,8 @@ func (c *clusterDiff) isAnyUpgrade() bool {
// findClusterDiff should be called only in Update, e.g. it will fail for a flex cluster with no changes.
func findClusterDiff(ctx context.Context, state, plan *TFModel, diags *diag.Diagnostics) clusterDiff {
- if _ = isShardingConfigUpgrade(ctx, state, plan, diags); diags.HasError() { // Checks that there is no downgrade from new sharding config to old one
- return clusterDiff{}
- }
- stateReq := normalizeFromTFModel(ctx, state, diags, false)
- planReq := normalizeFromTFModel(ctx, plan, diags, false)
+ stateReq := newAtlasReq(ctx, state, diags)
+ planReq := newAtlasReq(ctx, plan, diags)
if diags.HasError() {
return clusterDiff{}
}
@@ -582,14 +499,6 @@ func findClusterDiff(ctx context.Context, state, plan *TFModel, diags *diag.Diag
patchOptions := update.PatchOptions{
IgnoreInStatePrefix: []string{"replicationSpecs"}, // only use config values for replicationSpecs, state values might come from the UseStateForUnknowns and shouldn't be used, `id` is added in updateLegacyReplicationSpecs
}
- if usingNewShardingConfig(ctx, plan.ReplicationSpecs, diags) {
- patchOptions.IgnoreInStateSuffix = append(patchOptions.IgnoreInStateSuffix, "id") // Not safe to send replication_spec.*.id when using the new schema: replicationSpecs.java.util.ArrayList[0].id attribute does not match expected format
- }
- if findNumShardsUpdates(ctx, state, plan, diags) != nil {
- // force update the replicationSpecs when update.PatchPayload will not detect changes by default:
- // `num_shards` updates is only in the legacy ClusterDescription
- patchOptions.ForceUpdateAttr = append(patchOptions.ForceUpdateAttr, "replicationSpecs")
- }
patchReq, err := update.PatchPayload(stateReq, planReq, patchOptions)
if err != nil {
diags.AddError(errorPatchPayload, err.Error())
@@ -607,7 +516,7 @@ func findClusterDiff(ctx context.Context, state, plan *TFModel, diags *diag.Diag
}
func handleFlexUpgrade(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, waitParams *ClusterWaitParams, plan *TFModel) *TFModel {
- configReq := normalizeFromTFModel(ctx, plan, diags, false)
+ configReq := newAtlasReq(ctx, plan, diags)
if diags.HasError() {
return nil
}
@@ -618,28 +527,18 @@ func handleFlexUpgrade(ctx context.Context, diags *diag.Diagnostics, client *con
return NewTFModelFlexResource(ctx, diags, flexCluster, GetPriorityOfFlexReplicationSpecs(configReq.ReplicationSpecs), plan)
}
-func handleFlexUpdate(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, plan *TFModel) *TFModel {
- configReq := normalizeFromTFModel(ctx, plan, diags, false)
+func handleFlexUpdate(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, waitParams *ClusterWaitParams, plan *TFModel) *TFModel {
+ configReq := newAtlasReq(ctx, plan, diags)
if diags.HasError() {
return nil
}
clusterName := plan.Name.ValueString()
flexCluster, err := flexcluster.UpdateFlexCluster(ctx, plan.ProjectID.ValueString(), clusterName,
GetFlexClusterUpdateRequest(configReq.Tags, configReq.TerminationProtectionEnabled),
- client.AtlasV2.FlexClustersApi)
+ client.AtlasV2.FlexClustersApi, waitParams.Timeout)
if err != nil {
diags.AddError(fmt.Sprintf(flexcluster.ErrorUpdateFlex, clusterName), err.Error())
return nil
}
return NewTFModelFlexResource(ctx, diags, flexCluster, GetPriorityOfFlexReplicationSpecs(configReq.ReplicationSpecs), plan)
}
-
-func isShardingConfigUpgrade(ctx context.Context, state, plan *TFModel, diags *diag.Diagnostics) bool {
- stateUsingNewSharding := usingNewShardingConfig(ctx, state.ReplicationSpecs, diags)
- planUsingNewSharding := usingNewShardingConfig(ctx, plan.ReplicationSpecs, diags)
- if stateUsingNewSharding && !planUsingNewSharding {
- diags.AddError(errorSchemaDowngrade, fmt.Sprintf(errorSchemaDowngradeDetail, plan.Name.ValueString()))
- return false
- }
- return !stateUsingNewSharding && planUsingNewSharding
-}
diff --git a/internal/service/advancedclustertpf/resource_compatibility_reuse.go b/internal/service/advancedclustertpf/resource_compatibility_reuse.go
deleted file mode 100644
index 11b04e6236..0000000000
--- a/internal/service/advancedclustertpf/resource_compatibility_reuse.go
+++ /dev/null
@@ -1,97 +0,0 @@
-package advancedclustertpf
-
-import (
- "context"
- "fmt"
- "strconv"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-)
-
-type MajorVersionOperator int
-
-const (
- EqualOrHigher MajorVersionOperator = iota
- Higher
- EqualOrLower
- Lower
-)
-
-func MajorVersionCompatible(input *string, version float64, operator MajorVersionOperator) *bool {
- if !conversion.IsStringPresent(input) {
- return nil
- }
- value, err := strconv.ParseFloat(*input, 64)
- if err != nil {
- return nil
- }
- var result bool
- switch operator {
- case EqualOrHigher:
- result = value >= version
- case Higher:
- result = value > version
- case EqualOrLower:
- result = value <= version
- case Lower:
- result = value < version
- default:
- return nil
- }
- return &result
-}
-
-func containerIDKey(providerName, regionName string) string {
- return fmt.Sprintf("%s:%s", providerName, regionName)
-}
-
-// based on flattenAdvancedReplicationSpecRegionConfigs in model_advanced_cluster.go
-func resolveContainerIDs(ctx context.Context, projectID string, cluster *admin.ClusterDescription20240805, api admin.NetworkPeeringApi) (map[string]string, error) {
- containerIDs := map[string]string{}
- responseCache := map[string]*admin.PaginatedCloudProviderContainer{}
- for _, spec := range cluster.GetReplicationSpecs() {
- for _, regionConfig := range spec.GetRegionConfigs() {
- providerName := regionConfig.GetProviderName()
- if providerName == constant.TENANT {
- continue
- }
- params := &admin.ListGroupContainersApiParams{
- GroupId: projectID,
- ProviderName: &providerName,
- }
- key := containerIDKey(providerName, regionConfig.GetRegionName())
- if _, ok := containerIDs[key]; ok {
- continue
- }
- var containersResponse *admin.PaginatedCloudProviderContainer
- var err error
- if response, ok := responseCache[providerName]; ok {
- containersResponse = response
- } else {
- containersResponse, _, err = api.ListGroupContainersWithParams(ctx, params).Execute()
- if err != nil {
- return nil, err
- }
- responseCache[providerName] = containersResponse
- }
- if results := GetAdvancedClusterContainerID(containersResponse.GetResults(), ®ionConfig); results != "" {
- containerIDs[key] = results
- } else {
- return nil, fmt.Errorf("container id not found for %s", key)
- }
- }
- }
- return containerIDs, nil
-}
-
-func replicationSpecIDsFromOldAPI(clusterRespOld *admin20240530.AdvancedClusterDescription) map[string]string {
- specs := clusterRespOld.GetReplicationSpecs()
- zoneNameSpecIDs := make(map[string]string, len(specs))
- for _, spec := range specs {
- zoneNameSpecIDs[spec.GetZoneName()] = spec.GetId()
- }
- return zoneNameSpecIDs
-}
diff --git a/internal/service/advancedclustertpf/resource_compatiblity.go b/internal/service/advancedclustertpf/resource_compatiblity.go
index 04398850b4..f1586fd908 100644
--- a/internal/service/advancedclustertpf/resource_compatiblity.go
+++ b/internal/service/advancedclustertpf/resource_compatiblity.go
@@ -3,280 +3,115 @@ package advancedclustertpf
import (
"context"
"fmt"
- "reflect"
+ "strconv"
- "github.com/hashicorp/terraform-plugin-framework/diag"
- "github.com/hashicorp/terraform-plugin-framework/types"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
-)
-
-func overrideAttributesWithPrevStateValue(modelIn, modelOut *TFModel) {
- beforeVersion := conversion.NilForUnknown(modelIn.MongoDBMajorVersion, modelIn.MongoDBMajorVersion.ValueStringPointer())
- if beforeVersion != nil && !modelIn.MongoDBMajorVersion.Equal(modelOut.MongoDBMajorVersion) {
- modelOut.MongoDBMajorVersion = types.StringPointerValue(beforeVersion)
- }
- retainBackups := conversion.NilForUnknown(modelIn.RetainBackupsEnabled, modelIn.RetainBackupsEnabled.ValueBoolPointer())
- if retainBackups != nil && !modelIn.RetainBackupsEnabled.Equal(modelOut.RetainBackupsEnabled) {
- modelOut.RetainBackupsEnabled = types.BoolPointerValue(retainBackups)
- }
- if modelIn.DeleteOnCreateTimeout.ValueBoolPointer() != nil {
- modelOut.DeleteOnCreateTimeout = modelIn.DeleteOnCreateTimeout
- }
- overrideMapStringWithPrevStateValue(&modelIn.Labels, &modelOut.Labels)
- overrideMapStringWithPrevStateValue(&modelIn.Tags, &modelOut.Tags)
-}
-func overrideMapStringWithPrevStateValue(mapIn, mapOut *types.Map) {
- if mapIn == nil || mapOut == nil || len(mapOut.Elements()) > 0 {
- return
- }
- if mapIn.IsNull() {
- *mapOut = types.MapNull(types.StringType)
- } else {
- *mapOut = types.MapValueMust(types.StringType, nil)
- }
-}
-
-func findNumShardsUpdates(ctx context.Context, state, plan *TFModel, diags *diag.Diagnostics) map[string]int64 {
- if usingNewShardingConfig(ctx, plan.ReplicationSpecs, diags) {
- return nil
- }
- stateCounts := numShardsMap(ctx, state.ReplicationSpecs, diags)
- planCounts := numShardsMap(ctx, plan.ReplicationSpecs, diags)
- if diags.HasError() {
- return nil
- }
- if reflect.DeepEqual(stateCounts, planCounts) {
- return nil
- }
- return planCounts
-}
-
-func resolveAPIInfo(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, clusterLatest *admin.ClusterDescription20240805, useReplicationSpecPerShard bool) *ExtraAPIInfo {
- var (
- api20240530 = client.AtlasV220240530.ClustersApi
- projectID = clusterLatest.GetGroupId()
- clusterName = clusterLatest.GetName()
- useOldShardingConfigFailed = false
- )
- clusterRespOld, _, err := api20240530.GetCluster(ctx, projectID, clusterName).Execute()
- if err != nil {
- if validate.ErrorClusterIsAsymmetrics(err) {
- useOldShardingConfigFailed = !useReplicationSpecPerShard
- } else {
- diags.AddError(errorReadLegacy20240530, defaultAPIErrorDetails(clusterName, err))
- return nil
- }
- }
- containerIDs, err := resolveContainerIDs(ctx, projectID, clusterLatest, client.AtlasV2.NetworkPeeringApi)
- if err != nil {
- diags.AddError(errorResolveContainerIDs, fmt.Sprintf("cluster name = %s, error details: %s", clusterName, err.Error()))
- return nil
- }
- return &ExtraAPIInfo{
- ContainerIDs: containerIDs,
- ZoneNameReplicationSpecIDs: replicationSpecIDsFromOldAPI(clusterRespOld),
- UseOldShardingConfigFailed: useOldShardingConfigFailed,
- ZoneNameNumShards: numShardsMapFromOldAPI(clusterRespOld),
- UseNewShardingConfig: useReplicationSpecPerShard,
- }
-}
-// instead of using `num_shards` expand the replication specs, and set disk_size_gb
-func normalizeFromTFModel(ctx context.Context, model *TFModel, diags *diag.Diagnostics, shouldExpandNumShards bool) *admin.ClusterDescription20240805 {
- latestModel := NewAtlasReq(ctx, model, diags)
- if diags.HasError() {
- return nil
- }
- counts := numShardsCounts(ctx, model.ReplicationSpecs, diags)
- if diags.HasError() {
- return nil
- }
- usingLegacySchema := isNumShardsGreaterThanOne(counts)
- if usingLegacySchema && shouldExpandNumShards {
- expandNumShards(latestModel, counts)
- }
- normalizeDiskSize(model, latestModel, diags)
- if diags.HasError() {
- return nil
- }
- return latestModel
-}
+ "github.com/hashicorp/terraform-plugin-framework/types"
-func normalizeDiskSize(model *TFModel, latestModel *admin.ClusterDescription20240805, diags *diag.Diagnostics) {
- rootDiskSize := conversion.NilForUnknown(model.DiskSizeGB, model.DiskSizeGB.ValueFloat64Pointer())
- regionRootDiskSize := findFirstRegionDiskSizeGB(latestModel.ReplicationSpecs)
- if rootDiskSize != nil && regionRootDiskSize != nil && (*regionRootDiskSize-*rootDiskSize) > 0.01 {
- errMsg := fmt.Sprintf("disk_size_gb @ root != disk_size_gb @ region (%.2f!=%.2f)", *rootDiskSize, *regionRootDiskSize)
- diags.AddError(errMsg, errMsg)
- return
- }
- diskSize := rootDiskSize
- // Prefer regionRootDiskSize over rootDiskSize
- if regionRootDiskSize != nil {
- diskSize = regionRootDiskSize
- }
- if diskSize != nil {
- setDiskSize(latestModel, diskSize)
- }
-}
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+)
-func expandNumShards(req *admin.ClusterDescription20240805, counts []int64) {
- specs := req.GetReplicationSpecs()
- newSpecs := []admin.ReplicationSpec20240805{}
- for i, spec := range specs {
- newSpecs = append(newSpecs, spec)
- for range counts[i] - 1 {
- newSpecs = append(newSpecs, *repSpecNoIDs(spec))
- }
- }
- req.ReplicationSpecs = &newSpecs
-}
+type MajorVersionOperator int
-func repSpecNoIDs(repspec admin.ReplicationSpec20240805) *admin.ReplicationSpec20240805 {
- repspec.Id = nil
- repspec.ZoneId = nil
- return &repspec
-}
+const (
+ EqualOrHigher MajorVersionOperator = iota
+ Higher
+ EqualOrLower
+ Lower
+)
-func numShardsCounts(ctx context.Context, input types.List, diags *diag.Diagnostics) []int64 {
- elements := make([]TFReplicationSpecsModel, len(input.Elements()))
- if len(elements) == 0 {
+func MajorVersionCompatible(input *string, version float64, operator MajorVersionOperator) *bool {
+ if !conversion.IsStringPresent(input) {
return nil
}
- if localDiags := input.ElementsAs(ctx, &elements, false); len(localDiags) > 0 {
- diags.Append(localDiags...)
- return nil
- }
- counts := make([]int64, len(elements))
- for i := range elements {
- item := &elements[i]
- counts[i] = item.NumShards.ValueInt64()
- }
- return counts
-}
-
-func usingNewShardingConfig(ctx context.Context, input types.List, diags *diag.Diagnostics) bool {
- counts := numShardsCounts(ctx, input, diags)
- if diags.HasError() {
- return true
- }
- return !isNumShardsGreaterThanOne(counts)
-}
-
-func numShardsMap(ctx context.Context, input types.List, diags *diag.Diagnostics) map[string]int64 {
- elements := make([]TFReplicationSpecsModel, len(input.Elements()))
- if len(elements) == 0 {
+ value, err := strconv.ParseFloat(*input, 64)
+ if err != nil {
return nil
}
- if localDiags := input.ElementsAs(ctx, &elements, false); len(localDiags) > 0 {
- diags.Append(localDiags...)
+ var result bool
+ switch operator {
+ case EqualOrHigher:
+ result = value >= version
+ case Higher:
+ result = value > version
+ case EqualOrLower:
+ result = value <= version
+ case Lower:
+ result = value < version
+ default:
return nil
}
- counts := map[string]int64{}
- for i := range elements {
- e := elements[i]
- zoneName := resolveZoneNameOrUseDefault(&e)
- counts[zoneName] = e.NumShards.ValueInt64()
- }
- return counts
+ return &result
}
-func numShardsMapFromOldAPI(clusterRespOld *admin20240530.AdvancedClusterDescription) map[string]int64 {
- ret := make(map[string]int64)
- for i := range clusterRespOld.GetReplicationSpecs() {
- spec := &clusterRespOld.GetReplicationSpecs()[i]
- ret[spec.GetZoneName()] = int64(spec.GetNumShards())
- }
- return ret
+func containerIDKey(providerName, regionName string) string {
+ return fmt.Sprintf("%s:%s", providerName, regionName)
}
-func isNumShardsGreaterThanOne(counts []int64) bool {
- for _, count := range counts {
- if count > 1 {
- return true
- }
- }
- return false
-}
-
-// setDiskSize use most specific disk size, prefer region > spec > root disk size
-func setDiskSize(req *admin.ClusterDescription20240805, defaultSize *float64) {
- for i, spec := range req.GetReplicationSpecs() {
- specSizeDefault := findFirstRegionDiskSizeGB(&[]admin.ReplicationSpec20240805{spec})
- if specSizeDefault == nil {
- specSizeDefault = defaultSize
- }
- for j := range spec.GetRegionConfigs() {
- actualConfig := req.GetReplicationSpecs()[i].GetRegionConfigs()[j]
- regionSize := findRegionDiskSizeGB(&actualConfig)
- if regionSize == nil {
- regionSize = specSizeDefault
+// based on flattenAdvancedReplicationSpecRegionConfigs in model_advanced_cluster.go
+func resolveContainerIDs(ctx context.Context, projectID string, cluster *admin.ClusterDescription20240805, api admin.NetworkPeeringApi) (map[string]string, error) {
+ containerIDs := map[string]string{}
+ responseCache := map[string]*admin.PaginatedCloudProviderContainer{}
+ for _, spec := range cluster.GetReplicationSpecs() {
+ for _, regionConfig := range spec.GetRegionConfigs() {
+ providerName := regionConfig.GetProviderName()
+ if providerName == constant.TENANT {
+ continue
}
- analyticsSpecs := actualConfig.AnalyticsSpecs
- if analyticsSpecs != nil {
- analyticsSpecs.DiskSizeGB = regionSize
+ params := &admin.ListGroupContainersApiParams{
+ GroupId: projectID,
+ ProviderName: &providerName,
}
- electable := actualConfig.ElectableSpecs
- if electable != nil {
- electable.DiskSizeGB = regionSize
+ key := containerIDKey(providerName, regionConfig.GetRegionName())
+ if _, ok := containerIDs[key]; ok {
+ continue
}
- readonly := actualConfig.ReadOnlySpecs
- if readonly != nil {
- readonly.DiskSizeGB = regionSize
+ var containersResponse *admin.PaginatedCloudProviderContainer
+ var err error
+ if response, ok := responseCache[providerName]; ok {
+ containersResponse = response
+ } else {
+ containersResponse, _, err = api.ListGroupContainersWithParams(ctx, params).Execute()
+ if err != nil {
+ return nil, err
+ }
+ responseCache[providerName] = containersResponse
}
- }
- }
-}
-
-func findFirstRegionDiskSizeGB(specs *[]admin.ReplicationSpec20240805) *float64 {
- if specs == nil {
- return nil
- }
- for _, spec := range *specs {
- for _, regionConfig := range spec.GetRegionConfigs() {
- diskSizeGB := findRegionDiskSizeGB(&regionConfig)
- if diskSizeGB != nil {
- return diskSizeGB
+ if results := GetAdvancedClusterContainerID(containersResponse.GetResults(), &regionConfig); results != "" {
+ containerIDs[key] = results
+ } else {
+ return nil, fmt.Errorf("container id not found for %s", key)
}
}
}
- return nil
+ return containerIDs, nil
}
-func findRegionDiskSizeGB(regionConfig *admin.CloudRegionConfig20240805) *float64 {
- electable := regionConfig.ElectableSpecs
- if electable != nil && electable.DiskSizeGB != nil {
- return electable.DiskSizeGB
+func overrideAttributesWithPrevStateValue(modelIn, modelOut *TFModel) {
+ beforeVersion := conversion.NilForUnknown(modelIn.MongoDBMajorVersion, modelIn.MongoDBMajorVersion.ValueStringPointer())
+ if beforeVersion != nil && !modelIn.MongoDBMajorVersion.Equal(modelOut.MongoDBMajorVersion) {
+ modelOut.MongoDBMajorVersion = types.StringPointerValue(beforeVersion)
}
- analyticsSpecs := regionConfig.AnalyticsSpecs
- if analyticsSpecs != nil && analyticsSpecs.DiskSizeGB != nil {
- return analyticsSpecs.DiskSizeGB
+ retainBackups := conversion.NilForUnknown(modelIn.RetainBackupsEnabled, modelIn.RetainBackupsEnabled.ValueBoolPointer())
+ if retainBackups != nil && !modelIn.RetainBackupsEnabled.Equal(modelOut.RetainBackupsEnabled) {
+ modelOut.RetainBackupsEnabled = types.BoolPointerValue(retainBackups)
}
- readonly := regionConfig.ReadOnlySpecs
- if readonly != nil && readonly.DiskSizeGB != nil {
- return readonly.DiskSizeGB
+ if modelIn.DeleteOnCreateTimeout.ValueBoolPointer() != nil {
+ modelOut.DeleteOnCreateTimeout = modelIn.DeleteOnCreateTimeout
}
- return nil
+ overrideMapStringWithPrevStateValue(&modelIn.Labels, &modelOut.Labels)
+ overrideMapStringWithPrevStateValue(&modelIn.Tags, &modelOut.Tags)
}
-func externalIDToLegacyID(ctx context.Context, input types.List, diags *diag.Diagnostics) map[string]string {
- elements := make([]TFReplicationSpecsModel, len(input.Elements()))
- if localDiags := input.ElementsAs(ctx, &elements, false); len(localDiags) > 0 {
- diags.Append(localDiags...)
- return nil
+func overrideMapStringWithPrevStateValue(mapIn, mapOut *types.Map) {
+ if mapIn == nil || mapOut == nil || len(mapOut.Elements()) > 0 {
+ return
}
- idsMapped := map[string]string{}
- for i := range elements {
- e := elements[i]
- externalID := e.ExternalId.ValueString()
- legacyID := e.Id.ValueString()
- if externalID != "" && legacyID != "" {
- idsMapped[externalID] = legacyID
- }
+ if mapIn.IsNull() {
+ *mapOut = types.MapNull(types.StringType)
+ } else {
+ *mapOut = types.MapValueMust(types.StringType, nil)
}
- return idsMapped
}
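The file above introduces a `MajorVersionCompatible` helper that parses a MongoDB major version string and compares it against a threshold, returning `nil` when the input is absent or unparsable. A standalone sketch of that comparison logic (names are lowercased local stand-ins, not the exported API; the `conversion.IsStringPresent` check is replaced by a plain empty-string guard):

```go
package main

import (
	"fmt"
	"strconv"
)

type majorVersionOperator int

const (
	equalOrHigher majorVersionOperator = iota
	higher
	equalOrLower
	lower
)

// majorVersionCompatible mirrors the helper added in the diff:
// a nil result signals "could not decide" (missing or unparsable input),
// which is distinct from a definite false.
func majorVersionCompatible(input string, version float64, op majorVersionOperator) *bool {
	if input == "" {
		return nil
	}
	value, err := strconv.ParseFloat(input, 64)
	if err != nil {
		return nil
	}
	var result bool
	switch op {
	case equalOrHigher:
		result = value >= version
	case higher:
		result = value > version
	case equalOrLower:
		result = value <= version
	case lower:
		result = value < version
	default:
		return nil
	}
	return &result
}

func main() {
	fmt.Println(*majorVersionCompatible("8.0", 8.0, equalOrHigher)) // true
	fmt.Println(majorVersionCompatible("not-a-version", 8.0, lower) == nil) // true: unparsable input
}
```

Returning `*bool` rather than `bool` lets callers distinguish "incompatible" from "version unknown", which matters when the cluster's version string comes from user input that may not be set.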
diff --git a/internal/service/advancedclustertpf/resource_migration_test.go b/internal/service/advancedclustertpf/resource_migration_test.go
new file mode 100644
index 0000000000..e177beb329
--- /dev/null
+++ b/internal/service/advancedclustertpf/resource_migration_test.go
@@ -0,0 +1,27 @@
+package advancedclustertpf_test
+
+import (
+ "testing"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+func TestMigAdvancedCluster_replicaSetAWSProvider(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0")
+ mig.CreateAndRunTest(t, replicaSetAWSProviderTestCase(t))
+}
+
+func TestMigAdvancedCluster_replicaSetMultiCloud(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0")
+ mig.CreateAndRunTest(t, replicaSetMultiCloudTestCase(t))
+}
+
+func TestMigAdvancedCluster_singleShardedMultiCloud(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0")
+ mig.CreateAndRunTest(t, singleShardedMultiCloudTestCase(t))
+}
+
+func TestMigAdvancedClusterConfig_asymmetricGeoShardedNewSchema(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0")
+ mig.CreateAndRunTest(t, asymmetricGeoShardedNewSchema(t))
+}
diff --git a/internal/service/advancedclustertpf/resource_migration_v1x_test.go b/internal/service/advancedclustertpf/resource_migration_v1x_test.go
new file mode 100644
index 0000000000..6313420906
--- /dev/null
+++ b/internal/service/advancedclustertpf/resource_migration_v1x_test.go
@@ -0,0 +1,375 @@
+package advancedclustertpf_test
+
+import (
+ "fmt"
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+var versionBeforeTPFGARelease = os.Getenv("MONGODB_ATLAS_LAST_1X_VERSION")
+
+func TestV1xMigClusterAdvancedClusterConfig_geoShardedNewSchema(t *testing.T) {
+ projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
+ isSDKv2 := acc.IsTestSDKv2ToTPF()
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { mig.PreCheckBasic(t); mig.PreCheckLast1XVersion(t) },
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: acc.ExternalProviders(versionBeforeTPFGARelease),
+ Config: configGeoShardedTransitionOldToNewSchema(t, !isSDKv2, projectID, clusterName, true),
+ Check: checkGeoShardedTransitionOldToNewSchema(!isSDKv2, true),
+ },
+ mig.TestStepCheckEmptyPlan(configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true),
+ Check: checkGeoShardedTransitionOldToNewSchema(true, true),
+ },
+ mig.TestStepCheckEmptyPlan(configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true)),
+ },
+ })
+}
+
+func configGeoShardedTransitionOldToNewSchema(t *testing.T, isTPF bool, projectID, name string, useNewSchema bool) string {
+ t.Helper()
+ var numShardsStr string
+ var diskSizeGB string
+ if !useNewSchema {
+ numShardsStr = `num_shards = 2`
+ diskSizeGB = `disk_size_gb = 15`
+ }
+ replicationSpec := `
+ replication_specs {
+ %[1]s
+ region_configs {
+ electable_specs {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = %[2]q
+ }
+ zone_name = %[3]q
+ }
+ `
+
+ var replicationSpecs string
+ if !useNewSchema {
+ replicationSpecs = fmt.Sprintf(`
+ %[1]s
+ %[2]s
+ `, fmt.Sprintf(replicationSpec, numShardsStr, "US_EAST_1", "zone 1"), fmt.Sprintf(replicationSpec, numShardsStr, "EU_WEST_1", "zone 2"))
+ } else {
+ replicationSpecs = fmt.Sprintf(`
+ %[1]s
+ %[2]s
+ %[3]s
+ %[4]s
+ `, fmt.Sprintf(replicationSpec, numShardsStr, "US_EAST_1", "zone 1"), fmt.Sprintf(replicationSpec, numShardsStr, "US_EAST_1", "zone 1"),
+ fmt.Sprintf(replicationSpec, numShardsStr, "EU_WEST_1", "zone 2"), fmt.Sprintf(replicationSpec, numShardsStr, "EU_WEST_1", "zone 2"))
+ }
+
+ return acc.ConvertAdvancedClusterToTPF(t, isTPF, fmt.Sprintf(`
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = %[1]q
+ name = %[2]q
+ backup_enabled = false
+ cluster_type = "GEOSHARDED"
+
+ %[4]s
+
+ %[3]s
+ }
+ `, projectID, name, replicationSpecs, diskSizeGB)) + dataSourcesConfig
+}
+
+func checkGeoShardedTransitionOldToNewSchema(isTPF, useNewSchema bool) resource.TestCheckFunc {
+ if useNewSchema {
+ return checkAggrMig(isTPF, false,
+ []string{
+ "replication_specs.0.external_id", "replication_specs.1.external_id", "replication_specs.2.external_id", "replication_specs.3.external_id",
+ },
+ map[string]string{
+ "replication_specs.#": "4",
+ "replication_specs.0.zone_name": "zone 1",
+ "replication_specs.1.zone_name": "zone 1",
+ "replication_specs.2.zone_name": "zone 2",
+ "replication_specs.3.zone_name": "zone 2",
+ },
+ )
+ }
+ return checkAggrMig(isTPF, false,
+ []string{},
+ map[string]string{
+ "replication_specs.#": "2",
+ "replication_specs.0.zone_name": "zone 1",
+ "replication_specs.1.zone_name": "zone 2",
+ },
+ )
+}
+
+func TestV1xMigAdvancedCluster_oldToNewSchemaWithAutoscalingEnabled(t *testing.T) {
+ projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
+ isSDKv2 := acc.IsTestSDKv2ToTPF()
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasicSleep(t, nil, projectID, clusterName); mig.PreCheckLast1XVersion(t) },
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: acc.ExternalProviders(versionBeforeTPFGARelease),
+ Config: configShardedTransitionOldToNewSchema(t, !isSDKv2, projectID, clusterName, false, true, false),
+ Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER"),
+ },
+ mig.TestStepCheckEmptyPlan(configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, true, false)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, true, false),
+ Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER"),
+ },
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, true, true),
+ Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "SHARD"),
+ },
+ mig.TestStepCheckEmptyPlan(configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, true, true)),
+ },
+ })
+}
+
+func TestV1xMigAdvancedCluster_shardedNewSchema(t *testing.T) {
+ projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
+ versionBeforeTPFGARelease := os.Getenv("MONGODB_ATLAS_LAST_1X_VERSION")
+ isSDKv2 := acc.IsTestSDKv2ToTPF()
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { mig.PreCheckBasic(t); mig.PreCheckLast1XVersion(t) },
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: acc.ExternalProviders(versionBeforeTPFGARelease),
+ Config: configShardedTransitionOldToNewSchema(t, !isSDKv2, projectID, clusterName, true, false, false),
+ Check: checkShardedTransitionOldToNewSchema(!isSDKv2, true),
+ },
+ mig.TestStepCheckEmptyPlan(configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, false, false)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, false, false),
+ Check: checkShardedTransitionOldToNewSchema(true, true),
+ },
+ mig.TestStepCheckEmptyPlan(configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, false, false)),
+ },
+ })
+}
+
+func configShardedTransitionOldToNewSchema(t *testing.T, isTPF bool, projectID, name string, useNewSchema, autoscaling, isUpdate bool) string {
+ t.Helper()
+ var numShardsStr string
+ var diskSizeGBStr string
+ if !useNewSchema {
+ numShardsStr = `num_shards = 2`
+ diskSizeGBStr = `disk_size_gb = 15`
+ }
+ var autoscalingStr string
+ if autoscaling {
+ autoscalingStr = `auto_scaling {
+ compute_enabled = true
+ disk_gb_enabled = true
+ compute_max_instance_size = "M20"
+ }`
+
+ if isUpdate {
+ autoscalingStr = `auto_scaling {
+ compute_enabled = true
+ disk_gb_enabled = true
+ compute_max_instance_size = "M30"
+ }`
+ }
+ }
+
+ replicationSpec := fmt.Sprintf(`
+ replication_specs {
+ %[1]s
+
+ region_configs {
+ electable_specs {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ %[2]s
+ }
+ }
+ `, numShardsStr, autoscalingStr)
+
+ var replicationSpecs string
+ if useNewSchema {
+ replicationSpecs = fmt.Sprintf(`
+ %[1]s
+ %[1]s
+ `, replicationSpec)
+ } else {
+ replicationSpecs = replicationSpec
+ }
+
+ return acc.ConvertAdvancedClusterToTPF(t, isTPF, fmt.Sprintf(`
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = %[1]q
+ name = %[2]q
+ backup_enabled = false
+ cluster_type = "SHARDED"
+
+ %[4]s
+
+ %[3]s
+ }
+
+ `, projectID, name, replicationSpecs, diskSizeGBStr)) + dataSourcesConfig
+}
+
+func checkShardedTransitionOldToNewSchema(isTPF, useNewSchema bool) resource.TestCheckFunc {
+ var amtOfReplicationSpecs int
+ if useNewSchema {
+ amtOfReplicationSpecs = 2
+ } else {
+ amtOfReplicationSpecs = 1
+ }
+ var checksForNewSchema []resource.TestCheckFunc
+ if useNewSchema {
+ checksForNewSchema = []resource.TestCheckFunc{
+ checkAggrMig(isTPF, false, []string{"replication_specs.0.external_id", "replication_specs.1.external_id"},
+ map[string]string{
+ "replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
+ "replication_specs.1.region_configs.0.electable_specs.0.instance_size": "M10",
+ "replication_specs.1.region_configs.0.analytics_specs.0.instance_size": "M10",
+ }),
+ }
+ }
+
+ return checkAggrMig(isTPF, false,
+ []string{},
+ map[string]string{
+ "replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
+ "replication_specs.0.region_configs.0.electable_specs.0.instance_size": "M10",
+ "replication_specs.0.region_configs.0.analytics_specs.0.instance_size": "M10",
+ },
+ checksForNewSchema...,
+ )
+}
+
+func TestV1xMigAdvancedCluster_geoShardedMigrationFromOldToNewSchema(t *testing.T) {
+ projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
+ versionBeforeTPFGARelease := os.Getenv("MONGODB_ATLAS_LAST_1X_VERSION")
+ isSDKv2 := acc.IsTestSDKv2ToTPF()
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { mig.PreCheckBasic(t); mig.PreCheckLast1XVersion(t) },
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: acc.ExternalProviders(versionBeforeTPFGARelease),
+ Config: configGeoShardedTransitionOldToNewSchema(t, !isSDKv2, projectID, clusterName, false),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkGeoShardedTransitionOldToNewSchema(false, false),
+ acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER")),
+ },
+ mig.TestStepCheckEmptyPlan(configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkGeoShardedTransitionOldToNewSchema(true, true),
+ acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER")),
+ },
+ mig.TestStepCheckEmptyPlan(configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true)),
+ },
+ })
+}
+
+func TestV1xMigAdvancedCluster_replicaSetAWSProvider(t *testing.T) {
+ var (
+ projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 6)
+ versionBeforeTPFGARelease = os.Getenv("MONGODB_ATLAS_LAST_1X_VERSION")
+ isSDKv2 = acc.IsTestSDKv2ToTPF()
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasicSleep(t, nil, projectID, clusterName); mig.PreCheckLast1XVersion(t) },
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: acc.ExternalProviders(versionBeforeTPFGARelease),
+ Config: configAWSProvider(t, ReplicaSetAWSConfig{
+ ProjectID: projectID,
+ ClusterName: clusterName,
+ ClusterType: "REPLICASET",
+ DiskSizeGB: 60,
+ NodeCountElectable: 3,
+ }, !isSDKv2),
+ Check: checkReplicaSetAWSProvider(!isSDKv2, false, projectID, clusterName, 60, 3, true, true),
+ },
+ mig.TestStepCheckEmptyPlan(configAWSProvider(t, ReplicaSetAWSConfig{
+ ProjectID: projectID,
+ ClusterName: clusterName,
+ ClusterType: "REPLICASET",
+ DiskSizeGB: 60,
+ NodeCountElectable: 3,
+ }, true)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configAWSProvider(t, ReplicaSetAWSConfig{
+ ProjectID: projectID,
+ ClusterName: clusterName,
+ ClusterType: "REPLICASET",
+ DiskSizeGB: 60,
+ NodeCountElectable: 3,
+ }, true),
+ Check: checkReplicaSetAWSProvider(true, false, projectID, clusterName, 60, 3, true, true),
+ },
+ },
+ })
+}
+
+func TestV1xMigAdvancedCluster_replicaSetMultiCloud(t *testing.T) {
+ var (
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ projectName, clusterName = acc.ProjectIDExecutionWithCluster(t, 6)
+ isSDKv2 = acc.IsTestSDKv2ToTPF()
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t); mig.PreCheckLast1XVersion(t) },
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: acc.ExternalProviders(versionBeforeTPFGARelease),
+ Config: configReplicaSetMultiCloud(t, orgID, projectName, clusterName, !isSDKv2),
+ Check: checkReplicaSetMultiCloud(!isSDKv2, false, clusterName, 3),
+ },
+ mig.TestStepCheckEmptyPlan(configReplicaSetMultiCloud(t, orgID, projectName, clusterName, true)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configReplicaSetMultiCloud(t, orgID, projectName, clusterName, true),
+ Check: checkReplicaSetMultiCloud(true, false, clusterName, 3),
+ },
+ },
+ })
+}
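The `resolveContainerIDs` function added earlier in this diff memoizes the per-provider container list in a `responseCache` map so the network peering API is called at most once per cloud provider, even when many region configs share a provider. A minimal sketch of that memoization pattern in isolation (the `listContainers` stub and call counter are hypothetical stand-ins for the paginated Atlas API call):

```go
package main

import "fmt"

// containerIDKey mirrors the "<provider>:<region>" key format from the diff.
func containerIDKey(providerName, regionName string) string {
	return fmt.Sprintf("%s:%s", providerName, regionName)
}

// apiCalls counts stub invocations so the memoization effect is observable.
var apiCalls int

// listContainers is a hypothetical stand-in for ListGroupContainersWithParams.
func listContainers(provider string) []string {
	apiCalls++
	return []string{provider + "-container-id"}
}

// resolveWithCache fetches each provider's container list at most once,
// mirroring the responseCache map used by resolveContainerIDs.
func resolveWithCache(regions [][2]string) map[string][]string {
	cache := map[string][]string{}
	out := map[string][]string{}
	for _, r := range regions {
		provider, region := r[0], r[1]
		resp, ok := cache[provider]
		if !ok {
			resp = listContainers(provider)
			cache[provider] = resp
		}
		out[containerIDKey(provider, region)] = resp
	}
	return out
}

func main() {
	got := resolveWithCache([][2]string{
		{"AWS", "US_EAST_1"}, {"AWS", "EU_WEST_1"}, {"AZURE", "US_EAST_2"},
	})
	fmt.Println(len(got), apiCalls) // 3 region keys resolved with only 2 API calls
}
```

Keying the cache by provider name alone works here because the container listing is filtered per provider, not per region; the per-region lookup then happens against the cached response.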
diff --git a/internal/service/advancedcluster/resource_advanced_cluster_test.go b/internal/service/advancedclustertpf/resource_test.go
similarity index 51%
rename from internal/service/advancedcluster/resource_advanced_cluster_test.go
rename to internal/service/advancedclustertpf/resource_test.go
index 1e3af40411..5acc9d7141 100644
--- a/internal/service/advancedcluster/resource_advanced_cluster_test.go
+++ b/internal/service/advancedclustertpf/resource_test.go
@@ -1,6 +1,7 @@
-package advancedcluster_test
+package advancedclustertpf_test
import (
+ "errors"
"fmt"
"net/http"
"os"
@@ -18,42 +19,26 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
- "github.com/stretchr/testify/require"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedclustertpf"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/unit"
)
const (
- resourceName = "mongodbatlas_advanced_cluster.test"
- dataSourceName = "data.mongodbatlas_advanced_cluster.test"
- dataSourcePluralName = "data.mongodbatlas_advanced_clusters.test"
- dataSourcesTFOldSchema = `
+ resourceName = "mongodbatlas_advanced_cluster.test"
+ dataSourceName = "data.mongodbatlas_advanced_cluster.test"
+ dataSourcePluralName = "data.mongodbatlas_advanced_clusters.test"
+ dataSourcesConfig = `
data "mongodbatlas_advanced_cluster" "test" {
project_id = mongodbatlas_advanced_cluster.test.project_id
name = mongodbatlas_advanced_cluster.test.name
depends_on = [mongodbatlas_advanced_cluster.test]
}
-
- data "mongodbatlas_advanced_clusters" "test" {
- project_id = mongodbatlas_advanced_cluster.test.project_id
- depends_on = [mongodbatlas_advanced_cluster.test]
- }`
- dataSourcesTFNewSchema = `
- data "mongodbatlas_advanced_cluster" "test" {
- project_id = mongodbatlas_advanced_cluster.test.project_id
- name = mongodbatlas_advanced_cluster.test.name
- use_replication_spec_per_shard = true
- depends_on = [mongodbatlas_advanced_cluster.test]
- }
data "mongodbatlas_advanced_clusters" "test" {
- use_replication_spec_per_shard = true
project_id = mongodbatlas_advanced_cluster.test.project_id
depends_on = [mongodbatlas_advanced_cluster.test]
}`
@@ -65,6 +50,7 @@ var (
configServerManagementModeFixedToDedicated = "FIXED_TO_DEDICATED"
configServerManagementModeAtlasManaged = "ATLAS_MANAGED"
mockConfig = unit.MockConfigAdvancedClusterTPF
+ errGeneric = errors.New("generic")
)
func TestGetReplicationSpecAttributesFromOldAPI(t *testing.T) {
@@ -129,17 +115,17 @@ func testAccAdvancedClusterFlexUpgrade(t *testing.T, projectID, clusterName, ins
// avoid checking plural data source to reduce risk of being impacted from failure in other test using same project, allows running in parallel
steps := []resource.TestStep{
{
- Config: configTenant(t, true, projectID, clusterName, defaultZoneName, instanceSize),
- Check: checkTenant(true, projectID, clusterName, false),
+ Config: configTenant(t, projectID, clusterName, defaultZoneName, instanceSize),
+ Check: checkTenant(projectID, clusterName, false),
},
{
- Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", defaultZoneName, false),
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", defaultZoneName, "", false, nil),
Check: checkFlexClusterConfig(projectID, clusterName, "AWS", "US_EAST_1", false, false),
},
}
if includeDedicated {
steps = append(steps, resource.TestStep{
- Config: acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, acc.ConfigBasicDedicated(projectID, clusterName, defaultZoneName)),
+ Config: acc.ConfigBasicDedicated(projectID, clusterName, defaultZoneName),
Check: checksBasicDedicated(projectID, clusterName, false),
})
}
@@ -161,6 +147,7 @@ func TestAccAdvancedCluster_sharedTier_flexUpgrade(t *testing.T) {
projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 1)
resource.ParallelTest(t, testAccAdvancedClusterFlexUpgrade(t, projectID, clusterName, sharedInstanceSize, false))
}
+
func TestAccMockableAdvancedCluster_tenantUpgrade(t *testing.T) {
var (
projectID, clusterName = acc.ProjectIDExecutionWithFreeCluster(t, 3, 1)
@@ -172,11 +159,11 @@ func TestAccMockableAdvancedCluster_tenantUpgrade(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, configTenant(t, true, projectID, clusterName, defaultZoneName, freeInstanceSize)),
- Check: checkTenant(true, projectID, clusterName, true),
+ Config: configTenant(t, projectID, clusterName, defaultZoneName, freeInstanceSize),
+ Check: checkTenant(projectID, clusterName, true),
},
{
- Config: acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, acc.ConfigBasicDedicated(projectID, clusterName, defaultZoneName)),
+ Config: acc.ConfigBasicDedicated(projectID, clusterName, defaultZoneName),
Check: checksBasicDedicated(projectID, clusterName, true),
},
acc.TestStepImportCluster(resourceName),
@@ -185,61 +172,50 @@ func TestAccMockableAdvancedCluster_tenantUpgrade(t *testing.T) {
}
func TestAccClusterAdvancedCluster_replicaSetAWSProvider(t *testing.T) {
- resource.ParallelTest(t, replicaSetAWSProviderTestCase(t, true))
+ resource.ParallelTest(t, *replicaSetAWSProviderTestCase(t))
}
-func replicaSetAWSProviderTestCase(t *testing.T, usePreviewProvider bool) resource.TestCase {
+func replicaSetAWSProviderTestCase(t *testing.T) *resource.TestCase {
t.Helper()
+
var (
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 6)
)
- return resource.TestCase{
+ return &resource.TestCase{
PreCheck: acc.PreCheckBasicSleep(t, nil, projectID, clusterName),
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configAWSProvider(t, usePreviewProvider, ReplicaSetAWSConfig{
+ Config: configAWSProvider(t, ReplicaSetAWSConfig{
ProjectID: projectID,
ClusterName: clusterName,
ClusterType: "REPLICASET",
DiskSizeGB: 60,
NodeCountElectable: 3,
- WithAnalyticsSpecs: true,
- }),
- Check: checkReplicaSetAWSProvider(usePreviewProvider, projectID, clusterName, 60, 3, true, true),
- },
- // empty plan when analytics block is removed
- acc.TestStepCheckEmptyPlan(configAWSProvider(t, usePreviewProvider, ReplicaSetAWSConfig{
- ProjectID: projectID,
- ClusterName: clusterName,
- ClusterType: "REPLICASET",
- DiskSizeGB: 60,
- NodeCountElectable: 3,
- WithAnalyticsSpecs: false,
- })),
- {
- Config: configAWSProvider(t, usePreviewProvider, ReplicaSetAWSConfig{
+ }, true),
+ Check: checkReplicaSetAWSProvider(true, true, projectID, clusterName, 60, 3, true, true),
+ },
+ {
+ Config: configAWSProvider(t, ReplicaSetAWSConfig{
ProjectID: projectID,
ClusterName: clusterName,
ClusterType: "REPLICASET",
DiskSizeGB: 50,
NodeCountElectable: 5,
- WithAnalyticsSpecs: false, // other update made after removed analytics block, computed value is expected to be the same
- }),
- Check: checkReplicaSetAWSProvider(usePreviewProvider, projectID, clusterName, 50, 5, true, true),
+ }, true),
+ Check: checkReplicaSetAWSProvider(true, true, projectID, clusterName, 50, 5, true, true),
},
{ // testing transition from replica set to sharded cluster
- Config: configAWSProvider(t, usePreviewProvider, ReplicaSetAWSConfig{
+ Config: configAWSProvider(t, ReplicaSetAWSConfig{
ProjectID: projectID,
ClusterName: clusterName,
ClusterType: "SHARDED",
DiskSizeGB: 50,
NodeCountElectable: 5,
- WithAnalyticsSpecs: false,
- }),
- Check: checkReplicaSetAWSProvider(usePreviewProvider, projectID, clusterName, 50, 5, true, true),
+ }, true),
+ Check: checkReplicaSetAWSProvider(true, true, projectID, clusterName, 50, 5, true, true),
},
acc.TestStepImportCluster(resourceName, "replication_specs", "retain_backups_enabled"),
},
@@ -247,30 +223,33 @@ func replicaSetAWSProviderTestCase(t *testing.T, usePreviewProvider bool) resour
}
func TestAccClusterAdvancedCluster_replicaSetMultiCloud(t *testing.T) {
- resource.ParallelTest(t, replicaSetMultiCloudTestCase(t, true))
+ resource.ParallelTest(t, *replicaSetMultiCloudTestCase(t))
}
-func replicaSetMultiCloudTestCase(t *testing.T, usePreviewProvider bool) resource.TestCase {
+func replicaSetMultiCloudTestCase(t *testing.T, useSDKv2 ...bool) *resource.TestCase {
t.Helper()
+
var (
orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
projectName = acc.RandomProjectName() // No ProjectIDExecution to avoid cross-region limits because multi-region
clusterName = acc.RandomClusterName()
clusterNameUpdated = acc.RandomClusterName()
+ isSDKv2 = isOptionalTrue(useSDKv2...)
+ isTPF = !isSDKv2
)
- return resource.TestCase{
+ return &resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configReplicaSetMultiCloud(t, usePreviewProvider, orgID, projectName, clusterName),
- Check: checkReplicaSetMultiCloud(usePreviewProvider, clusterName, 3),
+ Config: configReplicaSetMultiCloud(t, orgID, projectName, clusterName, !isSDKv2),
+ Check: checkReplicaSetMultiCloud(isTPF, true, clusterName, 3),
},
{
- Config: configReplicaSetMultiCloud(t, usePreviewProvider, orgID, projectName, clusterNameUpdated),
- Check: checkReplicaSetMultiCloud(usePreviewProvider, clusterNameUpdated, 3),
+ Config: configReplicaSetMultiCloud(t, orgID, projectName, clusterNameUpdated, !isSDKv2),
+ Check: checkReplicaSetMultiCloud(isTPF, true, clusterNameUpdated, 3),
},
acc.TestStepImportCluster(resourceName),
},
@@ -278,28 +257,29 @@ func replicaSetMultiCloudTestCase(t *testing.T, usePreviewProvider bool) resourc
}
func TestAccClusterAdvancedCluster_singleShardedMultiCloud(t *testing.T) {
- resource.ParallelTest(t, singleShardedMultiCloudTestCase(t, true))
+ resource.ParallelTest(t, *singleShardedMultiCloudTestCase(t))
}
-func singleShardedMultiCloudTestCase(t *testing.T, usePreviewProvider bool) resource.TestCase {
+func singleShardedMultiCloudTestCase(t *testing.T) *resource.TestCase {
t.Helper()
+
var (
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 7)
clusterNameUpdated = acc.RandomClusterName()
)
- return resource.TestCase{
+ return &resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configShardedOldSchemaMultiCloud(t, usePreviewProvider, projectID, clusterName, 1, "M10", nil),
- Check: checkShardedOldSchemaMultiCloud(usePreviewProvider, clusterName, 1, "M10", true, nil),
+ Config: configShardedMultiCloud(t, projectID, clusterName, 1, "M10", nil),
+ Check: checkShardedMultiCloud(clusterName, "M10", true, nil),
},
{
- Config: configShardedOldSchemaMultiCloud(t, usePreviewProvider, projectID, clusterNameUpdated, 1, "M10", nil),
- Check: checkShardedOldSchemaMultiCloud(usePreviewProvider, clusterNameUpdated, 1, "M10", true, nil),
+ Config: configShardedMultiCloud(t, projectID, clusterNameUpdated, 1, "M10", nil),
+ Check: checkShardedMultiCloud(clusterNameUpdated, "M10", true, nil),
},
acc.TestStepImportCluster(resourceName),
},
@@ -319,15 +299,15 @@ func TestAccClusterAdvancedCluster_unpausedToPaused(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, false, instanceSize),
- Check: checkSingleProviderPaused(true, clusterName, false),
+ Config: configSingleProviderPaused(t, projectID, clusterName, false, instanceSize),
+ Check: checkSingleProviderPaused(clusterName, false),
},
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, true, instanceSize), // only pause to avoid `OPERATION_INVALID_MEMBER_REPLICATION_LAG`, more info in HELP-72502
- Check: checkSingleProviderPaused(true, clusterName, true),
+ Config: configSingleProviderPaused(t, projectID, clusterName, true, instanceSize), // only pause to avoid `OPERATION_INVALID_MEMBER_REPLICATION_LAG`, more info in HELP-72502
+ Check: checkSingleProviderPaused(clusterName, true),
},
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, true, anotherInstanceSize),
+ Config: configSingleProviderPaused(t, projectID, clusterName, true, anotherInstanceSize),
ExpectError: regexp.MustCompile("CANNOT_UPDATE_PAUSED_CLUSTER"),
},
acc.TestStepImportCluster(resourceName),
@@ -347,19 +327,19 @@ func TestAccClusterAdvancedCluster_pausedToUnpaused(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, true, instanceSize),
- Check: checkSingleProviderPaused(true, clusterName, true),
+ Config: configSingleProviderPaused(t, projectID, clusterName, true, instanceSize),
+ Check: checkSingleProviderPaused(clusterName, true),
},
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, false, instanceSize),
- Check: checkSingleProviderPaused(true, clusterName, false),
+ Config: configSingleProviderPaused(t, projectID, clusterName, false, instanceSize),
+ Check: checkSingleProviderPaused(clusterName, false),
},
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, true, instanceSize),
+ Config: configSingleProviderPaused(t, projectID, clusterName, true, instanceSize),
ExpectError: regexp.MustCompile("CANNOT_PAUSE_RECENTLY_RESUMED_CLUSTER"),
},
{
- Config: configSingleProviderPaused(t, true, projectID, clusterName, false, instanceSize),
+ Config: configSingleProviderPaused(t, projectID, clusterName, false, instanceSize),
},
acc.TestStepImportCluster(resourceName),
},
@@ -371,9 +351,7 @@ func TestAccClusterAdvancedCluster_advancedConfig_oldMongoDBVersion(t *testing.T
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 4)
processArgs20240530 = &admin20240530.ClusterDescriptionProcessArgs{
- DefaultReadConcern: conversion.StringPtr("available"),
DefaultWriteConcern: conversion.StringPtr("1"),
- FailIndexKeyTooLong: conversion.Pointer(false),
JavascriptEnabled: conversion.Pointer(true),
MinimumEnabledTlsProtocol: conversion.StringPtr("TLS1_2"),
NoTableScan: conversion.Pointer(false),
@@ -399,12 +377,12 @@ func TestAccClusterAdvancedCluster_advancedConfig_oldMongoDBVersion(t *testing.T
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configAdvanced(t, true, projectID, clusterName, "7.0", processArgs20240530, processArgs),
+ Config: configAdvanced(t, projectID, clusterName, "7.0", processArgs20240530, processArgs),
ExpectError: regexp.MustCompile(advancedcluster.ErrorDefaultMaxTimeMinVersion),
},
{
- Config: configAdvanced(t, true, projectID, clusterName, "7.0", processArgs20240530, processArgsCipherConfig),
- Check: checkAdvanced(true, clusterName, "TLS1_2", processArgsCipherConfig),
+ Config: configAdvanced(t, projectID, clusterName, "7.0", processArgs20240530, processArgsCipherConfig),
+ Check: checkAdvanced(clusterName, "TLS1_2", processArgsCipherConfig),
},
acc.TestStepImportCluster(resourceName),
},
@@ -416,9 +394,7 @@ func TestAccClusterAdvancedCluster_advancedConfig(t *testing.T) {
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 4)
clusterNameUpdated = acc.RandomClusterName()
processArgs20240530 = &admin20240530.ClusterDescriptionProcessArgs{
- DefaultReadConcern: conversion.StringPtr("available"),
DefaultWriteConcern: conversion.StringPtr("1"),
- FailIndexKeyTooLong: conversion.Pointer(false),
JavascriptEnabled: conversion.Pointer(true),
MinimumEnabledTlsProtocol: conversion.StringPtr("TLS1_2"),
NoTableScan: conversion.Pointer(false),
@@ -433,9 +409,7 @@ func TestAccClusterAdvancedCluster_advancedConfig(t *testing.T) {
}
processArgs20240530Updated = &admin20240530.ClusterDescriptionProcessArgs{
- DefaultReadConcern: conversion.StringPtr("available"),
DefaultWriteConcern: conversion.StringPtr("0"),
- FailIndexKeyTooLong: conversion.Pointer(false),
JavascriptEnabled: conversion.Pointer(true),
MinimumEnabledTlsProtocol: conversion.StringPtr("TLS1_2"),
NoTableScan: conversion.Pointer(false),
@@ -463,16 +437,16 @@ func TestAccClusterAdvancedCluster_advancedConfig(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configAdvanced(t, true, projectID, clusterName, "", processArgs20240530, processArgs),
- Check: checkAdvanced(true, clusterName, "TLS1_2", processArgs),
+ Config: configAdvanced(t, projectID, clusterName, "", processArgs20240530, processArgs),
+ Check: checkAdvanced(clusterName, "TLS1_2", processArgs),
},
{
- Config: configAdvanced(t, true, projectID, clusterNameUpdated, "", processArgs20240530Updated, processArgsUpdated),
- Check: checkAdvanced(true, clusterNameUpdated, "TLS1_2", processArgsUpdated),
+ Config: configAdvanced(t, projectID, clusterNameUpdated, "", processArgs20240530Updated, processArgsUpdated),
+ Check: checkAdvanced(clusterNameUpdated, "TLS1_2", processArgsUpdated),
},
{
- Config: configAdvanced(t, true, projectID, clusterNameUpdated, "", processArgs20240530Updated, processArgsUpdatedCipherConfig),
- Check: checkAdvanced(true, clusterNameUpdated, "TLS1_2", processArgsUpdatedCipherConfig),
+ Config: configAdvanced(t, projectID, clusterNameUpdated, "", processArgs20240530Updated, processArgsUpdatedCipherConfig),
+ Check: checkAdvanced(clusterNameUpdated, "TLS1_2", processArgsUpdatedCipherConfig),
},
acc.TestStepImportCluster(resourceName),
},
@@ -484,7 +458,6 @@ func TestAccClusterAdvancedCluster_defaultWrite(t *testing.T) {
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 4)
clusterNameUpdated = acc.RandomClusterName()
processArgs = &admin20240530.ClusterDescriptionProcessArgs{
- DefaultReadConcern: conversion.StringPtr("available"),
DefaultWriteConcern: conversion.StringPtr("1"),
JavascriptEnabled: conversion.Pointer(true),
MinimumEnabledTlsProtocol: conversion.StringPtr("TLS1_2"),
@@ -494,7 +467,6 @@ func TestAccClusterAdvancedCluster_defaultWrite(t *testing.T) {
SampleSizeBIConnector: conversion.Pointer(110),
}
processArgsUpdated = &admin20240530.ClusterDescriptionProcessArgs{
- DefaultReadConcern: conversion.StringPtr("available"),
DefaultWriteConcern: conversion.StringPtr("majority"),
JavascriptEnabled: conversion.Pointer(true),
MinimumEnabledTlsProtocol: conversion.StringPtr("TLS1_2"),
@@ -512,12 +484,12 @@ func TestAccClusterAdvancedCluster_defaultWrite(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configAdvancedDefaultWrite(t, true, projectID, clusterName, processArgs),
- Check: checkAdvancedDefaultWrite(true, clusterName, "1", "TLS1_2"),
+ Config: configAdvancedDefaultWrite(t, projectID, clusterName, processArgs),
+ Check: checkAdvancedDefaultWrite(clusterName, "1", "TLS1_2"),
},
{
- Config: configAdvancedDefaultWrite(t, true, projectID, clusterNameUpdated, processArgsUpdated),
- Check: checkAdvancedDefaultWrite(true, clusterNameUpdated, "majority", "TLS1_2"),
+ Config: configAdvancedDefaultWrite(t, projectID, clusterNameUpdated, processArgsUpdated),
+ Check: checkAdvancedDefaultWrite(clusterNameUpdated, "majority", "TLS1_2"),
},
acc.TestStepImportCluster(resourceName),
},
@@ -543,38 +515,38 @@ func TestAccClusterAdvancedClusterConfig_replicationSpecsAutoScaling(t *testing.
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configReplicationSpecsAutoScaling(t, true, projectID, clusterName, autoScaling, "M10", 10, 1),
+ Config: configReplicationSpecsAutoScaling(t, projectID, clusterName, autoScaling, "M10", 10, 1),
Check: resource.ComposeAggregateTestCheckFunc(
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "name", clusterName),
- acc.TestCheckResourceAttrSetPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.auto_scaling.0.compute_enabled", "false"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "advanced_configuration.0.oplog_min_retention_hours", "5.5"),
+ resource.TestCheckResourceAttr(resourceName, "name", clusterName),
+ resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.auto_scaling.compute_enabled", "false"),
+ resource.TestCheckResourceAttr(resourceName, "advanced_configuration.oplog_min_retention_hours", "5.5"),
),
},
{
- Config: configReplicationSpecsAutoScaling(t, true, projectID, clusterName, autoScalingUpdated, "M20", 20, 1),
+ Config: configReplicationSpecsAutoScaling(t, projectID, clusterName, autoScalingUpdated, "M20", 20, 1),
Check: resource.ComposeAggregateTestCheckFunc(
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "name", clusterName),
- acc.TestCheckResourceAttrSetPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.auto_scaling.0.compute_enabled", "true"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.instance_size", "M10"), // modified instance size in config is ignored
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb", "10"), // modified disk size gb in config is ignored
+ resource.TestCheckResourceAttr(resourceName, "name", clusterName),
+ resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.auto_scaling.compute_enabled", "true"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.electable_specs.instance_size", "M10"), // modified instance size in config is ignored
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.electable_specs.disk_size_gb", "10"), // modified disk size gb in config is ignored
),
},
// empty plan when auto_scaling block is removed (also aligns instance_size/disk_size_gb to values in state)
- acc.TestStepCheckEmptyPlan(configReplicationSpecsAutoScaling(t, true, projectID, clusterName, nil, "M10", 10, 1)),
+ acc.TestStepCheckEmptyPlan(configReplicationSpecsAutoScaling(t, projectID, clusterName, nil, "M10", 10, 1)),
{
- Config: configReplicationSpecsAutoScaling(t, true, projectID, clusterName, nil, "M10", 10, 2), // other change after autoscaling block removed, preserves previous state
+ Config: configReplicationSpecsAutoScaling(t, projectID, clusterName, nil, "M10", 10, 2), // other change after autoscaling block removed, preserves previous state
Check: resource.ComposeAggregateTestCheckFunc(
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "name", clusterName),
- acc.TestCheckResourceAttrSetPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.auto_scaling.0.compute_enabled", "true"), // autoscaling value is preserved
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.analytics_specs.0.node_count", "2"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.instance_size", "M10"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb", "10"),
+ resource.TestCheckResourceAttr(resourceName, "name", clusterName),
+ resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.auto_scaling.compute_enabled", "true"), // autoscaling value is preserved
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.analytics_specs.node_count", "2"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.electable_specs.instance_size", "M10"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.electable_specs.disk_size_gb", "10"),
),
},
acc.TestStepImportCluster(resourceName),
@@ -602,32 +574,32 @@ func TestAccClusterAdvancedClusterConfig_replicationSpecsAnalyticsAutoScaling(t
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configReplicationSpecsAnalyticsAutoScaling(t, true, projectID, clusterName, autoScaling, 1),
+ Config: configReplicationSpecsAnalyticsAutoScaling(t, projectID, clusterName, autoScaling, 1),
Check: resource.ComposeAggregateTestCheckFunc(
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "name", clusterName),
- acc.TestCheckResourceAttrSetPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.analytics_auto_scaling.0.compute_enabled", "false"),
+ resource.TestCheckResourceAttr(resourceName, "name", clusterName),
+ resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.analytics_auto_scaling.compute_enabled", "false"),
),
},
{
- Config: configReplicationSpecsAnalyticsAutoScaling(t, true, projectID, clusterNameUpdated, autoScalingUpdated, 1),
+ Config: configReplicationSpecsAnalyticsAutoScaling(t, projectID, clusterNameUpdated, autoScalingUpdated, 1),
Check: resource.ComposeAggregateTestCheckFunc(
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "name", clusterNameUpdated),
- acc.TestCheckResourceAttrSetPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.analytics_auto_scaling.0.compute_enabled", "true"),
+ resource.TestCheckResourceAttr(resourceName, "name", clusterNameUpdated),
+ resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.analytics_auto_scaling.compute_enabled", "true"),
),
},
// empty plan when analytics_auto_scaling block is removed
- acc.TestStepCheckEmptyPlan(configReplicationSpecsAnalyticsAutoScaling(t, true, projectID, clusterNameUpdated, nil, 1)),
+ acc.TestStepCheckEmptyPlan(configReplicationSpecsAnalyticsAutoScaling(t, projectID, clusterNameUpdated, nil, 1)),
{
- Config: configReplicationSpecsAnalyticsAutoScaling(t, true, projectID, clusterNameUpdated, nil, 2), // other changes after analytics_auto_scaling block removed, preserves previous state
+ Config: configReplicationSpecsAnalyticsAutoScaling(t, projectID, clusterNameUpdated, nil, 2), // other changes after analytics_auto_scaling block removed, preserves previous state
Check: resource.ComposeAggregateTestCheckFunc(
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "name", clusterNameUpdated),
- acc.TestCheckResourceAttrSetPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.0.analytics_auto_scaling.0.compute_enabled", "true"),
+ resource.TestCheckResourceAttr(resourceName, "name", clusterNameUpdated),
+ resource.TestCheckResourceAttrSet(resourceName, "replication_specs.0.region_configs.#"),
+ resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.0.analytics_auto_scaling.compute_enabled", "true"),
),
},
acc.TestStepImportCluster(resourceName),
@@ -635,27 +607,6 @@ func TestAccClusterAdvancedClusterConfig_replicationSpecsAnalyticsAutoScaling(t
})
}
-func TestAccClusterAdvancedClusterConfig_singleShardedTransitionToOldSchemaExpectsError(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 9)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configGeoShardedOldSchema(t, true, projectID, clusterName, 1, 1, false),
- Check: checkGeoShardedOldSchema(true, clusterName, 1, 1, true, true),
- },
- acc.TestStepImportCluster(resourceName),
- {
- Config: configGeoShardedOldSchema(t, true, projectID, clusterName, 1, 2, false),
- ExpectError: regexp.MustCompile(advancedcluster.ErrorOperationNotPermitted),
- },
- },
- })
-}
-
func TestAccClusterAdvancedCluster_withTags(t *testing.T) {
var (
orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
@@ -669,16 +620,16 @@ func TestAccClusterAdvancedCluster_withTags(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "tags"),
- Check: checkKeyValueBlocks(true, true, "tags"),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "tags"),
+ Check: checkKeyValueBlocks(true, "tags"),
},
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "tags", acc.ClusterTagsMap1, acc.ClusterTagsMap2),
- Check: checkKeyValueBlocks(true, true, "tags", acc.ClusterTagsMap1, acc.ClusterTagsMap2),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "tags", acc.ClusterTagsMap1, acc.ClusterTagsMap2),
+ Check: checkKeyValueBlocks(true, "tags", acc.ClusterTagsMap1, acc.ClusterTagsMap2),
},
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "tags", acc.ClusterTagsMap3),
- Check: checkKeyValueBlocks(true, true, "tags", acc.ClusterTagsMap3),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "tags", acc.ClusterTagsMap3),
+ Check: checkKeyValueBlocks(true, "tags", acc.ClusterTagsMap3),
},
acc.TestStepImportCluster(resourceName),
},
@@ -698,16 +649,16 @@ func TestAccClusterAdvancedCluster_withLabels(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "labels"),
- Check: checkKeyValueBlocks(true, true, "labels"),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "labels"),
+ Check: checkKeyValueBlocks(true, "labels"),
},
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "labels", acc.ClusterLabelsMap1, acc.ClusterLabelsMap2),
- Check: checkKeyValueBlocks(true, true, "labels", acc.ClusterLabelsMap1, acc.ClusterLabelsMap2),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "labels", acc.ClusterLabelsMap1, acc.ClusterLabelsMap2),
+ Check: checkKeyValueBlocks(true, "labels", acc.ClusterLabelsMap1, acc.ClusterLabelsMap2),
},
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "labels", acc.ClusterLabelsMap3),
- Check: checkKeyValueBlocks(true, true, "labels", acc.ClusterLabelsMap3),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "labels", acc.ClusterLabelsMap3),
+ Check: checkKeyValueBlocks(true, "labels", acc.ClusterLabelsMap3),
},
acc.TestStepImportCluster(resourceName),
},
@@ -726,7 +677,7 @@ func TestAccClusterAdvancedCluster_withLabelIgnored(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configWithKeyValueBlocks(t, true, orgID, projectName, clusterName, "labels", acc.ClusterLabelsMapIgnored),
+ Config: configWithKeyValueBlocks(t, orgID, projectName, clusterName, "labels", acc.ClusterLabelsMapIgnored),
ExpectError: regexp.MustCompile(advancedclustertpf.ErrLegacyIgnoreLabel.Error()),
},
},
@@ -738,8 +689,8 @@ func TestAccClusterAdvancedClusterConfig_selfManagedSharding(t *testing.T) {
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 6)
checks = []resource.TestCheckFunc{
acc.CheckExistsCluster(resourceName),
- acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "global_cluster_self_managed_sharding", "true"),
- acc.TestCheckResourceAttrPreviewProviderV2(true, dataSourceName, "global_cluster_self_managed_sharding", "true"),
+ resource.TestCheckResourceAttr(resourceName, "global_cluster_self_managed_sharding", "true"),
+ resource.TestCheckResourceAttr(dataSourceName, "global_cluster_self_managed_sharding", "true"),
}
)
@@ -749,13 +700,13 @@ func TestAccClusterAdvancedClusterConfig_selfManagedSharding(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configGeoShardedOldSchema(t, true, projectID, clusterName, 1, 1, true),
+ Config: configGeoSharded(t, projectID, clusterName, 1, 1, true),
Check: resource.ComposeAggregateTestCheckFunc(checks...,
),
},
acc.TestStepImportCluster(resourceName),
{
- Config: configGeoShardedOldSchema(t, true, projectID, clusterName, 1, 1, false),
+ Config: configGeoSharded(t, projectID, clusterName, 1, 1, false),
ExpectError: regexp.MustCompile("CANNOT_MODIFY_GLOBAL_CLUSTER_MANAGEMENT_SETTING"),
},
},
@@ -773,14 +724,14 @@ func TestAccClusterAdvancedClusterConfig_selfManagedShardingIncorrectType(t *tes
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configIncorrectTypeGobalClusterSelfManagedSharding(t, true, projectID, clusterName),
+ Config: configIncorrectTypeGobalClusterSelfManagedSharding(t, projectID, clusterName),
ExpectError: regexp.MustCompile("CANNOT_SET_SELF_MANAGED_SHARDING_FOR_NON_GLOBAL_CLUSTER"),
},
},
})
}
-func TestAccMockableAdvancedCluster_symmetricShardedOldSchema(t *testing.T) {
+func TestAccMockableAdvancedCluster_symmetricSharded(t *testing.T) {
var (
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 12)
)
@@ -791,63 +742,12 @@ func TestAccMockableAdvancedCluster_symmetricShardedOldSchema(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configShardedOldSchemaMultiCloud(t, true, projectID, clusterName, 2, "M10", &configServerManagementModeFixedToDedicated),
- Check: checkShardedOldSchemaMultiCloud(true, clusterName, 2, "M10", false, &configServerManagementModeFixedToDedicated),
- },
- {
- Config: configShardedOldSchemaMultiCloud(t, true, projectID, clusterName, 2, "M20", &configServerManagementModeAtlasManaged),
- Check: checkShardedOldSchemaMultiCloud(true, clusterName, 2, "M20", false, &configServerManagementModeAtlasManaged),
- },
- acc.TestStepImportCluster(resourceName, "replication_specs"), // Import with old schema will NOT use `num_shards`
- },
- })
-}
-
-func TestAccClusterAdvancedClusterConfig_symmetricGeoShardedOldSchema(t *testing.T) {
- resource.ParallelTest(t, symmetricGeoShardedOldSchemaTestCase(t, true))
-}
-
-func symmetricGeoShardedOldSchemaTestCase(t *testing.T, usePreviewProvider bool) resource.TestCase {
- t.Helper()
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 18)
-
- return resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configGeoShardedOldSchema(t, usePreviewProvider, projectID, clusterName, 2, 2, false),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkGeoShardedOldSchema(usePreviewProvider, clusterName, 2, 2, true, false),
- acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER")),
- },
- {
- Config: configGeoShardedOldSchema(t, usePreviewProvider, projectID, clusterName, 3, 3, false),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkGeoShardedOldSchema(usePreviewProvider, clusterName, 3, 3, true, false),
- acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER")),
- },
- acc.TestStepImportCluster(resourceName, "replication_specs"), // Import with old schema will NOT use `num_shards`
- },
- }
-}
-
-func TestAccMockableAdvancedCluster_symmetricShardedOldSchemaDiskSizeGBAtElectableLevel(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 6)
-
- unit.CaptureOrMockTestCaseAndRun(t, mockConfig, &resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configShardedOldSchemaDiskSizeGBElectableLevel(t, true, projectID, clusterName, 50),
- Check: checkShardedOldSchemaDiskSizeGBElectableLevel(true, 50),
+ Config: configShardedMultiCloud(t, projectID, clusterName, 2, "M10", &configServerManagementModeFixedToDedicated),
+ Check: checkShardedMultiCloud(clusterName, "M10", false, &configServerManagementModeFixedToDedicated),
},
{
- Config: configShardedOldSchemaDiskSizeGBElectableLevel(t, true, projectID, clusterName, 55),
- Check: checkShardedOldSchemaDiskSizeGBElectableLevel(true, 55),
+ Config: configShardedMultiCloud(t, projectID, clusterName, 2, "M20", &configServerManagementModeAtlasManaged),
+ Check: checkShardedMultiCloud(clusterName, "M20", false, &configServerManagementModeAtlasManaged),
},
acc.TestStepImportCluster(resourceName, "replication_specs"), // Import with old schema will NOT use `num_shards`
},
@@ -867,15 +767,15 @@ func TestAccClusterAdvancedClusterConfig_symmetricShardedNewSchemaToAsymmetricAd
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configShardedNewSchema(t, true, orgID, projectName, clusterName, 50, "M10", "M10", nil, nil, false, false),
+ Config: configShardedNewSchema(t, orgID, projectName, clusterName, 50, "M10", "M10", nil, nil, false, false, false),
Check: checkShardedNewSchema(true, 50, "M10", "M10", nil, nil, false, false),
},
{
- Config: configShardedNewSchema(t, true, orgID, projectName, clusterName, 55, "M10", "M20", nil, nil, true, false), // add middle replication spec and transition to asymmetric
+ Config: configShardedNewSchema(t, orgID, projectName, clusterName, 55, "M10", "M20", nil, nil, true, false, false), // add middle replication spec and transition to asymmetric
Check: checkShardedNewSchema(true, 55, "M10", "M20", nil, nil, true, true),
},
{
- Config: configShardedNewSchema(t, true, orgID, projectName, clusterName, 55, "M10", "M20", nil, nil, false, false), // removes middle replication spec
+ Config: configShardedNewSchema(t, orgID, projectName, clusterName, 55, "M10", "M20", nil, nil, false, false, false), // removes middle replication spec
Check: checkShardedNewSchema(true, 55, "M10", "M20", nil, nil, true, false),
},
acc.TestStepImportCluster(resourceName),
@@ -884,15 +784,18 @@ func TestAccClusterAdvancedClusterConfig_symmetricShardedNewSchemaToAsymmetricAd
}
func TestAccClusterAdvancedClusterConfig_asymmetricShardedNewSchema(t *testing.T) {
- resource.ParallelTest(t, asymmetricShardedNewSchemaTestCase(t, true))
+ resource.ParallelTest(t, asymmetricShardedNewSchemaTestCase(t))
}
-func asymmetricShardedNewSchemaTestCase(t *testing.T, usePreviewProvider bool) resource.TestCase {
+func asymmetricShardedNewSchemaTestCase(t *testing.T, useSDKv2 ...bool) resource.TestCase {
t.Helper()
+
var (
orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
projectName = acc.RandomProjectName()
clusterName = acc.RandomClusterName()
+ isSDKv2 = isOptionalTrue(useSDKv2...)
+ isTPF = !isSDKv2
)
return resource.TestCase{
@@ -901,10 +804,9 @@ func asymmetricShardedNewSchemaTestCase(t *testing.T, usePreviewProvider bool) r
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configShardedNewSchema(t, usePreviewProvider, orgID, projectName, clusterName, 50, "M30", "M40", admin.PtrInt(2000), admin.PtrInt(2500), false, false),
+ Config: configShardedNewSchema(t, orgID, projectName, clusterName, 50, "M30", "M40", admin.PtrInt(2000), admin.PtrInt(2500), false, false, isSDKv2),
Check: resource.ComposeAggregateTestCheckFunc(
- checkShardedNewSchema(usePreviewProvider, 50, "M30", "M40", admin.PtrInt(2000), admin.PtrInt(2500), true, false),
- resource.TestCheckResourceAttr("data.mongodbatlas_advanced_clusters.test-replication-specs-per-shard-false", "results.#", "0"),
+ checkShardedNewSchema(isTPF, 50, "M30", "M40", admin.PtrInt(2000), admin.PtrInt(2500), true, false),
acc.CheckIndependentShardScalingMode(resourceName, clusterName, "SHARD")),
},
acc.TestStepImportCluster(resourceName),
@@ -925,7 +827,7 @@ func TestAccClusterAdvancedClusterConfig_asymmetricShardedNewSchemaInconsistentD
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configShardedNewSchema(t, true, orgID, projectName, clusterName, 50, "M30", "M40", admin.PtrInt(2000), admin.PtrInt(2500), false, true),
+ Config: configShardedNewSchema(t, orgID, projectName, clusterName, 50, "M30", "M40", admin.PtrInt(2000), admin.PtrInt(2500), false, true),
ExpectError: regexp.MustCompile("DISK_SIZE_GB_INCONSISTENT"), // API Error when disk size is not consistent across all shards
},
},
@@ -933,71 +835,33 @@ func TestAccClusterAdvancedClusterConfig_asymmetricShardedNewSchemaInconsistentD
}
func TestAccClusterAdvancedClusterConfig_asymmetricGeoShardedNewSchemaAddingRemovingShard(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 9)
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configGeoShardedNewSchema(t, true, projectID, clusterName, false),
- Check: checkGeoShardedNewSchema(true, false),
- },
- {
- Config: configGeoShardedNewSchema(t, true, projectID, clusterName, true),
- Check: checkGeoShardedNewSchema(true, true),
- },
- {
- Config: configGeoShardedNewSchema(t, true, projectID, clusterName, false),
- Check: checkGeoShardedNewSchema(true, false),
- },
- acc.TestStepImportCluster(resourceName),
- },
- })
+ resource.ParallelTest(t, *asymmetricGeoShardedNewSchema(t))
}
-func TestAccClusterAdvancedClusterConfig_shardedTransitionFromOldToNewSchema(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
+func asymmetricGeoShardedNewSchema(t *testing.T) *resource.TestCase {
+ t.Helper()
+ projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 9)
- resource.ParallelTest(t, resource.TestCase{
+ return &resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, false, false),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkShardedTransitionOldToNewSchema(true, false),
- acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER")),
- },
- {
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, false),
- Check: checkShardedTransitionOldToNewSchema(true, true),
+ Config: configGeoShardedNewSchema(t, projectID, clusterName, false),
+ Check: checkGeoShardedNewSchema(false),
},
- acc.TestStepImportCluster(resourceName),
- },
- })
-}
-
-func TestAccClusterAdvancedClusterConfig_geoShardedTransitionFromOldToNewSchema(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
{
- Config: configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, false),
- Check: checkGeoShardedTransitionOldToNewSchema(true, false),
+ Config: configGeoShardedNewSchema(t, projectID, clusterName, true),
+ Check: checkGeoShardedNewSchema(true),
},
{
- Config: configGeoShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true),
- Check: checkGeoShardedTransitionOldToNewSchema(true, true),
+ Config: configGeoShardedNewSchema(t, projectID, clusterName, false),
+ Check: checkGeoShardedNewSchema(false),
},
acc.TestStepImportCluster(resourceName),
},
- })
+ }
}
func TestAccAdvancedCluster_replicaSetScalingStrategyAndRedactClientLogData(t *testing.T) {
@@ -1013,87 +877,28 @@ func TestAccAdvancedCluster_replicaSetScalingStrategyAndRedactClientLogData(t *t
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, true, orgID, projectName, clusterName, "WORKLOAD_TYPE", true),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "WORKLOAD_TYPE", true),
+ Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, orgID, projectName, clusterName, "WORKLOAD_TYPE", true),
+ Check: checkReplicaSetScalingStrategyAndRedactClientLogData("WORKLOAD_TYPE", true),
},
{
- Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, true, orgID, projectName, clusterName, "SEQUENTIAL", false),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "SEQUENTIAL", false),
+ Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, orgID, projectName, clusterName, "SEQUENTIAL", false),
+ Check: checkReplicaSetScalingStrategyAndRedactClientLogData("SEQUENTIAL", false),
},
{
- Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, true, orgID, projectName, clusterName, "NODE_TYPE", true),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "NODE_TYPE", true),
+ Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, orgID, projectName, clusterName, "NODE_TYPE", true),
+ Check: checkReplicaSetScalingStrategyAndRedactClientLogData("NODE_TYPE", true),
},
{
- Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, true, orgID, projectName, clusterName, "NODE_TYPE", false),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "NODE_TYPE", false),
+ Config: configReplicaSetScalingStrategyAndRedactClientLogData(t, orgID, projectName, clusterName, "NODE_TYPE", false),
+ Check: checkReplicaSetScalingStrategyAndRedactClientLogData("NODE_TYPE", false),
},
acc.TestStepImportCluster(resourceName),
},
})
}
-func TestAccAdvancedCluster_replicaSetScalingStrategyAndRedactClientLogDataOldSchema(t *testing.T) {
- var (
- orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
- projectName = acc.RandomProjectName()
- clusterName = acc.RandomClusterName()
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configReplicaSetScalingStrategyAndRedactClientLogDataOldSchema(t, true, orgID, projectName, clusterName, "WORKLOAD_TYPE", false),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "WORKLOAD_TYPE", false),
- },
- {
- Config: configReplicaSetScalingStrategyAndRedactClientLogDataOldSchema(t, true, orgID, projectName, clusterName, "SEQUENTIAL", true),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "SEQUENTIAL", true),
- },
- {
- Config: configReplicaSetScalingStrategyAndRedactClientLogDataOldSchema(t, true, orgID, projectName, clusterName, "NODE_TYPE", false),
- Check: checkReplicaSetScalingStrategyAndRedactClientLogData(true, "NODE_TYPE", false),
- },
- acc.TestStepImportCluster(resourceName, "replication_specs"), // Import with old schema will NOT use `num_shards`
- },
- })
-}
-
-// TestAccClusterAdvancedCluster_priorityOldSchema will be able to be simplied or deleted in CLOUDP-275825
-func TestAccClusterAdvancedCluster_priorityOldSchema(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 6)
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configPriority(t, true, projectID, clusterName, true, true),
- ExpectError: regexp.MustCompile("priority values in region_configs must be in descending order"),
- },
- {
- Config: configPriority(t, true, projectID, clusterName, true, false),
- Check: acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#", "2"),
- },
- {
- Config: configPriority(t, true, projectID, clusterName, true, true),
- ExpectError: regexp.MustCompile("priority values in region_configs must be in descending order"),
- },
- // Extra step added to allow deletion, otherwise we get `Error running post-test destroy` since validation of TF fails
- {
- Config: configPriority(t, true, projectID, clusterName, true, false),
- Check: acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#", "2"),
- },
- acc.TestStepImportCluster(resourceName, "replication_specs"), // Import with old schema will NOT use `num_shards`
- },
- })
-}
-
-// TestAccClusterAdvancedCluster_priorityNewSchema will be able to be simplied or deleted in CLOUDP-275825
-func TestAccClusterAdvancedCluster_priorityNewSchema(t *testing.T) {
+// TestAccClusterAdvancedCluster_priority can be simplified or deleted in CLOUDP-275825
+func TestAccClusterAdvancedCluster_priority(t *testing.T) {
projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 3)
resource.ParallelTest(t, resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
@@ -1101,21 +906,21 @@ func TestAccClusterAdvancedCluster_priorityNewSchema(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configPriority(t, true, projectID, clusterName, false, true),
+ Config: configPriority(t, projectID, clusterName, true),
ExpectError: regexp.MustCompile("priority values in region_configs must be in descending order"),
},
{
- Config: configPriority(t, true, projectID, clusterName, false, false),
- Check: acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#", "2"),
+ Config: configPriority(t, projectID, clusterName, false),
+ Check: resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.#", "2"),
},
{
- Config: configPriority(t, true, projectID, clusterName, false, true),
+ Config: configPriority(t, projectID, clusterName, true),
ExpectError: regexp.MustCompile("priority values in region_configs must be in descending order"),
},
// Extra step added to allow deletion, otherwise we get `Error running post-test destroy` since validation of TF fails
{
- Config: configPriority(t, true, projectID, clusterName, false, false),
- Check: acc.TestCheckResourceAttrPreviewProviderV2(true, resourceName, "replication_specs.0.region_configs.#", "2"),
+ Config: configPriority(t, projectID, clusterName, false),
+ Check: resource.TestCheckResourceAttr(resourceName, "replication_specs.0.region_configs.#", "2"),
},
},
})
@@ -1131,12 +936,12 @@ func TestAccClusterAdvancedCluster_biConnectorConfig(t *testing.T) {
CheckDestroy: acc.CheckDestroyCluster,
Steps: []resource.TestStep{
{
- Config: configBiConnectorConfig(t, true, projectID, clusterName, false),
- Check: checkTenantBiConnectorConfig(true, projectID, clusterName, false),
+ Config: configBiConnectorConfig(t, projectID, clusterName, false),
+ Check: checkTenantBiConnectorConfig(projectID, clusterName, false),
},
{
- Config: configBiConnectorConfig(t, true, projectID, clusterName, true),
- Check: checkTenantBiConnectorConfig(true, projectID, clusterName, true),
+ Config: configBiConnectorConfig(t, projectID, clusterName, true),
+ Check: checkTenantBiConnectorConfig(projectID, clusterName, true),
},
acc.TestStepImportCluster(resourceName),
},
@@ -1165,11 +970,11 @@ func TestAccClusterAdvancedCluster_pinnedFCVWithVersionUpgradeAndDowngrade(t *te
Steps: []resource.TestStep{
{
Config: configFCVPinning(t, orgID, projectName, clusterName, nil, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
+ Check: acc.CheckFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
},
{ // pins fcv
Config: configFCVPinning(t, orgID, projectName, clusterName, &firstExpirationDate, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, admin.PtrString(firstExpirationDate), admin.PtrInt(7)),
+ Check: acc.CheckFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, admin.PtrString(firstExpirationDate), admin.PtrInt(7)),
},
{ // using incorrect format
Config: configFCVPinning(t, orgID, projectName, clusterName, &invalidDateFormat, "7.0"),
@@ -1177,65 +982,19 @@ func TestAccClusterAdvancedCluster_pinnedFCVWithVersionUpgradeAndDowngrade(t *te
},
{ // updates expiration date of fcv
Config: configFCVPinning(t, orgID, projectName, clusterName, &updatedExpirationDate, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, admin.PtrString(updatedExpirationDate), admin.PtrInt(7)),
+ Check: acc.CheckFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, admin.PtrString(updatedExpirationDate), admin.PtrInt(7)),
},
{ // upgrade mongodb version with fcv pinned
Config: configFCVPinning(t, orgID, projectName, clusterName, &updatedExpirationDate, "8.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 8, admin.PtrString(updatedExpirationDate), admin.PtrInt(7)),
+ Check: acc.CheckFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 8, admin.PtrString(updatedExpirationDate), admin.PtrInt(7)),
},
{ // downgrade mongodb version with fcv pinned
Config: configFCVPinning(t, orgID, projectName, clusterName, &updatedExpirationDate, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, admin.PtrString(updatedExpirationDate), admin.PtrInt(7)),
+ Check: acc.CheckFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, admin.PtrString(updatedExpirationDate), admin.PtrInt(7)),
},
{ // unpins fcv
Config: configFCVPinning(t, orgID, projectName, clusterName, nil, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
- },
- acc.TestStepImportCluster(resourceName),
- },
- })
-}
-
-func TestAccAdvancedCluster_oldToNewSchemaWithAutoscalingEnabled(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: acc.PreCheckBasicSleep(t, nil, projectID, clusterName),
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, false, true),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER"),
- },
- {
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, true),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "SHARD"),
- },
- acc.TestStepImportCluster(resourceName),
- },
- })
-}
-
-func TestAccAdvancedCluster_oldToNewSchemaWithAutoscalingDisabledToEnabled(t *testing.T) {
- projectID, clusterName := acc.ProjectIDExecutionWithCluster(t, 8)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: acc.PreCheckBasicSleep(t, nil, projectID, clusterName),
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyCluster,
- Steps: []resource.TestStep{
- {
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, false, false),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER"),
- },
- {
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, false),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "CLUSTER"),
- },
- {
- Config: configShardedTransitionOldToNewSchema(t, true, projectID, clusterName, true, true),
- Check: acc.CheckIndependentShardScalingMode(resourceName, clusterName, "SHARD"),
+ Check: acc.CheckFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
},
acc.TestStepImportCluster(resourceName),
},
@@ -1254,54 +1013,50 @@ func TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate(t *testing.T) {
}
timeoutCheck = resource.TestCheckResourceAttr(resourceName, "timeouts.create", "6000s") // timeouts.create is not set on data sources
tagsLabelsMap = map[string]string{"key": "env", "value": "test"}
- tagsCheck = checkKeyValueBlocks(true, false, "tags", tagsLabelsMap)
- labelsCheck = checkKeyValueBlocks(true, false, "labels", tagsLabelsMap)
- checks = checkAggr(true, checksSet, checksMap, timeoutCheck)
+ tagsCheck = checkKeyValueBlocks(false, "tags", tagsLabelsMap)
+ labelsCheck = checkKeyValueBlocks(false, "labels", tagsLabelsMap)
+ checks = checkAggr(checksSet, checksMap, timeoutCheck)
afterUpdateMap = map[string]string{
- "state_name": "IDLE",
- "backup_enabled": "true",
- "bi_connector_config.0.enabled": "true",
- "pit_enabled": "true",
- "redact_client_log_data": "true",
- "replica_set_scaling_strategy": "NODE_TYPE",
- "root_cert_type": "ISRGROOTX1",
- "version_release_system": "CONTINUOUS",
- "advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds": "100",
- "advanced_configuration.0.default_read_concern": "available",
- "advanced_configuration.0.default_write_concern": "majority",
- "advanced_configuration.0.javascript_enabled": "true",
- "advanced_configuration.0.minimum_enabled_tls_protocol": "TLS1_2",
- "advanced_configuration.0.no_table_scan": "true",
- "advanced_configuration.0.sample_refresh_interval_bi_connector": "310",
- "advanced_configuration.0.sample_size_bi_connector": "110",
- "advanced_configuration.0.transaction_lifetime_limit_seconds": "300",
- "advanced_configuration.0.tls_cipher_config_mode": "CUSTOM",
- "advanced_configuration.0.custom_openssl_cipher_config_tls12.#": "1",
- "advanced_configuration.0.default_max_time_ms": "65",
- }
- checksUpdate = checkAggr(true, checksSet, afterUpdateMap, timeoutCheck, tagsCheck, labelsCheck)
+ "state_name": "IDLE",
+ "backup_enabled": "true",
+ "bi_connector_config.enabled": "true",
+ "pit_enabled": "true",
+ "redact_client_log_data": "true",
+ "replica_set_scaling_strategy": "NODE_TYPE",
+ "root_cert_type": "ISRGROOTX1",
+ "version_release_system": "CONTINUOUS",
+ "advanced_configuration.change_stream_options_pre_and_post_images_expire_after_seconds": "100",
+ "advanced_configuration.default_write_concern": "majority",
+ "advanced_configuration.javascript_enabled": "true",
+ "advanced_configuration.minimum_enabled_tls_protocol": "TLS1_2",
+ "advanced_configuration.no_table_scan": "true",
+ "advanced_configuration.sample_refresh_interval_bi_connector": "310",
+ "advanced_configuration.sample_size_bi_connector": "110",
+ "advanced_configuration.transaction_lifetime_limit_seconds": "300",
+ "advanced_configuration.tls_cipher_config_mode": "CUSTOM",
+ "advanced_configuration.custom_openssl_cipher_config_tls12.#": "1",
+ "advanced_configuration.default_max_time_ms": "65",
+ }
+ checksUpdate = checkAggr(checksSet, afterUpdateMap, timeoutCheck, tagsCheck, labelsCheck)
fullUpdate = `
backup_enabled = true
- bi_connector_config {
+ bi_connector_config = {
enabled = true
}
- labels {
- key = "env"
- value = "test"
+ labels = {
+ "env" = "test"
}
- tags {
- key = "env"
- value = "test"
+ tags = {
+ "env" = "test"
}
pit_enabled = true
redact_client_log_data = true
replica_set_scaling_strategy = "NODE_TYPE"
root_cert_type = "ISRGROOTX1"
version_release_system = "CONTINUOUS"
-
- advanced_configuration {
+
+ advanced_configuration = {
change_stream_options_pre_and_post_images_expire_after_seconds = 100
- default_read_concern = "available"
default_write_concern = "majority"
javascript_enabled = true
minimum_enabled_tls_protocol = "TLS1_2" # This cluster does not support TLS1.0 or TLS1.1. If you must use old TLS versions contact MongoDB support
@@ -1342,28 +1097,27 @@ func TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling(t *testing
"state_name": "IDLE",
"project_id": projectID,
"name": clusterName,
+ "replication_specs.0.region_configs.0.electable_specs.instance_size": "M30",
+ "replication_specs.0.region_configs.0.analytics_specs.node_count": "0",
}
checksUpdatedMap = map[string]string{
- "replication_specs.0.region_configs.0.auto_scaling.0.disk_gb_enabled": "true",
- "replication_specs.0.region_configs.0.electable_specs.0.instance_size": "M30",
- "replication_specs.0.region_configs.0.analytics_specs.0.instance_size": "M30",
- "replication_specs.0.region_configs.0.analytics_specs.0.node_count": "1",
- "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops": "2000",
- "replication_specs.0.region_configs.0.analytics_specs.0.ebs_volume_type": "PROVISIONED",
- "replication_specs.1.region_configs.0.analytics_specs.0.instance_size": "M30",
- "replication_specs.1.region_configs.0.analytics_specs.0.node_count": "1",
- "replication_specs.1.region_configs.0.analytics_specs.0.ebs_volume_type": "PROVISIONED",
- "replication_specs.1.region_configs.0.analytics_specs.0.disk_iops": "1000",
- }
- checksUpdated = checkAggr(true, nil, checksUpdatedMap)
+ "replication_specs.0.region_configs.0.auto_scaling.disk_gb_enabled": "true",
+ "replication_specs.0.region_configs.0.electable_specs.instance_size": "M30",
+ "replication_specs.0.region_configs.0.analytics_specs.instance_size": "M30",
+ "replication_specs.0.region_configs.0.analytics_specs.node_count": "1",
+ "replication_specs.0.region_configs.0.analytics_specs.disk_iops": "2000",
+ "replication_specs.0.region_configs.0.analytics_specs.ebs_volume_type": "PROVISIONED",
+ "replication_specs.1.region_configs.0.analytics_specs.instance_size": "M30",
+ "replication_specs.1.region_configs.0.analytics_specs.node_count": "1",
+ "replication_specs.1.region_configs.0.analytics_specs.ebs_volume_type": "PROVISIONED",
+ "replication_specs.1.region_configs.0.analytics_specs.disk_iops": "1000",
+ }
+ checksUpdated = checkAggr(nil, checksUpdatedMap)
)
- if config.PreviewProviderV2AdvancedCluster() { // SDKv2 don't set "computed" specs in the state
- checksMap["replication_specs.0.region_configs.0.electable_specs.0.instance_size"] = "M30"
- checksMap["replication_specs.0.region_configs.0.analytics_specs.0.node_count"] = "0"
- }
- checks := checkAggr(true, nil, checksMap)
- checksMap["replication_specs.0.region_configs.0.analytics_specs.0.node_count"] = "1" // analytics_specs is kept even if it's removed from the config
- checksAfter := checkAggr(true, nil, checksMap)
+
+ checks := checkAggr(nil, checksMap)
+ checksMap["replication_specs.0.region_configs.0.analytics_specs.node_count"] = "1" // analytics_specs is kept even if it's removed from the config
+ checksAfter := checkAggr(nil, checksMap)
unit.CaptureOrMockTestCaseAndRun(t, mockConfig, &resource.TestCase{
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
Steps: []resource.TestStep{
@@ -1385,9 +1139,6 @@ func TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling(t *testing
}
func TestAccAdvancedCluster_removeBlocksFromConfig(t *testing.T) {
- if !config.PreviewProviderV2AdvancedCluster() { // SDKv2 don't set "computed" specs in the state
- t.Skip("This test is not applicable for SDKv2")
- }
var (
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 15)
)
@@ -1434,44 +1185,11 @@ func TestAccAdvancedCluster_createTimeoutWithDeleteOnCreateReplicaset(t *testing
resource.ParallelTest(t, *createCleanupTest(t, configCall, waitOnClusterDeleteDone, true))
}
-func TestAccAdvancedCluster_createTimeoutWithDeleteOnCreateFlex(t *testing.T) {
- var (
- projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 1)
- configCall = func(t *testing.T, timeoutSection string) string {
- t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, fmt.Sprintf(`
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- provider_name = "FLEX"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- }
- %[3]s
- }`, projectID, clusterName, timeoutSection))
- }
- waitOnClusterDeleteDone = func() {
- err := flexcluster.WaitStateTransitionDelete(t.Context(), &admin.GetFlexClusterApiParams{
- GroupId: projectID,
- Name: clusterName,
- }, acc.ConnV2().FlexClustersApi)
- require.NoError(t, err)
- time.Sleep(1 * time.Minute) // decrease the chance of `CONTAINER_WAITING_FOR_FAST_RECORD_CLEAN_UP`: "A transient error occurred. Please try again in a minute or use a different name"
- }
- )
- resource.ParallelTest(t, *createCleanupTest(t, configCall, waitOnClusterDeleteDone, false))
-}
-
func createCleanupTest(t *testing.T, configCall func(t *testing.T, timeoutSection string) string, waitOnClusterDeleteDone func(), isUpdateSupported bool) *resource.TestCase {
t.Helper()
var (
timeoutsStrShort = `
- timeouts {
+ timeouts = {
create = "2s"
}
delete_on_create_timeout = true
@@ -1505,17 +1223,12 @@ func createCleanupTest(t *testing.T, configCall func(t *testing.T, timeoutSectio
},
)
deleteOnCreateTimeoutRemoved := configCall(t, "")
- if config.PreviewProviderV2AdvancedCluster() {
- steps = append(steps,
- resource.TestStep{
- Config: deleteOnCreateTimeoutRemoved,
- Check: resource.TestCheckNoResourceAttr(resourceName, "delete_on_create_timeout"),
- })
- } else {
- // removing an optional false value has no affect in SDKv2, as false==null and no-plan-change
- steps = append(steps, acc.TestStepCheckEmptyPlan(deleteOnCreateTimeoutRemoved))
- }
- steps = append(steps, acc.TestStepImportCluster(resourceName))
+ steps = append(steps,
+ resource.TestStep{
+ Config: deleteOnCreateTimeoutRemoved,
+ Check: resource.TestCheckNoResourceAttr(resourceName, "delete_on_create_timeout"),
+ },
+ acc.TestStepImportCluster(resourceName))
}
return &resource.TestCase{
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
@@ -1527,36 +1240,36 @@ func configBasicReplicaset(t *testing.T, projectID, clusterName, extra, timeoutS
t.Helper()
if timeoutStr == "" {
timeoutStr = `
- timeouts {
+ timeouts = {
create = "6000s"
}`
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
%[4]s
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- auto_scaling {
+ auto_scaling = {
compute_scale_down_enabled = false
compute_enabled = false
disk_gb_enabled = true
}
- electable_specs {
+ electable_specs = {
node_count = 3
instance_size = "M10"
disk_size_gb = 10
}
- }
- }
+ }]
+ }]
%[3]s
}
- `, projectID, clusterName, extra, timeoutStr)) + dataSourcesTFNewSchema
+ `, projectID, clusterName, extra, timeoutStr) + dataSourcesConfig
}
func configSharded(t *testing.T, projectID, clusterName string, withUpdate bool) string {
@@ -1564,11 +1277,11 @@ func configSharded(t *testing.T, projectID, clusterName string, withUpdate bool)
var autoScaling, analyticsSpecs string
if withUpdate {
autoScaling = `
- auto_scaling {
+ auto_scaling = {
disk_gb_enabled = true
}`
analyticsSpecs = `
- analytics_specs {
+ analytics_specs = {
instance_size = "M30"
node_count = 1
ebs_volume_type = "PROVISIONED"
@@ -1580,15 +1293,15 @@ func configSharded(t *testing.T, projectID, clusterName string, withUpdate bool)
// The rule is: For any replication spec, the `(analytics|electable|read_only)_spec.disk_iops` must be the same across all region_configs
// The API raises no errors, but the response reflects this rule
analyticsSpecsForSpec2 := strings.ReplaceAll(analyticsSpecs, "2000", "1000")
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "SHARDED"
- replication_specs { # shard 1
- region_configs {
- electable_specs {
+ replication_specs = [{ # shard 1
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
disk_iops = 2000
node_count = 3
@@ -1599,11 +1312,11 @@ func configSharded(t *testing.T, projectID, clusterName string, withUpdate bool)
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
- replication_specs { # shard 2
- region_configs {
- electable_specs {
+ }]
+ },
+ { # shard 2
+ region_configs = [{
+ electable_specs = {
instance_size = "M30"
ebs_volume_type = "PROVISIONED"
disk_iops = 1000
@@ -1614,24 +1327,24 @@ func configSharded(t *testing.T, projectID, clusterName string, withUpdate bool)
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
- `, projectID, clusterName, autoScaling, analyticsSpecs, analyticsSpecsForSpec2)) + dataSourcesTFNewSchema
+ `, projectID, clusterName, autoScaling, analyticsSpecs, analyticsSpecsForSpec2) + dataSourcesConfig
}
func configBlocks(t *testing.T, projectID, clusterName, instanceSize string, defineBlocks bool) string {
t.Helper()
var extraConfig0, extraConfig1, electableSpecs0 string
autoScalingBlocks := `
- auto_scaling {
+ auto_scaling = {
disk_gb_enabled = true
compute_enabled = true
compute_min_instance_size = "M10"
compute_max_instance_size = "M30"
compute_scale_down_enabled = true
}
- analytics_auto_scaling {
+ analytics_auto_scaling = {
disk_gb_enabled = true
compute_enabled = true
compute_min_instance_size = "M10"
@@ -1641,68 +1354,67 @@ func configBlocks(t *testing.T, projectID, clusterName, instanceSize string, def
`
if defineBlocks {
electableSpecs0 = `
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 5
}
`
// read only + autoscaling blocks
extraConfig0 = `
- read_only_specs {
+ read_only_specs = {
instance_size = "M10"
node_count = 2
}
` + autoScalingBlocks
// read only + analytics + autoscaling blocks
extraConfig1 = `
- read_only_specs {
+ read_only_specs = {
instance_size = "M10"
node_count = 1
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 4
}
` + autoScalingBlocks
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "GEOSHARDED"
- replication_specs {
+ replication_specs = [{
zone_name = "Zone 1"
- region_configs {
+ region_configs = [{
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
%[6]s
%[4]s
- }
- }
-
- replication_specs {
+ }]
+ },
+ {
zone_name = "Zone 2"
- region_configs {
+ region_configs = [{
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- electable_specs {
+ electable_specs = {
instance_size = %[3]q
node_count = 3
}
%[5]s
- }
- region_configs { // region with no electable specs
+ },
+ { // region with no electable specs
provider_name = "AWS"
priority = 0
region_name = "US_EAST_1"
%[4]s
- }
- }
+ }]
+ }]
}
- `, projectID, clusterName, instanceSize, extraConfig0, extraConfig1, electableSpecs0))
+ `, projectID, clusterName, instanceSize, extraConfig0, extraConfig1, electableSpecs0)
}
func checkBlocks(instanceSize string) resource.TestCheckFunc {
@@ -1732,51 +1444,52 @@ func checkBlocks(instanceSize string) resource.TestCheckFunc {
checksMap[fmt.Sprintf("replication_specs.%d.region_configs.0.%s.compute_max_instance_size", repSpecsIdx, block)] = "M30"
}
}
- return resource.ComposeAggregateTestCheckFunc(acc.AddAttrChecksPreviewProviderV2(true, resourceName, nil, checksMap)...)
+ return resource.ComposeAggregateTestCheckFunc(acc.AddAttrChecksMigTPF(true, resourceName, nil, checksMap)...)
}
-func checkAggr(usePreviewProvider bool, attrsSet []string, attrsMap map[string]string, extra ...resource.TestCheckFunc) resource.TestCheckFunc {
+func checkAggr(attrsSet []string, attrsMap map[string]string, extra ...resource.TestCheckFunc) resource.TestCheckFunc {
extraChecks := extra
extraChecks = append(extraChecks, acc.CheckExistsCluster(resourceName))
- return acc.CheckRSAndDSPreviewProviderV2(usePreviewProvider, resourceName, admin.PtrString(dataSourceName), nil, attrsSet, attrsMap, extraChecks...)
+ return acc.CheckRSAndDS(resourceName, admin.PtrString(dataSourceName), nil, attrsSet, attrsMap, extraChecks...)
}
-func configTenant(t *testing.T, usePreviewProvider bool, projectID, name, zoneName, instanceSize string) string {
+func configTenant(t *testing.T, projectID, name, zoneName, instanceSize string) string {
t.Helper()
zoneNameLine := ""
if zoneName != "" {
zoneNameLine = fmt.Sprintf("zone_name = %q", zoneName)
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = %[4]q
- }
- provider_name = "TENANT"
- backing_provider_name = "AWS"
- region_name = "US_EAST_1"
- priority = 7
- }
- %[3]s
+ return fmt.Sprintf(`
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = %[1]q
+ name = %[2]q
+ cluster_type = "REPLICASET"
+
+ replication_specs = [{
+ region_configs = [{
+ backing_provider_name = "AWS"
+ electable_specs = {
+ instance_size = %[4]q
}
- }
- `, projectID, name, zoneNameLine, instanceSize)) + dataSourcesTFNewSchema
+ priority = 7
+ provider_name = "TENANT"
+ region_name = "US_EAST_1"
+ }]
+ %[3]s
+ }]
+ }
+`, projectID, name, zoneNameLine, instanceSize) + dataSourcesConfig
}
-func checkTenant(usePreviewProvider bool, projectID, name string, checkPlural bool) resource.TestCheckFunc {
+func checkTenant(projectID, name string, checkPlural bool) resource.TestCheckFunc {
var pluralChecks []resource.TestCheckFunc
if checkPlural {
- pluralChecks = acc.AddAttrSetChecksPreviewProviderV2(usePreviewProvider, dataSourcePluralName, nil,
+ pluralChecks = acc.AddAttrSetChecks(dataSourcePluralName, nil,
[]string{"results.#", "results.0.replication_specs.#", "results.0.name", "results.0.termination_protection_enabled", "results.0.global_cluster_self_managed_sharding"}...)
}
- return checkAggr(usePreviewProvider,
- []string{"replication_specs.#", "replication_specs.0.id", "replication_specs.0.region_configs.#"},
+ return checkAggr(
+ []string{"replication_specs.#", "replication_specs.0.region_configs.#"},
map[string]string{
"project_id": projectID,
"name": name,
@@ -1786,28 +1499,32 @@ func checkTenant(usePreviewProvider bool, projectID, name string, checkPlural bo
}
func checksBasicDedicated(projectID, name string, checkPlural bool) resource.TestCheckFunc {
- originalChecks := checkTenant(true, projectID, name, checkPlural)
+ originalChecks := checkTenant(projectID, name, checkPlural)
checkMap := map[string]string{
- "replication_specs.0.region_configs.0.electable_specs.0.node_count": "3",
- "replication_specs.0.region_configs.0.electable_specs.0.instance_size": "M10",
- "replication_specs.0.region_configs.0.provider_name": "AWS",
+ "replication_specs.0.region_configs.0.electable_specs.node_count": "3",
+ "replication_specs.0.region_configs.0.electable_specs.instance_size": "M10",
+ "replication_specs.0.region_configs.0.provider_name": "AWS",
}
- return checkAggr(true, nil, checkMap, originalChecks)
+ return checkAggr(nil, checkMap, originalChecks)
}
-func configWithKeyValueBlocks(t *testing.T, usePreviewProvider bool, orgID, projectName, clusterName, blockName string, blocks ...map[string]string) string {
+func configWithKeyValueBlocks(t *testing.T, orgID, projectName, clusterName, blockName string, blocks ...map[string]string) string {
t.Helper()
var extraConfig string
- for _, block := range blocks {
- extraConfig += fmt.Sprintf(`
- %[1]s {
- key = %[2]q
- value = %[3]q
+ if len(blocks) > 0 {
+ var keyValuePairs string
+ for _, block := range blocks {
+ keyValuePairs += fmt.Sprintf(`
+ %[1]q = %[2]q`, block["key"], block["value"])
+ }
+ extraConfig = fmt.Sprintf(`
+ %[1]s = {
+ %[2]s
}
- `, blockName, block["key"], block["value"])
+ `, blockName, keyValuePairs)
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_project" "cluster_project" {
org_id = %[1]q
name = %[2]q
@@ -1818,78 +1535,49 @@ func configWithKeyValueBlocks(t *testing.T, usePreviewProvider bool, orgID, proj
name = %[3]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ }]
%[4]s
}
- `, orgID, projectName, clusterName, extraConfig)) + dataSourcesTFNewSchema
-}
-
-func checkKeyValueBlocks(usePreviewProvider, includeDataSources bool, blockName string, blocks ...map[string]string) resource.TestCheckFunc {
- if config.PreviewProviderV2AdvancedCluster() {
- return checkKeyValueBlocksPreviewProviderV2(usePreviewProvider, includeDataSources, blockName, blocks...)
- }
- const pluralPrefix = "results.0."
- lenStr := strconv.Itoa(len(blocks))
- keyHash := blockName + ".#"
- keyStar := blockName + ".*"
- checks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, resourceName, keyHash, lenStr),
- }
- if includeDataSources {
- checks = append(checks,
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourceName, keyHash, lenStr),
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralPrefix+keyHash, lenStr))
- }
- for _, block := range blocks {
- checks = append(checks,
- acc.TestCheckTypeSetElemNestedAttrsPreviewProviderV2(usePreviewProvider, resourceName, keyStar, block),
- )
- if includeDataSources {
- checks = append(checks,
- acc.TestCheckTypeSetElemNestedAttrsPreviewProviderV2(usePreviewProvider, dataSourceName, keyStar, block),
- acc.TestCheckTypeSetElemNestedAttrsPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralPrefix+keyStar, block))
- }
- }
- return resource.ComposeAggregateTestCheckFunc(checks...)
+ `, orgID, projectName, clusterName, extraConfig) + dataSourcesConfig
}
-func checkKeyValueBlocksPreviewProviderV2(usePreviewProvider, includeDataSources bool, blockName string, blocks ...map[string]string) resource.TestCheckFunc {
+func checkKeyValueBlocks(includeDataSources bool, blockName string, blocks ...map[string]string) resource.TestCheckFunc {
const pluralPrefix = "results.0."
lenStr := strconv.Itoa(len(blocks))
keyPct := blockName + ".%"
checks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, resourceName, keyPct, lenStr),
+ resource.TestCheckResourceAttr(resourceName, keyPct, lenStr),
}
if includeDataSources {
checks = append(checks,
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourceName, keyPct, lenStr),
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralPrefix+keyPct, lenStr))
+ resource.TestCheckResourceAttr(dataSourceName, keyPct, lenStr),
+ resource.TestCheckResourceAttr(dataSourcePluralName, pluralPrefix+keyPct, lenStr))
}
for _, block := range blocks {
key := blockName + "." + block["key"]
value := block["value"]
checks = append(checks,
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, resourceName, key, value),
+ resource.TestCheckResourceAttr(resourceName, key, value),
)
if includeDataSources {
checks = append(checks,
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourceName, key, value),
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralPrefix+key, value))
+ resource.TestCheckResourceAttr(dataSourceName, key, value),
+ resource.TestCheckResourceAttr(dataSourcePluralName, pluralPrefix+key, value))
}
}
return resource.ComposeAggregateTestCheckFunc(checks...)
@@ -1901,20 +1589,13 @@ type ReplicaSetAWSConfig struct {
ClusterType string
DiskSizeGB int
NodeCountElectable int
- WithAnalyticsSpecs bool
}
-func configAWSProvider(t *testing.T, usePreviewProvider bool, configInfo ReplicaSetAWSConfig) string {
+func configAWSProvider(t *testing.T, configInfo ReplicaSetAWSConfig, isTPF bool) string {
t.Helper()
- analyticsSpecs := ""
- if configInfo.WithAnalyticsSpecs {
- analyticsSpecs = `
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }`
- }
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+
+ if !isTPF {
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
@@ -1928,27 +1609,59 @@ func configAWSProvider(t *testing.T, usePreviewProvider bool, configInfo Replica
instance_size = "M10"
node_count = %[5]d
}
- %[6]s
+ analytics_specs {
+ instance_size = "M10"
+ node_count = 1
+ }
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
}
}
}
- `, configInfo.ProjectID, configInfo.ClusterName, configInfo.ClusterType, configInfo.DiskSizeGB, configInfo.NodeCountElectable, analyticsSpecs)) + dataSourcesTFOldSchema
+ `, configInfo.ProjectID, configInfo.ClusterName, configInfo.ClusterType, configInfo.DiskSizeGB, configInfo.NodeCountElectable) + dataSourcesConfig
+ }
+
+ return fmt.Sprintf(`
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = %[1]q
+ name = %[2]q
+ cluster_type = %[3]q
+ retain_backups_enabled = "true"
+
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M10"
+ node_count = %[5]d
+ disk_size_gb = %[4]d
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ disk_size_gb = %[4]d
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_WEST_2"
+ }]
+ }]
+ }
+ `, configInfo.ProjectID, configInfo.ClusterName, configInfo.ClusterType, configInfo.DiskSizeGB, configInfo.NodeCountElectable) + dataSourcesConfig
}
-func checkReplicaSetAWSProvider(usePreviewProvider bool, projectID, name string, diskSizeGB, nodeCountElectable int, checkDiskSizeGBInnerLevel, checkExternalID bool) resource.TestCheckFunc {
+func checkReplicaSetAWSProvider(isTPF, useDataSource bool, projectID, name string, diskSizeGB, nodeCountElectable int, checkDiskSizeGBInnerLevel, checkExternalID bool) resource.TestCheckFunc {
additionalChecks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, resourceName, "retain_backups_enabled", "true"),
+ acc.TestCheckResourceAttrMigTPF(isTPF, resourceName, "retain_backups_enabled", "true"),
}
additionalChecks = append(additionalChecks,
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, dataSourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)))
+ acc.TestCheckResourceAttrWithMigTPF(isTPF, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(isTPF, dataSourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)))
if checkDiskSizeGBInnerLevel {
additionalChecks = append(additionalChecks,
- checkAggr(usePreviewProvider, []string{}, map[string]string{
+ checkAggrMig(isTPF, useDataSource, []string{}, map[string]string{
"replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
"replication_specs.0.region_configs.0.analytics_specs.0.disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
}),
@@ -1956,14 +1669,13 @@ func checkReplicaSetAWSProvider(usePreviewProvider bool, projectID, name string,
}
if checkExternalID {
- additionalChecks = append(additionalChecks, acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.external_id"))
+ additionalChecks = append(additionalChecks, acc.TestCheckResourceAttrSetMigTPF(isTPF, resourceName, "replication_specs.0.external_id"))
}
- return checkAggr(usePreviewProvider,
- []string{"replication_specs.#", "replication_specs.0.id", "replication_specs.0.region_configs.#"},
+ return checkAggrMig(isTPF, useDataSource,
+ []string{"replication_specs.#", "replication_specs.0.region_configs.#"},
map[string]string{
- "project_id": projectID,
- "disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
+ "project_id": projectID,
"replication_specs.0.region_configs.0.electable_specs.0.node_count": fmt.Sprintf("%d", nodeCountElectable),
"replication_specs.0.region_configs.0.analytics_specs.0.node_count": "1",
"name": name},
@@ -1971,9 +1683,9 @@ func checkReplicaSetAWSProvider(usePreviewProvider bool, projectID, name string,
)
}
-func configIncorrectTypeGobalClusterSelfManagedSharding(t *testing.T, usePreviewProvider bool, projectID, name string) string {
+func configIncorrectTypeGobalClusterSelfManagedSharding(t *testing.T, projectID, name string) string {
t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
@@ -1981,36 +1693,42 @@ func configIncorrectTypeGobalClusterSelfManagedSharding(t *testing.T, usePreview
cluster_type = "REPLICASET"
global_cluster_self_managed_sharding = true # invalid, can only be used with GEOSHARDED clusters
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
}
- `, projectID, name))
+ `, projectID, name)
}
-func configReplicaSetMultiCloud(t *testing.T, usePreviewProvider bool, orgID, projectName, name string) string {
+func configReplicaSetMultiCloud(t *testing.T, orgID, projectName, name string, isTPF bool) string {
t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+
+ projectConfig := fmt.Sprintf(`
resource "mongodbatlas_project" "cluster_project" {
org_id = %[1]q
name = %[2]q
}
+ `, orgID, projectName)
+
+ advClusterConfig := ""
+ if !isTPF {
+ advClusterConfig = fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = mongodbatlas_project.cluster_project.id
- name = %[3]q
+ name = %[1]q
cluster_type = "REPLICASET"
retain_backups_enabled = false
@@ -2050,31 +1768,76 @@ func configReplicaSetMultiCloud(t *testing.T, usePreviewProvider bool, orgID, pr
}
}
}
- `, orgID, projectName, name)) + dataSourcesTFNewSchema
-}
-
-func checkReplicaSetMultiCloud(usePreviewProvider bool, name string, regionConfigs int) resource.TestCheckFunc {
+ `, name)
+ } else {
+ advClusterConfig = fmt.Sprintf(`
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_project.cluster_project.id
+ name = %[1]q
+ cluster_type = "REPLICASET"
+ retain_backups_enabled = false
+
+ replication_specs = [{
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ priority = 0
+ provider_name = "GCP"
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ region_name = "US_EAST_4"
+ }, {
+ priority = 0
+ provider_name = "GCP"
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ region_name = "NORTH_AMERICA_NORTHEAST_1"
+ }]
+ }]
+}
+ `, name)
+ }
+
+ return projectConfig + advClusterConfig + dataSourcesConfig
+}
+
+func checkReplicaSetMultiCloud(isTPF, useDataSource bool, name string, regionConfigs int) resource.TestCheckFunc {
additionalChecks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, resourceName, "retain_backups_enabled", "false"),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.region_configs.#", acc.JSONEquals(strconv.Itoa(regionConfigs))),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.external_id"),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, dataSourceName, "replication_specs.0.region_configs.#", acc.JSONEquals(strconv.Itoa(regionConfigs))),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.replication_specs.0.region_configs.#", acc.JSONEquals(strconv.Itoa(regionConfigs))),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.#"),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.replication_specs.#"),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.name"),
- }
- return checkAggr(usePreviewProvider,
- []string{"project_id", "replication_specs.#", "replication_specs.0.id"},
+ acc.TestCheckResourceAttrMigTPF(isTPF, resourceName, "retain_backups_enabled", "false"),
+ acc.TestCheckResourceAttrWithMigTPF(isTPF, resourceName, "replication_specs.0.region_configs.#", acc.JSONEquals(strconv.Itoa(regionConfigs))),
+ acc.TestCheckResourceAttrSetMigTPF(isTPF, resourceName, "replication_specs.0.external_id"),
+ acc.TestCheckResourceAttrWithMigTPF(isTPF, dataSourceName, "replication_specs.0.region_configs.#", acc.JSONEquals(strconv.Itoa(regionConfigs))),
+ acc.TestCheckResourceAttrWithMigTPF(isTPF, dataSourcePluralName, "results.0.replication_specs.0.region_configs.#", acc.JSONEquals(strconv.Itoa(regionConfigs))),
+ acc.TestCheckResourceAttrSetMigTPF(isTPF, dataSourcePluralName, "results.#"),
+ acc.TestCheckResourceAttrSetMigTPF(isTPF, dataSourcePluralName, "results.0.replication_specs.#"),
+ acc.TestCheckResourceAttrSetMigTPF(isTPF, dataSourcePluralName, "results.0.name"),
+ }
+ return checkAggrMig(isTPF, useDataSource,
+ []string{"project_id", "replication_specs.#"},
map[string]string{
"name": name},
additionalChecks...,
)
}
-func configShardedOldSchemaMultiCloud(t *testing.T, usePreviewProvider bool, projectID, name string, numShards int, analyticsSize string, configServerManagementMode *string) string {
+func configShardedMultiCloud(t *testing.T, projectID, name string, numShards int, analyticsSize string, configServerManagementMode *string) string {
t.Helper()
var rootConfig string
+ var replicationSpecs string
if configServerManagementMode != nil {
// valid values: FIXED_TO_DEDICATED or ATLAS_MANAGED (default)
// only valid for Major version 8 and later
@@ -2084,112 +1847,122 @@ func configShardedOldSchemaMultiCloud(t *testing.T, usePreviewProvider bool, pro
config_server_management_mode = %[1]q
`, *configServerManagementMode)
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- cluster_type = "SHARDED"
- %[5]s
- replication_specs {
- num_shards = %[3]d
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = %[4]q
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 2
- }
- provider_name = "AZURE"
- priority = 6
- region_name = "US_EAST_2"
- }
- }
+ for i := 0; i < numShards; i++ {
+ replicationSpecs += fmt.Sprintf(`
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = %[1]q
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ },`, analyticsSize)
+ }
+ replicationSpecs = strings.TrimSuffix(replicationSpecs, ",")
+
+ advClusterConfig := fmt.Sprintf(`
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = %[1]q
+ name = %[2]q
+ cluster_type = "SHARDED"
+
+ %[3]s
+
+ replication_specs = [
+ %[4]s
+ ]
}
- `, projectID, name, numShards, analyticsSize, rootConfig)) + dataSourcesTFOldSchema
+ `, projectID, name, rootConfig, replicationSpecs)
+
+ return advClusterConfig + dataSourcesConfig
}
-func checkShardedOldSchemaMultiCloud(usePreviewProvider bool, name string, numShards int, analyticsSize string, verifyExternalID bool, configServerManagementMode *string) resource.TestCheckFunc {
+func checkShardedMultiCloud(name, analyticsSize string, verifyExternalID bool, configServerManagementMode *string) resource.TestCheckFunc {
additionalChecks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops", acc.IntGreatThan(0)),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.region_configs.1.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, dataSourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, dataSourceName, "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops", acc.IntGreatThan(0)),
- acc.TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider, dataSourceName, "replication_specs.0.region_configs.1.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(true, resourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(true, resourceName, "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(true, resourceName, "replication_specs.0.region_configs.1.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(true, dataSourceName, "replication_specs.0.region_configs.0.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(true, dataSourceName, "replication_specs.0.region_configs.0.analytics_specs.0.disk_iops", acc.IntGreatThan(0)),
+ acc.TestCheckResourceAttrWithMigTPF(true, dataSourceName, "replication_specs.0.region_configs.1.electable_specs.0.disk_iops", acc.IntGreatThan(0)),
}
if verifyExternalID {
additionalChecks = append(
additionalChecks,
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.external_id"))
+ acc.TestCheckResourceAttrSetMigTPF(true, resourceName, "replication_specs.0.external_id"))
}
if configServerManagementMode != nil {
additionalChecks = append(additionalChecks,
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, resourceName, "config_server_management_mode", *configServerManagementMode),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, resourceName, "config_server_type"),
- acc.TestCheckResourceAttrPreviewProviderV2(usePreviewProvider, dataSourceName, "config_server_management_mode", *configServerManagementMode),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourceName, "config_server_type"),
+ acc.TestCheckResourceAttrMigTPF(true, resourceName, "config_server_management_mode", *configServerManagementMode),
+ acc.TestCheckResourceAttrSetMigTPF(true, resourceName, "config_server_type"),
+ acc.TestCheckResourceAttrMigTPF(true, dataSourceName, "config_server_management_mode", *configServerManagementMode),
+ acc.TestCheckResourceAttrSetMigTPF(true, dataSourceName, "config_server_type"),
)
}
- return checkAggr(usePreviewProvider,
- []string{"project_id", "replication_specs.#", "replication_specs.0.id", "replication_specs.0.region_configs.#"},
+ return checkAggrMig(true, true,
+ []string{"project_id", "replication_specs.#", "replication_specs.0.region_configs.#"},
map[string]string{
- "name": name,
- "replication_specs.0.num_shards": strconv.Itoa(numShards),
+ "name": name,
"replication_specs.0.region_configs.0.analytics_specs.0.instance_size": analyticsSize,
},
additionalChecks...)
}
-func configSingleProviderPaused(t *testing.T, usePreviewProvider bool, projectID, clusterName string, paused bool, instanceSize string) string {
+func configSingleProviderPaused(t *testing.T, projectID, clusterName string, paused bool, instanceSize string) string {
t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
paused = %[3]t
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = %[4]q
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
}
-`, projectID, clusterName, paused, instanceSize)) + dataSourcesTFNewSchema
+`, projectID, clusterName, paused, instanceSize) + dataSourcesConfig
}
-func checkSingleProviderPaused(usePreviewProvider bool, name string, paused bool) resource.TestCheckFunc {
- return checkAggr(usePreviewProvider,
+func checkSingleProviderPaused(name string, paused bool) resource.TestCheckFunc {
+ return checkAggr(
[]string{"project_id", "replication_specs.#", "replication_specs.0.region_configs.#"},
map[string]string{
"name": name,
"paused": strconv.FormatBool(paused)})
}
-func configAdvanced(t *testing.T, usePreviewProvider bool, projectID, clusterName, mongoDBMajorVersion string, p20240530 *admin20240530.ClusterDescriptionProcessArgs, p *admin.ClusterDescriptionProcessArgs20240805) string {
+func configAdvanced(t *testing.T, projectID, clusterName, mongoDBMajorVersion string, p20240530 *admin20240530.ClusterDescriptionProcessArgs, p *admin.ClusterDescriptionProcessArgs20240805) string {
t.Helper()
changeStreamOptionsStr := ""
defaultMaxTimeStr := ""
@@ -2218,154 +1991,147 @@ func configAdvanced(t *testing.T, usePreviewProvider bool, projectID, clusterNam
mongoDBMajorVersionStr = fmt.Sprintf(`mongo_db_major_version = %[1]q`, mongoDBMajorVersion)
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- %[13]s
+ %[12]s
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
- advanced_configuration {
- fail_index_key_too_long = %[3]t
- javascript_enabled = %[4]t
- minimum_enabled_tls_protocol = %[5]q
- no_table_scan = %[6]t
- oplog_size_mb = %[7]d
- sample_size_bi_connector = %[8]d
- sample_refresh_interval_bi_connector = %[9]d
- transaction_lifetime_limit_seconds = %[10]d
- %[11]s
- %[12]s
+ advanced_configuration = {
+ javascript_enabled = %[3]t
+ minimum_enabled_tls_protocol = %[4]q
+ no_table_scan = %[5]t
+ oplog_size_mb = %[6]d
+ sample_size_bi_connector = %[7]d
+ sample_refresh_interval_bi_connector = %[8]d
+ transaction_lifetime_limit_seconds = %[9]d
+ %[10]s
+ %[11]s
+ %[13]s
%[14]s
- %[15]s
}
}
- `, projectID, clusterName,
- p20240530.GetFailIndexKeyTooLong(), p20240530.GetJavascriptEnabled(), p20240530.GetMinimumEnabledTlsProtocol(), p20240530.GetNoTableScan(),
+ `, projectID, clusterName, p20240530.GetJavascriptEnabled(), p20240530.GetMinimumEnabledTlsProtocol(), p20240530.GetNoTableScan(),
p20240530.GetOplogSizeMB(), p20240530.GetSampleSizeBIConnector(), p20240530.GetSampleRefreshIntervalBIConnector(), p20240530.GetTransactionLifetimeLimitSeconds(),
- changeStreamOptionsStr, defaultMaxTimeStr, mongoDBMajorVersionStr, tlsCipherConfigModeStr, customOpensslCipherConfigTLS12Str)) + dataSourcesTFNewSchema
+ changeStreamOptionsStr, defaultMaxTimeStr, mongoDBMajorVersionStr, tlsCipherConfigModeStr, customOpensslCipherConfigTLS12Str) + dataSourcesConfig
}
-func checkAdvanced(usePreviewProvider bool, name, tls string, processArgs *admin.ClusterDescriptionProcessArgs20240805) resource.TestCheckFunc {
+func checkAdvanced(name, tls string, processArgs *admin.ClusterDescriptionProcessArgs20240805) resource.TestCheckFunc {
advancedConfig := map[string]string{
"name": name,
- "advanced_configuration.0.minimum_enabled_tls_protocol": tls,
- "advanced_configuration.0.fail_index_key_too_long": "false",
- "advanced_configuration.0.javascript_enabled": "true",
- "advanced_configuration.0.no_table_scan": "false",
- "advanced_configuration.0.oplog_size_mb": "1000",
- "advanced_configuration.0.sample_refresh_interval_bi_connector": "310",
- "advanced_configuration.0.sample_size_bi_connector": "110",
- "advanced_configuration.0.transaction_lifetime_limit_seconds": "300",
+ "advanced_configuration.minimum_enabled_tls_protocol": tls,
+ "advanced_configuration.javascript_enabled": "true",
+ "advanced_configuration.no_table_scan": "false",
+ "advanced_configuration.oplog_size_mb": "1000",
+ "advanced_configuration.sample_refresh_interval_bi_connector": "310",
+ "advanced_configuration.sample_size_bi_connector": "110",
+ "advanced_configuration.transaction_lifetime_limit_seconds": "300",
}
if processArgs.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds != nil {
- advancedConfig["advanced_configuration.0.change_stream_options_pre_and_post_images_expire_after_seconds"] = strconv.Itoa(*processArgs.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds)
+ advancedConfig["advanced_configuration.change_stream_options_pre_and_post_images_expire_after_seconds"] = strconv.Itoa(*processArgs.ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds)
}
if processArgs.DefaultMaxTimeMS != nil {
- advancedConfig["advanced_configuration.0.default_max_time_ms"] = strconv.Itoa(*processArgs.DefaultMaxTimeMS)
+ advancedConfig["advanced_configuration.default_max_time_ms"] = strconv.Itoa(*processArgs.DefaultMaxTimeMS)
}
if processArgs.TlsCipherConfigMode != nil && processArgs.CustomOpensslCipherConfigTls12 != nil {
- advancedConfig["advanced_configuration.0.tls_cipher_config_mode"] = "CUSTOM"
- advancedConfig["advanced_configuration.0.custom_openssl_cipher_config_tls12.#"] = strconv.Itoa(len(*processArgs.CustomOpensslCipherConfigTls12))
+ advancedConfig["advanced_configuration.tls_cipher_config_mode"] = "CUSTOM"
+ advancedConfig["advanced_configuration.custom_openssl_cipher_config_tls12.#"] = strconv.Itoa(len(*processArgs.CustomOpensslCipherConfigTls12))
} else {
- advancedConfig["advanced_configuration.0.tls_cipher_config_mode"] = "DEFAULT"
+ advancedConfig["advanced_configuration.tls_cipher_config_mode"] = "DEFAULT"
}
pluralChecks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.#"),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.replication_specs.#"),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.name"),
+ resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.#"),
+ resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.replication_specs.#"),
+ resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.name"),
}
- return checkAggr(usePreviewProvider,
- []string{"project_id", "replication_specs.#", "replication_specs.0.region_configs.#"},
+ return checkAggr([]string{"project_id", "replication_specs.#", "replication_specs.0.region_configs.#"},
advancedConfig,
pluralChecks...,
)
}
-func configAdvancedDefaultWrite(t *testing.T, usePreviewProvider bool, projectID, clusterName string, p *admin20240530.ClusterDescriptionProcessArgs) string {
+func configAdvancedDefaultWrite(t *testing.T, projectID, clusterName string, p *admin20240530.ClusterDescriptionProcessArgs) string {
t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
- advanced_configuration {
+ advanced_configuration = {
javascript_enabled = %[3]t
minimum_enabled_tls_protocol = %[4]q
no_table_scan = %[5]t
oplog_size_mb = %[6]d
sample_size_bi_connector = %[7]d
sample_refresh_interval_bi_connector = %[8]d
- default_read_concern = %[9]q
- default_write_concern = %[10]q
+ default_write_concern = %[9]q
}
}
`, projectID, clusterName, p.GetJavascriptEnabled(), p.GetMinimumEnabledTlsProtocol(), p.GetNoTableScan(),
- p.GetOplogSizeMB(), p.GetSampleSizeBIConnector(), p.GetSampleRefreshIntervalBIConnector(), p.GetDefaultReadConcern(), p.GetDefaultWriteConcern())) + dataSourcesTFNewSchema
+ p.GetOplogSizeMB(), p.GetSampleSizeBIConnector(), p.GetSampleRefreshIntervalBIConnector(), p.GetDefaultWriteConcern()) + dataSourcesConfig
}
-func checkAdvancedDefaultWrite(usePreviewProvider bool, name, writeConcern, tls string) resource.TestCheckFunc {
+func checkAdvancedDefaultWrite(name, writeConcern, tls string) resource.TestCheckFunc {
pluralChecks := []resource.TestCheckFunc{
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.#"),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.replication_specs.#"),
- acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, dataSourcePluralName, "results.0.name"),
+ resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.#"),
+ resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.replication_specs.#"),
+ resource.TestCheckResourceAttrSet(dataSourcePluralName, "results.0.name"),
}
- return checkAggr(usePreviewProvider,
+ return checkAggr(
[]string{"project_id", "replication_specs.#", "replication_specs.0.region_configs.#"},
map[string]string{
"name": name,
- "advanced_configuration.0.minimum_enabled_tls_protocol": tls,
- "advanced_configuration.0.default_write_concern": writeConcern,
- "advanced_configuration.0.default_read_concern": "available",
- "advanced_configuration.0.fail_index_key_too_long": "false",
- "advanced_configuration.0.javascript_enabled": "true",
- "advanced_configuration.0.no_table_scan": "false",
- "advanced_configuration.0.oplog_size_mb": "1000",
- "advanced_configuration.0.sample_refresh_interval_bi_connector": "310",
- "advanced_configuration.0.sample_size_bi_connector": "110",
- "advanced_configuration.0.tls_cipher_config_mode": "DEFAULT"},
+ "advanced_configuration.minimum_enabled_tls_protocol": tls,
+ "advanced_configuration.default_write_concern": writeConcern,
+ "advanced_configuration.javascript_enabled": "true",
+ "advanced_configuration.no_table_scan": "false",
+ "advanced_configuration.oplog_size_mb": "1000",
+ "advanced_configuration.sample_refresh_interval_bi_connector": "310",
+ "advanced_configuration.sample_size_bi_connector": "110",
+ "advanced_configuration.tls_cipher_config_mode": "DEFAULT"},
pluralChecks...)
}
-func configReplicationSpecsAutoScaling(t *testing.T, usePreviewProvider bool, projectID, clusterName string, autoScalingSettings *admin.AdvancedAutoScalingSettings, elecInstanceSize string, elecDiskSizeGB, analyticsNodeCount int) string {
+func configReplicationSpecsAutoScaling(t *testing.T, projectID, clusterName string, autoScalingSettings *admin.AdvancedAutoScalingSettings, elecInstanceSize string, elecDiskSizeGB, analyticsNodeCount int) string {
t.Helper()
lifecycleIgnoreChanges := ""
autoScalingCompute := autoScalingSettings.GetCompute()
@@ -2373,35 +2139,35 @@ func configReplicationSpecsAutoScaling(t *testing.T, usePreviewProvider bool, pr
lifecycleIgnoreChanges = `
lifecycle {
ignore_changes = [
- replication_specs.0.region_configs.0.electable_specs.0.instance_size,
- replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb
+ replication_specs.0.region_configs.0.electable_specs.instance_size,
+ replication_specs.0.region_configs.0.electable_specs.disk_size_gb
]
}`
}
autoScalingBlock := ""
if autoScalingSettings != nil {
- autoScalingBlock = fmt.Sprintf(`auto_scaling {
+ autoScalingBlock = fmt.Sprintf(`auto_scaling = {
compute_enabled = %t
disk_gb_enabled = %t
compute_max_instance_size = %q
}`, autoScalingSettings.Compute.GetEnabled(), autoScalingSettings.DiskGB.GetEnabled(), autoScalingSettings.Compute.GetMaxInstanceSize())
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = %[3]q
disk_size_gb = %[4]d
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = %[5]d
}
@@ -2409,42 +2175,42 @@ func configReplicationSpecsAutoScaling(t *testing.T, usePreviewProvider bool, pr
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
- advanced_configuration {
+ }]
+ }]
+ advanced_configuration = {
oplog_min_retention_hours = 5.5
}
%[7]s
}
- `, projectID, clusterName, elecInstanceSize, elecDiskSizeGB, analyticsNodeCount, autoScalingBlock, lifecycleIgnoreChanges))
+ `, projectID, clusterName, elecInstanceSize, elecDiskSizeGB, analyticsNodeCount, autoScalingBlock, lifecycleIgnoreChanges)
}
-func configReplicationSpecsAnalyticsAutoScaling(t *testing.T, usePreviewProvider bool, projectID, clusterName string, analyticsAutoScalingSettings *admin.AdvancedAutoScalingSettings, analyticsNodeCount int) string {
+func configReplicationSpecsAnalyticsAutoScaling(t *testing.T, projectID, clusterName string, analyticsAutoScalingSettings *admin.AdvancedAutoScalingSettings, analyticsNodeCount int) string {
t.Helper()
analyticsAutoScalingBlock := ""
if analyticsAutoScalingSettings != nil {
analyticsAutoScalingBlock = fmt.Sprintf(`
- analytics_auto_scaling {
+ analytics_auto_scaling = {
compute_enabled = %t
disk_gb_enabled = %t
compute_max_instance_size = %q
}`, analyticsAutoScalingSettings.Compute.GetEnabled(), analyticsAutoScalingSettings.DiskGB.GetEnabled(), analyticsAutoScalingSettings.Compute.GetMaxInstanceSize())
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = %[3]d
}
@@ -2452,15 +2218,18 @@ func configReplicationSpecsAnalyticsAutoScaling(t *testing.T, usePreviewProvider
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
}
- `, projectID, clusterName, analyticsNodeCount, analyticsAutoScalingBlock))
+ `, projectID, clusterName, analyticsNodeCount, analyticsAutoScalingBlock)
}
-func configGeoShardedOldSchema(t *testing.T, usePreviewProvider bool, projectID, name string, numShardsFirstZone, numShardsSecondZone int, selfManagedSharding bool) string {
+func configGeoSharded(t *testing.T, projectID, name string, numShardsFirstZone, numShardsSecondZone int, selfManagedSharding bool, useSDKv2 ...bool) string {
t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ advClusterConfig := ""
+
+ if isOptionalTrue(useSDKv2...) {
+ advClusterConfig = fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
@@ -2509,109 +2278,89 @@ func configGeoShardedOldSchema(t *testing.T, usePreviewProvider bool, projectID,
}
}
- `, projectID, name, numShardsFirstZone, numShardsSecondZone, selfManagedSharding)) + dataSourcesTFOldSchema
-}
-
-func checkGeoShardedOldSchema(usePreviewProvider bool, name string, numShardsFirstZone, numShardsSecondZone int, isLatestProviderVersion, verifyExternalID bool) resource.TestCheckFunc {
- additionalChecks := []resource.TestCheckFunc{}
-
- if verifyExternalID {
- additionalChecks = append(additionalChecks, acc.TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider, resourceName, "replication_specs.0.external_id"))
+ `, projectID, name, numShardsFirstZone, numShardsSecondZone, selfManagedSharding)
+ return advClusterConfig + dataSourcesConfig
}
- if isLatestProviderVersion { // checks that will not apply if doing migration test with older version
- additionalChecks = append(additionalChecks, checkAggr(usePreviewProvider,
- []string{"replication_specs.0.zone_id", "replication_specs.0.zone_id"},
- map[string]string{
- "replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb": "60",
- "replication_specs.0.region_configs.0.analytics_specs.0.disk_size_gb": "60",
- }))
+ var replicationSpecs string
+ for i := 0; i < numShardsFirstZone; i++ {
+ replicationSpecs += `
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 0
+ disk_size_gb = 60
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ disk_size_gb = 60
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ }]
+ zone_name = "zone n1"
+ },`
}
-
- return checkAggr(usePreviewProvider,
- []string{"project_id", "replication_specs.0.id", "replication_specs.1.id"},
- map[string]string{
- "name": name,
- "disk_size_gb": "60",
- "replication_specs.0.num_shards": strconv.Itoa(numShardsFirstZone),
- "replication_specs.1.num_shards": strconv.Itoa(numShardsSecondZone),
- },
- additionalChecks...,
- )
-}
-
-func configShardedOldSchemaDiskSizeGBElectableLevel(t *testing.T, usePreviewProvider bool, projectID, name string, diskSizeGB int) string {
- t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ for i := 0; i < numShardsSecondZone; i++ {
+ replicationSpecs += `
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 0
+ disk_size_gb = 60
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ disk_size_gb = 60
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }]
+ zone_name = "zone n2"
+ },`
+ }
+ replicationSpecs = strings.TrimSuffix(replicationSpecs, ",")
+ advClusterConfig = fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
backup_enabled = false
mongo_db_major_version = "7.0"
- cluster_type = "SHARDED"
-
- replication_specs {
- num_shards = 2
+ cluster_type = "GEOSHARDED"
+ global_cluster_self_managed_sharding = %[3]t
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- disk_size_gb = %[3]d
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 0
- disk_size_gb = %[3]d
- }
- provider_name = "AWS"
- priority = 7
- region_name = "US_EAST_1"
- }
- }
+ replication_specs = [
+ %[4]s
+ ]
}
- `, projectID, name, diskSizeGB)) + dataSourcesTFOldSchema
+ `, projectID, name, selfManagedSharding, replicationSpecs)
+
+ return advClusterConfig + dataSourcesConfig
}
-func checkShardedOldSchemaDiskSizeGBElectableLevel(usePreviewProvider bool, diskSizeGB int) resource.TestCheckFunc {
- return checkAggr(usePreviewProvider,
- []string{},
- map[string]string{
- "replication_specs.0.num_shards": "2",
- "disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
- "replication_specs.0.region_configs.0.electable_specs.0.disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
- "replication_specs.0.region_configs.0.analytics_specs.0.disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
- })
+func checkAggrMig(isTPF, useDataSource bool, attrsSet []string, attrsMap map[string]string, extra ...resource.TestCheckFunc) resource.TestCheckFunc {
+	extraChecks := append([]resource.TestCheckFunc{}, extra...) // copy to avoid mutating the caller's backing array
+ extraChecks = append(extraChecks, acc.CheckExistsCluster(resourceName))
+ if useDataSource {
+ return acc.CheckRSAndDSMigTPF(isTPF, resourceName, admin.PtrString(dataSourceName), nil, attrsSet, attrsMap, extraChecks...)
+ }
+ return acc.CheckRSAndDSMigTPF(isTPF, resourceName, nil, nil, attrsSet, attrsMap, extraChecks...)
}
-func configShardedNewSchema(t *testing.T, usePreviewProvider bool, orgID, projectName, name string, diskSizeGB int, firstInstanceSize, lastInstanceSize string, firstDiskIOPS, lastDiskIOPS *int, includeMiddleSpec, increaseDiskSizeShard2 bool) string {
+func configShardedNewSchema(t *testing.T, orgID, projectName, name string, diskSizeGB int, firstInstanceSize, lastInstanceSize string, firstDiskIOPS, lastDiskIOPS *int, includeMiddleSpec, increaseDiskSizeShard2 bool, useSDKv2 ...bool) string {
t.Helper()
var thirdReplicationSpec string
var diskSizeGBShard2 = diskSizeGB
if increaseDiskSizeShard2 {
diskSizeGBShard2 = diskSizeGB + 10
}
- if includeMiddleSpec {
- thirdReplicationSpec = fmt.Sprintf(`
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = %[1]q
- node_count = 3
- disk_size_gb = %[2]d
- }
- analytics_specs {
- instance_size = %[1]q
- node_count = 1
- disk_size_gb = %[2]d
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
- `, firstInstanceSize, diskSizeGB)
- }
+
var firstDiskIOPSAttrs string
if firstDiskIOPS != nil {
firstDiskIOPSAttrs = fmt.Sprintf(`
@@ -2626,78 +2375,175 @@ func configShardedNewSchema(t *testing.T, usePreviewProvider bool, orgID, projec
ebs_volume_type = "PROVISIONED"
`, *lastDiskIOPS)
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
- resource "mongodbatlas_project" "cluster_project" {
- org_id = %[1]q
- name = %[2]q
- }
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = mongodbatlas_project.cluster_project.id
- name = %[3]q
- backup_enabled = false
- cluster_type = "SHARDED"
+ dataSourcesConfig := `
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ }
+ `
+ if isOptionalTrue(useSDKv2...) {
+ if includeMiddleSpec {
+ thirdReplicationSpec = fmt.Sprintf(`
replication_specs {
region_configs {
electable_specs {
- instance_size = %[4]q
+ instance_size = %[1]q
node_count = 3
- disk_size_gb = %[9]d
- %[6]s
+ disk_size_gb = %[2]d
}
analytics_specs {
- instance_size = %[4]q
+ instance_size = %[1]q
node_count = 1
- disk_size_gb = %[9]d
+ disk_size_gb = %[2]d
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
}
}
+ `, firstInstanceSize, diskSizeGB)
+ }
- %[8]s
+ return fmt.Sprintf(`
+ resource "mongodbatlas_project" "cluster_project" {
+ org_id = %[1]q
+ name = %[2]q
+ }
- replication_specs {
- region_configs {
- electable_specs {
- instance_size = %[5]q
- node_count = 3
- disk_size_gb = %[10]d
- %[7]s
- }
- analytics_specs {
- instance_size = %[5]q
- node_count = 1
- disk_size_gb = %[10]d
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_project.cluster_project.id
+ name = %[3]q
+ backup_enabled = false
+ cluster_type = "SHARDED"
+
+ replication_specs {
+ region_configs {
+ electable_specs {
+ instance_size = %[4]q
+ node_count = 3
+ disk_size_gb = %[9]d
+ %[6]s
}
+ analytics_specs {
+ instance_size = %[4]q
+ node_count = 1
+ disk_size_gb = %[9]d
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
}
}
- data "mongodbatlas_advanced_cluster" "test" {
- project_id = mongodbatlas_advanced_cluster.test.project_id
- name = mongodbatlas_advanced_cluster.test.name
- use_replication_spec_per_shard = true
- }
+ %[8]s
- data "mongodbatlas_advanced_clusters" "test-replication-specs-per-shard-false" {
- project_id = mongodbatlas_advanced_cluster.test.project_id
- use_replication_spec_per_shard = false
+ replication_specs {
+ region_configs {
+ electable_specs {
+ instance_size = %[5]q
+ node_count = 3
+ disk_size_gb = %[10]d
+ %[7]s
+ }
+ analytics_specs {
+ instance_size = %[5]q
+ node_count = 1
+ disk_size_gb = %[10]d
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }
}
+ }
+
+ %[11]s
+`, orgID, projectName, name, firstInstanceSize, lastInstanceSize, firstDiskIOPSAttrs, lastDiskIOPSAttrs, thirdReplicationSpec, diskSizeGB, diskSizeGBShard2, dataSourcesConfig)
+ }
- data "mongodbatlas_advanced_clusters" "test" {
- project_id = mongodbatlas_advanced_cluster.test.project_id
- use_replication_spec_per_shard = true
+ if includeMiddleSpec {
+ thirdReplicationSpec = fmt.Sprintf(`
+ {
+ region_configs = [{
+ electable_specs = {
+ instance_size = %[1]q
+ node_count = 3
+ disk_size_gb = %[2]d
+ }
+ analytics_specs = {
+ instance_size = %[1]q
+ node_count = 1
+ disk_size_gb = %[2]d
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ },
+ `, firstInstanceSize, diskSizeGB)
+ }
+ return fmt.Sprintf(`
+ resource "mongodbatlas_project" "cluster_project" {
+ org_id = %[1]q
+ name = %[2]q
}
- `, orgID, projectName, name, firstInstanceSize, lastInstanceSize, firstDiskIOPSAttrs, lastDiskIOPSAttrs, thirdReplicationSpec, diskSizeGB, diskSizeGBShard2))
+
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_project.cluster_project.id
+ name = %[3]q
+ backup_enabled = false
+ cluster_type = "SHARDED"
+
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
+ instance_size = %[4]q
+ node_count = 3
+ disk_size_gb = %[9]d
+ %[6]s
+ }
+ analytics_specs = {
+ instance_size = %[4]q
+ node_count = 1
+ disk_size_gb = %[9]d
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }]
+ },
+ %[8]s
+ {
+ region_configs = [{
+ electable_specs = {
+ instance_size = %[5]q
+ node_count = 3
+ disk_size_gb = %[10]d
+ %[7]s
+ }
+ analytics_specs = {
+ instance_size = %[5]q
+ node_count = 1
+ disk_size_gb = %[10]d
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }]
+ }]
}
-func checkShardedNewSchema(usePreviewProvider bool, diskSizeGB int, firstInstanceSize, lastInstanceSize string, firstDiskIops, lastDiskIops *int, isAsymmetricCluster, includeMiddleSpec bool) resource.TestCheckFunc {
+ %[11]s
+ `, orgID, projectName, name, firstInstanceSize, lastInstanceSize, firstDiskIOPSAttrs, lastDiskIOPSAttrs, thirdReplicationSpec, diskSizeGB, diskSizeGBShard2, dataSourcesConfig)
+}
+
+func checkShardedNewSchema(isTPF bool, diskSizeGB int, firstInstanceSize, lastInstanceSize string, firstDiskIops, lastDiskIops *int, isAsymmetricCluster, includeMiddleSpec bool) resource.TestCheckFunc {
amtOfReplicationSpecs := 2
if includeMiddleSpec {
amtOfReplicationSpecs = 3
@@ -2709,7 +2555,6 @@ func checkShardedNewSchema(usePreviewProvider bool, diskSizeGB int, firstInstanc
}
clusterChecks := map[string]string{
- "disk_size_gb": fmt.Sprintf("%d", diskSizeGB),
"replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
"replication_specs.0.region_configs.0.electable_specs.0.instance_size": firstInstanceSize,
fmt.Sprintf("replication_specs.%d.region_configs.0.electable_specs.0.instance_size", lastSpecIndex): lastInstanceSize,
@@ -2725,87 +2570,81 @@ func checkShardedNewSchema(usePreviewProvider bool, diskSizeGB int, firstInstanc
clusterChecks[fmt.Sprintf("replication_specs.%d.region_configs.0.electable_specs.0.disk_iops", lastSpecIndex)] = fmt.Sprintf("%d", *lastDiskIops)
}
- // plural data source checks
- pluralChecks := acc.AddAttrSetChecksPreviewProviderV2(usePreviewProvider, dataSourcePluralName, nil,
+ pluralChecks := acc.AddAttrSetChecksMigTPF(isTPF, dataSourcePluralName, nil,
[]string{"results.#", "results.0.replication_specs.#", "results.0.replication_specs.0.region_configs.#", "results.0.name", "results.0.termination_protection_enabled", "results.0.global_cluster_self_managed_sharding"}...)
- pluralChecks = acc.AddAttrChecksPrefixPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralChecks, clusterChecks, "results.0")
+ pluralChecks = acc.AddAttrChecksPrefixMigTPF(isTPF, dataSourcePluralName, pluralChecks, clusterChecks, "results.0")
+
if isAsymmetricCluster {
- pluralChecks = append(pluralChecks, checkAggr(usePreviewProvider, []string{}, map[string]string{
- "replication_specs.0.id": "",
- "replication_specs.1.id": "",
- }))
- pluralChecks = acc.AddAttrChecksPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralChecks, map[string]string{
- "results.0.replication_specs.0.id": "",
- "results.0.replication_specs.1.id": "",
- })
+ pluralChecks = append(pluralChecks, checkAggrMig(isTPF, true, nil, nil))
} else {
- pluralChecks = append(pluralChecks, checkAggr(usePreviewProvider, []string{"replication_specs.0.id", "replication_specs.1.id"}, map[string]string{}))
- pluralChecks = acc.AddAttrSetChecksPreviewProviderV2(usePreviewProvider, dataSourcePluralName, pluralChecks, "results.0.replication_specs.0.id", "results.0.replication_specs.1.id")
+ pluralChecks = acc.AddAttrSetChecksMigTPF(isTPF, dataSourcePluralName, pluralChecks)
}
- return checkAggr(usePreviewProvider,
+
+ return checkAggrMig(isTPF, true,
[]string{"replication_specs.0.external_id", "replication_specs.0.zone_id", "replication_specs.1.external_id", "replication_specs.1.zone_id"},
clusterChecks,
pluralChecks...,
)
}
-func configGeoShardedNewSchema(t *testing.T, usePreviewProvider bool, projectID, name string, includeThirdShardInFirstZone bool) string {
+func configGeoShardedNewSchema(t *testing.T, projectID, name string, includeThirdShardInFirstZone bool) string {
t.Helper()
var thirdReplicationSpec string
if includeThirdShardInFirstZone {
thirdReplicationSpec = `
- replication_specs {
+ {
zone_name = "zone n1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ },
`
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
backup_enabled = false
mongo_db_major_version = "7.0"
cluster_type = "GEOSHARDED"
- replication_specs {
+
+ replication_specs = [{
zone_name = "zone n1"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ },
%[3]s
- replication_specs {
+ {
zone_name = "zone n2"
- region_configs {
- electable_specs {
+ region_configs = [{
+ electable_specs = {
instance_size = "M20"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
- `, projectID, name, thirdReplicationSpec)) + dataSourcesTFNewSchema
+ `, projectID, name, thirdReplicationSpec) + dataSourcesConfig
}
-func checkGeoShardedNewSchema(usePreviewProvider, includeThirdShardInFirstZone bool) resource.TestCheckFunc {
+func checkGeoShardedNewSchema(includeThirdShardInFirstZone bool) resource.TestCheckFunc {
var amtOfReplicationSpecs int
if includeThirdShardInFirstZone {
amtOfReplicationSpecs = 3
@@ -2817,188 +2656,12 @@ func checkGeoShardedNewSchema(usePreviewProvider, includeThirdShardInFirstZone b
"replication_specs.0.container_id.%": "1",
"replication_specs.1.container_id.%": "1",
}
- return checkAggr(usePreviewProvider, []string{}, clusterChecks)
-}
-
-func configShardedTransitionOldToNewSchema(t *testing.T, usePreviewProvider bool, projectID, name string, useNewSchema, autoscaling bool) string {
- t.Helper()
- var numShardsStr string
- if !useNewSchema {
- numShardsStr = `num_shards = 2`
- }
- var autoscalingStr string
- if autoscaling {
- autoscalingStr = `auto_scaling {
- compute_enabled = true
- disk_gb_enabled = true
- compute_max_instance_size = "M20"
- }`
- }
- replicationSpec := fmt.Sprintf(`
- replication_specs {
- %[1]s
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- %[2]s
- }
- }
- `, numShardsStr, autoscalingStr)
-
- var replicationSpecs string
- if useNewSchema {
- replicationSpecs = fmt.Sprintf(`
- %[1]s
- %[1]s
- `, replicationSpec)
- } else {
- replicationSpecs = replicationSpec
- }
-
- var dataSources = dataSourcesTFOldSchema
- if useNewSchema {
- dataSources = dataSourcesTFNewSchema
- }
-
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- backup_enabled = false
- cluster_type = "SHARDED"
-
- %[3]s
- }
-
- `, projectID, name, replicationSpecs)) + dataSources
-}
-
-func checkShardedTransitionOldToNewSchema(usePreviewProvider, useNewSchema bool) resource.TestCheckFunc {
- var amtOfReplicationSpecs int
- if useNewSchema {
- amtOfReplicationSpecs = 2
- } else {
- amtOfReplicationSpecs = 1
- }
- var checksForNewSchema []resource.TestCheckFunc
- if useNewSchema {
- checksForNewSchema = []resource.TestCheckFunc{
- checkAggr(usePreviewProvider, []string{"replication_specs.1.id", "replication_specs.0.external_id", "replication_specs.1.external_id"},
- map[string]string{
- "replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
- "replication_specs.1.region_configs.0.electable_specs.0.instance_size": "M10",
- "replication_specs.1.region_configs.0.analytics_specs.0.instance_size": "M10",
- }),
- }
- }
-
- return checkAggr(usePreviewProvider,
- []string{"replication_specs.0.id"},
- map[string]string{
- "replication_specs.#": fmt.Sprintf("%d", amtOfReplicationSpecs),
- "replication_specs.0.region_configs.0.electable_specs.0.instance_size": "M10",
- "replication_specs.0.region_configs.0.analytics_specs.0.instance_size": "M10",
- },
- checksForNewSchema...,
- )
-}
-
-func configGeoShardedTransitionOldToNewSchema(t *testing.T, usePreviewProvider bool, projectID, name string, useNewSchema bool) string {
- t.Helper()
- var numShardsStr string
- if !useNewSchema {
- numShardsStr = `num_shards = 2`
- }
- replicationSpec := `
- replication_specs {
- %[1]s
- region_configs {
- electable_specs {
- instance_size = "M10"
- node_count = 3
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- }
- provider_name = "AWS"
- priority = 7
- region_name = %[2]q
- }
- zone_name = %[3]q
- }
- `
-
- var replicationSpecs string
- if !useNewSchema {
- replicationSpecs = fmt.Sprintf(`
- %[1]s
- %[2]s
- `, fmt.Sprintf(replicationSpec, numShardsStr, "US_EAST_1", "zone 1"), fmt.Sprintf(replicationSpec, numShardsStr, "EU_WEST_1", "zone 2"))
- } else {
- replicationSpecs = fmt.Sprintf(`
- %[1]s
- %[2]s
- %[3]s
- %[4]s
- `, fmt.Sprintf(replicationSpec, numShardsStr, "US_EAST_1", "zone 1"), fmt.Sprintf(replicationSpec, numShardsStr, "US_EAST_1", "zone 1"),
- fmt.Sprintf(replicationSpec, numShardsStr, "EU_WEST_1", "zone 2"), fmt.Sprintf(replicationSpec, numShardsStr, "EU_WEST_1", "zone 2"))
- }
-
- var dataSources = dataSourcesTFOldSchema
- if useNewSchema {
- dataSources = dataSourcesTFNewSchema
- }
-
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = %[1]q
- name = %[2]q
- backup_enabled = false
- cluster_type = "GEOSHARDED"
-
- %[3]s
- }
- `, projectID, name, replicationSpecs)) + dataSources
-}
-
-func checkGeoShardedTransitionOldToNewSchema(usePreviewProvider, useNewSchema bool) resource.TestCheckFunc {
- if useNewSchema {
- return checkAggr(usePreviewProvider,
- []string{"replication_specs.0.id", "replication_specs.1.id", "replication_specs.2.id", "replication_specs.3.id",
- "replication_specs.0.external_id", "replication_specs.1.external_id", "replication_specs.2.external_id", "replication_specs.3.external_id",
- },
- map[string]string{
- "replication_specs.#": "4",
- "replication_specs.0.zone_name": "zone 1",
- "replication_specs.1.zone_name": "zone 1",
- "replication_specs.2.zone_name": "zone 2",
- "replication_specs.3.zone_name": "zone 2",
- },
- )
- }
- return checkAggr(usePreviewProvider,
- []string{"replication_specs.0.id", "replication_specs.1.id"},
- map[string]string{
- "replication_specs.#": "2",
- "replication_specs.0.zone_name": "zone 1",
- "replication_specs.1.zone_name": "zone 2",
- },
- )
+ return checkAggr([]string{}, clusterChecks)
}
-func configReplicaSetScalingStrategyAndRedactClientLogData(t *testing.T, usePreviewProvider bool, orgID, projectName, name, replicaSetScalingStrategy string, redactClientLogData bool) string {
+func configReplicaSetScalingStrategyAndRedactClientLogData(t *testing.T, orgID, projectName, name, replicaSetScalingStrategy string, redactClientLogData bool) string {
t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_project" "cluster_project" {
org_id = %[1]q
name = %[2]q
@@ -3012,14 +2675,14 @@ func configReplicaSetScalingStrategyAndRedactClientLogData(t *testing.T, usePrev
replica_set_scaling_strategy = %[4]q
redact_client_log_data = %[5]t
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size ="M10"
node_count = 3
disk_size_gb = 10
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
disk_size_gb = 10
@@ -3027,172 +2690,127 @@ func configReplicaSetScalingStrategyAndRedactClientLogData(t *testing.T, usePrev
provider_name = "AWS"
priority = 7
region_name = "EU_WEST_1"
- }
- }
+ }]
+ }]
}
- `, orgID, projectName, name, replicaSetScalingStrategy, redactClientLogData)) + dataSourcesTFNewSchema
+ `, orgID, projectName, name, replicaSetScalingStrategy, redactClientLogData) + dataSourcesConfig
}
-func configReplicaSetScalingStrategyAndRedactClientLogDataOldSchema(t *testing.T, usePreviewProvider bool, orgID, projectName, name, replicaSetScalingStrategy string, redactClientLogData bool) string {
- t.Helper()
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
- resource "mongodbatlas_project" "cluster_project" {
- org_id = %[1]q
- name = %[2]q
- }
-
- resource "mongodbatlas_advanced_cluster" "test" {
- project_id = mongodbatlas_project.cluster_project.id
- name = %[3]q
- backup_enabled = false
- cluster_type = "SHARDED"
- replica_set_scaling_strategy = %[4]q
- redact_client_log_data = %[5]t
-
- replication_specs {
- num_shards = 2
- region_configs {
- electable_specs {
- instance_size ="M10"
- node_count = 3
- disk_size_gb = 10
- }
- analytics_specs {
- instance_size = "M10"
- node_count = 1
- disk_size_gb = 10
- }
- provider_name = "AWS"
- priority = 7
- region_name = "EU_WEST_1"
- }
- }
- }
- `, orgID, projectName, name, replicaSetScalingStrategy, redactClientLogData)) + dataSourcesTFOldSchema
-}
-
-func checkReplicaSetScalingStrategyAndRedactClientLogData(usePreviewProvider bool, replicaSetScalingStrategy string, redactClientLogData bool) resource.TestCheckFunc {
+func checkReplicaSetScalingStrategyAndRedactClientLogData(replicaSetScalingStrategy string, redactClientLogData bool) resource.TestCheckFunc {
clusterChecks := map[string]string{
"replica_set_scaling_strategy": replicaSetScalingStrategy,
"redact_client_log_data": strconv.FormatBool(redactClientLogData),
}
- // plural data source checks
- pluralChecks := acc.AddAttrSetChecksPreviewProviderV2(usePreviewProvider, dataSourcePluralName, nil,
+ pluralChecks := acc.AddAttrSetChecks(dataSourcePluralName, nil,
[]string{"results.#", "results.0.replica_set_scaling_strategy", "results.0.redact_client_log_data"}...)
- return checkAggr(usePreviewProvider,
- []string{},
- clusterChecks,
- pluralChecks...,
- )
+ return checkAggr([]string{}, clusterChecks, pluralChecks...)
}
-func configPriority(t *testing.T, usePreviewProvider bool, projectID, clusterName string, oldSchema, swapPriorities bool) string {
+func configPriority(t *testing.T, projectID, clusterName string, swapPriorities bool) string {
t.Helper()
const (
config7 = `
- region_configs {
+ {
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
node_count = 2
instance_size = "M10"
}
}
`
config6 = `
- region_configs {
+ {
provider_name = "AWS"
priority = 6
region_name = "US_WEST_2"
- electable_specs {
+ electable_specs = {
node_count = 1
instance_size = "M10"
}
}
`
)
- strType, strNumShards, strConfigs := "REPLICASET", "", config7+config6
- if oldSchema {
- strType = "SHARDED"
- strNumShards = "num_shards = 2"
- }
+	strType, strConfigs := "REPLICASET", config7 + ", " + config6
if swapPriorities {
- strConfigs = config6 + config7
+ strConfigs = config6 + ", " + config7
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = %[3]q
backup_enabled = false
- replication_specs {
+ replication_specs = [{
+ region_configs = [
%[4]s
- %[5]s
- }
+ ]
+ }]
}
- `, projectID, clusterName, strType, strNumShards, strConfigs))
+ `, projectID, clusterName, strType, strConfigs)
}
-func configBiConnectorConfig(t *testing.T, usePreviewProvider bool, projectID, name string, enabled bool) string {
+func configBiConnectorConfig(t *testing.T, projectID, name string, enabled bool) string {
t.Helper()
additionalConfig := `
- bi_connector_config {
+ bi_connector_config = {
enabled = false
}
`
if enabled {
additionalConfig = `
- bi_connector_config {
+ bi_connector_config = {
enabled = true
read_preference = "secondary"
}
`
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, usePreviewProvider, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
%[3]s
}
- `, projectID, name, additionalConfig)) + dataSourcesTFOldSchema
+ `, projectID, name, additionalConfig) + dataSourcesConfig
}
-func checkTenantBiConnectorConfig(usePreviewProvider bool, projectID, name string, enabled bool) resource.TestCheckFunc {
+func checkTenantBiConnectorConfig(projectID, name string, enabled bool) resource.TestCheckFunc {
attrsMap := map[string]string{
"project_id": projectID,
"name": name,
}
if enabled {
- attrsMap["bi_connector_config.0.enabled"] = "true"
- attrsMap["bi_connector_config.0.read_preference"] = "secondary"
+ attrsMap["bi_connector_config.enabled"] = "true"
+ attrsMap["bi_connector_config.read_preference"] = "secondary"
} else {
- attrsMap["bi_connector_config.0.enabled"] = "false"
+ attrsMap["bi_connector_config.enabled"] = "false"
}
- return checkAggr(usePreviewProvider, nil, attrsMap)
+ return checkAggr(nil, attrsMap)
}
func configFCVPinning(t *testing.T, orgID, projectName, clusterName string, pinningExpirationDate *string, mongoDBMajorVersion string) string {
@@ -3200,13 +2818,13 @@ func configFCVPinning(t *testing.T, orgID, projectName, clusterName string, pinn
var pinnedFCVAttr string
if pinningExpirationDate != nil {
pinnedFCVAttr = fmt.Sprintf(`
- pinned_fcv {
+ pinned_fcv = {
expiration_date = %q
}
`, *pinningExpirationDate)
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, fmt.Sprintf(`
+ return fmt.Sprintf(`
resource "mongodbatlas_project" "test" {
org_id = %[1]q
name = %[2]q
@@ -3222,23 +2840,23 @@ func configFCVPinning(t *testing.T, orgID, projectName, clusterName string, pinn
%[5]s
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_WEST_2"
- }
- }
+ }]
+ }]
}
- `, orgID, projectName, clusterName, mongoDBMajorVersion, pinnedFCVAttr)) + dataSourcesTFNewSchema
+ `, orgID, projectName, clusterName, mongoDBMajorVersion, pinnedFCVAttr) + dataSourcesConfig
}
-func configFlexCluster(t *testing.T, projectID, clusterName, providerName, region, zoneName string, withTags bool) string {
+func configFlexCluster(t *testing.T, projectID, clusterName, providerName, region, zoneName, timeoutConfig string, withTags bool, deleteOnCreateTimeout *bool) string {
t.Helper()
zoneNameLine := ""
if zoneName != "" {
@@ -3247,36 +2865,44 @@ func configFlexCluster(t *testing.T, projectID, clusterName, providerName, regio
tags := ""
if withTags {
tags = `
- tags {
- key = "testKey"
- value = "testValue"
+ tags = {
+ "testKey" = "testValue"
}`
}
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, fmt.Sprintf(`
+ deleteOnCreateTimeoutConfig := ""
+ if deleteOnCreateTimeout != nil {
+ deleteOnCreateTimeoutConfig = fmt.Sprintf(`
+ delete_on_create_timeout = %[1]t
+ `, *deleteOnCreateTimeout)
+ }
+ return fmt.Sprintf(`
resource "mongodbatlas_advanced_cluster" "test" {
project_id = %[1]q
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
provider_name = "FLEX"
backing_provider_name = %[3]q
region_name = %[4]q
priority = 7
- }
+ }]
%[5]s
- }
+ }]
%[6]s
+ %[7]s
termination_protection_enabled = false
+ %[8]s
}
- `, projectID, clusterName, providerName, region, zoneNameLine, tags)+dataSourcesTFOldSchema+
- strings.ReplaceAll(acc.FlexDataSource, "mongodbatlas_flex_cluster.", "mongodbatlas_advanced_cluster."))
+ `, projectID, clusterName, providerName, region, zoneNameLine, tags, timeoutConfig, deleteOnCreateTimeoutConfig) + dataSourcesConfig +
+ strings.ReplaceAll(acc.FlexDataSource, "mongodbatlas_flex_cluster.", "mongodbatlas_advanced_cluster.")
}
func TestAccClusterFlexCluster_basic(t *testing.T) {
var (
- projectID = acc.ProjectIDExecution(t)
- clusterName = acc.RandomClusterName()
+ projectID = acc.ProjectIDExecution(t)
+ clusterName = acc.RandomClusterName()
+ emptyTimeoutConfig = ""
)
resource.Test(t, resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
@@ -3284,22 +2910,72 @@ func TestAccClusterFlexCluster_basic(t *testing.T) {
CheckDestroy: acc.CheckDestroyFlexCluster,
Steps: []resource.TestStep{
{
- Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", false),
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", emptyTimeoutConfig, false, nil),
Check: checkFlexClusterConfig(projectID, clusterName, "AWS", "US_EAST_1", false, true),
},
{
- Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", true),
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", emptyTimeoutConfig, true, nil),
Check: checkFlexClusterConfig(projectID, clusterName, "AWS", "US_EAST_1", true, true),
},
acc.TestStepImportCluster(resourceName),
{
- Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_2", "", true),
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_2", "", emptyTimeoutConfig, true, nil),
ExpectError: regexp.MustCompile("flex cluster update is not supported except for tags and termination_protection_enabled fields"),
},
},
})
}
+func TestAccAdvancedCluster_createTimeoutWithDeleteOnCreateFlex(t *testing.T) {
+ var (
+ projectID = acc.ProjectIDExecution(t)
+ clusterName = acc.RandomName()
+ createTimeout = "1s"
+ deleteOnCreateTimeout = true
+ )
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", acc.TimeoutConfig(&createTimeout, nil, nil), false, &deleteOnCreateTimeout),
+ ExpectError: regexp.MustCompile("context deadline exceeded"), // with the current implementation, this is the error that is returned
+ },
+ },
+ })
+}
+
+func TestAccAdvancedCluster_updateDeleteTimeoutFlex(t *testing.T) {
+ var (
+ projectID = acc.ProjectIDExecution(t)
+ clusterName = acc.RandomName()
+ updateTimeout = "1s"
+ deleteTimeout = "1s"
+ )
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: acc.CheckDestroyFlexCluster,
+ Steps: []resource.TestStep{
+ {
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", acc.TimeoutConfig(nil, &updateTimeout, &deleteTimeout), false, nil),
+ },
+ {
+ Config: configFlexCluster(t, projectID, clusterName, "AWS", "US_EAST_1", "", acc.TimeoutConfig(nil, &updateTimeout, &deleteTimeout), true, nil),
+ ExpectError: regexp.MustCompile("timeout while waiting for state to become 'IDLE'"),
+ },
+ {
+ Config: acc.ConfigEmpty(), // triggers delete, and because the delete timeout is 1s, it times out
+ ExpectError: regexp.MustCompile("timeout while waiting for state to become 'DELETED'"),
+ },
+ {
+ // deletion of the flex cluster was triggered in the previous step but timed out, so this step is needed to avoid "Error running post-test destroy, there may be dangling resource [...] Cluster already requested to be deleted"
+ Config: acc.ConfigRemove(resourceName),
+ },
+ },
+ })
+}
+
func checkFlexClusterConfig(projectID, clusterName, providerName, region string, tagsCheck, checkPlural bool) resource.TestCheckFunc {
checks := []resource.TestCheckFunc{acc.CheckExistsFlexCluster()}
attrMapAdvCluster := map[string]string{
@@ -3314,8 +2990,8 @@ func checkFlexClusterConfig(projectID, clusterName, providerName, region string,
}
attrSetAdvCluster := []string{
"backup_enabled",
- "connection_strings.0.standard",
- "connection_strings.0.standard_srv",
+ "connection_strings.standard",
+ "connection_strings.standard_srv",
"create_date",
"mongo_db_version",
"state_name",
@@ -3340,7 +3016,7 @@ func checkFlexClusterConfig(projectID, clusterName, providerName, region string,
if tagsCheck {
attrMapFlex["tags.testKey"] = "testValue"
tagsMap := map[string]string{"key": "testKey", "value": "testValue"}
- tagsCheck := checkKeyValueBlocks(true, true, "tags", tagsMap)
+ tagsCheck := checkKeyValueBlocks(true, "tags", tagsMap)
checks = append(checks, tagsCheck)
}
checks = acc.AddAttrChecks(acc.FlexDataSourceName, checks, attrMapFlex)
@@ -3360,5 +3036,9 @@ func checkFlexClusterConfig(projectID, clusterName, providerName, region string,
checks = acc.AddAttrSetChecksPrefix(acc.FlexDataSourcePluralName, checks, attrSetFlex, "results.0")
checks = acc.AddAttrChecks(dataSourcePluralName, checks, pluralMap)
}
- return acc.CheckRSAndDSPreviewProviderV2(true, resourceName, ds, dsp, attrSetAdvCluster, attrMapAdvCluster, checks...)
+ return acc.CheckRSAndDS(resourceName, ds, dsp, attrSetAdvCluster, attrMapAdvCluster, checks...)
+}
+
+// isOptionalTrue reports whether an optional variadic bool argument was provided and set to true.
+func isOptionalTrue(arg ...bool) bool {
+	return len(arg) > 0 && arg[0]
}
diff --git a/internal/service/advancedclustertpf/schema.go b/internal/service/advancedclustertpf/schema.go
index d398832b12..df4634cc05 100644
--- a/internal/service/advancedclustertpf/schema.go
+++ b/internal/service/advancedclustertpf/schema.go
@@ -10,14 +10,13 @@ import (
"github.com/hashicorp/terraform-plugin-framework/attr"
dsschema "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
"github.com/hashicorp/terraform-plugin-framework/resource/schema"
- "github.com/hashicorp/terraform-plugin-framework/resource/schema/int64default"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/schema/validator"
"github.com/hashicorp/terraform-plugin-framework/types"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc"
)
@@ -137,7 +136,7 @@ func resourceSchema(ctx context.Context) schema.Schema {
},
"delete_on_create_timeout": schema.BoolAttribute{
Optional: true,
- MarkdownDescription: "Flag that indicates whether to delete the cluster if the cluster creation times out. Default is false.",
+ MarkdownDescription: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
},
"encryption_at_rest_provider": schema.StringAttribute{
Computed: true,
@@ -204,11 +203,6 @@ func resourceSchema(ctx context.Context) schema.Schema {
MarkdownDescription: "List of settings that configure your cluster regions. This array has one object per shard representing node configurations in each shard. For replica sets there is only one object representing node configurations.",
NestedObject: schema.NestedAttributeObject{
Attributes: map[string]schema.Attribute{
- "id": schema.StringAttribute{
- DeprecationMessage: deprecationMsgOldSchema("id"),
- Computed: true,
- MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. If you include existing shard replication configurations in the request, you must specify this parameter. If you add a new shard to an existing Cluster, you may specify this parameter. The request deletes any existing shards in the Cluster that you exclude from the request. This corresponds to Shard ID displayed in the UI.",
- },
"container_id": schema.MapAttribute{
ElementType: types.StringType,
Computed: true,
@@ -218,13 +212,6 @@ func resourceSchema(ctx context.Context) schema.Schema {
Computed: true,
MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the replication object for a shard in a Cluster. This value corresponds to Shard ID displayed in the UI.",
},
- "num_shards": schema.Int64Attribute{
- DeprecationMessage: deprecationMsgOldSchema("num_shards"),
- Default: int64default.StaticInt64(1),
- Computed: true,
- Optional: true,
- MarkdownDescription: "Number of shards up to 50 to deploy for a sharded cluster.",
- },
"region_configs": schema.ListNestedAttribute{
Required: true,
MarkdownDescription: "Hardware specifications for nodes set for a given region. Each **regionConfigs** object describes the region's priority in elections and the number and type of MongoDB nodes that MongoDB Cloud deploys to the region. Each **regionConfigs** object must have either an **analyticsSpecs** object, **electableSpecs** object, or **readOnlySpecs** object. Tenant clusters only require **electableSpecs. Dedicated** clusters can specify any of these specifications, but must have at least one **electableSpecs** object within a **replicationSpec**.\n\n**Example:**\n\nIf you set `\"replicationSpecs[n].regionConfigs[m].analyticsSpecs.instanceSize\" : \"M30\"`, set `\"replicationSpecs[n].regionConfigs[m].electableSpecs.instanceSize\" : `\"M30\"` if you have electable nodes and `\"replicationSpecs[n].regionConfigs[m].readOnlySpecs.instanceSize\" : `\"M30\"` if you have read-only nodes.",
@@ -292,12 +279,6 @@ func resourceSchema(ctx context.Context) schema.Schema {
Optional: true,
MarkdownDescription: "Flag that indicates whether to retain backup snapshots for the deleted dedicated cluster.",
},
- "disk_size_gb": schema.Float64Attribute{
- DeprecationMessage: deprecationMsgOldSchema("disk_size_gb"),
- Computed: true,
- Optional: true,
- MarkdownDescription: "Storage capacity of instance data volumes expressed in gigabytes. Increase this number to add capacity.\n\n This value must be equal for all shards and node types.\n\n This value is not configurable on M0/M2/M5 clusters.\n\n MongoDB Cloud requires this parameter if you set **replicationSpecs**.\n\n If you specify a disk size below the minimum (10 GB), this parameter defaults to the minimum disk size value. \n\n Storage charge calculations depend on whether you choose the default value or a custom value.\n\n The maximum value for disk storage cannot exceed 50 times the maximum RAM for the selected cluster. If you require more storage space, consider upgrading your cluster to a higher tier.",
- },
"advanced_configuration": AdvancedConfigurationSchema(ctx),
"pinned_fcv": schema.SingleNestedAttribute{
Optional: true,
@@ -341,29 +322,18 @@ func dataSourceSchema(ctx context.Context) dsschema.Schema {
func pluralDataSourceSchema(ctx context.Context) dsschema.Schema {
return conversion.PluralDataSourceSchemaFromResource(resourceSchema(ctx), &conversion.PluralDataSourceSchemaRequest{
- RequiredFields: []string{"project_id"},
- OverridenRootFields: map[string]dsschema.Attribute{
- "use_replication_spec_per_shard": useReplicationSpecPerShardSchema(),
- },
+ RequiredFields: []string{"project_id"},
OverridenFields: dataSourceOverridenFields(),
})
}
func dataSourceOverridenFields() map[string]dsschema.Attribute {
return map[string]dsschema.Attribute{
- "use_replication_spec_per_shard": useReplicationSpecPerShardSchema(),
"accept_data_risks_and_force_replica_set_reconfig": nil,
"delete_on_create_timeout": nil,
}
}
-func useReplicationSpecPerShardSchema() dsschema.BoolAttribute {
- return dsschema.BoolAttribute{
- Optional: true,
- MarkdownDescription: "Set this field to true to allow the data source to use the latest schema representing each shard with an individual replication_specs object. This enables representing clusters with independent shard scaling.",
- }
-}
-
func AutoScalingSchema() schema.SingleNestedAttribute {
return schema.SingleNestedAttribute{
Computed: true,
@@ -502,18 +472,6 @@ func AdvancedConfigurationSchema(ctx context.Context) schema.SingleNestedAttribu
Optional: true,
MarkdownDescription: "Lifetime, in seconds, of multi-document transactions. Atlas considers the transactions that exceed this limit as expired and so aborts them through a periodic cleanup process.",
},
- "default_read_concern": schema.StringAttribute{
- DeprecationMessage: deprecationMsgOldSchema("default_read_concern"),
- Computed: true,
- Optional: true,
- MarkdownDescription: "Default level of acknowledgment requested from MongoDB for read operations set for this cluster.",
- },
- "fail_index_key_too_long": schema.BoolAttribute{
- DeprecationMessage: deprecationMsgOldSchema("fail_index_key_too_long"),
- Computed: true,
- Optional: true,
- MarkdownDescription: "When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.",
- },
"default_max_time_ms": schema.Int64Attribute{
Computed: true,
Optional: true,
@@ -538,7 +496,6 @@ func AdvancedConfigurationSchema(ctx context.Context) schema.SingleNestedAttribu
}
type TFModel struct {
- DiskSizeGB types.Float64 `tfsdk:"disk_size_gb"`
Labels types.Map `tfsdk:"labels"`
ReplicationSpecs types.List `tfsdk:"replication_specs"`
Tags types.Map `tfsdk:"tags"`
@@ -572,44 +529,41 @@ type TFModel struct {
DeleteOnCreateTimeout types.Bool `tfsdk:"delete_on_create_timeout"`
}
-// TFModelDS differs from TFModel: removes timeouts, accept_data_risks_and_force_replica_set_reconfig; adds use_replication_spec_per_shard.
+// TFModelDS differs from TFModel: removes timeouts, accept_data_risks_and_force_replica_set_reconfig.
type TFModelDS struct {
- DiskSizeGB types.Float64 `tfsdk:"disk_size_gb"`
- Labels types.Map `tfsdk:"labels"`
- ReplicationSpecs types.List `tfsdk:"replication_specs"`
- Tags types.Map `tfsdk:"tags"`
- ReplicaSetScalingStrategy types.String `tfsdk:"replica_set_scaling_strategy"`
- Name types.String `tfsdk:"name"`
- AdvancedConfiguration types.Object `tfsdk:"advanced_configuration"`
- BiConnectorConfig types.Object `tfsdk:"bi_connector_config"`
- RootCertType types.String `tfsdk:"root_cert_type"`
- ClusterType types.String `tfsdk:"cluster_type"`
- MongoDBMajorVersion types.String `tfsdk:"mongo_db_major_version"`
- ConfigServerType types.String `tfsdk:"config_server_type"`
- VersionReleaseSystem types.String `tfsdk:"version_release_system"`
- ConnectionStrings types.Object `tfsdk:"connection_strings"`
- StateName types.String `tfsdk:"state_name"`
- MongoDBVersion types.String `tfsdk:"mongo_db_version"`
- CreateDate types.String `tfsdk:"create_date"`
- EncryptionAtRestProvider types.String `tfsdk:"encryption_at_rest_provider"`
- ProjectID types.String `tfsdk:"project_id"`
- ClusterID types.String `tfsdk:"cluster_id"`
- ConfigServerManagementMode types.String `tfsdk:"config_server_management_mode"`
- PinnedFCV types.Object `tfsdk:"pinned_fcv"`
- UseReplicationSpecPerShard types.Bool `tfsdk:"use_replication_spec_per_shard"`
- RedactClientLogData types.Bool `tfsdk:"redact_client_log_data"`
- GlobalClusterSelfManagedSharding types.Bool `tfsdk:"global_cluster_self_managed_sharding"`
- BackupEnabled types.Bool `tfsdk:"backup_enabled"`
- RetainBackupsEnabled types.Bool `tfsdk:"retain_backups_enabled"`
- Paused types.Bool `tfsdk:"paused"`
- TerminationProtectionEnabled types.Bool `tfsdk:"termination_protection_enabled"`
- PitEnabled types.Bool `tfsdk:"pit_enabled"`
+ Labels types.Map `tfsdk:"labels"`
+ ReplicationSpecs types.List `tfsdk:"replication_specs"`
+ Tags types.Map `tfsdk:"tags"`
+ ReplicaSetScalingStrategy types.String `tfsdk:"replica_set_scaling_strategy"`
+ Name types.String `tfsdk:"name"`
+ AdvancedConfiguration types.Object `tfsdk:"advanced_configuration"`
+ BiConnectorConfig types.Object `tfsdk:"bi_connector_config"`
+ RootCertType types.String `tfsdk:"root_cert_type"`
+ ClusterType types.String `tfsdk:"cluster_type"`
+ MongoDBMajorVersion types.String `tfsdk:"mongo_db_major_version"`
+ ConfigServerType types.String `tfsdk:"config_server_type"`
+ VersionReleaseSystem types.String `tfsdk:"version_release_system"`
+ ConnectionStrings types.Object `tfsdk:"connection_strings"`
+ StateName types.String `tfsdk:"state_name"`
+ MongoDBVersion types.String `tfsdk:"mongo_db_version"`
+ CreateDate types.String `tfsdk:"create_date"`
+ EncryptionAtRestProvider types.String `tfsdk:"encryption_at_rest_provider"`
+ ProjectID types.String `tfsdk:"project_id"`
+ ClusterID types.String `tfsdk:"cluster_id"`
+ ConfigServerManagementMode types.String `tfsdk:"config_server_management_mode"`
+ PinnedFCV types.Object `tfsdk:"pinned_fcv"`
+ RedactClientLogData types.Bool `tfsdk:"redact_client_log_data"`
+ GlobalClusterSelfManagedSharding types.Bool `tfsdk:"global_cluster_self_managed_sharding"`
+ BackupEnabled types.Bool `tfsdk:"backup_enabled"`
+ RetainBackupsEnabled types.Bool `tfsdk:"retain_backups_enabled"`
+ Paused types.Bool `tfsdk:"paused"`
+ TerminationProtectionEnabled types.Bool `tfsdk:"termination_protection_enabled"`
+ PitEnabled types.Bool `tfsdk:"pit_enabled"`
}
type TFModelPluralDS struct {
- ProjectID types.String `tfsdk:"project_id"`
- Results []*TFModelDS `tfsdk:"results"`
- UseReplicationSpecPerShard types.Bool `tfsdk:"use_replication_spec_per_shard"`
+ ProjectID types.String `tfsdk:"project_id"`
+ Results []*TFModelDS `tfsdk:"results"`
}
type TFBiConnectorModel struct {
@@ -669,18 +623,14 @@ var EndpointsObjType = types.ObjectType{AttrTypes: map[string]attr.Type{
type TFReplicationSpecsModel struct {
RegionConfigs types.List `tfsdk:"region_configs"`
ContainerId types.Map `tfsdk:"container_id"`
- Id types.String `tfsdk:"id"`
ExternalId types.String `tfsdk:"external_id"`
ZoneId types.String `tfsdk:"zone_id"`
ZoneName types.String `tfsdk:"zone_name"`
- NumShards types.Int64 `tfsdk:"num_shards"`
}
var ReplicationSpecsObjType = types.ObjectType{AttrTypes: map[string]attr.Type{
- "id": types.StringType,
"container_id": types.MapType{ElemType: types.StringType},
"external_id": types.StringType,
- "num_shards": types.Int64Type,
"region_configs": types.ListType{ElemType: RegionConfigsObjType},
"zone_id": types.StringType,
"zone_name": types.StringType,
@@ -747,7 +697,6 @@ type TFAdvancedConfigurationModel struct {
CustomOpensslCipherConfigTls12 types.Set `tfsdk:"custom_openssl_cipher_config_tls12"`
MinimumEnabledTlsProtocol types.String `tfsdk:"minimum_enabled_tls_protocol"`
DefaultWriteConcern types.String `tfsdk:"default_write_concern"`
- DefaultReadConcern types.String `tfsdk:"default_read_concern"`
TlsCipherConfigMode types.String `tfsdk:"tls_cipher_config_mode"`
SampleRefreshIntervalBiconnector types.Int64 `tfsdk:"sample_refresh_interval_bi_connector"`
SampleSizeBiconnector types.Int64 `tfsdk:"sample_size_bi_connector"`
@@ -757,14 +706,11 @@ type TFAdvancedConfigurationModel struct {
ChangeStreamOptionsPreAndPostImagesExpireAfterSeconds types.Int64 `tfsdk:"change_stream_options_pre_and_post_images_expire_after_seconds"`
JavascriptEnabled types.Bool `tfsdk:"javascript_enabled"`
NoTableScan types.Bool `tfsdk:"no_table_scan"`
- FailIndexKeyTooLong types.Bool `tfsdk:"fail_index_key_too_long"`
}
var AdvancedConfigurationObjType = types.ObjectType{AttrTypes: map[string]attr.Type{
"change_stream_options_pre_and_post_images_expire_after_seconds": types.Int64Type,
- "default_read_concern": types.StringType,
"default_write_concern": types.StringType,
- "fail_index_key_too_long": types.BoolType,
"javascript_enabled": types.BoolType,
"minimum_enabled_tls_protocol": types.StringType,
"no_table_scan": types.BoolType,
diff --git a/internal/service/advancedclustertpf/schema_test.go b/internal/service/advancedclustertpf/schema_test.go
index b60d6fc255..d81829575c 100644
--- a/internal/service/advancedclustertpf/schema_test.go
+++ b/internal/service/advancedclustertpf/schema_test.go
@@ -23,7 +23,7 @@ func TestAccAdvancedCluster_ValidationErrors(t *testing.T) {
ExpectError: regexp.MustCompile("Missing Configuration for Required Attribute"),
},
{
- Config: acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, invalidRegionConfigsPriorities),
+ Config: invalidRegionConfigsPriorities,
ExpectError: regexp.MustCompile("priority values in region_configs must be in descending order"),
},
{
@@ -54,10 +54,6 @@ func TestAdvancedCluster_PlanModifierErrors(t *testing.T) {
Config: configBasic(projectID, clusterName, "advanced_configuration = { default_max_time_ms = 100 }\nmongo_db_major_version=\"6\""),
ExpectError: regexp.MustCompile("`advanced_configuration.default_max_time_ms` can only be configured if the mongo_db_major_version is 8.0 or higher"),
},
- {
- Config: configBasic(projectID, clusterName, "advanced_configuration = { fail_index_key_too_long = true }"),
- ExpectError: regexp.MustCompile("`advanced_configuration.fail_index_key_too_long` can only be configured if the mongo_db_major_version is 4.4 or lower"),
- },
{
Config: configBasic(projectID, clusterName, "accept_data_risks_and_force_replica_set_reconfig = \"2006-01-02T15:04:05Z\""),
ExpectError: regexp.MustCompile("Update only attribute set on create: accept_data_risks_and_force_replica_set_reconfig"),
@@ -89,11 +85,6 @@ func TestAdvancedCluster_PlanModifierValid(t *testing.T) {
PlanOnly: true,
ExpectNonEmptyPlan: true,
},
- {
- Config: configBasic(projectID, clusterName, "advanced_configuration = { fail_index_key_too_long = true }\nmongo_db_major_version=\"4\""),
- PlanOnly: true,
- ExpectNonEmptyPlan: true,
- },
},
})
}
@@ -136,26 +127,26 @@ resource "mongodbatlas_advanced_cluster" "test" {
cluster_type = "REPLICASET"
backup_enabled = false
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
provider_name = "AWS"
priority = 6
region_name = "US_WEST_2"
- electable_specs {
+ electable_specs = {
node_count = 1
instance_size = "M10"
}
- }
- region_configs {
+ },
+ {
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
node_count = 2
instance_size = "M10"
}
- }
- }
+ }]
+ }]
}
`
var nullRegionConfigs = `
diff --git a/internal/service/advancedclustertpf/testdata/ClusterTwoRepSpecsWithAutoScalingAndSpecs/main_node_count_unknown.tf b/internal/service/advancedclustertpf/testdata/ClusterTwoRepSpecsWithAutoScalingAndSpecs/main_node_count_unknown.tf
new file mode 100644
index 0000000000..8e794f38a9
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/ClusterTwoRepSpecsWithAutoScalingAndSpecs/main_node_count_unknown.tf
@@ -0,0 +1,70 @@
+resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "111111111111111111111111"
+ name = "mocked-cluster"
+ cluster_type = "GEOSHARDED"
+
+
+ replication_specs = [{
+ region_configs = [{
+ analytics_auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 4 # changed from 5
+ }
+ priority = 7
+ provider_name = "AWS"
+ # read_only_specs = { # removed read_only_specs block
+ # instance_size = "M10"
+ # node_count = 2
+ # }
+ region_name = "US_EAST_1"
+ }]
+ zone_name = "Zone 1"
+ }, {
+ region_configs = [{
+ analytics_auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 4
+ }
+ auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 2"
+ }]
+}
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_removeBlocksFromConfig.yaml b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_removeBlocksFromConfig.yaml
new file mode 100644
index 0000000000..a74dc6279e
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_removeBlocksFromConfig.yaml
@@ -0,0 +1,312 @@
+variables:
+ clusterName: test-acc-tf-c-7398840803408065070
+ groupId: 67d01a24f610961835455eb1
+steps:
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "67d01a24f610961835455eb1"
+ name = "test-acc-tf-c-7398840803408065070"
+ cluster_type = "GEOSHARDED"
+
+
+ replication_specs = [{
+ region_configs = [{
+ analytics_auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 5
+ }
+ priority = 7
+ provider_name = "AWS"
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ region_name = "US_EAST_1"
+ }]
+ zone_name = "Zone 1"
+ }, {
+ region_configs = [{
+ analytics_auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 4
+ }
+ auto_scaling = {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ read_only_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 2"
+ }]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"GEOSHARDED\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"Zone 1\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n 
\"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n 
\"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"GEOSHARDED\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"Zone 1\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n 
\"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n 
\"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 2
+ status: 200
+ duplicate_responses: 24
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n 
\"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n 
\"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 27
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 28
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskSizeGB\": 10,\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf755\",\n \"numShards\": 1,\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf757\",\n \"numShards\": 1,\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": 
\"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 29
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"67d01a2d01d3561b07caf76c\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0e0706bcd69b8855a\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"67d01a2d01d3561b07caf76d\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_WEST_2\",\n \"vpcId\": \"vpc-04a84758be9599707\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 30
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultReadConcern\": null,\n \"defaultWriteConcern\": null,\n \"failIndexKeyTooLong\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 31
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "67d01a24f610961835455eb1"
+ name = "test-acc-tf-c-7398840803408065070"
+ cluster_type = "GEOSHARDED"
+
+
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 5
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ }]
+ zone_name = "Zone 1"
+ }, {
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M20"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 2"
+ }]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"Zone 1\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneName\": \"Zone 2\"\n }\n ]\n}"
+ responses:
+ - response_index: 42
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 37
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 43
+ status: 200
+ duplicate_responses: 37
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 81
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 38
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskSizeGB\": 10,\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf755\",\n \"numShards\": 1,\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf757\",\n \"numShards\": 1,\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": 
\"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 82
+ status: 400
+ duplicate_responses: 1
+ text: "{\n \"detail\": \"Asymmetric sharded cluster is not supported by the current API version. Please use the latest API instead. Documentation for the latest API is available at https://docs.atlas.mongodb.com/reference/api/clusters-advanced/.\",\n \"error\": 400,\n \"errorCode\": \"ASYMMETRIC_SHARD_UNSUPPORTED\",\n \"parameters\": [],\n \"reason\": \"Bad Request\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 39
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"67d01a2d01d3561b07caf76c\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0e0706bcd69b8855a\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"67d01a2d01d3561b07caf76d\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_WEST_2\",\n \"vpcId\": \"vpc-04a84758be9599707\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 40
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultReadConcern\": null,\n \"defaultWriteConcern\": null,\n \"failIndexKeyTooLong\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 41
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"Zone 1\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneName\": \"Zone 2\"\n }\n ]\n}"
+ responses:
+ - response_index: 42
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - config: ""
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 94
+ status: 202
+ text: "{}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 89
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 95
+ status: 200
+ duplicate_responses: 6
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"GEOSHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-739884080-shard-00-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-00-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-00.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-01.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-02.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-03.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-04.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-05.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-06.gwbdm.mongodb-dev.net:27016,test-acc-tf-c-739884080-shard-01-07.gwbdm.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-739884080.gwbdm.mongodb-dev.net\"\n },\n \"createDate\": \"2025-03-11T11:10:37Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"67d01a2d01d3561b07caf76e\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.5\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"67d01a2c01d3561b07caf756\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 5\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf753\",\n \"zoneName\": \"Zone 1\"\n },\n {\n \"id\": \"67d01a2c01d3561b07caf758\",\n \"regionConfigs\": [\n {\n \"analyticsAutoScaling\": {\n \"compute\": {\n \"enabled\": 
true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 4\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"maxInstanceSize\": \"M30\",\n \"minInstanceSize\": \"M10\",\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": true\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"regionName\": \"US_WEST_2\"\n }\n ],\n \"zoneId\": \"67d01a2c01d3561b07caf754\",\n \"zoneName\": \"Zone 2\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 102
+ status: 404
+ text: "{\n \"detail\": \"No cluster named {clusterName} exists in group {groupId}.\",\n \"error\": 404,\n \"errorCode\": \"CLUSTER_NOT_FOUND\",\n \"parameters\": [\n \"{clusterName}\",\n \"{groupId}\"\n ],\n \"reason\": \"Not Found\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 90
+ status: 400
+ text: "{\n \"detail\": \"Asymmetric sharded cluster is not supported by the current API version. Please use the latest API instead. Documentation for the latest API is available at https://docs.atlas.mongodb.com/reference/api/clusters-advanced/.\",\n \"error\": 400,\n \"errorCode\": \"ASYMMETRIC_SHARD_UNSUPPORTED\",\n \"parameters\": [],\n \"reason\": \"Bad Request\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 91
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"67d01a2d01d3561b07caf76c\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0e0706bcd69b8855a\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"67d01a2d01d3561b07caf76d\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_WEST_2\",\n \"vpcId\": \"vpc-04a84758be9599707\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 92
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultReadConcern\": null,\n \"defaultWriteConcern\": null,\n \"failIndexKeyTooLong\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 93
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 94
+ status: 202
+ text: "{}"
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate.yaml b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate.yaml
new file mode 100644
index 0000000000..06c6faf4b9
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate.yaml
@@ -0,0 +1,595 @@
+variables:
+ clusterName: test-acc-tf-c-2263776537053663235
+ clusterName2: test-acc-tf-c-8473169683272584631
+ clusterName3: test-acc-tf-c-1834989662525036537
+ clusterName4: test-acc-tf-c-5102116492787641106
+ groupId: 68b470dbc210fb29e6fd0b18
+steps:
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+
+ timeouts = {
+ create = "6000s"
+ }
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-2263776537053663235"
+ cluster_type = "REPLICASET"
+ replication_specs = [{
+ region_configs = [{
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ auto_scaling = {
+ compute_scale_down_enabled = false
+ compute_enabled = false
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ node_count = 3
+ instance_size = "M10"
+ disk_size_gb = 10
+ }
+ }]
+ }]
+
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"REPLICASET\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": 
\"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"REPLICASET\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskSizeGB\": 10,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": 
\"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 2
+ status: 200
+ duplicate_responses: 18
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": 
\"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 21
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 22
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 23
+ status: 200
+ duplicate_responses: 16
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 24
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 27
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n 
\"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName 
managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": 
\"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 
7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 30
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 31
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 33
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 35
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 38
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+
+ timeouts = {
+ create = "6000s"
+ }
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-2263776537053663235"
+ cluster_type = "REPLICASET"
+ replication_specs = [{
+ region_configs = [{
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ auto_scaling = {
+ compute_scale_down_enabled = false
+ compute_enabled = false
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ node_count = 3
+ instance_size = "M10"
+ disk_size_gb = 10
+ }
+ }]
+ }]
+
+ backup_enabled = true
+ bi_connector_config = {
+ enabled = true
+ }
+ labels = {
+ "env" = "test"
+ }
+ tags = {
+ "env" = "test"
+ }
+ pit_enabled = true
+ redact_client_log_data = true
+ replica_set_scaling_strategy = "NODE_TYPE"
+ root_cert_type = "ISRGROOTX1"
+ version_release_system = "CONTINUOUS"
+
+ advanced_configuration = {
+ change_stream_options_pre_and_post_images_expire_after_seconds = 100
+ default_write_concern = "majority"
+ javascript_enabled = true
+ minimum_enabled_tls_protocol = "TLS1_2" # This cluster does not support TLS1.0 or TLS1.1. If you must use old TLS versions contact MongoDB support
+ no_table_scan = true
+ sample_refresh_interval_bi_connector = 310
+ sample_size_bi_connector = 110
+ transaction_lifetime_limit_seconds = 300
+ custom_openssl_cipher_config_tls12 = ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"]
+ tls_cipher_config_mode = "CUSTOM"
+ default_max_time_ms = 65
+ }
+
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ responses:
+ - response_index: 74
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: PATCH
+ version: '2024-08-05'
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"noTableScan\": true,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ responses:
+ - response_index: 82
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 71
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 75
+ status: 200
+ duplicate_responses: 4
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - response_index: 80
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - response_index: 83
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - response_index: 85
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 72
+ status: 200
+ duplicate_responses: 17
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 73
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - response_index: 91
+ status: 200
+ duplicate_responses: 6
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ responses:
+ - response_index: 74
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: PATCH
+ version: '2024-08-05'
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"noTableScan\": true,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ responses:
+ - response_index: 82
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 86
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": 
[\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n 
\"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": 
\"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 90
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 92
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 94
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 96
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 99
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+
+ timeouts = {
+ create = "6000s"
+ }
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-2263776537053663235"
+ cluster_type = "REPLICASET"
+ replication_specs = [{
+ region_configs = [{
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ auto_scaling = {
+ compute_scale_down_enabled = false
+ compute_enabled = false
+ disk_gb_enabled = true
+ }
+ electable_specs = {
+ node_count = 3
+ instance_size = "M10"
+ disk_size_gb = 10
+ }
+ }]
+ }]
+
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"labels\": [],\n \"tags\": []\n}"
+ responses:
+ - response_index: 135
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n 
\"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 132
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n 
{\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - response_index: 136
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n 
\"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 133
+ status: 200
+ duplicate_responses: 17
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 134
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"labels\": [],\n \"tags\": []\n}"
+ responses:
+ - response_index: 135
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n 
\"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 139
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": 
[\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n 
\"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n 
}\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 143
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 145
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 147
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 149
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 152
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: ""
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 202
+ status: 202
+ text: "{}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 185
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - response_index: 203
+ status: 200
+ duplicate_responses: 4
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n}"
+ - response_index: 208
+ status: 404
+ text: "{\n \"detail\": \"No cluster named {clusterName} exists in group {groupId}.\",\n \"error\": 404,\n \"errorCode\": \"CLUSTER_NOT_FOUND\",\n \"parameters\": [\n \"{clusterName}\",\n \"{groupId}\"\n ],\n \"reason\": \"Not Found\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 186
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 187
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 190
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 193
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 194
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 196
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 198
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 201
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 202
+ status: 202
+ text: "{}"
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
new file mode 100644
index 0000000000..9c445eafcf
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
@@ -0,0 +1,32 @@
+{
+ "clusterType": "REPLICASET",
+ "labels": [],
+ "name": "{clusterName}",
+ "replicationSpecs": [
+ {
+ "regionConfigs": [
+ {
+ "autoScaling": {
+ "compute": {
+ "enabled": false,
+ "scaleDownEnabled": false
+ },
+ "diskGB": {
+ "enabled": true
+ }
+ },
+ "electableSpecs": {
+ "diskSizeGB": 10,
+ "instanceSize": "M10",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "regionName": "US_EAST_1"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ }
+ ],
+ "tags": []
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
new file mode 100644
index 0000000000..ab84e26118
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
@@ -0,0 +1,30 @@
+{
+ "advancedConfiguration": {
+ "customOpensslCipherConfigTls12": [
+ "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
+ ],
+ "minimumEnabledTlsProtocol": "TLS1_2",
+ "tlsCipherConfigMode": "CUSTOM"
+ },
+ "backupEnabled": true,
+ "biConnector": {
+ "enabled": true,
+ "readPreference": "secondary"
+ },
+ "labels": [
+ {
+ "key": "env",
+ "value": "test"
+ }
+ ],
+ "pitEnabled": true,
+ "redactClientLogData": true,
+ "replicaSetScalingStrategy": "NODE_TYPE",
+ "tags": [
+ {
+ "key": "env",
+ "value": "test"
+ }
+ ],
+ "versionReleaseSystem": "CONTINUOUS"
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_02_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_processArgs_2024-08-05.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_02_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_processArgs_2024-08-05.json
new file mode 100644
index 0000000000..3095de5536
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_02_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_processArgs_2024-08-05.json
@@ -0,0 +1,9 @@
+{
+ "changeStreamOptionsPreAndPostImagesExpireAfterSeconds": 100,
+ "defaultMaxTimeMS": 65,
+ "defaultWriteConcern": "majority",
+ "noTableScan": true,
+ "sampleRefreshIntervalBIConnector": 310,
+ "sampleSizeBIConnector": 110,
+ "transactionLifetimeLimitSeconds": 300
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_03_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_processArgs_2023-01-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_03_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_processArgs_2023-01-01.json
new file mode 100644
index 0000000000..74c44cf2ce
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/02_03_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_processArgs_2023-01-01.json
@@ -0,0 +1,3 @@
+{
+ "defaultReadConcern": "available"
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/03_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/03_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
new file mode 100644
index 0000000000..427e29bab1
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/03_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
@@ -0,0 +1,4 @@
+{
+ "labels": [],
+ "tags": []
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/04_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_replicasetAdvConfigUpdate/04_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling.yaml b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling.yaml
new file mode 100644
index 0000000000..d2e3d0cb92
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling.yaml
@@ -0,0 +1,513 @@
+variables:
+ clusterName: test-acc-tf-c-1834989662525036537
+ clusterName2: test-acc-tf-c-8473169683272584631
+ clusterName3: test-acc-tf-c-5102116492787641106
+ clusterName4: test-acc-tf-c-2263776537053663235
+ groupId: 68b470dbc210fb29e6fd0b18
+steps:
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-1834989662525036537"
+ cluster_type = "SHARDED"
+
+ replication_specs = [{ # shard 1
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 2000
+ node_count = 3
+ ebs_volume_type = "PROVISIONED"
+ }
+
+
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ },
+ { # shard 2
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ ebs_volume_type = "PROVISIONED"
+ disk_iops = 1000
+ node_count = 3
+ }
+
+
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ }]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"SHARDED\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"SHARDED\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 
2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 2
+ status: 200
+ duplicate_responses: 21
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 
2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 24
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n 
\"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": 
[],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 25
+ status: 200
+ duplicate_responses: 16
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 26
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 28
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n 
\"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n 
\"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": 
{},\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n 
\"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n 
\"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 32
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 33
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 37
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 39
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 40
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-1834989662525036537"
+ cluster_type = "SHARDED"
+
+ replication_specs = [{ # shard 1
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 2000
+ node_count = 3
+ ebs_volume_type = "PROVISIONED"
+ }
+
+ auto_scaling = {
+ disk_gb_enabled = true
+ }
+
+ analytics_specs = {
+ instance_size = "M30"
+ node_count = 1
+ ebs_volume_type = "PROVISIONED"
+ disk_iops = 2000
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ },
+ { # shard 2
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ ebs_volume_type = "PROVISIONED"
+ disk_iops = 1000
+ node_count = 3
+ }
+
+ auto_scaling = {
+ disk_gb_enabled = true
+ }
+
+ analytics_specs = {
+ instance_size = "M30"
+ node_count = 1
+ ebs_volume_type = "PROVISIONED"
+ disk_iops = 1000
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ }]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {},\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {},\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ]\n}"
+ responses:
+ - response_index: 76
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 73
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 77
+ status: 200
+ duplicate_responses: 19
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 97
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 74
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - response_index: 98
+ status: 200
+ duplicate_responses: 10
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": false,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": null\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 75
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {},\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {},\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ]\n}"
+ responses:
+ - response_index: 76
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 101
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n 
\"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 104
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 106
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 109
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-1834989662525036537"
+ cluster_type = "SHARDED"
+
+ replication_specs = [{ # shard 1
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ disk_iops = 2000
+ node_count = 3
+ ebs_volume_type = "PROVISIONED"
+ }
+
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ },
+ { # shard 2
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M30"
+ ebs_volume_type = "PROVISIONED"
+ disk_iops = 1000
+ node_count = 3
+ }
+
+ provider_name = "AWS"
+ priority = 7
+ region_name = "EU_WEST_1"
+ }]
+ }]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests: []
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 134
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 135
+ status: 200
+ duplicate_responses: 10
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": false,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": null\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 136
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 138
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n 
\"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 142
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 143
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 146
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: ""
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 184
+ status: 202
+ text: "{}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 171
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 185
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 191
+ status: 404
+ text: "{\n \"detail\": \"No cluster named {clusterName} exists in group {groupId}.\",\n \"error\": 404,\n \"errorCode\": \"CLUSTER_NOT_FOUND\",\n \"parameters\": [\n \"{clusterName}\",\n \"{groupId}\"\n ],\n \"reason\": \"Not Found\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 172
+ status: 200
+ duplicate_responses: 3
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": false,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": null\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 173
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 175
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-03.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n 
\"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 179
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 180
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 183
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 184
+ status: 202
+ text: "{}"
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
new file mode 100644
index 0000000000..6d1e8b6208
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
@@ -0,0 +1,40 @@
+{
+ "clusterType": "SHARDED",
+ "labels": [],
+ "name": "{clusterName}",
+ "replicationSpecs": [
+ {
+ "regionConfigs": [
+ {
+ "electableSpecs": {
+ "diskIOPS": 2000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "regionName": "EU_WEST_1"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ },
+ {
+ "regionConfigs": [
+ {
+ "electableSpecs": {
+ "diskIOPS": 1000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "regionName": "EU_WEST_1"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ }
+ ],
+ "tags": []
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
new file mode 100644
index 0000000000..bd5660286d
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
@@ -0,0 +1,72 @@
+{
+ "replicationSpecs": [
+ {
+ "regionConfigs": [
+ {
+ "analyticsSpecs": {
+ "diskIOPS": 2000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 1
+ },
+ "autoScaling": {
+ "compute": {},
+ "diskGB": {
+ "enabled": true
+ }
+ },
+ "electableSpecs": {
+ "diskIOPS": 2000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "readOnlySpecs": {
+ "diskIOPS": 2000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 0
+ },
+ "regionName": "EU_WEST_1"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ },
+ {
+ "regionConfigs": [
+ {
+ "analyticsSpecs": {
+ "diskIOPS": 1000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 1
+ },
+ "autoScaling": {
+ "compute": {},
+ "diskGB": {
+ "enabled": true
+ }
+ },
+ "electableSpecs": {
+ "diskIOPS": 1000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "readOnlySpecs": {
+ "diskIOPS": 1000,
+ "ebsVolumeType": "PROVISIONED",
+ "instanceSize": "M30",
+ "nodeCount": 0
+ },
+ "regionName": "EU_WEST_1"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/03_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/03_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/04_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_shardedAddAnalyticsAndAutoScaling/04_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded.yaml b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded.yaml
new file mode 100644
index 0000000000..558058ac71
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded.yaml
@@ -0,0 +1,384 @@
+variables:
+ clusterName: test-acc-tf-c-3107751097158932430
+ groupId: 68b48f4e7af6b0372e8d18f1
+steps:
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b48f4e7af6b0372e8d18f1"
+ name = "test-acc-tf-c-3107751097158932430"
+ cluster_type = "SHARDED"
+
+
+ mongo_db_major_version = "8"
+ config_server_management_mode = "FIXED_TO_DEDICATED"
+
+
+
+ replication_specs = [
+
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ },
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ }
+ ]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"labels\": [],\n \"mongoDBMajorVersion\": \"8.0\",\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n 
\"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n 
\"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"labels\": [],\n \"mongoDBMajorVersion\": \"8.0\",\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n 
\"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n 
\"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 2
+ status: 200
+ duplicate_responses: 26
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n 
\"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n 
\"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 29
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 30
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b48f587af6b0372e8d1aaf\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0c78f3e2c74dbf751\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 31
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b48f587af6b0372e8d1aae\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b48f587af6b0372e8d1aae_r8gkw81j\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 32
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 34
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": 
[],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 
6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": 
[],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 41
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b48f4e7af6b0372e8d18f1"
+ name = "test-acc-tf-c-3107751097158932430"
+ cluster_type = "SHARDED"
+
+
+ mongo_db_major_version = "8"
+ config_server_management_mode = "ATLAS_MANAGED"
+
+
+
+ replication_specs = [
+
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M20"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ },
+ {
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M20"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ }
+ ]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ]\n}"
+ responses:
+ - response_index: 69
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 65
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 70
+ status: 200
+ duplicate_responses: 14
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 85
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 66
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b48f587af6b0372e8d1aaf\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0c78f3e2c74dbf751\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 67
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b48f587af6b0372e8d1aae\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b48f587af6b0372e8d1aae_r8gkw81j\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 68
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ]\n}"
+ responses:
+ - response_index: 69
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 90
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n 
\"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n 
\"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n 
\"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 97
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: ""
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 134
+ status: 202
+ text: "{}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 121
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 135
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 143
+ status: 404
+ duplicate_responses: 2
+ text: "{\n \"detail\": \"No cluster named {clusterName} exists in group {groupId}.\",\n \"error\": 404,\n \"errorCode\": \"CLUSTER_NOT_FOUND\",\n \"parameters\": [\n \"{clusterName}\",\n \"{groupId}\"\n ],\n \"reason\": \"Not Found\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 122
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b48f587af6b0372e8d1aaf\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0c78f3e2c74dbf751\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 123
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b48f587af6b0372e8d1aae\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b48f587af6b0372e8d1aae_r8gkw81j\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 124
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 126
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-310775109-shard-00-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-00-05.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-00.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-01.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-02.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-03.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-04.tnl8ax.mongodb-dev.net:27016,test-acc-tf-c-310775109-shard-01-05.tnl8ax.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-310775109.tnl8ax.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T18:07:20Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b48f587af6b0372e8d1ab0\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n 
\"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b48f587af6b0372e8d1a9a\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n 
\"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b48f587af6b0372e8d1a9c\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b48f587af6b0372e8d1a98\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n 
\"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 133
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 134
+ status: 202
+ text: "{}"
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
new file mode 100644
index 0000000000..27a1e0e343
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
@@ -0,0 +1,64 @@
+{
+ "clusterType": "SHARDED",
+ "configServerManagementMode": "FIXED_TO_DEDICATED",
+ "labels": [],
+ "mongoDBMajorVersion": "8.0",
+ "name": "{clusterName}",
+ "replicationSpecs": [
+ {
+ "regionConfigs": [
+ {
+ "analyticsSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 1
+ },
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "regionName": "EU_WEST_1"
+ },
+ {
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 2
+ },
+ "priority": 6,
+ "providerName": "AZURE",
+ "regionName": "US_EAST_2"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ },
+ {
+ "regionConfigs": [
+ {
+ "analyticsSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 1
+ },
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "regionName": "EU_WEST_1"
+ },
+ {
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 2
+ },
+ "priority": 6,
+ "providerName": "AZURE",
+ "regionName": "US_EAST_2"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ }
+ ],
+ "tags": []
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
new file mode 100644
index 0000000000..9cabce798b
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/02_01_PATCH__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2024-10-23.json
@@ -0,0 +1,75 @@
+{
+ "configServerManagementMode": "ATLAS_MANAGED",
+ "replicationSpecs": [
+ {
+ "regionConfigs": [
+ {
+ "analyticsSpecs": {
+ "instanceSize": "M20",
+ "nodeCount": 1
+ },
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "readOnlySpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 0
+ },
+ "regionName": "EU_WEST_1"
+ },
+ {
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 2
+ },
+ "priority": 6,
+ "providerName": "AZURE",
+ "readOnlySpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 0
+ },
+ "regionName": "US_EAST_2"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ },
+ {
+ "regionConfigs": [
+ {
+ "analyticsSpecs": {
+ "instanceSize": "M20",
+ "nodeCount": 1
+ },
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 3
+ },
+ "priority": 7,
+ "providerName": "AWS",
+ "readOnlySpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 0
+ },
+ "regionName": "EU_WEST_1"
+ },
+ {
+ "electableSpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 2
+ },
+ "priority": 6,
+ "providerName": "AZURE",
+ "readOnlySpecs": {
+ "instanceSize": "M10",
+ "nodeCount": 0
+ },
+ "regionName": "US_EAST_2"
+ }
+ ],
+ "zoneName": "ZoneName managed by Terraform"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/03_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricSharded/03_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricShardedOldSchema.yaml b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricShardedOldSchema.yaml
new file mode 100644
index 0000000000..800bb67fd6
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_symmetricShardedOldSchema.yaml
@@ -0,0 +1,382 @@
+variables:
+ clusterName: test-acc-tf-c-8473169683272584631
+ clusterName2: test-acc-tf-c-1834989662525036537
+ clusterName3: test-acc-tf-c-5102116492787641106
+ clusterName4: test-acc-tf-c-2263776537053663235
+ groupId: 68b470dbc210fb29e6fd0b18
+steps:
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-8473169683272584631"
+ cluster_type = "SHARDED"
+
+
+ mongo_db_major_version = "8"
+ config_server_management_mode = "FIXED_TO_DEDICATED"
+
+
+
+ replication_specs = [{
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ }]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"labels\": [],\n \"mongoDBMajorVersion\": \"8.0\",\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n 
\"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"labels\": [],\n \"mongoDBMajorVersion\": \"8.0\",\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n 
\"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 2
+ status: 200
+ duplicate_responses: 26
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n 
\"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 29
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 30
+ status: 200
+ duplicate_responses: 16
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 31
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 32
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 34
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": 
[\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n 
\"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n 
\"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 42
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 44
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 46
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 47
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-8473169683272584631"
+ cluster_type = "SHARDED"
+
+
+ mongo_db_major_version = "8"
+ config_server_management_mode = "ATLAS_MANAGED"
+
+
+
+ replication_specs = [{
+ region_configs = [{
+ analytics_specs = {
+ instance_size = "M20"
+ node_count = 1
+ }
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }, {
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 2
+ }
+ priority = 6
+ provider_name = "AZURE"
+ region_name = "US_EAST_2"
+ }]
+ }]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ]\n}"
+ responses:
+ - response_index: 87
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 83
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 88
+ status: 200
+ duplicate_responses: 13
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 102
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 84
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - response_index: 103
+ status: 200
+ duplicate_responses: 10
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": false,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": null\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 85
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 86
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: PATCH
+ version: '2024-10-23'
+ text: "{\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ]\n}"
+ responses:
+ - response_index: 87
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 106
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 115
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 116
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: ""
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 159
+ status: 202
+ text: "{}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 144
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 160
+ status: 200
+ duplicate_responses: 4
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": 
false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 165
+ status: 404
+ duplicate_responses: 2
+ text: "{\n \"detail\": \"No cluster named {clusterName} exists in group {groupId}.\",\n \"error\": 404,\n \"errorCode\": \"CLUSTER_NOT_FOUND\",\n \"parameters\": [\n \"{clusterName}\",\n \"{groupId}\"\n ],\n \"reason\": \"Not Found\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 145
+ status: 200
+ duplicate_responses: 3
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": false,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": null\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 146
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 147
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 149
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 157
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 158
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 159
+ status: 202
+ text: "{}"
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade.yaml b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade.yaml
new file mode 100644
index 0000000000..3d06724d04
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade.yaml
@@ -0,0 +1,383 @@
+variables:
+ clusterName: test-acc-tf-c-5102116492787641106
+ clusterName2: test-acc-tf-c-8473169683272584631
+ clusterName3: test-acc-tf-c-1834989662525036537
+ clusterName4: test-acc-tf-c-2263776537053663235
+ groupId: 68b470dbc210fb29e6fd0b18
+steps:
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-5102116492787641106"
+ cluster_type = "REPLICASET"
+
+ replication_specs = [{
+ region_configs = [{
+ backing_provider_name = "AWS"
+ electable_specs = {
+ instance_size = "M0"
+ }
+ priority = 7
+ provider_name = "TENANT"
+ region_name = "US_EAST_1"
+ }]
+ zone_name = "Zone 1"
+ }]
+ }
+
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"REPLICASET\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: POST
+ version: '2024-10-23'
+ text: "{\n \"clusterType\": \"REPLICASET\",\n \"labels\": [],\n \"name\": \"{clusterName}\",\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"tags\": []\n}"
+ responses:
+ - response_index: 1
+ status: 201
+ text: "{\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 2
+ status: 200
+ duplicate_responses: 4
+ text: "{\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 7
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.12\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n 
\"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 8
+ status: 200
+ duplicate_responses: 7
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 9
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"FIXED_TO_DEDICATED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n 
\"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n 
\"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": []\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName 
managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n 
\"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.12\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n 
\"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"CREATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 11
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 13
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 14
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 16
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 19
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 20
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: |-
+ resource "mongodbatlas_advanced_cluster" "test" {
+ project_id = "68b470dbc210fb29e6fd0b18"
+ name = "test-acc-tf-c-5102116492787641106"
+ cluster_type = "REPLICASET"
+
+ replication_specs = [{
+ region_configs = [{
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ electable_specs = {
+ node_count = 3
+ instance_size = "M10"
+ }
+ }]
+ zone_name = "Zone 1"
+ }]
+ }
+ data "mongodbatlas_advanced_cluster" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ name = mongodbatlas_advanced_cluster.test.name
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+
+ data "mongodbatlas_advanced_clusters" "test" {
+ project_id = mongodbatlas_advanced_cluster.test.project_id
+ depends_on = [mongodbatlas_advanced_cluster.test]
+ }
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/tenantUpgrade
+ method: POST
+ version: '2023-01-01'
+ text: "{\n \"name\": \"{clusterName}\",\n \"providerSettings\": {\n \"instanceSizeName\": \"M10\",\n \"providerName\": \"AWS\",\n \"regionName\": \"US_EAST_1\"\n }\n}"
+ responses:
+ - response_index: 50
+ status: 200
+ text: "{\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGBEnabled\": false\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskSizeGB\": 0.5,\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.12\",\n \"mongoURI\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017\",\n \"mongoURIUpdated\": \"2025-08-31T16:00:04Z\",\n \"mongoURIWithOptions\": 
\"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"name\": \"{clusterName}\",\n \"numShards\": 1,\n \"paused\": false,\n \"pitEnabled\": false,\n \"providerBackupEnabled\": false,\n \"providerSettings\": {\n \"autoScaling\": {\n \"compute\": {\n \"maxInstanceSize\": null,\n \"minInstanceSize\": null\n }\n },\n \"backingProviderName\": \"AWS\",\n \"effectiveInstanceSizeName\": \"M0\",\n \"instanceSizeName\": \"M0\",\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n },\n \"replicationFactor\": 3,\n \"replicationSpec\": {\n \"US_EAST_1\": {\n \"analyticsNodes\": 0,\n \"electableNodes\": 3,\n \"priority\": 7,\n \"readOnlyNodes\": 0\n }\n },\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db4\",\n \"numShards\": 1,\n \"regionsConfig\": {\n \"US_EAST_1\": {\n \"analyticsNodes\": 0,\n \"electableNodes\": 3,\n \"priority\": 7,\n \"readOnlyNodes\": 0\n }\n },\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"srvAddress\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 48
+ status: 200
+ text: "{\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.12\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n 
\"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 51
+ status: 200
+ text: "{\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.12\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db5\",\n \"regionConfigs\": [\n {\n \"backingProviderName\": \"AWS\",\n \"electableSpecs\": {\n \"diskSizeGB\": 0.5,\n \"effectiveInstanceSize\": \"M0\",\n \"instanceSize\": \"M0\"\n },\n \"priority\": 7,\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470eac210fb29e6fd0db3\",\n 
\"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 52
+ status: 200
+ duplicate_responses: 18
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {},\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n 
\"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 71
+ status: 200
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n 
\"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 72
+ status: 200
+ duplicate_responses: 6
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n 
\"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 49
+ status: 200
+ duplicate_responses: 8
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/tenantUpgrade
+ method: POST
+ version: '2023-01-01'
+ text: "{\n \"name\": \"{clusterName}\",\n \"providerSettings\": {\n \"instanceSizeName\": \"M10\",\n \"providerName\": \"AWS\",\n \"regionName\": \"US_EAST_1\"\n }\n}"
+ responses:
+ - response_index: 50
+ status: 200
+ text: "{\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGBEnabled\": false\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:30Z\",\n \"diskSizeGB\": 0.5,\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470eac210fb29e6fd0dba\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.12\",\n \"mongoURI\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017\",\n \"mongoURIUpdated\": \"2025-08-31T16:00:04Z\",\n \"mongoURIWithOptions\": 
\"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"name\": \"{clusterName}\",\n \"numShards\": 1,\n \"paused\": false,\n \"pitEnabled\": false,\n \"providerBackupEnabled\": false,\n \"providerSettings\": {\n \"autoScaling\": {\n \"compute\": {\n \"maxInstanceSize\": null,\n \"minInstanceSize\": null\n }\n },\n \"backingProviderName\": \"AWS\",\n \"effectiveInstanceSizeName\": \"M0\",\n \"instanceSizeName\": \"M0\",\n \"providerName\": \"TENANT\",\n \"regionName\": \"US_EAST_1\"\n },\n \"replicationFactor\": 3,\n \"replicationSpec\": {\n \"US_EAST_1\": {\n \"analyticsNodes\": 0,\n \"electableNodes\": 3,\n \"priority\": 7,\n \"readOnlyNodes\": 0\n }\n },\n \"replicationSpecs\": [\n {\n \"id\": \"68b470eac210fb29e6fd0db4\",\n \"numShards\": 1,\n \"regionsConfig\": {\n \"US_EAST_1\": {\n \"analyticsNodes\": 0,\n \"electableNodes\": 3,\n \"priority\": 7,\n \"readOnlyNodes\": 0\n }\n },\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"srvAddress\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 73
+ status: 200
+ duplicate_responses: 16
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 77
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n 
{\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n 
\"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n 
\"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 80
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 81
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 83
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 87
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 88
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - config: ""
+ diff_requests:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 139
+ status: 202
+ text: "{}"
+ request_responses:
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 122
+ status: 200
+ duplicate_responses: 1
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n 
\"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 140
+ status: 200
+ duplicate_responses: 3
+ text: "{\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n 
\"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"DELETING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}"
+ - response_index: 144
+ status: 404
+ duplicate_responses: 2
+ text: "{\n \"detail\": \"No cluster named {clusterName} exists in group {groupId}.\",\n \"error\": 404,\n \"errorCode\": \"CLUSTER_NOT_FOUND\",\n \"parameters\": [\n \"{clusterName}\",\n \"{groupId}\"\n ],\n \"reason\": \"Not Found\"\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AWS
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 123
+ status: 200
+ duplicate_responses: 5
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AWS\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"id\": \"68b470e5e989e54ca7724068\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"EU_WEST_1\",\n \"vpcId\": \"vpc-0a4286e1883a415be\"\n },\n {\n \"atlasCidrBlock\": \"192.168.240.0/21\",\n \"id\": \"68b470e5e989e54ca7724069\",\n \"providerName\": \"AWS\",\n \"provisioned\": true,\n \"regionName\": \"US_EAST_1\",\n \"vpcId\": \"vpc-0ee9726a61b016982\"\n }\n ],\n \"totalCount\": 2\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 124
+ status: 200
+ duplicate_responses: 2
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 127
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters?includeCount=true\\u0026includeDeletedWithRetainedBackups=false\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"DEDICATED\",\n \"connectionStrings\": {\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-847316968-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-03.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-04.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-847316968-shard-00-05.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-847316968.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5c210fb29e6fd0cd7\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": 
\"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName2}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5c210fb29e6fd0cc9\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 8,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n },\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M20\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": false\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 2\n },\n \"priority\": 6,\n \"providerName\": \"AZURE\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3500,\n \"diskSizeGB\": 8,\n \"diskThroughput\": 125,\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_2\"\n }\n ],\n \"zoneId\": \"68b470e5c210fb29e6fd0cc7\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": 
\"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"SHARDED\",\n \"configServerManagementMode\": \"ATLAS_MANAGED\",\n \"configServerType\": \"EMBEDDED\",\n \"connectionStrings\": {\n \"awsPrivateLinkSrv\": {},\n \"privateEndpoint\": [],\n \"standard\": \"mongodb://test-acc-tf-c-183498966-config-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-config-00-02.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-00.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-01.bo7b8w.mongodb-dev.net:27016,test-acc-tf-c-183498966-shard-00-02.bo7b8w.mongodb-dev.net:27016/?ssl=true\\u0026authSource=admin\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-183498966.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406a\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/backup/snapshots\",\n \"rel\": 
\"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName3}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724050\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 2000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n },\n {\n \"id\": \"68b470e5e989e54ca7724052\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 1\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 1000,\n \"diskSizeGB\": 40,\n \"ebsVolumeType\": \"PROVISIONED\",\n \"instanceSize\": \"M30\",\n \"nodeCount\": 0\n },\n \"regionName\": \"EU_WEST_1\"\n }\n ],\n \"zoneId\": 
\"68b470e5e989e54ca772404e\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"DEFAULT\"\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-510211649-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-510211649-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-i7pjvi-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-510211649.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T16:00:39Z\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.0\",\n \"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b471a7e989e54ca772430f\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.0\",\n \"mongoDBVersion\": \"8.0.13\",\n \"name\": \"{clusterName}\",\n \"paused\": false,\n \"pitEnabled\": false,\n \"redactClientLogData\": false,\n \"replicationSpecs\": [\n 
{\n \"id\": \"68b471a7e989e54ca772430e\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b471a7e989e54ca772430d\",\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"IDLE\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n },\n {\n \"advancedConfiguration\": {\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"tlsCipherConfigMode\": \"CUSTOM\"\n },\n \"backupEnabled\": true,\n \"biConnector\": {\n \"enabled\": true,\n \"readPreference\": \"secondary\"\n },\n \"clusterType\": \"REPLICASET\",\n \"connectionStrings\": {\n \"standard\": \"mongodb://test-acc-tf-c-226377653-shard-00-00.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-01.bo7b8w.mongodb-dev.net:27017,test-acc-tf-c-226377653-shard-00-02.bo7b8w.mongodb-dev.net:27017/?ssl=true\\u0026authSource=admin\\u0026replicaSet=atlas-eet9li-shard-0\",\n \"standardSrv\": \"mongodb+srv://test-acc-tf-c-226377653.bo7b8w.mongodb-dev.net\"\n },\n \"createDate\": \"2025-08-31T15:57:25Z\",\n \"diskWarmingMode\": \"FULLY_WARMED\",\n \"encryptionAtRestProvider\": \"NONE\",\n \"featureCompatibilityVersion\": \"8.2\",\n 
\"globalClusterSelfManagedSharding\": false,\n \"groupId\": \"{groupId}\",\n \"id\": \"68b470e5e989e54ca772406b\",\n \"internalClusterRole\": \"NONE\",\n \"labels\": [\n {\n \"key\": \"env\",\n \"value\": \"test\"\n }\n ],\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}\",\n \"rel\": \"self\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/restoreJobs\",\n \"rel\": \"https://cloud.mongodb.com/restoreJobs\"\n },\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/backup/snapshots\",\n \"rel\": \"https://cloud.mongodb.com/snapshots\"\n }\n ],\n \"mongoDBMajorVersion\": \"8.2\",\n \"mongoDBVersion\": \"8.2.0\",\n \"name\": \"{clusterName4}\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"redactClientLogData\": true,\n \"replicaSetScalingStrategy\": \"NODE_TYPE\",\n \"replicationSpecs\": [\n {\n \"id\": \"68b470e5e989e54ca7724055\",\n \"regionConfigs\": [\n {\n \"analyticsSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": false,\n \"predictiveEnabled\": false,\n \"scaleDownEnabled\": false\n },\n \"diskGB\": {\n \"enabled\": true\n }\n },\n \"electableSpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"readOnlySpecs\": {\n \"diskIOPS\": 3000,\n \"diskSizeGB\": 10,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M10\",\n \"nodeCount\": 0\n },\n \"regionName\": \"US_EAST_1\"\n }\n ],\n \"zoneId\": \"68b470e5e989e54ca7724053\",\n \"zoneName\": \"ZoneName managed by Terraform\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"stateName\": \"UPDATING\",\n \"tags\": [\n {\n \"key\": \"env\",\n 
\"value\": \"test\"\n }\n ],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"CONTINUOUS\"\n }\n ],\n \"totalCount\": 4\n}"
+ - path: /api/atlas/v2/groups/{groupId}/containers?providerName=AZURE
+ method: GET
+ version: '2023-01-01'
+ text: ""
+ responses:
+ - response_index: 130
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/containers?includeCount=true\\u0026providerName=AZURE\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"atlasCidrBlock\": \"192.168.248.0/21\",\n \"azureSubscriptionId\": \"59273ca1408bc99fee1bb339\",\n \"id\": \"68b470e5c210fb29e6fd0cd6\",\n \"providerName\": \"AZURE\",\n \"provisioned\": true,\n \"region\": \"US_EAST_2\",\n \"vnetName\": \"vnet_68b470e5c210fb29e6fd0cd6_66bzw7bx\"\n }\n ],\n \"totalCount\": 1\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName2}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 131
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName3}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 133
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": null,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [],\n \"defaultMaxTimeMS\": null,\n \"defaultWriteConcern\": null,\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": false,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": null,\n \"sampleSizeBIConnector\": null,\n \"tlsCipherConfigMode\": \"DEFAULT\",\n \"transactionLifetimeLimitSeconds\": null\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName4}/processArgs
+ method: GET
+ version: '2024-08-05'
+ text: ""
+ responses:
+ - response_index: 137
+ status: 200
+ text: "{\n \"changeStreamOptionsPreAndPostImagesExpireAfterSeconds\": 100,\n \"chunkMigrationConcurrency\": null,\n \"customOpensslCipherConfigTls12\": [\n \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"\n ],\n \"defaultMaxTimeMS\": 65,\n \"defaultWriteConcern\": \"majority\",\n \"javascriptEnabled\": true,\n \"minimumEnabledTlsProtocol\": \"TLS1_2\",\n \"noTableScan\": true,\n \"oplogMinRetentionHours\": null,\n \"oplogSizeMB\": null,\n \"queryStatsLogVerbosity\": 1,\n \"sampleRefreshIntervalBIConnector\": 310,\n \"sampleSizeBIConnector\": 110,\n \"tlsCipherConfigMode\": \"CUSTOM\",\n \"transactionLifetimeLimitSeconds\": 300\n}"
+ - path: /api/atlas/v2/groups/{groupId}/flexClusters
+ method: GET
+ version: '2024-11-13'
+ text: ""
+ responses:
+ - response_index: 138
+ status: 200
+ text: "{\n \"links\": [\n {\n \"href\": \"https://cloud-dev.mongodb.com/api/atlas/v2/groups/{groupId}/flexClusters?includeCount=true\\u0026pageNum=1\\u0026itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [],\n \"totalCount\": 0\n}"
+ - path: /api/atlas/v2/groups/{groupId}/clusters/{clusterName}
+ method: DELETE
+ version: '2023-02-01'
+ text: ""
+ responses:
+ - response_index: 139
+ status: 202
+ text: "{}"
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
new file mode 100644
index 0000000000..ad55adf915
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/01_01_POST__api_atlas_v2_groups_{groupId}_clusters_2024-10-23.json
@@ -0,0 +1,22 @@
+{
+ "clusterType": "REPLICASET",
+ "labels": [],
+ "name": "{clusterName}",
+ "replicationSpecs": [
+ {
+ "regionConfigs": [
+ {
+ "backingProviderName": "AWS",
+ "electableSpecs": {
+ "instanceSize": "M0"
+ },
+ "priority": 7,
+ "providerName": "TENANT",
+ "regionName": "US_EAST_1"
+ }
+ ],
+ "zoneName": "Zone 1"
+ }
+ ],
+ "tags": []
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/02_01_POST__api_atlas_v2_groups_{groupId}_clusters_tenantUpgrade_2023-01-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/02_01_POST__api_atlas_v2_groups_{groupId}_clusters_tenantUpgrade_2023-01-01.json
new file mode 100644
index 0000000000..acbb57bd94
--- /dev/null
+++ b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/02_01_POST__api_atlas_v2_groups_{groupId}_clusters_tenantUpgrade_2023-01-01.json
@@ -0,0 +1,8 @@
+{
+ "name": "{clusterName}",
+ "providerSettings": {
+ "instanceSizeName": "M10",
+ "providerName": "AWS",
+ "regionName": "US_EAST_1"
+ }
+}
\ No newline at end of file
diff --git a/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/03_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json b/internal/service/advancedclustertpf/testdata/TestAccMockableAdvancedCluster_tenantUpgrade/03_01_DELETE__api_atlas_v2_groups_{groupId}_clusters_{clusterName}_2023-02-01.json
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/internal/service/atlasuser/data_source_atlas_user.go b/internal/service/atlasuser/data_source_atlas_user.go
index aecb75b2ce..bbbe5718b2 100644
--- a/internal/service/atlasuser/data_source_atlas_user.go
+++ b/internal/service/atlasuser/data_source_atlas_user.go
@@ -4,14 +4,16 @@ import (
"context"
"fmt"
+ admin20241113 "go.mongodb.org/atlas-sdk/v20241113005/admin"
+
"github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/hashicorp/terraform-plugin-framework/datasource/schema"
"github.com/hashicorp/terraform-plugin-framework/path"
"github.com/hashicorp/terraform-plugin-framework/schema/validator"
"github.com/hashicorp/terraform-plugin-framework/types"
- admin20241113 "go.mongodb.org/atlas-sdk/v20241113005/admin"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
@@ -65,6 +67,7 @@ type atlasUserDS struct {
func (d *atlasUserDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
resp.Schema = schema.Schema{
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "data source", "data.mongodbatlas_organization.users, data.mongodbatlas_team.users or data.mongodbatlas_project.users attributes", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
Attributes: map[string]schema.Attribute{
"id": schema.StringAttribute{ // required by hashicorps terraform plugin testing framework: https://github.com/hashicorp/terraform-plugin-testing/issues/84#issuecomment-1480006432
DeprecationMessage: "Please use user_id id attribute instead",
@@ -89,7 +92,8 @@ func (d *atlasUserDS) Schema(ctx context.Context, req datasource.SchemaRequest,
Computed: true,
},
"email_address": schema.StringAttribute{
- Computed: true,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "attribute", "data.mongodbatlas_organization.users.username, data.mongodbatlas_team.users.username or data.mongodbatlas_project.users.username attributes", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ Computed: true,
},
"first_name": schema.StringAttribute{
Computed: true,
diff --git a/internal/service/atlasuser/data_source_atlas_users.go b/internal/service/atlasuser/data_source_atlas_users.go
index 2bfd6147be..cd3e7ba72a 100644
--- a/internal/service/atlasuser/data_source_atlas_users.go
+++ b/internal/service/atlasuser/data_source_atlas_users.go
@@ -13,6 +13,8 @@ import (
"github.com/hashicorp/terraform-plugin-framework/schema/validator"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
@@ -52,6 +54,7 @@ type tfAtlasUsersDSModel struct {
func (d *atlasUsersDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
resp.Schema = schema.Schema{
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "data source", "data.mongodbatlas_organization.users, data.mongodbatlas_team.users or data.mongodbatlas_project.users attributes", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
Attributes: map[string]schema.Attribute{
"id": schema.StringAttribute{ // required by hashicorps terraform plugin testing framework: https://github.com/hashicorp/terraform-plugin-testing/issues/84#issuecomment-1480006432
DeprecationMessage: "Please use each user's id attribute instead",
@@ -107,7 +110,8 @@ func (d *atlasUsersDS) Schema(ctx context.Context, req datasource.SchemaRequest,
Computed: true,
},
"email_address": schema.StringAttribute{
- Computed: true,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "attribute", "data.mongodbatlas_organization.users.username, data.mongodbatlas_team.users.username or data.mongodbatlas_project.users.username attributes", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ Computed: true,
},
"first_name": schema.StringAttribute{
Computed: true,
diff --git a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go
index 7cea07b905..a5fb980ee9 100644
--- a/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go
+++ b/internal/service/backupcompliancepolicy/resource_backup_compliance_policy_test.go
@@ -388,7 +388,7 @@ func configOverwriteIncompatibleBackupPoliciesError(projectName, orgID, projectO
cloud_provider = "AWS"
frequencies = ["DAILY"]
region_name = "US_WEST_1"
- replication_spec_id = one(%[2]s.replication_specs).id
+ zone_id = %[2]s.replication_specs.*.zone_id[0]
should_copy_oplogs = false
}
}
@@ -432,7 +432,7 @@ func configClusterWithBackupSchedule(projectName, orgID, projectOwnerID string,
cloud_provider = "AWS"
frequencies = ["DAILY"]
region_name = "US_WEST_1"
- replication_spec_id = one(%[2]s.replication_specs).id
+ zone_id = %[2]s.replication_specs.*.zone_id[0]
should_copy_oplogs = false
}
}
diff --git a/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go b/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go
index f85c08d516..01d520b34e 100644
--- a/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go
+++ b/internal/service/cloudbackupschedule/data_source_cloud_backup_schedule.go
@@ -3,16 +3,13 @@ package cloudbackupschedule
import (
"context"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-)
-
-const (
- AsymmetricShardsUnsupportedActionDS = "Ensure you use copy_settings.#.zone_id instead of copy_settings.#.replication_spec_id for asymmetric sharded clusters by setting `use_zone_id_for_copy_settings = true`. To learn more, see our examples, documentation, and 1.18.0 migration guide at https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/1.18.0-upgrade-guide"
)
func DataSource() *schema.Resource {
@@ -27,10 +24,6 @@ func DataSource() *schema.Resource {
Type: schema.TypeString,
Required: true,
},
- "use_zone_id_for_copy_settings": {
- Type: schema.TypeBool,
- Optional: true,
- },
"cluster_id": {
Type: schema.TypeString,
Computed: true,
@@ -55,11 +48,6 @@ func DataSource() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
- "replication_spec_id": {
- Type: schema.TypeString,
- Computed: true,
- Deprecated: DeprecationMsgOldSchema,
- },
"zone_id": {
Type: schema.TypeString,
Computed: true,
@@ -260,42 +248,22 @@ func DataSource() *schema.Resource {
}
func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
connV2 := meta.(*config.MongoDBClient).AtlasV2
projectID := d.Get("project_id").(string)
clusterName := d.Get("cluster_name").(string)
- useZoneIDForCopySettings := false
var backupSchedule *admin.DiskBackupSnapshotSchedule20240805
- var backupScheduleOldSDK *admin20240530.DiskBackupSnapshotSchedule
var copySettings []map[string]any
var err error
- if v, ok := d.GetOk("use_zone_id_for_copy_settings"); ok {
- useZoneIDForCopySettings = v.(bool)
- }
-
- if !useZoneIDForCopySettings {
- backupScheduleOldSDK, _, err = connV220240530.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute()
- if err != nil {
- if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError {
- return diag.Errorf("%s : %s : %s", errorSnapshotBackupScheduleRead, ErrorOperationNotPermitted, AsymmetricShardsUnsupportedActionDS)
- }
- return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
- }
-
- copySettings = flattenCopySettingsOldSDK(backupScheduleOldSDK.GetCopySettings())
- backupSchedule = convertBackupScheduleToLatestExcludeCopySettings(backupScheduleOldSDK)
- } else {
- backupSchedule, _, err = connV2.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute()
- if err != nil {
- return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
- }
- copySettings = FlattenCopySettings(backupSchedule.GetCopySettings())
+ backupSchedule, _, err = connV2.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute()
+ if err != nil {
+ return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
}
+ copySettings = FlattenCopySettings(backupSchedule.GetCopySettings())
- diags := setSchemaFieldsExceptCopySettings(d, backupSchedule)
+ diags := setSchemaFields(d, backupSchedule)
if diags.HasError() {
return diags
}
diff --git a/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go b/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go
index 780a2be274..c2782ffb4b 100644
--- a/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go
+++ b/internal/service/cloudbackupschedule/model_cloud_backup_schedule.go
@@ -1,7 +1,6 @@
package cloudbackupschedule
import (
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
)
@@ -33,20 +32,6 @@ func FlattenExport(roles *admin.DiskBackupSnapshotSchedule20240805) []map[string
return exportList
}
-func flattenCopySettingsOldSDK(copySettingList []admin20240530.DiskBackupCopySetting) []map[string]any {
- copySettings := make([]map[string]any, 0)
- for _, v := range copySettingList {
- copySettings = append(copySettings, map[string]any{
- "cloud_provider": v.GetCloudProvider(),
- "frequencies": v.GetFrequencies(),
- "region_name": v.GetRegionName(),
- "replication_spec_id": v.GetReplicationSpecId(),
- "should_copy_oplogs": v.GetShouldCopyOplogs(),
- })
- }
- return copySettings
-}
-
func FlattenCopySettings(copySettingList []admin.DiskBackupCopySetting20240805) []map[string]any {
copySettings := make([]map[string]any, 0)
for _, v := range copySettingList {
diff --git a/internal/service/cloudbackupschedule/model_sdk_version_conversion.go b/internal/service/cloudbackupschedule/model_sdk_version_conversion.go
deleted file mode 100644
index 8cb1295dd9..0000000000
--- a/internal/service/cloudbackupschedule/model_sdk_version_conversion.go
+++ /dev/null
@@ -1,116 +0,0 @@
-package cloudbackupschedule
-
-import (
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-)
-
-// Conversions from one SDK model version to another are used to avoid duplicating our flatten/expand conversion functions.
-// - These functions must not contain any business logic.
-// - All will be removed once we rely on a single API version.
-
-func convertPolicyItemsToOldSDK(slice *[]admin.DiskBackupApiPolicyItem) []admin20240530.DiskBackupApiPolicyItem {
- if slice == nil {
- return nil
- }
- policyItemsSlice := *slice
- results := make([]admin20240530.DiskBackupApiPolicyItem, len(policyItemsSlice))
- for i := range len(policyItemsSlice) {
- policyItem := policyItemsSlice[i]
- results[i] = admin20240530.DiskBackupApiPolicyItem{
- FrequencyInterval: policyItem.FrequencyInterval,
- FrequencyType: policyItem.FrequencyType,
- Id: policyItem.Id,
- RetentionUnit: policyItem.RetentionUnit,
- RetentionValue: policyItem.RetentionValue,
- }
- }
- return results
-}
-
-func convertPoliciesToLatest(slice *[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin.AdvancedDiskBackupSnapshotSchedulePolicy {
- if slice == nil {
- return nil
- }
-
- policySlice := *slice
- results := make([]admin.AdvancedDiskBackupSnapshotSchedulePolicy, len(policySlice))
- for i := range len(policySlice) {
- policyItem := policySlice[i]
- results[i] = admin.AdvancedDiskBackupSnapshotSchedulePolicy{
- Id: policyItem.Id,
- PolicyItems: convertPolicyItemsToLatest(policyItem.PolicyItems),
- }
- }
- return &results
-}
-
-func convertPolicyItemsToLatest(slice *[]admin20240530.DiskBackupApiPolicyItem) *[]admin.DiskBackupApiPolicyItem {
- if slice == nil {
- return nil
- }
- policyItemsSlice := *slice
- results := make([]admin.DiskBackupApiPolicyItem, len(policyItemsSlice))
- for i := range len(policyItemsSlice) {
- policyItem := policyItemsSlice[i]
- results[i] = admin.DiskBackupApiPolicyItem{
- FrequencyInterval: policyItem.FrequencyInterval,
- FrequencyType: policyItem.FrequencyType,
- Id: policyItem.Id,
- RetentionUnit: policyItem.RetentionUnit,
- RetentionValue: policyItem.RetentionValue,
- }
- }
- return &results
-}
-
-func convertAutoExportPolicyToOldSDK(exportPolicy *admin.AutoExportPolicy) *admin20240530.AutoExportPolicy {
- if exportPolicy == nil {
- return nil
- }
-
- return &admin20240530.AutoExportPolicy{
- ExportBucketId: exportPolicy.ExportBucketId,
- FrequencyType: exportPolicy.FrequencyType,
- }
-}
-
-func convertAutoExportPolicyToLatest(exportPolicy *admin20240530.AutoExportPolicy) *admin.AutoExportPolicy {
- if exportPolicy == nil {
- return nil
- }
-
- return &admin.AutoExportPolicy{
- ExportBucketId: exportPolicy.ExportBucketId,
- FrequencyType: exportPolicy.FrequencyType,
- }
-}
-
-func convertBackupScheduleReqToOldSDK(req *admin.DiskBackupSnapshotSchedule20240805,
- copySettingsOldSDK *[]admin20240530.DiskBackupCopySetting,
- policiesOldSDK *[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy) *admin20240530.DiskBackupSnapshotSchedule {
- return &admin20240530.DiskBackupSnapshotSchedule{
- CopySettings: copySettingsOldSDK,
- Policies: policiesOldSDK,
- AutoExportEnabled: req.AutoExportEnabled,
- Export: convertAutoExportPolicyToOldSDK(req.Export),
- UseOrgAndGroupNamesInExportPrefix: req.UseOrgAndGroupNamesInExportPrefix,
- ReferenceHourOfDay: req.ReferenceHourOfDay,
- ReferenceMinuteOfHour: req.ReferenceMinuteOfHour,
- RestoreWindowDays: req.RestoreWindowDays,
- UpdateSnapshots: req.UpdateSnapshots,
- }
-}
-
-func convertBackupScheduleToLatestExcludeCopySettings(backupSchedule *admin20240530.DiskBackupSnapshotSchedule) *admin.DiskBackupSnapshotSchedule20240805 {
- return &admin.DiskBackupSnapshotSchedule20240805{
- Policies: convertPoliciesToLatest(backupSchedule.Policies),
- AutoExportEnabled: backupSchedule.AutoExportEnabled,
- Export: convertAutoExportPolicyToLatest(backupSchedule.Export),
- UseOrgAndGroupNamesInExportPrefix: backupSchedule.UseOrgAndGroupNamesInExportPrefix,
- ReferenceHourOfDay: backupSchedule.ReferenceHourOfDay,
- ReferenceMinuteOfHour: backupSchedule.ReferenceMinuteOfHour,
- RestoreWindowDays: backupSchedule.RestoreWindowDays,
- UpdateSnapshots: backupSchedule.UpdateSnapshots,
- }
-}
diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go
index 28821cdced..1a0705c239 100644
--- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go
+++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule.go
@@ -7,15 +7,16 @@ import (
"net/http"
"strings"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/spf13/cast"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "github.com/spf13/cast"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
const (
@@ -28,7 +29,6 @@ const (
errorSnapshotBackupScheduleUpdate = "error updating a Cloud Backup Schedule: %s"
errorSnapshotBackupScheduleRead = "error getting a Cloud Backup Schedule for the cluster(%s): %s"
ErrorOperationNotPermitted = "error operation not permitted"
- AsymmetricShardsUnsupportedAction = "Ensure resource schema uses copy_settings.#.zone_id instead of copy_settings.#.replication_spec_id for asymmetric sharded clusters. Please refer to our examples, documentation, and 1.18.0 migration guide for more details at https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/1.18.0-upgrade-guide"
errorSnapshotBackupScheduleSetting = "error setting `%s` for Cloud Backup Schedule(%s): %s"
DeprecationOldSchemaAction = "To learn more, see our examples, documentation, and 1.18.0 migration guide for more details at https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/1.18.0-upgrade-guide"
AsymmetricShardsUnsupportedAPIError = "ASYMMETRIC_SHARD_BACKUP_UNSUPPORTED"
@@ -63,7 +63,6 @@ func Resource() *schema.Resource {
"auto_export_enabled": {
Type: schema.TypeBool,
Optional: true,
- Computed: true,
},
"use_org_and_group_names_in_export_prefix": {
Type: schema.TypeBool,
@@ -93,12 +92,6 @@ func Resource() *schema.Resource {
Optional: true,
Computed: true,
},
- "replication_spec_id": {
- Type: schema.TypeString,
- Optional: true,
- Computed: true,
- Deprecated: DeprecationMsgOldSchema,
- },
"zone_id": {
Type: schema.TypeString,
Optional: true,
@@ -116,7 +109,6 @@ func Resource() *schema.Resource {
Type: schema.TypeList,
MaxItems: 1,
Optional: true,
- Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"export_bucket_id": {
@@ -322,7 +314,6 @@ func Resource() *schema.Resource {
func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
var diags diag.Diagnostics
- connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
connV2 := meta.(*config.MongoDBClient).AtlasV2
projectID := d.Get("project_id").(string)
clusterName := d.Get("cluster_name").(string)
@@ -340,7 +331,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
diags = append(diags, diagWarning)
}
- if err := cloudBackupScheduleCreateOrUpdate(ctx, connV220240530, connV2, d, projectID, clusterName, true); err != nil {
+ if err := cloudBackupScheduleCreateOrUpdate(ctx, connV2, d, projectID, clusterName); err != nil {
diags = append(diags, diag.Errorf(errorSnapshotBackupScheduleCreate, err)...)
return diags
}
@@ -354,124 +345,89 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
}
func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
connV2 := meta.(*config.MongoDBClient).AtlasV2
ids := conversion.DecodeStateID(d.Id())
projectID := ids["project_id"]
clusterName := ids["cluster_name"]
var backupSchedule *admin.DiskBackupSnapshotSchedule20240805
- var backupScheduleOldSDK *admin20240530.DiskBackupSnapshotSchedule
- var copySettings []map[string]any
var resp *http.Response
var err error
- useOldAPI, err := shouldUseOldAPI(d, false)
+ backupSchedule, resp, err = connV2.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute()
if err != nil {
- return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
- }
-
- if useOldAPI {
- backupScheduleOldSDK, resp, err = connV220240530.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute()
- if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError {
- return diag.Errorf("%s : %s : %s", errorSnapshotBackupScheduleRead, ErrorOperationNotPermitted, AsymmetricShardsUnsupportedAction)
+ if validate.StatusNotFound(resp) {
+ d.SetId("")
+ return nil
}
- if err != nil {
- if validate.StatusNotFound(resp) {
- d.SetId("")
- return nil
- }
- return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
- }
-
- copySettings = flattenCopySettingsOldSDK(backupScheduleOldSDK.GetCopySettings())
- backupSchedule = convertBackupScheduleToLatestExcludeCopySettings(backupScheduleOldSDK)
- } else {
- backupSchedule, resp, err = connV2.CloudBackupsApi.GetBackupSchedule(context.Background(), projectID, clusterName).Execute()
- if err != nil {
- if validate.StatusNotFound(resp) {
- d.SetId("")
- return nil
- }
- return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
- }
- copySettings = FlattenCopySettings(backupSchedule.GetCopySettings())
+ return diag.Errorf(errorSnapshotBackupScheduleRead, clusterName, err)
}
- diags := setSchemaFieldsExceptCopySettings(d, backupSchedule)
+ diags := setSchemaFields(d, backupSchedule)
if diags.HasError() {
return diags
}
- if err := d.Set("copy_settings", copySettings); err != nil {
- return diag.Errorf(errorSnapshotBackupScheduleSetting, "copy_settings", clusterName, err)
- }
-
return nil
}
-func setSchemaFieldsExceptCopySettings(d *schema.ResourceData, backupPolicy *admin.DiskBackupSnapshotSchedule20240805) diag.Diagnostics {
- clusterName := backupPolicy.GetClusterName()
- if err := d.Set("cluster_id", backupPolicy.GetClusterId()); err != nil {
+func setSchemaFields(d *schema.ResourceData, backupSchedule *admin.DiskBackupSnapshotSchedule20240805) diag.Diagnostics {
+ clusterName := backupSchedule.GetClusterName()
+ if err := d.Set("cluster_id", backupSchedule.GetClusterId()); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "cluster_id", clusterName, err)
}
- if err := d.Set("reference_hour_of_day", backupPolicy.GetReferenceHourOfDay()); err != nil {
+ if err := d.Set("reference_hour_of_day", backupSchedule.GetReferenceHourOfDay()); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "reference_hour_of_day", clusterName, err)
}
- if err := d.Set("reference_minute_of_hour", backupPolicy.GetReferenceMinuteOfHour()); err != nil {
+ if err := d.Set("reference_minute_of_hour", backupSchedule.GetReferenceMinuteOfHour()); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "reference_minute_of_hour", clusterName, err)
}
- if err := d.Set("restore_window_days", backupPolicy.GetRestoreWindowDays()); err != nil {
+ if err := d.Set("restore_window_days", backupSchedule.GetRestoreWindowDays()); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "restore_window_days", clusterName, err)
}
- if err := d.Set("next_snapshot", conversion.TimePtrToStringPtr(backupPolicy.NextSnapshot)); err != nil {
+ if err := d.Set("next_snapshot", conversion.TimePtrToStringPtr(backupSchedule.NextSnapshot)); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "next_snapshot", clusterName, err)
}
- if err := d.Set("id_policy", backupPolicy.GetPolicies()[0].GetId()); err != nil {
+ if err := d.Set("id_policy", backupSchedule.GetPolicies()[0].GetId()); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "id_policy", clusterName, err)
}
- if err := d.Set("export", FlattenExport(backupPolicy)); err != nil {
- return diag.Errorf(errorSnapshotBackupScheduleSetting, "export", clusterName, err)
- }
-
- if err := d.Set("auto_export_enabled", backupPolicy.GetAutoExportEnabled()); err != nil {
- return diag.Errorf(errorSnapshotBackupScheduleSetting, "auto_export_enabled", clusterName, err)
- }
-
- if err := d.Set("use_org_and_group_names_in_export_prefix", backupPolicy.GetUseOrgAndGroupNamesInExportPrefix()); err != nil {
+ if err := d.Set("use_org_and_group_names_in_export_prefix", backupSchedule.GetUseOrgAndGroupNamesInExportPrefix()); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "use_org_and_group_names_in_export_prefix", clusterName, err)
}
- if err := d.Set("policy_item_hourly", FlattenPolicyItem(backupPolicy.GetPolicies()[0].GetPolicyItems(), Hourly)); err != nil {
+ if err := d.Set("policy_item_hourly", FlattenPolicyItem(backupSchedule.GetPolicies()[0].GetPolicyItems(), Hourly)); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_hourly", clusterName, err)
}
- if err := d.Set("policy_item_daily", FlattenPolicyItem(backupPolicy.GetPolicies()[0].GetPolicyItems(), Daily)); err != nil {
+ if err := d.Set("policy_item_daily", FlattenPolicyItem(backupSchedule.GetPolicies()[0].GetPolicyItems(), Daily)); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_daily", clusterName, err)
}
- if err := d.Set("policy_item_weekly", FlattenPolicyItem(backupPolicy.GetPolicies()[0].GetPolicyItems(), Weekly)); err != nil {
+ if err := d.Set("policy_item_weekly", FlattenPolicyItem(backupSchedule.GetPolicies()[0].GetPolicyItems(), Weekly)); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_weekly", clusterName, err)
}
- if err := d.Set("policy_item_monthly", FlattenPolicyItem(backupPolicy.GetPolicies()[0].GetPolicyItems(), Monthly)); err != nil {
+ if err := d.Set("policy_item_monthly", FlattenPolicyItem(backupSchedule.GetPolicies()[0].GetPolicyItems(), Monthly)); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_monthly", clusterName, err)
}
- if err := d.Set("policy_item_yearly", FlattenPolicyItem(backupPolicy.GetPolicies()[0].GetPolicyItems(), Yearly)); err != nil {
+ if err := d.Set("policy_item_yearly", FlattenPolicyItem(backupSchedule.GetPolicies()[0].GetPolicyItems(), Yearly)); err != nil {
return diag.Errorf(errorSnapshotBackupScheduleSetting, "policy_item_yearly", clusterName, err)
}
+
+ if err := d.Set("copy_settings", FlattenCopySettings(backupSchedule.GetCopySettings())); err != nil {
+ return diag.Errorf(errorSnapshotBackupScheduleSetting, "copy_settings", clusterName, err)
+ }
return nil
}
func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
connV2 := meta.(*config.MongoDBClient).AtlasV2
ids := conversion.DecodeStateID(d.Id())
@@ -484,7 +440,7 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.
}
}
- err := cloudBackupScheduleCreateOrUpdate(ctx, connV220240530, connV2, d, projectID, clusterName, false)
+ err := cloudBackupScheduleCreateOrUpdate(ctx, connV2, d, projectID, clusterName)
if err != nil {
return diag.Errorf(errorSnapshotBackupScheduleUpdate, err)
}
@@ -540,15 +496,10 @@ func resourceImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*s
return []*schema.ResourceData{d}, nil
}
-func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220240530 *admin20240530.APIClient, connV2 *admin.APIClient, d *schema.ResourceData, projectID, clusterName string, isCreate bool) error {
+func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV2 *admin.APIClient, d *schema.ResourceData, projectID, clusterName string) error {
var err error
copySettings := d.Get("copy_settings")
- useOldAPI, err := shouldUseOldAPI(d, isCreate)
- if err != nil {
- return err
- }
-
req := &admin.DiskBackupSnapshotSchedule20240805{}
var policiesItem []admin.DiskBackupApiPolicyItem
@@ -573,7 +524,7 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220240530 *admi
}
if v, ok := d.GetOk("export"); ok {
- req.Export = expandAutoExportPolicy(v.([]any), d)
+ req.Export = expandAutoExportPolicy(v.([]any))
}
if d.HasChange("use_org_and_group_names_in_export_prefix") {
@@ -595,33 +546,6 @@ func cloudBackupScheduleCreateOrUpdate(ctx context.Context, connV220240530 *admi
req.UpdateSnapshots = value
}
- if useOldAPI {
- resp, _, err := connV220240530.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute()
- if err != nil {
- if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError {
- return fmt.Errorf("%s : %s", ErrorOperationNotPermitted, AsymmetricShardsUnsupportedAction)
- }
- return fmt.Errorf("error getting MongoDB Cloud Backup Schedule (%s): %s", clusterName, err)
- }
- var copySettingsOldSDK *[]admin20240530.DiskBackupCopySetting
- if isCopySettingsNonEmptyOrChanged(d) {
- copySettingsOldSDK = expandCopySettingsOldSDK(copySettings.([]any))
- }
-
- policiesOldSDK := getRequestPoliciesOldSDK(convertPolicyItemsToOldSDK(&policiesItem), resp.GetPolicies())
-
- reqOld := convertBackupScheduleReqToOldSDK(req, copySettingsOldSDK, policiesOldSDK)
- _, _, err = connV220240530.CloudBackupsApi.UpdateBackupSchedule(context.Background(), projectID, clusterName, reqOld).Execute()
- if err != nil {
- if apiError, ok := admin20240530.AsError(err); ok && apiError.GetErrorCode() == AsymmetricShardsUnsupportedAPIError {
- return fmt.Errorf("%s : %s", ErrorOperationNotPermitted, AsymmetricShardsUnsupportedAction)
- }
- return err
- }
-
- return nil
- }
-
resp, _, err := connV2.CloudBackupsApi.GetBackupSchedule(ctx, projectID, clusterName).Execute()
if err != nil {
return fmt.Errorf("error getting MongoDB Cloud Backup Schedule (%s): %s", clusterName, err)
@@ -670,46 +594,12 @@ func ExpandCopySettings(tfList []any) *[]admin.DiskBackupCopySetting20240805 {
	return &copySettings
}
-func expandCopySettingsOldSDK(tfList []any) *[]admin20240530.DiskBackupCopySetting {
- copySettings := make([]admin20240530.DiskBackupCopySetting, 0)
-
- for _, tfMapRaw := range tfList {
- tfMap, ok := tfMapRaw.(map[string]any)
- if !ok {
- continue
- }
- apiObject := expandCopySettingOldSDK(tfMap)
- copySettings = append(copySettings, *apiObject)
- }
-	return &copySettings
-}
-
-func expandCopySettingOldSDK(tfMap map[string]any) *admin20240530.DiskBackupCopySetting {
- if tfMap == nil {
- return nil
- }
-
- frequencies := conversion.ExpandStringList(tfMap["frequencies"].(*schema.Set).List())
- copySetting := &admin20240530.DiskBackupCopySetting{
- CloudProvider: conversion.Pointer(tfMap["cloud_provider"].(string)),
- Frequencies: &frequencies,
- RegionName: conversion.Pointer(tfMap["region_name"].(string)),
- ReplicationSpecId: conversion.Pointer(tfMap["replication_spec_id"].(string)),
- ShouldCopyOplogs: conversion.Pointer(tfMap["should_copy_oplogs"].(bool)),
- }
- return copySetting
-}
-
-func expandAutoExportPolicy(items []any, d *schema.ResourceData) *admin.AutoExportPolicy {
+func expandAutoExportPolicy(items []any) *admin.AutoExportPolicy {
itemObj := items[0].(map[string]any)
-
- if autoExportEnabled := d.Get("auto_export_enabled"); autoExportEnabled != nil && autoExportEnabled.(bool) {
- return &admin.AutoExportPolicy{
- ExportBucketId: conversion.StringPtr(itemObj["export_bucket_id"].(string)),
- FrequencyType: conversion.StringPtr(itemObj["frequency_type"].(string)),
- }
+ return &admin.AutoExportPolicy{
+ ExportBucketId: conversion.StringPtr(itemObj["export_bucket_id"].(string)),
+ FrequencyType: conversion.StringPtr(itemObj["frequency_type"].(string)),
}
- return nil
}
func ExpandPolicyItems(items []any, frequencyType string) *[]admin.DiskBackupApiPolicyItem {
@@ -742,69 +632,11 @@ func policyItemID(policyState map[string]any) *string {
return nil
}
-func shouldUseOldAPI(d *schema.ResourceData, isCreate bool) (bool, error) {
- copySettings := d.Get("copy_settings")
- if isCopySettingsNonEmptyOrChanged(d) {
- return CheckCopySettingsToUseOldAPI(copySettings.([]any), isCreate)
- }
- return false, nil
-}
-
func isCopySettingsNonEmptyOrChanged(d *schema.ResourceData) bool {
copySettings := d.Get("copy_settings")
return copySettings != nil && (conversion.HasElementsSliceOrMap(copySettings) || d.HasChange("copy_settings"))
}
-// CheckCopySettingsToUseOldAPI verifies that all elements in tfList use either `replication_spec_id` or `zone_id`
-// Returns an error if any element has both `replication_spec_id` and `zone_id` set during create
-// and returns a bool if the old API should be used or not
-func CheckCopySettingsToUseOldAPI(tfList []any, isCreate bool) (bool, error) {
- allHaveRepID := true
-
- for _, tfMapRaw := range tfList {
- tfMap, ok := tfMapRaw.(map[string]any)
- if !ok {
- return false, fmt.Errorf("element is not a valid map[string]any")
- }
-
- repSpecID, repOk := tfMap["replication_spec_id"].(string)
- zoneID, zoneOk := tfMap["zone_id"].(string)
-
- if repOk && repSpecID != "" && zoneOk && zoneID != "" {
- if isCreate {
- return false, fmt.Errorf("both 'replication_spec_id' and 'zone_id' cannot be set")
- }
- return false, nil
- }
-
- if (repOk && repSpecID != "" && zoneOk && zoneID != "") || (!repOk && !zoneOk) {
- return false, fmt.Errorf("each element must have either 'replication_spec_id' or 'zone_id' set")
- }
-
- if !repOk || repSpecID == "" {
- allHaveRepID = false
- }
- }
-
- if allHaveRepID {
- return true, nil
- }
- return false, nil
-}
-
-func getRequestPoliciesOldSDK(policiesItem []admin20240530.DiskBackupApiPolicyItem, respPolicies []admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy {
- if len(policiesItem) > 0 {
- policy := admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy{
- PolicyItems: &policiesItem,
- }
- if len(respPolicies) == 1 {
- policy.Id = respPolicies[0].Id
- }
- return &[]admin20240530.AdvancedDiskBackupSnapshotSchedulePolicy{policy}
- }
- return nil
-}
-
func getRequestPolicies(policiesItem []admin.DiskBackupApiPolicyItem, respPolicies []admin.AdvancedDiskBackupSnapshotSchedulePolicy) *[]admin.AdvancedDiskBackupSnapshotSchedulePolicy {
if len(policiesItem) > 0 {
policy := admin.AdvancedDiskBackupSnapshotSchedulePolicy{
diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go
index d5cef4cb6f..8475d75015 100644
--- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go
+++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_migration_test.go
@@ -1,16 +1,21 @@
package cloudbackupschedule_test
import (
+ "os"
"testing"
+ admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
+
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/plancheck"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
)
func TestMigBackupRSCloudBackupSchedule_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
var (
clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true})
useYearly = mig.IsProviderVersionAtLeast("1.16.0") // attribute introduced in this version
@@ -22,7 +27,7 @@ func TestMigBackupRSCloudBackupSchedule_basic(t *testing.T) {
)
resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
+ PreCheck: func() { mig.PreCheckBasicSleep(t); mig.PreCheckOldPreviewEnv(t) },
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
@@ -46,9 +51,10 @@ func TestMigBackupRSCloudBackupSchedule_basic(t *testing.T) {
}
func TestMigBackupRSCloudBackupSchedule_copySettings(t *testing.T) {
- mig.SkipIfVersionBelow(t, "1.16.0") // yearly policy item introduced in this version
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
var (
- clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{
+ lastVersionRepSpecID = os.Getenv("MONGODB_ATLAS_LAST_1X_VERSION")
+ clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{
CloudBackup: true,
ReplicationSpecs: []acc.ReplicationSpecRequest{
{Region: "US_EAST_2"},
@@ -109,15 +115,14 @@ func TestMigBackupRSCloudBackupSchedule_copySettings(t *testing.T) {
checksUpdateWithZoneID := acc.AddAttrSetChecks(resourceName, checksCreate, "copy_settings.0.zone_id")
resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
+ PreCheck: func() { mig.PreCheckBasicSleep(t); mig.PreCheckOldPreviewEnv(t) },
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
- ExternalProviders: mig.ExternalProviders(),
+ ExternalProviders: acc.ExternalProviders(lastVersionRepSpecID),
Config: copySettingsConfigWithRepSpecID,
Check: resource.ComposeAggregateTestCheckFunc(checksCreateWithReplicationSpecID...),
},
- mig.TestStepCheckEmptyPlan(copySettingsConfigWithRepSpecID),
{
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
Config: copySettingsConfigWithZoneID,
@@ -127,3 +132,55 @@ func TestMigBackupRSCloudBackupSchedule_copySettings(t *testing.T) {
},
})
}
+
+func TestMigBackupRSCloudBackupSchedule_export(t *testing.T) {
+	mig.SkipIfVersionBelow(t, "2.0.0") // in 2.0.0 the auto_export_enabled and export fields became optional-only
+ var (
+ clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true, ResourceDependencyName: "mongodbatlas_cloud_backup_snapshot_export_bucket.test"})
+ policyName = acc.RandomName()
+ roleName = acc.RandomIAMRole()
+ bucketName = acc.RandomS3BucketName()
+
+ configWithExport = configExportPolicies(&clusterInfo, policyName, roleName, bucketName, true, true)
+ configWithoutExport = configExportPolicies(&clusterInfo, policyName, roleName, bucketName, false, false)
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: mig.PreCheckBasicSleep(t),
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ // Step 1: Apply config with export and auto_export_enabled (old provider)
+ {
+ ExternalProviders: mig.ExternalProvidersWithAWS(),
+ Config: configWithExport,
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
+ resource.TestCheckResourceAttr(resourceName, "auto_export_enabled", "true"),
+ resource.TestCheckResourceAttr(resourceName, "export.#", "1"),
+ ),
+ },
+			// Step 2: Re-apply the same config with export and auto_export_enabled, expect empty plan (old provider)
+ {
+ ExternalProviders: mig.ExternalProvidersWithAWS(),
+ Config: configWithExport,
+ ConfigPlanChecks: resource.ConfigPlanChecks{
+ PreApply: []plancheck.PlanCheck{
+ plancheck.ExpectEmptyPlan(),
+ },
+ },
+ },
+ // Step 3: Apply config without export and auto_export_enabled (new provider)
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configWithoutExport,
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
+ resource.TestCheckResourceAttr(resourceName, "auto_export_enabled", "false"),
+ resource.TestCheckResourceAttr(resourceName, "export.#", "0"),
+ ),
+ },
+ },
+ })
+}
diff --git a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go
index 610e7dc3ab..9bc22321c3 100644
--- a/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go
+++ b/internal/service/cloudbackupschedule/resource_cloud_backup_schedule_test.go
@@ -5,13 +5,14 @@ import (
"fmt"
"testing"
+ admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
+
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cloudbackupschedule"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
- admin20240530 "go.mongodb.org/atlas-sdk/v20240530005/admin"
)
var (
@@ -165,7 +166,7 @@ func TestAccBackupRSCloudBackupSchedule_export(t *testing.T) {
Steps: []resource.TestStep{
{
- Config: configExportPolicies(&clusterInfo, policyName, roleName, bucketName),
+ Config: configExportPolicies(&clusterInfo, policyName, roleName, bucketName, true, true),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
@@ -179,6 +180,15 @@ func TestAccBackupRSCloudBackupSchedule_export(t *testing.T) {
resource.TestCheckResourceAttr(resourceName, "policy_item_daily.0.retention_value", "4"),
),
},
+ {
+ Config: configExportPolicies(&clusterInfo, policyName, roleName, bucketName, false, false),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttr(resourceName, "cluster_name", clusterInfo.Name),
+ resource.TestCheckResourceAttr(resourceName, "auto_export_enabled", "false"),
+ resource.TestCheckResourceAttr(resourceName, "export.#", "0"),
+ ),
+ },
},
})
}
@@ -251,92 +261,6 @@ func TestAccBackupRSCloudBackupSchedule_onePolicy(t *testing.T) {
})
}
-func TestAccBackupRSCloudBackupSchedule_copySettings_repSpecId(t *testing.T) {
- var (
- clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{
- CloudBackup: true,
- ReplicationSpecs: []acc.ReplicationSpecRequest{
- {Region: "US_EAST_2"},
- },
- PitEnabled: true, // you cannot copy oplogs when pit is not enabled
- })
- clusterName = clusterInfo.Name
- terraformStr = clusterInfo.TerraformStr
- clusterResourceName = clusterInfo.ResourceName
- projectID = clusterInfo.ProjectID
- checkMap = map[string]string{
- "cluster_name": clusterName,
- "reference_hour_of_day": "3",
- "reference_minute_of_hour": "45",
- "restore_window_days": "1",
- "policy_item_hourly.#": "1",
- "policy_item_daily.#": "1",
- "policy_item_weekly.#": "1",
- "policy_item_monthly.#": "1",
- "policy_item_yearly.#": "1",
- "policy_item_hourly.0.frequency_interval": "1",
- "policy_item_hourly.0.retention_unit": "days",
- "policy_item_hourly.0.retention_value": "1",
- "policy_item_daily.0.frequency_interval": "1",
- "policy_item_daily.0.retention_unit": "days",
- "policy_item_daily.0.retention_value": "2",
- "policy_item_weekly.0.frequency_interval": "4",
- "policy_item_weekly.0.retention_unit": "weeks",
- "policy_item_weekly.0.retention_value": "3",
- "policy_item_monthly.0.frequency_interval": "5",
- "policy_item_monthly.0.retention_unit": "months",
- "policy_item_monthly.0.retention_value": "4",
- "policy_item_yearly.0.frequency_interval": "1",
- "policy_item_yearly.0.retention_unit": "years",
- "policy_item_yearly.0.retention_value": "1",
- }
- copySettingsChecks = map[string]string{
- "copy_settings.#": "1",
- "copy_settings.0.cloud_provider": "AWS",
- "copy_settings.0.region_name": "US_EAST_1",
- "copy_settings.0.should_copy_oplogs": "true",
- }
- emptyCopySettingsChecks = map[string]string{
- "copy_settings.#": "0",
- }
- )
- checksDefaultRS := acc.AddAttrChecks(resourceName, []resource.TestCheckFunc{checkExists(resourceName)}, checkMap)
- checksCreateRS := acc.AddAttrChecks(resourceName, checksDefaultRS, copySettingsChecks)
- checksCreateAll := acc.AddAttrSetChecks(resourceName, checksCreateRS, "copy_settings.0.replication_spec_id")
-
- checksDefaultDS := acc.AddAttrChecks(dataSourceName, []resource.TestCheckFunc{}, checkMap)
- checksCreateDS := acc.AddAttrChecks(dataSourceName, checksDefaultDS, copySettingsChecks)
- checksCreateDSAll := acc.AddAttrSetChecks(dataSourceName, checksCreateDS, "copy_settings.0.replication_spec_id")
-
- checksCreateAll = append(checksCreateAll, checksCreateDSAll...)
-
- checksUpdate := acc.AddAttrChecks(resourceName, checksDefaultRS, emptyCopySettingsChecks)
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: acc.PreCheckBasicSleep(t, &clusterInfo, "", ""),
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- Config: configCopySettings(terraformStr, projectID, clusterResourceName, false, true, &admin20240530.DiskBackupSnapshotSchedule{
- ReferenceHourOfDay: conversion.Pointer(3),
- ReferenceMinuteOfHour: conversion.Pointer(45),
- RestoreWindowDays: conversion.Pointer(1),
- }),
- Check: resource.ComposeAggregateTestCheckFunc(checksCreateAll...),
- },
- {
- Config: configCopySettings(terraformStr, projectID, clusterResourceName, true, true, &admin20240530.DiskBackupSnapshotSchedule{
- ReferenceHourOfDay: conversion.Pointer(3),
- ReferenceMinuteOfHour: conversion.Pointer(45),
- RestoreWindowDays: conversion.Pointer(1),
- }),
- Check: resource.ComposeAggregateTestCheckFunc(checksUpdate...),
- },
- },
- })
-}
-
func TestAccBackupRSCloudBackupSchedule_copySettings_zoneId(t *testing.T) {
var (
clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{
@@ -525,82 +449,6 @@ func TestAccBackupRSCloudBackupSchedule_azure(t *testing.T) {
})
}
-func TestCheckCopySettingsToUseOldAPI(t *testing.T) {
- testCases := []struct {
- name string
- errMsg string
- tfList []any
- isCreate bool
- expectedShouldUseOldAPI bool
- expectErr bool
- }{
- {
- name: "Valid - all replication_spec_id set",
- tfList: []any{
- map[string]any{"replication_spec_id": "123"},
- map[string]any{"replication_spec_id": "456"},
- },
- isCreate: true,
- expectedShouldUseOldAPI: true,
- expectErr: false,
- },
- {
- name: "Valid - all zone_id set",
- tfList: []any{
- map[string]any{"zone_id": "123"},
- map[string]any{"zone_id": "456"},
- },
- isCreate: true,
- expectedShouldUseOldAPI: false,
- expectErr: false,
- },
- {
- name: "Invalid - both IDs set on Create",
- tfList: []any{
- map[string]any{"replication_spec_id": "123", "zone_id": "zone123"},
- },
- isCreate: true,
- expectedShouldUseOldAPI: false,
- expectErr: true,
- errMsg: "both 'replication_spec_id' and 'zone_id' cannot be set",
- },
- {
- name: "Valid - Both IDs set on Update/Read",
- tfList: []any{
- map[string]any{"replication_spec_id": "123", "zone_id": "zone123"},
- },
- isCreate: false,
- expectedShouldUseOldAPI: false,
- expectErr: false,
- },
- {
- name: "Invalid - neither ID set",
- tfList: []any{
- map[string]any{},
- },
- isCreate: false,
- expectedShouldUseOldAPI: false,
- expectErr: true,
- errMsg: "each element must have either 'replication_spec_id' or 'zone_id' set",
- },
- }
-
- for _, tc := range testCases {
- t.Run(tc.name, func(t *testing.T) {
- result, err := cloudbackupschedule.CheckCopySettingsToUseOldAPI(tc.tfList, tc.isCreate)
- if result != tc.expectedShouldUseOldAPI {
- t.Errorf("%s failed: expected result %v, got %v", tc.name, tc.expectedShouldUseOldAPI, result)
- }
- if (err != nil) != tc.expectErr {
- t.Errorf("%s failed: expected error %v, got %v", tc.name, tc.expectErr, err)
- }
- if err != nil && err.Error() != tc.errMsg {
- t.Errorf("%s failed: expected error message %q, got %q", tc.name, tc.errMsg, err.Error())
- }
- })
- }
-}
-
func checkExists(resourceName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[resourceName]
@@ -748,7 +596,6 @@ func configCopySettings(terraformStr, projectID, clusterResourceName string, emp
dataSourceConfig = `data "mongodbatlas_cloud_backup_schedule" "schedule_test" {
cluster_name = mongodbatlas_cloud_backup_schedule.schedule_test.cluster_name
project_id = mongodbatlas_cloud_backup_schedule.schedule_test.project_id
- use_zone_id_for_copy_settings = true
}`
}
}
@@ -934,12 +781,23 @@ func configAdvancedPolicies(info *acc.ClusterInfo, p *admin20240530.DiskBackupSn
`, info.TerraformNameRef, info.ProjectID, p.GetReferenceHourOfDay(), p.GetReferenceMinuteOfHour(), p.GetRestoreWindowDays())
}
-func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketName string) string {
+func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketName string, includeAutoExport, includeExport bool) string {
+ autoExport := ""
+ export := ""
+ if includeAutoExport {
+ autoExport = "auto_export_enabled = true"
+ }
+ if includeExport {
+ export = `export {
+ export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id
+ frequency_type = "monthly"
+ }`
+ }
return info.TerraformStr + fmt.Sprintf(`
resource "mongodbatlas_cloud_backup_schedule" "schedule_test" {
cluster_name = %[1]s
project_id = %[2]q
- auto_export_enabled = true
+ %[6]s
reference_hour_of_day = 20
reference_minute_of_hour = "05"
restore_window_days = 4
@@ -966,10 +824,7 @@ func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketNam
retention_value = 4
}
- export {
- export_bucket_id = mongodbatlas_cloud_backup_snapshot_export_bucket.test.export_bucket_id
- frequency_type = "monthly"
- }
+ %[7]s
}
resource "aws_s3_bucket" "backup" {
@@ -1040,7 +895,7 @@ func configExportPolicies(info *acc.ClusterInfo, policyName, roleName, bucketNam
}
EOF
}
- `, info.TerraformNameRef, info.ProjectID, policyName, roleName, bucketName)
+ `, info.TerraformNameRef, info.ProjectID, policyName, roleName, bucketName, autoExport, export)
}
func importStateIDFunc(resourceName string) resource.ImportStateIdFunc {
diff --git a/internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshot.go b/internal/service/cloudbackupsnapshot/data_source.go
similarity index 100%
rename from internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshot.go
rename to internal/service/cloudbackupsnapshot/data_source.go
diff --git a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot.go b/internal/service/cloudbackupsnapshot/model.go
similarity index 100%
rename from internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot.go
rename to internal/service/cloudbackupsnapshot/model.go
diff --git a/internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go b/internal/service/cloudbackupsnapshot/model_test.go
similarity index 100%
rename from internal/service/cloudbackupsnapshot/model_cloud_backup_snapshot_test.go
rename to internal/service/cloudbackupsnapshot/model_test.go
diff --git a/internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshots.go b/internal/service/cloudbackupsnapshot/plural_data_source.go
similarity index 100%
rename from internal/service/cloudbackupsnapshot/data_source_cloud_backup_snapshots.go
rename to internal/service/cloudbackupsnapshot/plural_data_source.go
diff --git a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go b/internal/service/cloudbackupsnapshot/resource.go
similarity index 86%
rename from internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go
rename to internal/service/cloudbackupsnapshot/resource.go
index c011439cfc..0668c42359 100644
--- a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot.go
+++ b/internal/service/cloudbackupsnapshot/resource.go
@@ -10,6 +10,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -19,9 +20,9 @@ import (
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- DeleteContext: resourceDelete,
+ CreateWithoutTimeout: resourceCreate,
+ ReadWithoutTimeout: resourceRead,
+ DeleteWithoutTimeout: resourceDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceImport,
},
@@ -121,10 +122,20 @@ func Resource() *schema.Resource {
Type: schema.TypeString,
},
},
+ "delete_on_create_timeout": { // Don't use Default: true to avoid unplanned changes when upgrading from previous versions.
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+				Description: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the call returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
+ },
},
}
}
+const (
+ oneMinute = 1 * time.Minute
+)
+
func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
connV2 := meta.(*config.MongoDBClient).AtlasV2
groupID := d.Get("project_id").(string)
@@ -155,12 +166,20 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
Target: []string{"completed", "failed"},
Refresh: resourceRefreshFunc(ctx, requestParams, connV2),
Timeout: d.Timeout(schema.TimeoutCreate) - time.Minute,
- MinTimeout: 60 * time.Second,
- Delay: 1 * time.Minute,
+ MinTimeout: oneMinute,
+ Delay: oneMinute,
}
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(err)
+ _, errWait := stateConf.WaitForStateContext(ctx)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
+ }
+ errWait = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, errWait, func(ctxCleanup context.Context) error {
+ _, errCleanup := connV2.CloudBackupsApi.DeleteClusterBackupSnapshot(ctxCleanup, groupID, clusterName, snapshot.GetId()).Execute()
+ return errCleanup
+ })
+ if errWait != nil {
+ return diag.Errorf("error creating a snapshot: %s", errWait)
}
d.SetId(conversion.EncodeStateID(map[string]string{
diff --git a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go b/internal/service/cloudbackupsnapshot/resource_migration_test.go
similarity index 88%
rename from internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go
rename to internal/service/cloudbackupsnapshot/resource_migration_test.go
index 3bac919f4b..f4c43dc38f 100644
--- a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_migration_test.go
+++ b/internal/service/cloudbackupsnapshot/resource_migration_test.go
@@ -4,11 +4,13 @@ import (
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
)
func TestMigBackupRSCloudBackupSnapshot_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
var (
clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true})
description = "My description in my cluster"
@@ -17,7 +19,7 @@ func TestMigBackupRSCloudBackupSnapshot_basic(t *testing.T) {
)
resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
+ PreCheck: func() { mig.PreCheckBasicSleep(t); mig.PreCheckOldPreviewEnv(t) },
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
@@ -42,6 +44,7 @@ func TestMigBackupRSCloudBackupSnapshot_basic(t *testing.T) {
}
func TestMigBackupRSCloudBackupSnapshot_sharded(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
var (
projectID = acc.ProjectIDExecution(t)
clusterName = acc.RandomClusterName()
@@ -51,7 +54,7 @@ func TestMigBackupRSCloudBackupSnapshot_sharded(t *testing.T) {
)
resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
+ PreCheck: func() { mig.PreCheckBasicSleep(t); mig.PreCheckOldPreviewEnv(t) },
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
diff --git a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go b/internal/service/cloudbackupsnapshot/resource_test.go
similarity index 81%
rename from internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go
rename to internal/service/cloudbackupsnapshot/resource_test.go
index 51781dbd3e..8883ff67ba 100644
--- a/internal/service/cloudbackupsnapshot/resource_cloud_backup_snapshot_test.go
+++ b/internal/service/cloudbackupsnapshot/resource_test.go
@@ -3,10 +3,12 @@ package cloudbackupsnapshot_test
import (
"context"
"fmt"
+ "regexp"
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -99,6 +101,26 @@ func TestAccBackupRSCloudBackupSnapshot_sharded(t *testing.T) {
})
}
+func TestAccBackupRSCloudBackupSnapshot_deleteOnCreateTimeout(t *testing.T) {
+ var (
+ clusterInfo = acc.GetClusterInfo(t, &acc.ClusterRequest{CloudBackup: true})
+ description = "Timeout test snapshot"
+ retentionInDays = "1"
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: acc.PreCheckBasicSleep(t, &clusterInfo, "", ""),
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configCreateTimeoutAndDeleteOnCreateTimeout(&clusterInfo, description, retentionInDays),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
func checkExists(resourceName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[resourceName]
@@ -185,24 +207,51 @@ func configSharded(projectID, clusterName, description, retentionInDays string)
cluster_type = "SHARDED"
backup_enabled = true
- replication_specs {
- num_shards = 3
-
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- analytics_specs {
+ analytics_specs = {
instance_size = "M10"
node_count = 1
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
-
- }
+ }]
+ },
+ {
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ }]
+ },
+ {
+ region_configs = [{
+ electable_specs = {
+ instance_size = "M10"
+ node_count = 3
+ }
+ analytics_specs = {
+ instance_size = "M10"
+ node_count = 1
+ }
+ provider_name = "AWS"
+ priority = 7
+ region_name = "US_EAST_1"
+ }]
+ }]
}
resource "mongodbatlas_cloud_backup_snapshot" "test" {
@@ -220,3 +269,19 @@ func configSharded(projectID, clusterName, description, retentionInDays string)
`, projectID, clusterName, description, retentionInDays)
}
+
+func configCreateTimeoutAndDeleteOnCreateTimeout(info *acc.ClusterInfo, description, retentionInDays string) string {
+ return info.TerraformStr + fmt.Sprintf(`
+ resource "mongodbatlas_cloud_backup_snapshot" "test" {
+ cluster_name = %[1]s
+ project_id = %[2]q
+ description = %[3]q
+ retention_in_days = %[4]q
+ delete_on_create_timeout = true
+
+ timeouts {
+ create = "10s"
+ }
+ }
+ `, info.TerraformNameRef, info.ProjectID, description, retentionInDays)
+}
diff --git a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go
index e6ed4c4713..30309f8417 100644
--- a/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go
+++ b/internal/service/cloudbackupsnapshotexportbucket/resource_cloud_backup_snapshot_export_bucket_test.go
@@ -8,6 +8,7 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -19,11 +20,11 @@ var (
)
func TestAccBackupSnapshotExportBucket_basicAWS(t *testing.T) {
- resource.ParallelTest(t, *basicAWSTestCase(t))
+ resource.Test(t, *basicAWSTestCase(t))
}
func TestAccBackupSnapshotExportBucket_basicAzure(t *testing.T) {
- resource.ParallelTest(t, *basicAzureTestCase(t))
+ resource.Test(t, *basicAzureTestCase(t))
}
func basicAWSTestCase(tb testing.TB) *resource.TestCase {
diff --git a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_migration_test.go b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_migration_test.go
index 78e47031f6..1a8bd2dca5 100644
--- a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_migration_test.go
+++ b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_migration_test.go
@@ -7,6 +7,6 @@ import (
)
func TestMigBackupSnapshotExportJob_basic(t *testing.T) {
- mig.SkipIfVersionBelow(t, "1.16.1")
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
mig.CreateTestAndRunUseExternalProviderNonParallel(t, basicTestCase(t), mig.ExternalProvidersWithAWS(), nil)
}
diff --git a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go
index b2501a97da..96e56eabd6 100644
--- a/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go
+++ b/internal/service/cloudbackupsnapshotexportjob/resource_cloud_backup_snapshot_export_job_test.go
@@ -7,7 +7,9 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
)
var (
@@ -54,7 +56,7 @@ func basicTestCase(tb testing.TB) *resource.TestCase {
checks = acc.AddAttrChecks(dataSourcePluralName, checks, attrsPluralDS)
return &resource.TestCase{
- PreCheck: acc.PreCheckBasicSleep(tb, &clusterInfo, "", ""),
+ PreCheck: func() { acc.PreCheckBasicSleep(tb, &clusterInfo, "", ""); mig.PreCheckOldPreviewEnv(tb) },
ExternalProviders: acc.ExternalProvidersOnlyAWS(),
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
Steps: []resource.TestStep{
diff --git a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_migration_test.go b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_migration_test.go
index 924e0f955e..0caf7b8654 100644
--- a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_migration_test.go
+++ b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_migration_test.go
@@ -7,6 +7,6 @@ import (
)
func TestMigCloudBackupSnapshotRestoreJob_basic(t *testing.T) {
- mig.SkipIfVersionBelow(t, "1.22.0") // this is when the new `failed` field was added
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
mig.CreateAndRunTest(t, basicTestCase(t))
}
diff --git a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go
index b62442afae..a779fb841f 100644
--- a/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go
+++ b/internal/service/cloudbackupsnapshotrestorejob/resource_cloud_backup_snapshot_restore_job_test.go
@@ -9,8 +9,10 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
)
const (
@@ -75,7 +77,7 @@ func basicTestCase(tb testing.TB) *resource.TestCase {
)
return &resource.TestCase{
- PreCheck: acc.PreCheckBasicSleep(tb, &clusterInfo, "", ""),
+ PreCheck: func() { acc.PreCheckBasicSleep(tb, &clusterInfo, "", ""); mig.PreCheckOldPreviewEnv(tb) },
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
diff --git a/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go b/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go
index f0eb1f5ba5..f7ed9b76ab 100644
--- a/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go
+++ b/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup.go
@@ -11,6 +11,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
@@ -34,13 +35,16 @@ const (
func ResourceSetup() *schema.Resource {
return &schema.Resource{
- ReadContext: resourceCloudProviderAccessSetupRead,
- CreateContext: resourceCloudProviderAccessSetupCreate,
- UpdateContext: resourceCloudProviderAccessAuthorizationPlaceHolder,
- DeleteContext: resourceCloudProviderAccessSetupDelete,
+ ReadWithoutTimeout: resourceCloudProviderAccessSetupRead,
+ CreateWithoutTimeout: resourceCloudProviderAccessSetupCreate,
+ UpdateWithoutTimeout: resourceCloudProviderAccessAuthorizationPlaceHolder,
+ DeleteWithoutTimeout: resourceCloudProviderAccessSetupDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceCloudProviderAccessSetupImportState,
},
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(defaultTimeout),
+ },
Schema: map[string]*schema.Schema{
"project_id": {
@@ -116,6 +120,11 @@ func ResourceSetup() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
+ "delete_on_create_timeout": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Description: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately, without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that the resource deletion can finish. Default is `true`.",
+ },
},
}
}
@@ -181,7 +190,11 @@ func resourceCloudProviderAccessSetupCreate(ctx context.Context, d *schema.Resou
}
if role.ProviderName == constant.GCP {
- r, err := waitForGCPProviderAccessCompletion(ctx, projectID, resourceID, conn)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
+ }
+ r, err := waitForGCPProviderAccessCompletion(ctx, projectID, resourceID, conn, d.Timeout(schema.TimeoutCreate), deleteOnCreateTimeout)
if err != nil {
return diag.FromErr(err)
}
@@ -209,7 +222,7 @@ func resourceCloudProviderAccessSetupCreate(ctx context.Context, d *schema.Resou
return nil
}
-func waitForGCPProviderAccessCompletion(ctx context.Context, projectID, resourceID string, conn *admin.APIClient) (*admin.CloudProviderAccessRole, error) {
+func waitForGCPProviderAccessCompletion(ctx context.Context, projectID, resourceID string, conn *admin.APIClient, timeout time.Duration, deleteOnCreateTimeout bool) (*admin.CloudProviderAccessRole, error) {
requestParams := &admin.GetCloudProviderAccessApiParams{
RoleId: resourceID,
GroupId: projectID,
@@ -219,12 +232,16 @@ func waitForGCPProviderAccessCompletion(ctx context.Context, projectID, resource
Pending: []string{"IN_PROGRESS", "NOT_INITIATED"},
Target: []string{"COMPLETE", "FAILED"},
Refresh: resourceRefreshFunc(ctx, requestParams, conn),
- Timeout: defaultTimeout,
+ Timeout: timeout,
MinTimeout: 60 * time.Second,
Delay: 30 * time.Second,
}
finalResponse, err := stateConf.WaitForStateContext(ctx)
+ err = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, err, func(ctxCleanup context.Context) error {
+ _, errCleanup := conn.CloudProviderAccessApi.DeauthorizeProviderAccessRole(ctxCleanup, projectID, constant.GCP, resourceID).Execute()
+ return errCleanup
+ })
if err != nil {
return nil, err
}
diff --git a/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup_test.go b/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup_test.go
index 8266f15f9d..a2097afb91 100644
--- a/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup_test.go
+++ b/internal/service/cloudprovideraccess/resource_cloud_provider_access_setup_test.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"os"
+ "regexp"
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
@@ -17,6 +18,10 @@ func TestAccCloudProviderAccessSetupAWS_basic(t *testing.T) {
resource.ParallelTest(t, *basicSetupTestCase(t))
}
+func TestAccCloudProviderAccessSetupAWS_createTimeoutWithDeleteOnCreateTimeout(t *testing.T) {
+ resource.Test(t, *basicSetupTestCaseWithDeleteOnCreateTimeout(t))
+}
+
const (
cloudProviderAzureDataSource = `
data "mongodbatlas_cloud_provider_access_setup" "test" {
@@ -123,6 +128,26 @@ func basicSetupTestCase(tb testing.TB) *resource.TestCase {
}
}
+func basicSetupTestCaseWithDeleteOnCreateTimeout(tb testing.TB) *resource.TestCase {
+ tb.Helper()
+
+ var (
+ projectID = acc.ProjectIDExecution(tb)
+ )
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(tb) },
+ CheckDestroy: checkDestroy,
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configSetupGCPWithTimeoutAndDeleteOnCreateTimeout(projectID),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ }
+}
+
func configSetupAWS(projectID string) string {
return fmt.Sprintf(`
resource "mongodbatlas_cloud_provider_access_setup" "test" {
@@ -139,6 +164,19 @@ func configSetupAWS(projectID string) string {
`, projectID)
}
+func configSetupGCPWithTimeoutAndDeleteOnCreateTimeout(projectID string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_cloud_provider_access_setup" "test" {
+ project_id = %[1]q
+ provider_name = "GCP"
+ delete_on_create_timeout = true
+ timeouts {
+ create = "1s"
+ }
+ }
+ `, projectID)
+}
+
func configSetupGCP(projectID string) string {
return fmt.Sprintf(`
resource "mongodbatlas_cloud_provider_access_setup" "test" {
diff --git a/internal/service/clouduserorgassignment/data_source.go b/internal/service/clouduserorgassignment/data_source.go
new file mode 100644
index 0000000000..6c2bbe4243
--- /dev/null
+++ b/internal/service/clouduserorgassignment/data_source.go
@@ -0,0 +1,85 @@
+package clouduserorgassignment
+
+import (
+ "context"
+ "fmt"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
+ "github.com/hashicorp/terraform-plugin-framework/datasource"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+var _ datasource.DataSource = &cloudUserOrgAssignmentDS{}
+var _ datasource.DataSourceWithConfigure = &cloudUserOrgAssignmentDS{}
+
+func DataSource() datasource.DataSource {
+ return &cloudUserOrgAssignmentDS{
+ DSCommon: config.DSCommon{
+ DataSourceName: resourceName,
+ },
+ }
+}
+
+type cloudUserOrgAssignmentDS struct {
+ config.DSCommon
+}
+
+func (d *cloudUserOrgAssignmentDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
+ resp.Schema = dataSourceSchema()
+}
+
+func (d *cloudUserOrgAssignmentDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
+ var cloudUserOrgAssignmentConfig TFModel
+ resp.Diagnostics.Append(req.Config.Get(ctx, &cloudUserOrgAssignmentConfig)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := d.Client.AtlasV2
+ orgID := cloudUserOrgAssignmentConfig.OrgId.ValueString()
+ username := cloudUserOrgAssignmentConfig.Username.ValueString()
+ userID := cloudUserOrgAssignmentConfig.UserId.ValueString()
+
+ if username == "" && userID == "" {
+ resp.Diagnostics.AddError("invalid configuration", "either username or user_id must be provided")
+ return
+ }
+
+ var orgUser *admin.OrgUserResponse
+ var err error
+
+ if userID != "" {
+ orgUser, _, err = connV2.MongoDBCloudUsersApi.GetOrgUser(ctx, orgID, userID).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error retrieving resource by user_id: %s", userID), err.Error())
+ return
+ }
+ } else {
+ params := &admin.ListOrgUsersApiParams{
+ OrgId: orgID,
+ Username: &username,
+ }
+ usersResp, _, err := connV2.MongoDBCloudUsersApi.ListOrgUsersWithParams(ctx, params).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error retrieving resource by username: %s", username), err.Error())
+ return
+ }
+
+ if usersResp == nil || usersResp.Results == nil || len(*usersResp.Results) == 0 {
+ resp.Diagnostics.AddError("resource not found", "no user found with the specified username")
+ return
+ }
+
+ orgUser = &(*usersResp.Results)[0]
+ }
+
+ tfModel, diags := NewTFModel(ctx, orgUser, cloudUserOrgAssignmentConfig.OrgId.ValueString())
+ resp.Diagnostics.Append(diags...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ resp.Diagnostics.Append(resp.State.Set(ctx, tfModel)...)
+}
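For context, the `Read` method above resolves a user either by `user_id` (direct `GetOrgUser` lookup) or by `username` (via `ListOrgUsersWithParams`), and errors if neither is set. A minimal, hypothetical configuration exercising both paths (all values are placeholders):

```terraform
data "mongodbatlas_cloud_user_org_assignment" "by_username" {
  org_id   = var.org_id
  username = "jdoe@example.com" # placeholder
}

data "mongodbatlas_cloud_user_org_assignment" "by_user_id" {
  org_id  = var.org_id
  user_id = var.user_id
}
```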
diff --git a/internal/service/privatelinkendpointserverless/main_test.go b/internal/service/clouduserorgassignment/main_test.go
similarity index 84%
rename from internal/service/privatelinkendpointserverless/main_test.go
rename to internal/service/clouduserorgassignment/main_test.go
index f56f5bcd94..996868ef1a 100644
--- a/internal/service/privatelinkendpointserverless/main_test.go
+++ b/internal/service/clouduserorgassignment/main_test.go
@@ -1,4 +1,4 @@
-package privatelinkendpointserverless_test
+package clouduserorgassignment_test
import (
"os"
diff --git a/internal/service/clouduserorgassignment/model.go b/internal/service/clouduserorgassignment/model.go
new file mode 100644
index 0000000000..77b8577433
--- /dev/null
+++ b/internal/service/clouduserorgassignment/model.go
@@ -0,0 +1,126 @@
+package clouduserorgassignment
+
+import (
+ "context"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/hashicorp/terraform-plugin-framework/types/basetypes"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+)
+
+func NewTFModel(ctx context.Context, apiResp *admin.OrgUserResponse, orgID string) (*TFModel, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+ var rolesObj types.Object
+ var rolesDiags diag.Diagnostics
+
+ if apiResp == nil {
+ return nil, diags
+ }
+
+ rolesObj, rolesDiags = NewTFRoles(ctx, &apiResp.Roles)
+ diags.Append(rolesDiags...)
+
+ teamIDs := conversion.TFSetValueOrNull(ctx, apiResp.TeamIds, types.StringType)
+
+ return &TFModel{
+ OrgId: types.StringValue(orgID),
+ Country: types.StringPointerValue(apiResp.Country),
+ CreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.CreatedAt)),
+ FirstName: types.StringPointerValue(apiResp.FirstName),
+ UserId: types.StringValue(apiResp.GetId()),
+ InvitationCreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.InvitationCreatedAt)),
+ InvitationExpiresAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.InvitationExpiresAt)),
+ InviterUsername: types.StringPointerValue(apiResp.InviterUsername),
+ LastAuth: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.LastAuth)),
+ LastName: types.StringPointerValue(apiResp.LastName),
+ MobileNumber: types.StringPointerValue(apiResp.MobileNumber),
+ OrgMembershipStatus: types.StringValue(apiResp.GetOrgMembershipStatus()),
+ Roles: rolesObj,
+ TeamIds: teamIDs,
+ Username: types.StringValue(apiResp.GetUsername()),
+ }, diags
+}
+
+func NewTFRoles(ctx context.Context, roles *admin.OrgUserRolesResponse) (types.Object, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+ if roles == nil {
+ return types.ObjectNull(RolesObjectAttrTypes), diags
+ }
+ orgRoles := conversion.TFSetValueOrNull(ctx, roles.OrgRoles, types.StringType)
+ praList := NewTFProjectRoleAssignments(ctx, roles.GroupRoleAssignments)
+ rolesObj, _ := types.ObjectValue(
+ RolesObjectAttrTypes,
+ map[string]attr.Value{
+ "org_roles": orgRoles,
+ "project_role_assignments": praList,
+ },
+ )
+ return rolesObj, diags
+}
+
+func NewTFProjectRoleAssignments(ctx context.Context, groupRoleAssignments *[]admin.GroupRoleAssignment) types.List {
+ if groupRoleAssignments == nil {
+ return types.ListNull(ProjectRoleAssignmentsAttrType)
+ }
+
+ var projectRoleAssignments []TFRolesProjectRoleAssignmentsModel
+
+ for _, pra := range *groupRoleAssignments {
+ projectID := types.StringPointerValue(pra.GroupId)
+ projectRoles := conversion.TFSetValueOrNull(ctx, pra.GroupRoles, types.StringType)
+
+ projectRoleAssignments = append(projectRoleAssignments, TFRolesProjectRoleAssignmentsModel{
+ ProjectId: projectID,
+ ProjectRoles: projectRoles,
+ })
+ }
+
+ praList, _ := types.ListValueFrom(ctx, ProjectRoleAssignmentsAttrType.ElemType.(types.ObjectType), projectRoleAssignments)
+ return praList
+}
+
+func NewOrgUserReq(ctx context.Context, plan *TFModel) (*admin.OrgUserRequest, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+ roles, rolesDiags := NewOrgUserRolesRequest(ctx, plan.Roles)
+ diags.Append(rolesDiags...)
+ return &admin.OrgUserRequest{
+ Roles: *roles,
+ Username: plan.Username.ValueString(),
+ }, diags
+}
+
+func NewAtlasUpdateReq(ctx context.Context, plan *TFModel) (*admin.OrgUserUpdateRequest, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+ roles, rolesDiags := NewOrgUserRolesRequest(ctx, plan.Roles)
+ diags.Append(rolesDiags...)
+
+ return &admin.OrgUserUpdateRequest{
+ Roles: roles,
+ }, diags
+}
+
+func NewOrgUserRolesRequest(ctx context.Context, rolesObj types.Object) (*admin.OrgUserRolesRequest, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+ if rolesObj.IsNull() || rolesObj.IsUnknown() {
+ return &admin.OrgUserRolesRequest{
+ OrgRoles: nil,
+ }, diags
+ }
+ var rolesModel TFRolesModel
+ diags.Append(rolesObj.As(ctx, &rolesModel, basetypes.ObjectAsOptions{})...)
+ var orgRoles []string
+ if !rolesModel.OrgRoles.IsNull() && !rolesModel.OrgRoles.IsUnknown() {
+ rolesModel.OrgRoles.ElementsAs(ctx, &orgRoles, false)
+ } else {
+ orgRoles = nil
+ }
+
+ return &admin.OrgUserRolesRequest{
+ OrgRoles: orgRoles,
+ }, diags
+}
diff --git a/internal/service/clouduserorgassignment/model_test.go b/internal/service/clouduserorgassignment/model_test.go
new file mode 100644
index 0000000000..84e6521d2d
--- /dev/null
+++ b/internal/service/clouduserorgassignment/model_test.go
@@ -0,0 +1,264 @@
+package clouduserorgassignment_test
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/stretchr/testify/assert"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/clouduserorgassignment"
+)
+
+const (
+ testUserID = "user-123"
+ testUsername = "jdoe"
+ testFirstName = "John"
+ testLastName = "Doe"
+ testCountry = "CA"
+ testMobile = "+1555123456"
+ testInviter = "admin"
+ testOrgMembershipStatus = "ACTIVE"
+
+ testOrgRoleOwner = "ORG_OWNER"
+ testOrgRoleMember = "ORG_MEMBER"
+ testProjectRoleOwner = "PROJECT_OWNER"
+ testProjectRoleRead = "PROJECT_READ_ONLY"
+ testProjectRoleMember = "PROJECT_MEMBER"
+
+ testProjectID1 = "project1"
+ testProjectID2 = "project2"
+
+ testOrgID = "org-123"
+)
+
+var (
+ when = time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC)
+ testCreatedAt = when.Format(time.RFC3339)
+
+ testLastAuth = when.Add(-2 * time.Hour).Format(time.RFC3339)
+
+ testTeamIDs = []string{"teamA", "teamB"}
+ testOrgRoles = []string{"owner", "readWrite"}
+
+ testOrgRolesMultiple = []string{testOrgRoleOwner, testOrgRoleMember}
+ testProjectRolesSingle = []string{testProjectRoleOwner}
+)
+
+func createRolesObject(ctx context.Context, orgRoles []string, projectAssignments []clouduserorgassignment.TFRolesProjectRoleAssignmentsModel) types.Object {
+ orgRolesSet, _ := types.SetValueFrom(ctx, types.StringType, orgRoles)
+ var praList types.List
+ if len(projectAssignments) == 0 {
+ praList = types.ListNull(clouduserorgassignment.ProjectRoleAssignmentsAttrType.ElemType.(types.ObjectType))
+ } else {
+ praList, _ = types.ListValueFrom(ctx,
+ clouduserorgassignment.ProjectRoleAssignmentsAttrType.ElemType.(types.ObjectType),
+ projectAssignments,
+ )
+ }
+ obj, _ := types.ObjectValue(
+ clouduserorgassignment.RolesObjectAttrTypes,
+ map[string]attr.Value{
+ "org_roles": orgRolesSet,
+ "project_role_assignments": praList,
+ },
+ )
+ return obj
+}
+
+type sdkToTFModelTestCase struct {
+ SDKResp *admin.OrgUserResponse
+ expectedTFModel *clouduserorgassignment.TFModel
+}
+
+func TestNewTFModel_SDKToTFModel(t *testing.T) {
+ ctx := t.Context()
+
+ fullResp := &admin.OrgUserResponse{
+ Id: testUserID,
+ Username: testUsername,
+ FirstName: admin.PtrString(testFirstName),
+ LastName: admin.PtrString(testLastName),
+ Country: admin.PtrString(testCountry),
+ MobileNumber: admin.PtrString(testMobile),
+ OrgMembershipStatus: testOrgMembershipStatus,
+ CreatedAt: admin.PtrTime(when),
+ LastAuth: admin.PtrTime(when.Add(-2 * time.Hour)),
+ TeamIds: &testTeamIDs,
+ Roles: admin.OrgUserRolesResponse{
+ OrgRoles: &testOrgRoles,
+ },
+ }
+
+ orgRolesSet, _ := types.SetValueFrom(ctx, types.StringType, testOrgRoles)
+ expectedRoles, _ := types.ObjectValue(
+ clouduserorgassignment.RolesObjectAttrTypes,
+ map[string]attr.Value{
+ "org_roles": orgRolesSet,
+ "project_role_assignments": types.ListNull(clouduserorgassignment.ProjectRoleAssignmentsAttrType),
+ },
+ )
+
+ expectedTeams, _ := types.SetValueFrom(ctx, types.StringType, testTeamIDs)
+
+ expectedFullModel := &clouduserorgassignment.TFModel{
+ UserId: types.StringValue(testUserID),
+ Username: types.StringValue(testUsername),
+ FirstName: types.StringValue(testFirstName),
+ LastName: types.StringValue(testLastName),
+ Country: types.StringValue(testCountry),
+ MobileNumber: types.StringValue(testMobile),
+ InviterUsername: types.StringNull(),
+ OrgMembershipStatus: types.StringValue(testOrgMembershipStatus),
+ CreatedAt: types.StringValue(testCreatedAt),
+ InvitationCreatedAt: types.StringNull(),
+ InvitationExpiresAt: types.StringNull(),
+ LastAuth: types.StringValue(testLastAuth),
+ Roles: expectedRoles,
+ TeamIds: expectedTeams,
+ OrgId: types.StringValue(testOrgID),
+ }
+
+ testCases := map[string]sdkToTFModelTestCase{
+ "nil SDK response": {
+ SDKResp: nil,
+ expectedTFModel: nil,
+ },
+ "fully populated SDK response": {
+ SDKResp: fullResp,
+ expectedTFModel: expectedFullModel,
+ },
+ }
+
+ for name, tc := range testCases {
+ t.Run(name, func(t *testing.T) {
+ gotModel, diags := clouduserorgassignment.NewTFModel(ctx, tc.SDKResp, testOrgID)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expectedTFModel, gotModel, "TFModel did not match expected")
+ })
+ }
+}
+
+func TestNewOrgUserReq(t *testing.T) {
+ ctx := t.Context()
+
+ singleOrgRole := []string{"owner"}
+ projectAssignment := clouduserorgassignment.TFRolesProjectRoleAssignmentsModel{
+ ProjectId: types.StringValue(testProjectID1),
+ ProjectRoles: types.SetValueMust(types.StringType, []attr.Value{types.StringValue(testProjectRoleOwner)}),
+ }
+
+ testCases := map[string]struct {
+ plan *clouduserorgassignment.TFModel
+ expected *admin.OrgUserRequest
+ }{
+ "with org roles": {
+ plan: &clouduserorgassignment.TFModel{
+ Username: types.StringValue("bob"),
+ Roles: createRolesObject(ctx, singleOrgRole, nil),
+ },
+ expected: &admin.OrgUserRequest{
+ Username: "bob",
+ Roles: admin.OrgUserRolesRequest{OrgRoles: singleOrgRole},
+ },
+ },
+ "with both org roles and project role assignments": {
+ plan: &clouduserorgassignment.TFModel{
+ Username: types.StringValue("alice"),
+ Roles: createRolesObject(ctx, testOrgRolesMultiple, []clouduserorgassignment.TFRolesProjectRoleAssignmentsModel{projectAssignment}),
+ },
+ expected: &admin.OrgUserRequest{
+ Username: "alice",
+ Roles: admin.OrgUserRolesRequest{OrgRoles: testOrgRolesMultiple},
+ },
+ },
+ }
+
+ for name, tc := range testCases {
+ t.Run(name, func(t *testing.T) {
+ req, diags := clouduserorgassignment.NewOrgUserReq(ctx, tc.plan)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expected, req)
+ })
+ }
+}
+
+func TestNewAtlasUpdateReq(t *testing.T) {
+ ctx := t.Context()
+
+ singleOrgRole := []string{"owner"}
+
+ testCases := map[string]struct {
+ plan *clouduserorgassignment.TFModel
+ expected *admin.OrgUserUpdateRequest
+ }{
+ "null roles": {
+ plan: &clouduserorgassignment.TFModel{
+ Roles: types.ObjectNull(clouduserorgassignment.RolesObjectAttrTypes),
+ },
+ expected: &admin.OrgUserUpdateRequest{
+ Roles: &admin.OrgUserRolesRequest{OrgRoles: nil},
+ },
+ },
+ "with org roles": {
+ plan: &clouduserorgassignment.TFModel{
+ Roles: createRolesObject(ctx, singleOrgRole, nil),
+ },
+ expected: &admin.OrgUserUpdateRequest{
+ Roles: &admin.OrgUserRolesRequest{OrgRoles: singleOrgRole},
+ },
+ },
+ }
+
+ for name, tc := range testCases {
+ t.Run(name, func(t *testing.T) {
+ req, diags := clouduserorgassignment.NewAtlasUpdateReq(ctx, tc.plan)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expected, req)
+ })
+ }
+}
+
+func TestNewTFRoles(t *testing.T) {
+ ctx := t.Context()
+
+ testCases := map[string]struct {
+ roles *admin.OrgUserRolesResponse
+ expectedObject types.Object
+ }{
+ "nil roles": {
+ roles: nil,
+ expectedObject: types.ObjectNull(clouduserorgassignment.RolesObjectAttrTypes),
+ },
+
+ "roles with both roles": {
+ roles: &admin.OrgUserRolesResponse{
+ OrgRoles: &testOrgRolesMultiple,
+ GroupRoleAssignments: &[]admin.GroupRoleAssignment{
+ {
+ GroupId: admin.PtrString(testProjectID1),
+ GroupRoles: &testProjectRolesSingle,
+ },
+ },
+ },
+ expectedObject: createRolesObject(ctx, testOrgRolesMultiple, []clouduserorgassignment.TFRolesProjectRoleAssignmentsModel{
+ {
+ ProjectId: types.StringValue(testProjectID1),
+ ProjectRoles: types.SetValueMust(types.StringType, []attr.Value{types.StringValue(testProjectRoleOwner)}),
+ },
+ }),
+ },
+ }
+
+ for name, tc := range testCases {
+ t.Run(name, func(t *testing.T) {
+ obj, diags := clouduserorgassignment.NewTFRoles(ctx, tc.roles)
+ assert.False(t, diags.HasError(), "unexpected diagnostics")
+ assert.Equal(t, tc.expectedObject, obj, "created roles object did not match expected")
+ })
+ }
+}
diff --git a/internal/service/clouduserorgassignment/move_state.go b/internal/service/clouduserorgassignment/move_state.go
new file mode 100644
index 0000000000..5a73d64bfc
--- /dev/null
+++ b/internal/service/clouduserorgassignment/move_state.go
@@ -0,0 +1,77 @@
+package clouduserorgassignment
+
+import (
+ "context"
+ "fmt"
+ "strings"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/resource"
+ "github.com/hashicorp/terraform-plugin-framework/tfsdk"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/hashicorp/terraform-plugin-go/tfprotov6"
+ "github.com/hashicorp/terraform-plugin-go/tftypes"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/schemafunc"
+)
+
+// MoveState is used with the moved block to migrate state from mongodbatlas_org_invitation to mongodbatlas_cloud_user_org_assignment.
+func (r *rs) MoveState(context.Context) []resource.StateMover {
+ return []resource.StateMover{{StateMover: stateMover}}
+}
+
+func stateMover(ctx context.Context, req resource.MoveStateRequest, resp *resource.MoveStateResponse) {
+ if req.SourceTypeName != "mongodbatlas_org_invitation" || !strings.HasSuffix(req.SourceProviderAddress, "/mongodbatlas") {
+ return
+ }
+
+ setStateResponse(ctx, &resp.Diagnostics, req.SourceRawState, &resp.TargetState)
+}
+
+var stateAttrs = map[string]tftypes.Type{
+ "org_id": tftypes.String,
+ "username": tftypes.String,
+ "roles": tftypes.List{ElementType: tftypes.String},
+}
+
+func setStateResponse(ctx context.Context, diags *diag.Diagnostics, stateIn *tfprotov6.RawState, stateOut *tfsdk.State) {
+ rawStateValue, err := stateIn.UnmarshalWithOpts(tftypes.Object{
+ AttributeTypes: stateAttrs,
+ }, tfprotov6.UnmarshalOpts{ValueFromJSONOpts: tftypes.ValueFromJSONOpts{IgnoreUndefinedAttributes: true}})
+ if err != nil {
+ diags.AddError("Unable to Unmarshal state", err.Error())
+ return
+ }
+ var stateObj map[string]tftypes.Value
+ if err := rawStateValue.As(&stateObj); err != nil {
+ diags.AddError("Unable to Parse state", err.Error())
+ return
+ }
+ orgID, username := getOrgIDUsernameRolesFromStateObj(diags, stateObj)
+ if diags.HasError() {
+ return
+ }
+
+ model := TFModel{
+ OrgId: types.StringPointerValue(orgID),
+ Username: types.StringPointerValue(username),
+ Roles: types.ObjectNull(RolesObjectAttrTypes), // Let roles be populated during Read
+ TeamIds: types.SetValueMust(types.StringType, []attr.Value{}), // Empty set for team IDs, will be populated during Read
+ }
+
+ diags.Append(stateOut.Set(ctx, model)...)
+}
+
+func getOrgIDUsernameRolesFromStateObj(diags *diag.Diagnostics, stateObj map[string]tftypes.Value) (orgID, username *string) {
+ orgID = schemafunc.GetAttrFromStateObj[string](stateObj, "org_id")
+ username = schemafunc.GetAttrFromStateObj[string](stateObj, "username")
+ if !conversion.IsStringPresent(orgID) || !conversion.IsStringPresent(username) {
+ diags.AddError("Unable to read org_id or username from state", fmt.Sprintf("org_id: %s, username: %s",
+ conversion.SafeString(orgID), conversion.SafeString(username)))
+ return
+ }
+
+ return orgID, username
+}
diff --git a/internal/service/clouduserorgassignment/move_state_test.go b/internal/service/clouduserorgassignment/move_state_test.go
new file mode 100644
index 0000000000..34fd948e6b
--- /dev/null
+++ b/internal/service/clouduserorgassignment/move_state_test.go
@@ -0,0 +1,70 @@
+package clouduserorgassignment_test
+
+import (
+ "fmt"
+ "os"
+ "strings"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+)
+
+func TestAccCloudUserOrgAssignmentRS_moveFromOrgInvitation(t *testing.T) {
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ username := acc.RandomEmail()
+ roles := []string{"ORG_MEMBER", "ORG_GROUP_CREATOR"}
+ teamsIDs := []string{acc.GetProjectTeamsIDsWithPos(0), acc.GetProjectTeamsIDsWithPos(1)}
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configOrgInvitationFirst(orgID, username, roles, teamsIDs),
+ },
+ {
+ Config: configMoveFromOrgInvitationSecond(orgID, username, roles),
+ Check: resource.ComposeTestCheckFunc(
+ cloudUserOrgAssignmentChecks("mongodbatlas_cloud_user_org_assignment.test", orgID, username, "PENDING", roles),
+ resource.TestCheckResourceAttr("mongodbatlas_cloud_user_org_assignment.test", "team_ids.#", "2"),
+ ),
+ },
+ },
+ })
+}
+
+func configOrgInvitationFirst(orgID, username string, roles, teamsIDs []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+ teamsIDsStr := `"` + strings.Join(teamsIDs, `", "`) + `"`
+
+ return fmt.Sprintf(`
+resource "mongodbatlas_org_invitation" "old" {
+ org_id = "%s"
+ username = "%s"
+ roles = [%s]
+ teams_ids = [%s]
+}
+`, orgID, username, rolesStr, teamsIDsStr)
+}
+
+func configMoveFromOrgInvitationSecond(orgID, username string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+
+ return fmt.Sprintf(`
+moved {
+ from = mongodbatlas_org_invitation.old
+ to = mongodbatlas_cloud_user_org_assignment.test
+}
+
+resource "mongodbatlas_cloud_user_org_assignment" "test" {
+ org_id = "%s"
+ username = "%s"
+ roles = {
+ org_roles = [%s]
+ }
+}
+`, orgID, username, rolesStr)
+}
diff --git a/internal/service/clouduserorgassignment/resource.go b/internal/service/clouduserorgassignment/resource.go
new file mode 100644
index 0000000000..362adefcfd
--- /dev/null
+++ b/internal/service/clouduserorgassignment/resource.go
@@ -0,0 +1,195 @@
+package clouduserorgassignment
+
+import (
+ "context"
+ "fmt"
+ "net/http"
+ "regexp"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
+ "github.com/hashicorp/terraform-plugin-framework/path"
+ "github.com/hashicorp/terraform-plugin-framework/resource"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+const resourceName = "cloud_user_org_assignment"
+
+var _ resource.ResourceWithConfigure = &rs{}
+var _ resource.ResourceWithImportState = &rs{}
+var _ resource.ResourceWithMoveState = &rs{}
+
+func Resource() resource.Resource {
+ return &rs{
+ RSCommon: config.RSCommon{
+ ResourceName: resourceName,
+ },
+ }
+}
+
+type rs struct {
+ config.RSCommon
+}
+
+func (r *rs) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
+ resp.Schema = resourceSchema()
+ conversion.UpdateSchemaDescription(&resp.Schema)
+}
+
+func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
+ var plan TFModel
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ orgID := plan.OrgId.ValueString()
+ orgUserRequest, diags := NewOrgUserReq(ctx, &plan)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ apiResp, _, err := connV2.MongoDBCloudUsersApi.CreateOrgUser(ctx, orgID, orgUserRequest).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error assigning user to OrgID(%s):", orgID), err.Error())
+ return
+ }
+
+ newCloudUserOrgAssignmentModel, diags := NewTFModel(ctx, apiResp, orgID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserOrgAssignmentModel)...)
+}
+
+func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
+ var state TFModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ orgID := state.OrgId.ValueString()
+ var userResp *admin.OrgUserResponse
+ var httpResp *http.Response
+ var err error
+
+ if !state.UserId.IsNull() && state.UserId.ValueString() != "" {
+ userID := state.UserId.ValueString()
+ userResp, httpResp, err = connV2.MongoDBCloudUsersApi.GetOrgUser(ctx, orgID, userID).Execute()
+ if validate.StatusNotFound(httpResp) {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+ } else if !state.Username.IsNull() && state.Username.ValueString() != "" { // required for import
+ username := state.Username.ValueString()
+ params := &admin.ListOrgUsersApiParams{
+ OrgId: orgID,
+ Username: &username,
+ }
+		usersResp, _, listErr := connV2.MongoDBCloudUsersApi.ListOrgUsersWithParams(ctx, params).Execute()
+		err = listErr // assign to the outer err instead of shadowing it so list failures surface below
+		if err == nil && usersResp != nil && usersResp.Results != nil {
+			if len(*usersResp.Results) == 0 {
+				resp.State.RemoveResource(ctx)
+				return
+			}
+			userResp = &(*usersResp.Results)[0]
+		}
+ }
+
+ if err != nil {
+		resp.Diagnostics.AddError(fmt.Sprintf("error fetching user(%s) from OrgID(%s):", state.Username.ValueString(), orgID), err.Error())
+ return
+ }
+
+ newCloudUserOrgAssignmentModel, diags := NewTFModel(ctx, userResp, orgID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserOrgAssignmentModel)...)
+}
+
+func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
+ var plan TFModel
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ orgID := plan.OrgId.ValueString()
+ userID := plan.UserId.ValueString()
+ username := plan.Username.ValueString()
+
+ updateReq, diags := NewAtlasUpdateReq(ctx, &plan)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ apiResp, _, err := connV2.MongoDBCloudUsersApi.UpdateOrgUser(ctx, orgID, userID, updateReq).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error updating user(%s) in OrgID(%s):", username, orgID), err.Error())
+ return
+ }
+
+ newCloudUserOrgAssignmentModel, diags := NewTFModel(ctx, apiResp, orgID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserOrgAssignmentModel)...)
+}
+
+func (r *rs) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
+ var state TFModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ orgID := state.OrgId.ValueString()
+ userID := state.UserId.ValueString()
+ username := state.Username.ValueString()
+
+ httpResp, err := connV2.MongoDBCloudUsersApi.RemoveOrgUser(ctx, orgID, userID).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+ resp.Diagnostics.AddError(fmt.Sprintf("error deleting user(%s) from OrgID(%s):", username, orgID), err.Error())
+ return
+ }
+}
+
+func (r *rs) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
+ importID := req.ID
+ ok, parts := conversion.ImportSplit(req.ID, 2)
+ if !ok {
+ resp.Diagnostics.AddError("invalid import ID format", "expected 'org_id/user_id' or 'org_id/username', got: "+importID)
+ return
+ }
+ orgID, userID := parts[0], parts[1]
+
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("org_id"), orgID)...)
+
+ emailRegex := regexp.MustCompile(`^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$`)
+
+ if emailRegex.MatchString(userID) {
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("username"), userID)...)
+ } else {
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("user_id"), userID)...)
+ }
+}
diff --git a/internal/service/clouduserorgassignment/resource_migration_test.go b/internal/service/clouduserorgassignment/resource_migration_test.go
new file mode 100644
index 0000000000..146619ab85
--- /dev/null
+++ b/internal/service/clouduserorgassignment/resource_migration_test.go
@@ -0,0 +1,12 @@
+package clouduserorgassignment_test
+
+import (
+ "testing"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+func TestMigCloudUserOrgAssignmentRS_basic(t *testing.T) {
+	mig.SkipIfVersionBelow(t, "2.0.0") // version in which this resource was first released
+ mig.CreateAndRunTest(t, basicTestCase(t))
+}
diff --git a/internal/service/clouduserorgassignment/resource_test.go b/internal/service/clouduserorgassignment/resource_test.go
new file mode 100644
index 0000000000..360d42a764
--- /dev/null
+++ b/internal/service/clouduserorgassignment/resource_test.go
@@ -0,0 +1,186 @@
+package clouduserorgassignment_test
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+ "testing"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+)
+
+var resourceName = "mongodbatlas_cloud_user_org_assignment.test"
+
+func TestAccCloudUserOrgAssignmentRS_basic(t *testing.T) {
+ resource.ParallelTest(t, *basicTestCase(t))
+}
+
+func TestAccCloudUserOrgAssignmentDS_basic(t *testing.T) {
+ resource.ParallelTest(t, *dataSourceTestCase(t))
+}
+
+func basicTestCase(t *testing.T) *resource.TestCase {
+ t.Helper()
+
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ username := acc.RandomEmail()
+ roles := []string{"ORG_MEMBER"}
+ rolesUpdated := []string{"ORG_MEMBER", "ORG_GROUP_CREATOR"}
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccCloudUserOrgAssignmentConfig(orgID, username, roles),
+ Check: cloudUserOrgAssignmentChecks(resourceName, orgID, username, "PENDING", roles),
+ },
+ {
+ Config: testAccCloudUserOrgAssignmentConfig(orgID, username, rolesUpdated),
+ Check: cloudUserOrgAssignmentChecks(resourceName, orgID, username, "PENDING", rolesUpdated),
+ },
+ {
+ ResourceName: resourceName,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceName].Primary.Attributes
+ orgID := attrs["org_id"]
+ userID := attrs["user_id"]
+ return orgID + "/" + userID, nil
+ },
+ },
+ {
+ ResourceName: resourceName,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceName].Primary.Attributes
+ orgID := attrs["org_id"]
+ username := attrs["username"]
+ return orgID + "/" + username, nil
+ },
+ },
+ },
+ }
+}
+
+func testAccCloudUserOrgAssignmentConfig(orgID, username string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+
+ return fmt.Sprintf(`
+resource "mongodbatlas_cloud_user_org_assignment" "test" {
+ org_id = "%s"
+ username = "%s"
+ roles = {
+ org_roles = [%s]
+ }
+}
+`, orgID, username, rolesStr)
+}
+
+func dataSourceTestCase(t *testing.T) *resource.TestCase {
+ t.Helper()
+
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ username := acc.RandomEmail()
+ roles := []string{"ORG_MEMBER"}
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: testAccCloudUserOrgAssignmentWithDataSourceConfig(orgID, username, roles),
+ Check: resource.ComposeTestCheckFunc(
+ cloudUserOrgAssignmentChecks("data.mongodbatlas_cloud_user_org_assignment.by_username", orgID, username, "PENDING", roles),
+ cloudUserOrgAssignmentChecks("data.mongodbatlas_cloud_user_org_assignment.by_user_id", orgID, username, "PENDING", roles),
+ ),
+ },
+ },
+ }
+}
+
+func testAccCloudUserOrgAssignmentWithDataSourceConfig(orgID, username string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+
+ return fmt.Sprintf(`
+resource "mongodbatlas_cloud_user_org_assignment" "test" {
+ org_id = "%s"
+ username = "%s"
+ roles = {
+ org_roles = [%s]
+ }
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "by_username" {
+ org_id = "%s"
+ username = mongodbatlas_cloud_user_org_assignment.test.username
+}
+
+data "mongodbatlas_cloud_user_org_assignment" "by_user_id" {
+ org_id = "%s"
+ user_id = mongodbatlas_cloud_user_org_assignment.test.user_id
+}
+`, orgID, username, rolesStr, orgID, orgID)
+}
+
+func cloudUserOrgAssignmentChecks(resourceName, orgID, username, orgMembershipStatus string, roles []string) resource.TestCheckFunc {
+ checks := []resource.TestCheckFunc{}
+ attributes := map[string]string{
+ "org_id": orgID,
+ "username": username,
+ "org_membership_status": orgMembershipStatus,
+ "roles.org_roles.#": strconv.Itoa(len(roles)),
+ }
+ checks = acc.AddAttrChecks(resourceName, checks, attributes)
+
+ if orgMembershipStatus == "PENDING" {
+ checks = acc.AddAttrSetChecks(resourceName, checks, "user_id", "invitation_created_at", "invitation_expires_at", "inviter_username")
+ } else {
+ checks = acc.AddAttrSetChecks(resourceName, checks, "user_id", "country", "created_at", "first_name", "last_auth", "last_name", "mobile_number")
+ }
+
+ return resource.ComposeAggregateTestCheckFunc(checks...)
+}
+
+func checkDestroy(s *terraform.State) error {
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "mongodbatlas_cloud_user_org_assignment" {
+ continue
+ }
+ orgID := rs.Primary.Attributes["org_id"]
+ userID := rs.Primary.Attributes["user_id"]
+ username := rs.Primary.Attributes["username"]
+ conn := acc.ConnV2()
+
+ if userID != "" {
+ _, resp, err := conn.MongoDBCloudUsersApi.GetOrgUser(context.Background(), orgID, userID).Execute()
+ if err == nil && resp != nil && resp.StatusCode != 404 {
+ return fmt.Errorf("cloud user org assignment (%s) still exists", userID)
+ }
+ } else if username != "" {
+ params := &admin.ListOrgUsersApiParams{
+ OrgId: orgID,
+ Username: &username,
+ }
+
+ users, _, err := conn.MongoDBCloudUsersApi.ListOrgUsersWithParams(context.Background(), params).Execute()
+			if err == nil && users != nil && users.Results != nil && len(*users.Results) > 0 {
+ return fmt.Errorf("cloud user org assignment (%s) still exists", username)
+ }
+ }
+ }
+ return nil
+}
diff --git a/internal/service/clouduserorgassignment/schema.go b/internal/service/clouduserorgassignment/schema.go
new file mode 100755
index 0000000000..17cf146ea7
--- /dev/null
+++ b/internal/service/clouduserorgassignment/schema.go
@@ -0,0 +1,200 @@
+package clouduserorgassignment
+
+import (
+ "github.com/hashicorp/terraform-plugin-framework-validators/setvalidator"
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ dsschema "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/schema/validator"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+)
+
+func resourceSchema() schema.Schema {
+ return schema.Schema{
+ Attributes: map[string]schema.Attribute{
+ "country": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "created_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "first_name": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "First or given name that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "user_id": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "invitation_created_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "invitation_expires_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "inviter_username": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Username of the MongoDB Cloud user who sent the invitation to join the organization.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "last_auth": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.",
+ },
+ "last_name": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Last name, family name, or surname that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "mobile_number": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Mobile phone number that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "org_id": schema.StringAttribute{
+ Required: true,
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.RequiresReplace(),
+ },
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the organization that contains your projects. Use the [/orgs](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-organizations) endpoint to retrieve all organizations to which the authenticated user has access.",
+ },
+ "org_membership_status": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or they are already active in the organization.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "roles": schema.SingleNestedAttribute{
+ Required: true,
+ MarkdownDescription: "Organization and project level roles to assign the MongoDB Cloud user within one organization.",
+ Attributes: map[string]schema.Attribute{
+ "project_role_assignments": schema.ListNestedAttribute{
+ Computed: true,
+						MarkdownDescription: "List of project-level role assignments to assign to the MongoDB Cloud user.",
+ NestedObject: schema.NestedAttributeObject{
+ Attributes: map[string]schema.Attribute{
+ "project_id": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the project to which these roles belong.",
+ },
+ "project_roles": schema.SetAttribute{
+ Computed: true,
+ MarkdownDescription: "One or more project-level roles assigned to the MongoDB Cloud user.",
+ ElementType: types.StringType,
+ },
+ },
+ },
+ },
+ "org_roles": schema.SetAttribute{
+ Validators: []validator.Set{setvalidator.SizeAtLeast(1)},
+ Optional: true,
+						MarkdownDescription: "One or more organization-level roles to assign to the MongoDB Cloud user.",
+ ElementType: types.StringType,
+ },
+ },
+ },
+ "team_ids": schema.SetAttribute{
+ Computed: true,
+				MarkdownDescription: "List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.",
+ ElementType: types.StringType,
+ },
+ "username": schema.StringAttribute{
+ Required: true,
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.RequiresReplace(),
+ },
+ MarkdownDescription: "Email address that represents the username of the MongoDB Cloud user.",
+ },
+ },
+ }
+}
+
+func dataSourceSchema() dsschema.Schema {
+ return conversion.DataSourceSchemaFromResource(resourceSchema(), &conversion.DataSourceSchemaRequest{
+ RequiredFields: []string{"org_id"},
+
+ OverridenFields: dataSourceOverridenFields(),
+ })
+}
+
+func dataSourceOverridenFields() map[string]dsschema.Attribute {
+ return map[string]dsschema.Attribute{
+ "user_id": dsschema.StringAttribute{
+ Optional: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.",
+ },
+ "username": dsschema.StringAttribute{
+ Optional: true,
+ MarkdownDescription: "Email address that represents the username of the MongoDB Cloud user.",
+ },
+ }
+}
+
+type TFModel struct {
+ Country types.String `tfsdk:"country"`
+ CreatedAt types.String `tfsdk:"created_at"`
+ FirstName types.String `tfsdk:"first_name"`
+ UserId types.String `tfsdk:"user_id"`
+ InvitationCreatedAt types.String `tfsdk:"invitation_created_at"`
+ InvitationExpiresAt types.String `tfsdk:"invitation_expires_at"`
+ InviterUsername types.String `tfsdk:"inviter_username"`
+ LastAuth types.String `tfsdk:"last_auth"`
+ LastName types.String `tfsdk:"last_name"`
+ MobileNumber types.String `tfsdk:"mobile_number"`
+ OrgId types.String `tfsdk:"org_id"`
+ OrgMembershipStatus types.String `tfsdk:"org_membership_status"`
+ Roles types.Object `tfsdk:"roles"`
+ TeamIds types.Set `tfsdk:"team_ids"`
+ Username types.String `tfsdk:"username" autogen:"omitjsonupdate"`
+}
+type TFRolesModel struct {
+ ProjectRoleAssignments types.List `tfsdk:"project_role_assignments"`
+ OrgRoles types.Set `tfsdk:"org_roles"`
+}
+type TFRolesProjectRoleAssignmentsModel struct {
+ ProjectId types.String `tfsdk:"project_id"`
+ ProjectRoles types.Set `tfsdk:"project_roles"`
+}
+
+var ProjectRoleAssignmentsAttrType = types.ListType{ElemType: types.ObjectType{AttrTypes: map[string]attr.Type{
+ "project_id": types.StringType,
+ "project_roles": types.SetType{ElemType: types.StringType},
+}}}
+
+var RolesObjectAttrTypes = map[string]attr.Type{
+ "org_roles": types.SetType{ElemType: types.StringType},
+ "project_role_assignments": ProjectRoleAssignmentsAttrType,
+}
diff --git a/internal/service/clouduserorgassignment/tfplugingen/generator_config.yml b/internal/service/clouduserorgassignment/tfplugingen/generator_config.yml
new file mode 100644
index 0000000000..739a88c63c
--- /dev/null
+++ b/internal/service/clouduserorgassignment/tfplugingen/generator_config.yml
@@ -0,0 +1,23 @@
+provider:
+ name: mongodbatlas
+
+resources:
+ cloud_user_org_assignment:
+ read:
+ path: /api/atlas/v2/orgs/{orgId}/users/{userId}
+ method: GET
+ create:
+ path: /api/atlas/v2/orgs/{orgId}/users
+ method: POST
+ update:
+ path: /api/atlas/v2/orgs/{orgId}/users/{userId}
+ method: PATCH
+ delete:
+ path: /api/atlas/v2/orgs/{orgId}/users/{userId}
+ method: DELETE
+
+data_sources:
+ cloud_user_org_assignment:
+ read:
+ path: /api/atlas/v2/orgs/{orgId}/users/{userId}
+ method: GET
diff --git a/internal/service/clouduserprojectassignment/data_source.go b/internal/service/clouduserprojectassignment/data_source.go
new file mode 100644
index 0000000000..890055f1a2
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/data_source.go
@@ -0,0 +1,61 @@
+package clouduserprojectassignment
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/datasource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+var _ datasource.DataSource = &cloudUserProjectAssignmentDS{}
+var _ datasource.DataSourceWithConfigure = &cloudUserProjectAssignmentDS{}
+
+func DataSource() datasource.DataSource {
+ return &cloudUserProjectAssignmentDS{
+ DSCommon: config.DSCommon{
+ DataSourceName: resourceName,
+ },
+ }
+}
+
+type cloudUserProjectAssignmentDS struct {
+ config.DSCommon
+}
+
+func (d *cloudUserProjectAssignmentDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
+ resp.Schema = dataSourceSchema()
+}
+
+func (d *cloudUserProjectAssignmentDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
+ var state TFModel
+ resp.Diagnostics.Append(req.Config.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := d.Client.AtlasV2
+ projectID := state.ProjectId.ValueString()
+ userID := state.UserId.ValueString()
+ username := state.Username.ValueString()
+
+ if username == "" && userID == "" {
+ resp.Diagnostics.AddError("invalid configuration", "either username or user_id must be provided")
+ return
+ }
+ userResp, err := fetchProjectUser(ctx, connV2, projectID, userID, username)
+ if err != nil {
+ resp.Diagnostics.AddError(errorReadingUser, err.Error())
+ return
+ }
+ if userResp == nil {
+ resp.Diagnostics.AddError("resource not found", "no user found with the specified identifier")
+ return
+ }
+
+ newCloudUserProjectAssignmentModel, diags := NewTFModel(ctx, projectID, userResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserProjectAssignmentModel)...)
+}
diff --git a/internal/service/privatelinkendpointserviceserverless/main_test.go b/internal/service/clouduserprojectassignment/main_test.go
similarity index 82%
rename from internal/service/privatelinkendpointserviceserverless/main_test.go
rename to internal/service/clouduserprojectassignment/main_test.go
index e1f828ba3d..1f5e9c216e 100644
--- a/internal/service/privatelinkendpointserviceserverless/main_test.go
+++ b/internal/service/clouduserprojectassignment/main_test.go
@@ -1,4 +1,4 @@
-package privatelinkendpointserviceserverless_test
+package clouduserprojectassignment_test
import (
"os"
diff --git a/internal/service/clouduserprojectassignment/model.go b/internal/service/clouduserprojectassignment/model.go
new file mode 100644
index 0000000000..40faee04ee
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/model.go
@@ -0,0 +1,98 @@
+package clouduserprojectassignment
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+func NewTFModel(ctx context.Context, projectID string, apiResp *admin.GroupUserResponse) (*TFModel, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+
+ if apiResp == nil {
+ return nil, diags
+ }
+
+ roles := conversion.TFSetValueOrNull(ctx, &apiResp.Roles, types.StringType)
+
+ return &TFModel{
+ Country: types.StringPointerValue(apiResp.Country),
+ CreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.CreatedAt)),
+ FirstName: types.StringPointerValue(apiResp.FirstName),
+ ProjectId: types.StringValue(projectID),
+ UserId: types.StringValue(apiResp.Id),
+ InvitationCreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.InvitationCreatedAt)),
+ InvitationExpiresAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.InvitationExpiresAt)),
+ InviterUsername: types.StringPointerValue(apiResp.InviterUsername),
+ LastAuth: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.LastAuth)),
+ LastName: types.StringPointerValue(apiResp.LastName),
+ MobileNumber: types.StringPointerValue(apiResp.MobileNumber),
+ OrgMembershipStatus: types.StringValue(apiResp.GetOrgMembershipStatus()),
+ Roles: roles,
+ Username: types.StringValue(apiResp.GetUsername()),
+ }, diags
+}
+
+func NewProjectUserReq(ctx context.Context, plan *TFModel) (*admin.GroupUserRequest, diag.Diagnostics) {
+ roleNames := []string{}
+ if !plan.Roles.IsNull() && !plan.Roles.IsUnknown() {
+ roleNames = conversion.TypesSetToString(ctx, plan.Roles)
+ }
+
+ addProjectUserReq := admin.GroupUserRequest{
+ Username: plan.Username.ValueString(),
+ Roles: roleNames,
+ }
+ return &addProjectUserReq, nil
+}
+
+func NewAtlasUpdateReq(ctx context.Context, plan *TFModel, currentRoles []string) (addRequests, removeRequests []*admin.AddOrRemoveGroupRole, diags diag.Diagnostics) {
+ var desiredRoles []string
+ if !plan.Roles.IsNull() && !plan.Roles.IsUnknown() {
+ desiredRoles = conversion.TypesSetToString(ctx, plan.Roles)
+ }
+
+ rolesToAdd, rolesToRemove := diffRoles(currentRoles, desiredRoles)
+
+ addRequests = make([]*admin.AddOrRemoveGroupRole, 0, len(rolesToAdd))
+ for _, role := range rolesToAdd {
+ addRequests = append(addRequests, &admin.AddOrRemoveGroupRole{
+ GroupRole: role,
+ })
+ }
+
+ removeRequests = make([]*admin.AddOrRemoveGroupRole, 0, len(rolesToRemove))
+ for _, role := range rolesToRemove {
+ removeRequests = append(removeRequests, &admin.AddOrRemoveGroupRole{
+ GroupRole: role,
+ })
+ }
+
+ return addRequests, removeRequests, nil
+}
+
+func diffRoles(oldRoles, newRoles []string) (toAdd, toRemove []string) {
+ oldRolesMap := make(map[string]bool, len(oldRoles))
+
+ for _, role := range oldRoles {
+ oldRolesMap[role] = true
+ }
+
+ for _, role := range newRoles {
+ if oldRolesMap[role] {
+ delete(oldRolesMap, role)
+ } else {
+ toAdd = append(toAdd, role)
+ }
+ }
+
+ for role := range oldRolesMap {
+ toRemove = append(toRemove, role)
+ }
+
+ return toAdd, toRemove
+}
diff --git a/internal/service/clouduserprojectassignment/model_test.go b/internal/service/clouduserprojectassignment/model_test.go
new file mode 100644
index 0000000000..f37ec5d495
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/model_test.go
@@ -0,0 +1,283 @@
+package clouduserprojectassignment_test
+
+import (
+ "testing"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/clouduserprojectassignment"
+ "github.com/stretchr/testify/assert"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+const (
+ testUserID = "user-123"
+ testUsername = "jdoe"
+ testFirstName = "John"
+ testLastName = "Doe"
+ testCountry = "CA"
+ testMobile = "+1555123456"
+ testInviter = "admin"
+ testOrgMembershipStatus = "ACTIVE"
+ testInviterUsername = ""
+
+ testProjectRoleOwner = "PROJECT_OWNER"
+ testProjectRoleRead = "PROJECT_READ_ONLY"
+ testProjectRoleMember = "PROJECT_MEMBER"
+
+ testProjectID = "project-123"
+ testOrgID = "org-123"
+)
+
+var (
+ when = time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC)
+ testCreatedAt = when.Format(time.RFC3339)
+ testInvitationCreatedAt = when.Add(-24 * time.Hour).Format(time.RFC3339)
+ testInvitationExpiresAt = when.Add(24 * time.Hour).Format(time.RFC3339)
+ testLastAuth = when.Add(-2 * time.Hour).Format(time.RFC3339)
+
+ testProjectRoles = []string{testProjectRoleMember, testProjectRoleOwner}
+)
+
+type sdkToTFModelTestCase struct {
+ SDKResp *admin.GroupUserResponse
+ expectedTFModel *clouduserprojectassignment.TFModel
+}
+
+func TestCloudUserProjectAssignmentSDKToTFModel(t *testing.T) {
+ ctx := t.Context()
+
+ fullResp := &admin.GroupUserResponse{
+ Id: testUserID,
+ Username: testUsername,
+ FirstName: admin.PtrString(testFirstName),
+ LastName: admin.PtrString(testLastName),
+ Country: admin.PtrString(testCountry),
+ MobileNumber: admin.PtrString(testMobile),
+ OrgMembershipStatus: testOrgMembershipStatus,
+ CreatedAt: admin.PtrTime(when),
+ LastAuth: admin.PtrTime(when.Add(-2 * time.Hour)),
+ InvitationCreatedAt: admin.PtrTime(when.Add(-24 * time.Hour)),
+ InvitationExpiresAt: admin.PtrTime(when.Add(24 * time.Hour)),
+ InviterUsername: admin.PtrString(testInviterUsername),
+ Roles: testProjectRoles,
+ }
+
+ expectedRoles, _ := types.SetValueFrom(ctx, types.StringType, testProjectRoles)
+
+ expectedFullModel := &clouduserprojectassignment.TFModel{
+ UserId: types.StringValue(testUserID),
+ Username: types.StringValue(testUsername),
+ ProjectId: types.StringValue(testProjectID),
+ FirstName: types.StringValue(testFirstName),
+ LastName: types.StringValue(testLastName),
+ Country: types.StringValue(testCountry),
+ MobileNumber: types.StringValue(testMobile),
+ OrgMembershipStatus: types.StringValue(testOrgMembershipStatus),
+ CreatedAt: types.StringValue(testCreatedAt),
+ LastAuth: types.StringValue(testLastAuth),
+ InvitationCreatedAt: types.StringValue(testInvitationCreatedAt),
+ InvitationExpiresAt: types.StringValue(testInvitationExpiresAt),
+ InviterUsername: types.StringValue(testInviterUsername),
+ Roles: expectedRoles,
+ }
+
+ testCases := map[string]sdkToTFModelTestCase{
+ "nil SDK response": {
+ SDKResp: nil,
+ expectedTFModel: nil,
+ },
+ "Complete SDK response": {
+ SDKResp: fullResp,
+ expectedTFModel: expectedFullModel,
+ },
+ "Empty SDK response": {
+ SDKResp: &admin.GroupUserResponse{
+ Id: "",
+ Username: "",
+ FirstName: nil,
+ LastName: nil,
+ Country: nil,
+ MobileNumber: nil,
+ OrgMembershipStatus: "",
+ CreatedAt: nil,
+ LastAuth: nil,
+ InvitationCreatedAt: nil,
+ InvitationExpiresAt: nil,
+ InviterUsername: nil,
+ Roles: nil,
+ },
+ expectedTFModel: &clouduserprojectassignment.TFModel{
+ UserId: types.StringValue(""),
+ Username: types.StringValue(""),
+ ProjectId: types.StringValue(testProjectID),
+ FirstName: types.StringNull(),
+ LastName: types.StringNull(),
+ Country: types.StringNull(),
+ MobileNumber: types.StringNull(),
+ OrgMembershipStatus: types.StringValue(""),
+ CreatedAt: types.StringNull(),
+ LastAuth: types.StringNull(),
+ InvitationCreatedAt: types.StringNull(),
+ InvitationExpiresAt: types.StringNull(),
+ InviterUsername: types.StringNull(),
+ Roles: types.SetNull(types.StringType),
+ },
+ },
+ }
+
+ for testName, tc := range testCases {
+ t.Run(testName, func(t *testing.T) {
+ resultModel, diags := clouduserprojectassignment.NewTFModel(t.Context(), testProjectID, tc.SDKResp)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expectedTFModel, resultModel, "TFModel did not match expected")
+ })
+ }
+}
+
+func TestNewProjectUserRequest(t *testing.T) {
+ ctx := t.Context()
+ expectedRoles, _ := types.SetValueFrom(ctx, types.StringType, testProjectRoles)
+
+ testCases := map[string]struct {
+ plan *clouduserprojectassignment.TFModel
+ expected *admin.GroupUserRequest
+ }{
+ "Complete model": {
+ plan: &clouduserprojectassignment.TFModel{
+ UserId: types.StringValue(testUserID),
+ Username: types.StringValue(testUsername),
+ ProjectId: types.StringValue(testProjectID),
+ FirstName: types.StringValue(testFirstName),
+ LastName: types.StringValue(testLastName),
+ Country: types.StringValue(testCountry),
+ MobileNumber: types.StringValue(testMobile),
+ OrgMembershipStatus: types.StringValue(testOrgMembershipStatus),
+ CreatedAt: types.StringValue(testCreatedAt),
+ LastAuth: types.StringValue(testLastAuth),
+ InvitationCreatedAt: types.StringValue(testInvitationCreatedAt),
+ InvitationExpiresAt: types.StringValue(testInvitationExpiresAt),
+ InviterUsername: types.StringValue(testInviterUsername),
+ Roles: expectedRoles,
+ },
+ expected: &admin.GroupUserRequest{
+ Username: testUsername,
+ Roles: testProjectRoles,
+ },
+ },
+ "Nil model": {
+ plan: &clouduserprojectassignment.TFModel{
+ Username: types.StringNull(),
+ Roles: types.SetNull(types.StringType),
+ },
+ expected: &admin.GroupUserRequest{
+ Username: "",
+ Roles: []string{},
+ },
+ },
+ "Empty model": {
+ plan: &clouduserprojectassignment.TFModel{
+ Username: types.StringValue(""),
+ Roles: types.SetValueMust(types.StringType, []attr.Value{}),
+ },
+ expected: &admin.GroupUserRequest{
+ Username: "",
+ Roles: []string{},
+ },
+ },
+ }
+
+ for name, tc := range testCases {
+ t.Run(name, func(t *testing.T) {
+ req, diags := clouduserprojectassignment.NewProjectUserReq(ctx, tc.plan)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expected, req)
+ })
+ }
+}
+
+func TestNewAtlasUpdateReq(t *testing.T) {
+ ctx := t.Context()
+
+ type args struct {
+ stateRoles []string
+ planRoles []string
+ }
+ tests := []struct {
+ name string
+ args args
+ wantAddRoles []string
+ wantRemoveRoles []string
+ }{
+ {
+ name: "add and remove roles",
+ args: args{
+ stateRoles: []string{"GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_ONLY"},
+ planRoles: []string{"GROUP_OWNER", "GROUP_DATA_ACCESS_READ_ONLY"},
+ },
+ wantAddRoles: []string{"GROUP_OWNER"},
+ wantRemoveRoles: []string{"GROUP_READ_ONLY"},
+ },
+ {
+ name: "no changes",
+ args: args{
+ stateRoles: []string{"GROUP_OWNER"},
+ planRoles: []string{"GROUP_OWNER"},
+ },
+ wantAddRoles: []string{},
+ wantRemoveRoles: []string{},
+ },
+ {
+ name: "all roles removed",
+ args: args{
+ stateRoles: []string{"GROUP_OWNER"},
+ planRoles: []string{},
+ },
+ wantAddRoles: []string{},
+ wantRemoveRoles: []string{"GROUP_OWNER"},
+ },
+ {
+ name: "all roles added",
+ args: args{
+ stateRoles: []string{},
+ planRoles: []string{"GROUP_OWNER"},
+ },
+ wantAddRoles: []string{"GROUP_OWNER"},
+ wantRemoveRoles: []string{},
+ },
+ {
+ name: "nil roles",
+ args: args{
+ stateRoles: nil,
+ planRoles: []string{},
+ },
+ wantAddRoles: []string{},
+ wantRemoveRoles: []string{},
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ planRoles, _ := types.SetValueFrom(ctx, types.StringType, tt.args.planRoles)
+
+ state := tt.args.stateRoles
+ plan := &clouduserprojectassignment.TFModel{Roles: planRoles}
+
+ addReqs, removeReqs, diags := clouduserprojectassignment.NewAtlasUpdateReq(ctx, plan, state)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+
+ var gotAddRoles, gotRemoveRoles []string
+ for _, r := range addReqs {
+ gotAddRoles = append(gotAddRoles, r.GroupRole)
+ }
+ for _, r := range removeReqs {
+ gotRemoveRoles = append(gotRemoveRoles, r.GroupRole)
+ }
+
+ assert.ElementsMatch(t, tt.wantAddRoles, gotAddRoles, "add roles mismatch")
+ assert.ElementsMatch(t, tt.wantRemoveRoles, gotRemoveRoles, "remove roles mismatch")
+ })
+ }
+}
diff --git a/internal/service/clouduserprojectassignment/resource.go b/internal/service/clouduserprojectassignment/resource.go
new file mode 100644
index 0000000000..bca2b29160
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/resource.go
@@ -0,0 +1,252 @@
+package clouduserprojectassignment
+
+import (
+ "context"
+ "fmt"
+ "net/http"
+ "regexp"
+
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
+ "github.com/hashicorp/terraform-plugin-framework/path"
+ "github.com/hashicorp/terraform-plugin-framework/resource"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+const (
+ resourceName = "cloud_user_project_assignment"
+ errorReadingByUserID = "Error getting project users by user_id"
+ errorReadingByUsername = "Error getting project users by username"
+ invalidImportID = "Invalid import ID format"
+ errorReadingUser = "Error retrieving project users"
+)
+
+var _ resource.ResourceWithConfigure = &rs{}
+var _ resource.ResourceWithImportState = &rs{}
+
+func Resource() resource.Resource {
+ return &rs{
+ RSCommon: config.RSCommon{
+ ResourceName: resourceName,
+ },
+ }
+}
+
+type rs struct {
+ config.RSCommon
+}
+
+func (r *rs) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
+ resp.Schema = resourceSchema()
+ conversion.UpdateSchemaDescription(&resp.Schema)
+}
+
+func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
+ var plan TFModel
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := plan.ProjectId.ValueString()
+ projectUserRequest, diags := NewProjectUserReq(ctx, &plan)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ apiResp, _, err := connV2.MongoDBCloudUsersApi.AddGroupUsers(ctx, projectID, projectUserRequest).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error assigning user to ProjectID(%s):", projectID), err.Error())
+ return
+ }
+
+ newCloudUserProjectAssignmentModel, diags := NewTFModel(ctx, projectID, apiResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserProjectAssignmentModel)...)
+}
+
+func fetchProjectUser(ctx context.Context, connV2 *admin.APIClient, projectID, userID, username string) (*admin.GroupUserResponse, error) {
+ var userResp *admin.GroupUserResponse
+ var httpResp *http.Response
+ var err error
+ if userID != "" {
+ userResp, httpResp, err = connV2.MongoDBCloudUsersApi.GetGroupUser(ctx, projectID, userID).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ return nil, nil
+ }
+ return nil, err
+ }
+ } else if username != "" {
+ var userListResp *admin.PaginatedGroupUser
+ params := &admin.ListGroupUsersApiParams{
+ GroupId: projectID,
+ Username: &username,
+ }
+ userListResp, httpResp, err = connV2.MongoDBCloudUsersApi.ListGroupUsersWithParams(ctx, params).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ return nil, nil
+ }
+ return nil, err
+ }
+ if userListResp == nil || len(userListResp.GetResults()) == 0 {
+ return nil, nil
+ }
+ userResp = &userListResp.GetResults()[0]
+ }
+
+ return userResp, nil
+}
+
+func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
+ var state TFModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := state.ProjectId.ValueString()
+ var userResp *admin.GroupUserResponse
+ var err error
+
+ userID := state.UserId.ValueString()
+ username := state.Username.ValueString()
+
+ userResp, err = fetchProjectUser(ctx, connV2, projectID, userID, username)
+ if err != nil {
+ resp.Diagnostics.AddError(errorReadingUser, err.Error())
+ return
+ }
+ if userResp == nil {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+
+ newCloudUserProjectAssignmentModel, diags := NewTFModel(ctx, projectID, userResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserProjectAssignmentModel)...)
+}
+
+func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
+ var plan TFModel
+ var state TFModel
+ var err error
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := plan.ProjectId.ValueString()
+ userID := plan.UserId.ValueString()
+ username := plan.Username.ValueString()
+
+ userInfo, _, err := connV2.MongoDBCloudUsersApi.GetGroupUser(ctx, projectID, userID).Execute() // Fetch current user roles from API (more reliable than state)
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error fetching user(%s) from ProjectID(%s):", username, projectID), err.Error())
+ return
+ }
+
+ addRequests, removeRequests, diags := NewAtlasUpdateReq(ctx, &plan, userInfo.GetRoles())
+ resp.Diagnostics.Append(diags...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ for _, addReq := range addRequests {
+ _, _, err := connV2.MongoDBCloudUsersApi.AddGroupUserRole(ctx, projectID, userID, addReq).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(
+ fmt.Sprintf("Error adding role %s to user(%s) in ProjectID(%s):", addReq.GroupRole, username, projectID),
+ err.Error(),
+ )
+ return
+ }
+ }
+
+ for _, removeReq := range removeRequests {
+ _, _, err := connV2.MongoDBCloudUsersApi.RemoveGroupUserRole(ctx, projectID, userID, removeReq).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(
+ fmt.Sprintf("Error removing role %s from user(%s) in ProjectID(%s):", removeReq.GroupRole, username, projectID),
+ err.Error(),
+ )
+ return
+ }
+ }
+
+ // Re-fetch the user to capture the post-update role set; fall back to a
+ // username lookup when user_id is not yet known in state.
+ userResp, err := fetchProjectUser(ctx, connV2, projectID, state.UserId.ValueString(), username)
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error fetching user(%s) from ProjectID(%s):", username, projectID), err.Error())
+ return
+ }
+ if userResp == nil {
+ resp.Diagnostics.AddError(errorReadingUser, "no user found with the specified identifier after update")
+ return
+ }
+
+ newCloudUserProjectAssignmentModel, diags := NewTFModel(ctx, projectID, userResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserProjectAssignmentModel)...)
+}
+
+func (r *rs) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
+ var state *TFModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := state.ProjectId.ValueString()
+ userID := state.UserId.ValueString()
+ username := state.Username.ValueString()
+
+ httpResp, err := connV2.MongoDBCloudUsersApi.RemoveGroupUser(ctx, projectID, userID).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+ resp.Diagnostics.AddError(fmt.Sprintf("error deleting user(%s) from ProjectID(%s):", username, projectID), err.Error())
+ return
+ }
+}
+
+func (r *rs) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
+ importID := req.ID
+ ok, parts := conversion.ImportSplit(importID, 2)
+ if !ok {
+ resp.Diagnostics.AddError(invalidImportID, "expected 'project_id/user_id' or 'project_id/username', got: "+importID)
+ return
+ }
+ projectID, user := parts[0], parts[1]
+
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("project_id"), projectID)...)
+
+ emailRegex := regexp.MustCompile(`^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$`)
+
+ if emailRegex.MatchString(user) {
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("username"), user)...)
+ } else {
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("user_id"), user)...)
+ }
+}
diff --git a/internal/service/clouduserprojectassignment/resource_migration_test.go b/internal/service/clouduserprojectassignment/resource_migration_test.go
new file mode 100644
index 0000000000..7313818a43
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/resource_migration_test.go
@@ -0,0 +1,138 @@
+package clouduserprojectassignment_test
+
+import (
+ "fmt"
+ "os"
+ "strings"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/plancheck"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+const (
+ resourceInvitationName = "mongodbatlas_project_invitation.mig_test"
+ resourceProjectName = "mongodbatlas_project.mig_test"
+ resourceUserProjectAssignmentName = "mongodbatlas_cloud_user_project_assignment.user_mig_test"
+)
+
+func TestMigCloudUserProjectAssignmentRS_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0") // version in which this resource was first released
+ mig.CreateAndRunTest(t, basicTestCase(t))
+}
+
+func TestMigCloudUserProjectAssignmentRS_migrationJourney(t *testing.T) {
+ var (
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ username = acc.RandomEmail()
+ projectName = fmt.Sprintf("mig_user_project_%s", acc.RandomName())
+ roles = []string{"GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_ONLY"}
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: mig.ExternalProviders(),
+ Config: legacyProjectInvitationConfig(username, projectName, orgID, roles),
+ },
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: userProjectAssignmentConfigSecond(username, projectName, orgID, roles),
+ Check: checksSecond(username, roles),
+ },
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ ConfigPlanChecks: resource.ConfigPlanChecks{
+ PreApply: []plancheck.PlanCheck{
+ plancheck.ExpectResourceAction(resourceInvitationName, plancheck.ResourceActionDestroy),
+ },
+ },
+ Config: removeProjectInvitationConfigThird(username, projectName, orgID, roles),
+ },
+ mig.TestStepCheckEmptyPlan(removeProjectInvitationConfigThird(username, projectName, orgID, roles)),
+ },
+ })
+}
+
+func legacyProjectInvitationConfig(username, projectName, orgID string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+ config := fmt.Sprintf(`
+ locals {
+ username = %[1]q
+ roles = [%[2]s]
+ }
+
+ resource "mongodbatlas_project" "mig_test" {
+ name = %[3]q
+ org_id = %[4]q
+ }
+
+ resource "mongodbatlas_project_invitation" "mig_test" {
+ project_id = mongodbatlas_project.mig_test.id
+ username = local.username
+ roles = local.roles
+ }
+ `, username, rolesStr, projectName, orgID)
+ return config
+}
+
+func userProjectAssignmentConfigSecond(username, projectName, orgID string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+ return fmt.Sprintf(`
+ locals {
+ username = %[1]q
+ roles = [%[2]s]
+ }
+
+ resource "mongodbatlas_project" "mig_test" {
+ name = %[3]q
+ org_id = %[4]q
+ }
+
+ resource "mongodbatlas_project_invitation" "mig_test" {
+ project_id = mongodbatlas_project.mig_test.id
+ username = local.username
+ roles = local.roles
+ }
+
+ resource "mongodbatlas_cloud_user_project_assignment" "user_mig_test" {
+ project_id = mongodbatlas_project.mig_test.id
+ username = local.username
+ roles = local.roles
+ }
+ `, username, rolesStr, projectName, orgID)
+}
+
+func removeProjectInvitationConfigThird(username, projectName, orgID string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+ return fmt.Sprintf(`
+ locals {
+ username = %[1]q
+ roles = [%[2]s]
+ }
+
+ resource "mongodbatlas_project" "mig_test" {
+ name = %[3]q
+ org_id = %[4]q
+ }
+
+ resource "mongodbatlas_cloud_user_project_assignment" "user_mig_test" {
+ project_id = mongodbatlas_project.mig_test.id
+ username = local.username
+ roles = local.roles
+ }
+ `, username, rolesStr, projectName, orgID)
+}
+
+func checksSecond(username string, roles []string) resource.TestCheckFunc {
+ checkFuncs := []resource.TestCheckFunc{
+ resource.TestCheckResourceAttr(resourceUserProjectAssignmentName, "username", username),
+ resource.TestCheckResourceAttrSet(resourceUserProjectAssignmentName, "project_id"),
+ resource.TestCheckResourceAttr(resourceUserProjectAssignmentName, "roles.#", fmt.Sprintf("%d", len(roles))),
+ }
+ return resource.ComposeAggregateTestCheckFunc(checkFuncs...)
+}
diff --git a/internal/service/clouduserprojectassignment/resource_test.go b/internal/service/clouduserprojectassignment/resource_test.go
new file mode 100644
index 0000000000..6bd4e01005
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/resource_test.go
@@ -0,0 +1,202 @@
+package clouduserprojectassignment_test
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "regexp"
+ "strings"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+)
+
+var resourceNamePending = "mongodbatlas_cloud_user_project_assignment.test_pending"
+var resourceNameActive = "mongodbatlas_cloud_user_project_assignment.test_active"
+var DSNameUsername = "data.mongodbatlas_cloud_user_project_assignment.test_username"
+var DSNameUserID = "data.mongodbatlas_cloud_user_project_assignment.test_user_id"
+
+func TestAccCloudUserProjectAssignment_basic(t *testing.T) {
+ resource.ParallelTest(t, *basicTestCase(t))
+}
+
+func TestAccCloudUserProjectAssignmentDS_error(t *testing.T) {
+ resource.ParallelTest(t, *errorTestCase(t))
+}
+
+func basicTestCase(t *testing.T) *resource.TestCase {
+ t.Helper()
+
+ // Use MONGODB_ATLAS_USERNAME_2 to avoid USER_ALREADY_IN_GROUP.
+ // The default MONGODB_ATLAS_USERNAME (Org Owner) is auto-assigned if no ProjectOwner is set.
+ activeUsername := os.Getenv("MONGODB_ATLAS_USERNAME_2")
+ pendingUsername := acc.RandomEmail()
+ projectID := acc.ProjectIDExecution(t)
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ roles := []string{"GROUP_OWNER", "GROUP_CLUSTER_MANAGER"}
+ updatedRoles := []string{"GROUP_OWNER", "GROUP_SEARCH_INDEX_EDITOR", "GROUP_READ_ONLY"}
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckAtlasUsernames(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(orgID, pendingUsername, activeUsername, projectID, roles),
+ Check: checks(pendingUsername, activeUsername, projectID, roles),
+ },
+ {
+ Config: configBasic(orgID, pendingUsername, activeUsername, projectID, updatedRoles),
+ Check: checks(pendingUsername, activeUsername, projectID, updatedRoles),
+ },
+ {
+ ResourceName: resourceNamePending,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceNamePending].Primary.Attributes
+ projectID := attrs["project_id"]
+ userID := attrs["user_id"]
+ return projectID + "/" + userID, nil
+ },
+ },
+ {
+ ResourceName: resourceNamePending,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceNamePending].Primary.Attributes
+ projectID := attrs["project_id"]
+ username := attrs["username"]
+ return projectID + "/" + username, nil
+ },
+ },
+ {
+ ResourceName: resourceNameActive,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceNameActive].Primary.Attributes
+ projectID := attrs["project_id"]
+ userID := attrs["user_id"]
+ return projectID + "/" + userID, nil
+ },
+ },
+ {
+ ResourceName: resourceNameActive,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceNameActive].Primary.Attributes
+ projectID := attrs["project_id"]
+ username := attrs["username"]
+ return projectID + "/" + username, nil
+ },
+ },
+ },
+ }
+}
+
+func errorTestCase(t *testing.T) *resource.TestCase {
+ t.Helper()
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ projectID := acc.ProjectIDExecution(t)
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configError(orgID, projectID),
+ ExpectError: regexp.MustCompile("either username or user_id must be provided"),
+ },
+ },
+ }
+}
+
+func configError(_, projectID string) string {
+ return fmt.Sprintf(`
+ data "mongodbatlas_cloud_user_project_assignment" "test" {
+ project_id = %[1]q
+ }
+ `, projectID)
+}
+
+func configBasic(orgID, pendingUsername, activeUsername, projectID string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+ return fmt.Sprintf(`
+ resource "mongodbatlas_project" "test" {
+ name = %[1]q
+ org_id = %[2]q
+ }
+
+ resource "mongodbatlas_cloud_user_project_assignment" "test_pending" {
+ username = %[3]q
+ project_id = %[1]q
+ roles = [%[5]s]
+ }
+
+ resource "mongodbatlas_cloud_user_project_assignment" "test_active" {
+ username = %[4]q
+ project_id = %[1]q
+ roles = [%[5]s]
+ }
+
+ data "mongodbatlas_cloud_user_project_assignment" "test_username" {
+ project_id = %[1]q
+ username = mongodbatlas_cloud_user_project_assignment.test_pending.username
+ }
+
+ data "mongodbatlas_cloud_user_project_assignment" "test_user_id" {
+ project_id = %[1]q
+ user_id = mongodbatlas_cloud_user_project_assignment.test_pending.user_id
+ }`,
+ projectID, orgID, pendingUsername, activeUsername, rolesStr)
+}
+
+func checks(pendingUsername, activeUsername, projectID string, roles []string) resource.TestCheckFunc {
+ checkFuncs := []resource.TestCheckFunc{
+ resource.TestCheckResourceAttr(resourceNamePending, "username", pendingUsername),
+ resource.TestCheckResourceAttr(resourceNamePending, "project_id", projectID),
+ resource.TestCheckResourceAttr(resourceNamePending, "roles.#", fmt.Sprintf("%d", len(roles))),
+ resource.TestCheckResourceAttr(resourceNameActive, "username", activeUsername),
+ resource.TestCheckResourceAttr(resourceNameActive, "project_id", projectID),
+ resource.TestCheckResourceAttr(resourceNameActive, "roles.#", fmt.Sprintf("%d", len(roles))),
+ resource.TestCheckResourceAttr(DSNameUserID, "username", pendingUsername),
+ resource.TestCheckResourceAttrPair(DSNameUserID, "username", resourceNamePending, "username"),
+ resource.TestCheckResourceAttrPair(DSNameUsername, "user_id", DSNameUserID, "user_id"),
+ resource.TestCheckResourceAttrPair(DSNameUsername, "project_id", DSNameUserID, "project_id"),
+ resource.TestCheckResourceAttrPair(DSNameUsername, "roles.#", DSNameUserID, "roles.#"),
+ }
+
+ for _, role := range roles {
+ checkFuncs = append(checkFuncs,
+ resource.TestCheckTypeSetElemAttr(resourceNamePending, "roles.*", role),
+ resource.TestCheckTypeSetElemAttr(resourceNameActive, "roles.*", role),
+ )
+ }
+
+ return resource.ComposeAggregateTestCheckFunc(checkFuncs...)
+}
+
+func checkDestroy(s *terraform.State) error {
+ for _, r := range s.RootModule().Resources {
+ if r.Type != "mongodbatlas_cloud_user_project_assignment" {
+ continue
+ }
+
+ userID := r.Primary.Attributes["user_id"]
+ projectID := r.Primary.Attributes["project_id"]
+
+ _, _, err := acc.ConnV2().MongoDBCloudUsersApi.GetGroupUser(context.Background(), projectID, userID).Execute()
+ if err == nil {
+ return fmt.Errorf("cloud user project assignment for user (%s) in project (%s) still exists", userID, projectID)
+ }
+ }
+ return nil
+}
diff --git a/internal/service/clouduserprojectassignment/schema.go b/internal/service/clouduserprojectassignment/schema.go
new file mode 100644
index 0000000000..f0794fa576
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/schema.go
@@ -0,0 +1,144 @@
+package clouduserprojectassignment
+
+import (
+ "github.com/hashicorp/terraform-plugin-framework-validators/setvalidator"
+ dsschema "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/schema/validator"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema"
+)
+
+func resourceSchema() schema.Schema {
+ return schema.Schema{
+ Attributes: map[string]schema.Attribute{
+ "country": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1a2 code format.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "created_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "first_name": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "First or given name that belongs to the MongoDB Cloud user.",
+ },
+ "project_id": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listprojects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.",
+ },
+ "user_id": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "invitation_created_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "invitation_expires_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "inviter_username": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Username of the MongoDB Cloud user who sent the invitation to join the organization.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "last_auth": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.",
+ },
+ "last_name": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Last name, family name, or surname that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "mobile_number": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Mobile phone number that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "org_membership_status": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or they are already active in the organization.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "roles": schema.SetAttribute{
+ ElementType: types.StringType,
+ Required: true,
+ MarkdownDescription: "One or more project-level roles to assign the MongoDB Cloud user.",
+ Validators: []validator.Set{
+ setvalidator.SizeAtLeast(1),
+ },
+ },
+ "username": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Email address that represents the username of the MongoDB Cloud user.",
+ },
+ },
+ }
+}
+
+func dataSourceSchema() dsschema.Schema {
+ return conversion.DataSourceSchemaFromResource(resourceSchema(), &conversion.DataSourceSchemaRequest{
+ RequiredFields: []string{"project_id"},
+ OverridenFields: dataSourceOverridenFields(),
+ })
+}
+
+func dataSourceOverridenFields() map[string]dsschema.Attribute {
+ return map[string]dsschema.Attribute{
+ "user_id": dsschema.StringAttribute{
+ Optional: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.",
+ },
+ "username": dsschema.StringAttribute{
+ Optional: true,
+ MarkdownDescription: "Email address that represents the username of the MongoDB Cloud user.",
+ },
+ }
+}
+
+type TFModel struct {
+ Country types.String `tfsdk:"country"`
+ CreatedAt types.String `tfsdk:"created_at"`
+ FirstName types.String `tfsdk:"first_name"`
+ ProjectId types.String `tfsdk:"project_id"`
+ UserId types.String `tfsdk:"user_id"`
+ InvitationCreatedAt types.String `tfsdk:"invitation_created_at"`
+ InvitationExpiresAt types.String `tfsdk:"invitation_expires_at"`
+ InviterUsername types.String `tfsdk:"inviter_username"`
+ LastAuth types.String `tfsdk:"last_auth"`
+ LastName types.String `tfsdk:"last_name"`
+ MobileNumber types.String `tfsdk:"mobile_number"`
+ OrgMembershipStatus types.String `tfsdk:"org_membership_status"`
+ Roles types.Set `tfsdk:"roles"`
+ Username types.String `tfsdk:"username"`
+}
diff --git a/internal/service/clouduserprojectassignment/tfplugingen/generator_config.yml b/internal/service/clouduserprojectassignment/tfplugingen/generator_config.yml
new file mode 100644
index 0000000000..5b13d9679c
--- /dev/null
+++ b/internal/service/clouduserprojectassignment/tfplugingen/generator_config.yml
@@ -0,0 +1,22 @@
+provider:
+ name: mongodbatlas
+
+# TODO: Endpoints from Atlas Admin API must be specified for schema and model generation. Singular or plural data sources can be removed if not used.
+
+resources:
+ cloud_user_project_assignment:
+ read:
+ path: /api/atlas/v2/groups/{groupId}/users/{userId}
+ method: GET
+ create:
+ path: /api/atlas/v2/groups/{groupId}/users
+ method: POST
+ delete:
+ path: /api/atlas/v2/groups/{groupId}/users/{userId}
+ method: DELETE
+
+data_sources:
+ cloud_user_project_assignment:
+ read:
+ path: /api/atlas/v2/groups/{groupId}/users/{userId}
+ method: GET
\ No newline at end of file
diff --git a/internal/service/clouduserteamassignment/data_source.go b/internal/service/clouduserteamassignment/data_source.go
new file mode 100644
index 0000000000..8a82602868
--- /dev/null
+++ b/internal/service/clouduserteamassignment/data_source.go
@@ -0,0 +1,66 @@
+package clouduserteamassignment
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/datasource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+var _ datasource.DataSource = &cloudUserTeamAssignmentDS{}
+var _ datasource.DataSourceWithConfigure = &cloudUserTeamAssignmentDS{}
+
+func DataSource() datasource.DataSource {
+ return &cloudUserTeamAssignmentDS{
+ DSCommon: config.DSCommon{
+ DataSourceName: resourceName,
+ },
+ }
+}
+
+type cloudUserTeamAssignmentDS struct {
+ config.DSCommon
+}
+
+func (d *cloudUserTeamAssignmentDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
+ resp.Schema = dataSourceSchema()
+}
+
+func (d *cloudUserTeamAssignmentDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
+ var state TFUserTeamAssignmentModel
+ resp.Diagnostics.Append(req.Config.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := d.Client.AtlasV2
+ orgID := state.OrgId.ValueString()
+ teamID := state.TeamId.ValueString()
+ userID := state.UserId.ValueString()
+ username := state.Username.ValueString()
+
+ if username == "" && userID == "" {
+ resp.Diagnostics.AddError("invalid configuration", "either username or user_id must be provided")
+ return
+ }
+
+ userResp, err := fetchTeamUser(ctx, connV2, orgID, teamID, &userID, &username)
+ if err != nil {
+ resp.Diagnostics.AddError("error retrieving user", err.Error())
+ return
+ }
+ if userResp == nil {
+ resp.Diagnostics.AddError("resource not found", "no user found with the specified identifier")
+ return
+ }
+
+ newCloudUserTeamAssignmentModel, diags := NewTFUserTeamAssignmentModel(ctx, userResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ newCloudUserTeamAssignmentModel.OrgId = state.OrgId
+ newCloudUserTeamAssignmentModel.TeamId = state.TeamId
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserTeamAssignmentModel)...)
+}
diff --git a/internal/service/clouduserteamassignment/main_test.go b/internal/service/clouduserteamassignment/main_test.go
new file mode 100644
index 0000000000..e63b13fa75
--- /dev/null
+++ b/internal/service/clouduserteamassignment/main_test.go
@@ -0,0 +1,15 @@
+package clouduserteamassignment_test
+
+import (
+ "os"
+ "testing"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+)
+
+func TestMain(m *testing.M) {
+ cleanup := acc.SetupSharedResources()
+ exitCode := m.Run()
+ cleanup()
+ os.Exit(exitCode)
+}
diff --git a/internal/service/clouduserteamassignment/model.go b/internal/service/clouduserteamassignment/model.go
new file mode 100644
index 0000000000..cdc9b8196c
--- /dev/null
+++ b/internal/service/clouduserteamassignment/model.go
@@ -0,0 +1,90 @@
+package clouduserteamassignment
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+func NewTFUserTeamAssignmentModel(ctx context.Context, apiResp *admin.OrgUserResponse) (*TFUserTeamAssignmentModel, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+
+ if apiResp == nil {
+ return nil, diags
+ }
+
+ rolesObj, rolesDiags := NewTFRolesModel(ctx, &apiResp.Roles)
+ diags.Append(rolesDiags...)
+
+ teamIDs := conversion.TFSetValueOrNull(ctx, apiResp.TeamIds, types.StringType)
+
+ userTeamAssignment := TFUserTeamAssignmentModel{
+ UserId: types.StringValue(apiResp.GetId()),
+ Username: types.StringValue(apiResp.GetUsername()),
+ OrgMembershipStatus: types.StringValue(apiResp.GetOrgMembershipStatus()),
+ Roles: rolesObj,
+ TeamIds: teamIDs,
+ InvitationCreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.InvitationCreatedAt)),
+ InvitationExpiresAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.InvitationExpiresAt)),
+ InviterUsername: types.StringPointerValue(apiResp.InviterUsername),
+ Country: types.StringPointerValue(apiResp.Country),
+ FirstName: types.StringPointerValue(apiResp.FirstName),
+ LastName: types.StringPointerValue(apiResp.LastName),
+ CreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.CreatedAt)),
+ LastAuth: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.LastAuth)),
+ MobileNumber: types.StringPointerValue(apiResp.MobileNumber),
+ }
+
+ return &userTeamAssignment, nil
+}
+
+func NewTFRolesModel(ctx context.Context, roles *admin.OrgUserRolesResponse) (types.Object, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+
+ if roles == nil {
+ return types.ObjectNull(RolesObjectAttrTypes), diags
+ }
+
+ orgRoles := conversion.TFSetValueOrNull(ctx, roles.OrgRoles, types.StringType)
+
+ projectRoleAssignmentsSet := NewTFProjectRoleAssignments(ctx, roles.GroupRoleAssignments)
+
+ rolesModel := TFRolesModel{
+ OrgRoles: orgRoles,
+ ProjectRoleAssignments: projectRoleAssignmentsSet,
+ }
+
+ rolesObj, _ := types.ObjectValueFrom(ctx, RolesObjectAttrTypes, rolesModel)
+ return rolesObj, diags
+}
+
+func NewTFProjectRoleAssignments(ctx context.Context, groupRoleAssignments *[]admin.GroupRoleAssignment) types.Set {
+ if groupRoleAssignments == nil {
+ return types.SetNull(ProjectRoleAssignmentsAttrType)
+ }
+
+ var projectRoleAssignments []TFProjectRoleAssignmentsModel
+
+ for _, pra := range *groupRoleAssignments {
+ projectID := types.StringPointerValue(pra.GroupId)
+ projectRoles := conversion.TFSetValueOrNull(ctx, pra.GroupRoles, types.StringType)
+
+ projectRoleAssignments = append(projectRoleAssignments, TFProjectRoleAssignmentsModel{
+ ProjectId: projectID,
+ ProjectRoles: projectRoles,
+ })
+ }
+
+ praSet, _ := types.SetValueFrom(ctx, ProjectRoleAssignmentsAttrType.ElemType.(types.ObjectType), projectRoleAssignments)
+ return praSet
+}
+
+func NewUserTeamAssignmentReq(ctx context.Context, plan *TFUserTeamAssignmentModel) (*admin.AddOrRemoveUserFromTeam, diag.Diagnostics) {
+ addOrRemoveUserFromTeam := admin.AddOrRemoveUserFromTeam{
+ Id: plan.UserId.ValueString(),
+ }
+ return &addOrRemoveUserFromTeam, nil
+}
diff --git a/internal/service/clouduserteamassignment/model_test.go b/internal/service/clouduserteamassignment/model_test.go
new file mode 100644
index 0000000000..3f449aad6d
--- /dev/null
+++ b/internal/service/clouduserteamassignment/model_test.go
@@ -0,0 +1,186 @@
+package clouduserteamassignment_test
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/clouduserteamassignment"
+ "github.com/stretchr/testify/assert"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+const (
+ testUserID = "user-123"
+ testUsername = "jdoe"
+ testFirstName = "John"
+ testLastName = "Doe"
+ testCountry = "CA"
+ testMobile = "+1555123456"
+ testInviter = "admin"
+ testOrgMembershipStatus = "ACTIVE"
+ testInviterUsername = ""
+
+ testOrgRoleOwner = "ORG_OWNER"
+ testOrgRoleMember = "ORG_MEMBER"
+ testProjectRoleOwner = "PROJECT_OWNER"
+ testProjectRoleRead = "PROJECT_READ_ONLY"
+ testProjectRoleMember = "PROJECT_MEMBER"
+
+ testTeamID1 = "team1"
+ testTeamID2 = "team2"
+ testProjectID1 = "project-123"
+ testOrgID = "org-123"
+)
+
+var (
+ when = time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC)
+ testCreatedAt = when.Format(time.RFC3339)
+ testInvitationCreatedAt = when.Add(-24 * time.Hour).Format(time.RFC3339)
+ testInvitationExpiresAt = when.Add(24 * time.Hour).Format(time.RFC3339)
+ testLastAuth = when.Add(-2 * time.Hour).Format(time.RFC3339)
+
+ testTeamIDs = []string{"team1", "team2"}
+ testOrgRoles = []string{"owner", "readWrite"}
+)
+
+type sdkToTFModelTestCase struct {
+ SDKResp *admin.OrgUserResponse
+ expectedTFModel *clouduserteamassignment.TFUserTeamAssignmentModel
+}
+
+func TestUserTeamAssignmentSDKToTFModel(t *testing.T) {
+ ctx := t.Context()
+
+ fullResp := &admin.OrgUserResponse{
+ Id: testUserID,
+ Username: testUsername,
+ FirstName: admin.PtrString(testFirstName),
+ LastName: admin.PtrString(testLastName),
+ Country: admin.PtrString(testCountry),
+ MobileNumber: admin.PtrString(testMobile),
+ OrgMembershipStatus: testOrgMembershipStatus,
+ CreatedAt: admin.PtrTime(when),
+ LastAuth: admin.PtrTime(when.Add(-2 * time.Hour)),
+ InvitationCreatedAt: admin.PtrTime(when.Add(-24 * time.Hour)),
+ InvitationExpiresAt: admin.PtrTime(when.Add(24 * time.Hour)),
+ InviterUsername: admin.PtrString(testInviterUsername),
+ TeamIds: &testTeamIDs,
+ Roles: admin.OrgUserRolesResponse{
+ OrgRoles: &testOrgRoles,
+ },
+ }
+
+ orgRolesSet, _ := types.SetValueFrom(ctx, types.StringType, testOrgRoles)
+ expectedRoles, _ := types.ObjectValue(clouduserteamassignment.RolesObjectAttrTypes, map[string]attr.Value{
+ "org_roles": orgRolesSet,
+ "project_role_assignments": types.SetNull(clouduserteamassignment.ProjectRoleAssignmentsAttrType),
+ })
+ expectedTeams, _ := types.SetValueFrom(ctx, types.StringType, testTeamIDs)
+ expectedFullModel := &clouduserteamassignment.TFUserTeamAssignmentModel{
+ UserId: types.StringValue(testUserID),
+ Username: types.StringValue(testUsername),
+ FirstName: types.StringValue(testFirstName),
+ LastName: types.StringValue(testLastName),
+ Country: types.StringValue(testCountry),
+ MobileNumber: types.StringValue(testMobile),
+ OrgMembershipStatus: types.StringValue(testOrgMembershipStatus),
+ CreatedAt: types.StringValue(testCreatedAt),
+ LastAuth: types.StringValue(testLastAuth),
+ InvitationCreatedAt: types.StringValue(testInvitationCreatedAt),
+ InvitationExpiresAt: types.StringValue(testInvitationExpiresAt),
+ InviterUsername: types.StringValue(testInviterUsername),
+ OrgId: types.StringNull(),
+ TeamId: types.StringNull(),
+ Roles: expectedRoles,
+ TeamIds: expectedTeams,
+ }
+
+ testCases := map[string]sdkToTFModelTestCase{
+ "nil SDK response": {
+ SDKResp: nil,
+ expectedTFModel: nil,
+ },
+ "Complete SDK response": {
+ SDKResp: fullResp,
+ expectedTFModel: expectedFullModel,
+ },
+ }
+
+ for testName, tc := range testCases {
+ t.Run(testName, func(t *testing.T) {
+ resultModel, diags := clouduserteamassignment.NewTFUserTeamAssignmentModel(t.Context(), tc.SDKResp)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expectedTFModel, resultModel, "created terraform model did not match expected output")
+ })
+ }
+}
+
+func createRolesObject(ctx context.Context, orgRoles []string, projectAssignments []clouduserteamassignment.TFProjectRoleAssignmentsModel) types.Object {
+ orgRolesSet, _ := types.SetValueFrom(ctx, types.StringType, orgRoles)
+
+ // The model stores project role assignments as a types.Set, so build the
+ // same Set type here to keep expected and actual values comparable.
+ var projectRoleAssignmentsSet types.Set
+ if len(projectAssignments) == 0 {
+ projectRoleAssignmentsSet = types.SetNull(clouduserteamassignment.ProjectRoleAssignmentsAttrType.ElemType.(types.ObjectType))
+ } else {
+ projectRoleAssignmentsSet, _ = types.SetValueFrom(ctx, clouduserteamassignment.ProjectRoleAssignmentsAttrType.ElemType.(types.ObjectType), projectAssignments)
+ }
+
+ obj, _ := types.ObjectValue(
+ clouduserteamassignment.RolesObjectAttrTypes,
+ map[string]attr.Value{
+ "org_roles": orgRolesSet,
+ "project_role_assignments": projectRoleAssignmentsSet,
+ },
+ )
+ return obj
+}
+
+func TestNewUserTeamAssignmentReq(t *testing.T) {
+ ctx := t.Context()
+ projectAssignment := clouduserteamassignment.TFProjectRoleAssignmentsModel{
+ ProjectId: types.StringValue(testProjectID1),
+ ProjectRoles: types.SetValueMust(types.StringType, []attr.Value{types.StringValue(testProjectRoleOwner)}),
+ }
+ teams, _ := types.SetValueFrom(ctx, types.StringType, testTeamIDs)
+ testCases := map[string]struct {
+ plan *clouduserteamassignment.TFUserTeamAssignmentModel
+ expected *admin.AddOrRemoveUserFromTeam
+ }{
+ "Complete model": {
+ plan: &clouduserteamassignment.TFUserTeamAssignmentModel{
+ OrgId: types.StringValue(testOrgID),
+ TeamId: types.StringValue(testTeamID1),
+ UserId: types.StringValue(testUserID),
+ Username: types.StringValue(testUsername),
+ OrgMembershipStatus: types.StringValue(testOrgMembershipStatus),
+ Roles: createRolesObject(ctx, testOrgRoles, []clouduserteamassignment.TFProjectRoleAssignmentsModel{
+ projectAssignment,
+ }),
+ TeamIds: teams,
+ InvitationCreatedAt: types.StringValue(testInvitationCreatedAt),
+ InvitationExpiresAt: types.StringValue(testInvitationExpiresAt),
+ InviterUsername: types.StringValue(testInviterUsername),
+ Country: types.StringValue(testCountry),
+ FirstName: types.StringValue(testFirstName),
+ LastName: types.StringValue(testLastName),
+ CreatedAt: types.StringValue(testCreatedAt),
+ LastAuth: types.StringValue(testLastAuth),
+ MobileNumber: types.StringValue(testMobile),
+ },
+ expected: &admin.AddOrRemoveUserFromTeam{
+ Id: testUserID,
+ },
+ },
+ }
+
+ for name, tc := range testCases {
+ t.Run(name, func(t *testing.T) {
+ req, diags := clouduserteamassignment.NewUserTeamAssignmentReq(ctx, tc.plan)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expected, req)
+ })
+ }
+}
diff --git a/internal/service/clouduserteamassignment/resource.go b/internal/service/clouduserteamassignment/resource.go
new file mode 100644
index 0000000000..d01dee0e7c
--- /dev/null
+++ b/internal/service/clouduserteamassignment/resource.go
@@ -0,0 +1,193 @@
+package clouduserteamassignment
+
+import (
+ "context"
+ "fmt"
+ "regexp"
+
+ "github.com/hashicorp/terraform-plugin-framework/path"
+ "github.com/hashicorp/terraform-plugin-framework/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+const (
+ resourceName = "cloud_user_team_assignment"
+ warnUnsupportedOperation = "Operation not supported"
+ errorReadingUsers = "Error retrieving team users"
+ invalidImportID = "Invalid import ID format"
+)
+
+var _ resource.ResourceWithConfigure = &rs{}
+var _ resource.ResourceWithImportState = &rs{}
+
+func Resource() resource.Resource {
+ return &rs{
+ RSCommon: config.RSCommon{
+ ResourceName: resourceName,
+ },
+ }
+}
+
+type rs struct {
+ config.RSCommon
+}
+
+func (r *rs) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
+ resp.Schema = resourceSchema()
+ conversion.UpdateSchemaDescription(&resp.Schema)
+}
+
+func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
+ var plan TFUserTeamAssignmentModel
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+ connV2 := r.Client.AtlasV2
+ orgID := plan.OrgId.ValueString()
+ teamID := plan.TeamId.ValueString()
+ cloudUserTeamAssignmentReq, diags := NewUserTeamAssignmentReq(ctx, &plan)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ apiResp, _, err := connV2.MongoDBCloudUsersApi.AddOrgTeamUser(ctx, orgID, teamID, cloudUserTeamAssignmentReq).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf("error assigning user to TeamID(%s):", teamID), err.Error())
+ return
+ }
+
+ newUserTeamAssignmentModel, diags := NewTFUserTeamAssignmentModel(ctx, apiResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ newUserTeamAssignmentModel.OrgId = plan.OrgId
+ newUserTeamAssignmentModel.TeamId = plan.TeamId
+ resp.Diagnostics.Append(resp.State.Set(ctx, newUserTeamAssignmentModel)...)
+}
+
+func fetchTeamUser(ctx context.Context, connV2 *admin.APIClient, orgID, teamID string, userID, username *string) (*admin.OrgUserResponse, error) {
+ var params admin.ListTeamUsersApiParams
+ if userID != nil && *userID != "" {
+ params = admin.ListTeamUsersApiParams{
+ UserId: userID,
+ OrgId: orgID,
+ TeamId: teamID,
+ }
+ } else if username != nil && *username != "" {
+ params = admin.ListTeamUsersApiParams{
+ Username: username,
+ OrgId: orgID,
+ TeamId: teamID,
+ }
+ }
+
+ userListResp, httpResp, err := connV2.MongoDBCloudUsersApi.ListTeamUsersWithParams(ctx, &params).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ return nil, nil
+ }
+ return nil, err
+ }
+
+ if userListResp == nil || len(userListResp.GetResults()) == 0 {
+ return nil, nil
+ }
+ userResp := userListResp.GetResults()[0]
+ return &userResp, nil
+}
+
+func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
+ var state TFUserTeamAssignmentModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ orgID := state.OrgId.ValueString()
+ teamID := state.TeamId.ValueString()
+
+ var userID, username *string
+ if !state.UserId.IsNull() && state.UserId.ValueString() != "" {
+ userID = state.UserId.ValueStringPointer()
+ } else if !state.Username.IsNull() && state.Username.ValueString() != "" {
+ username = state.Username.ValueStringPointer()
+ }
+
+ userResp, err := fetchTeamUser(ctx, connV2, orgID, teamID, userID, username)
+ if err != nil {
+ resp.Diagnostics.AddError(errorReadingUsers, err.Error())
+ return
+ }
+ if userResp == nil {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+
+ newCloudUserTeamAssignmentModel, diags := NewTFUserTeamAssignmentModel(ctx, userResp)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ newCloudUserTeamAssignmentModel.OrgId = state.OrgId
+ newCloudUserTeamAssignmentModel.TeamId = state.TeamId
+ resp.Diagnostics.Append(resp.State.Set(ctx, newCloudUserTeamAssignmentModel)...)
+}
+
+func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
+ resp.Diagnostics.AddError(warnUnsupportedOperation, "Updating the cloud user team assignment is not supported. To modify your infrastructure, please delete the existing mongodbatlas_cloud_user_team_assignment resource and create a new one with the necessary updates")
+}
+
+func (r *rs) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
+ var state *TFUserTeamAssignmentModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ orgID := state.OrgId.ValueString()
+ userID := state.UserId.ValueString()
+ teamID := state.TeamId.ValueString()
+
+ userInfo := &admin.AddOrRemoveUserFromTeam{
+ Id: userID,
+ }
+
+ _, httpResp, err := connV2.MongoDBCloudUsersApi.RemoveOrgTeamUser(ctx, orgID, teamID, userInfo).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+ resp.Diagnostics.AddError(fmt.Sprintf("error deleting user(%s) from TeamID(%s):", userID, teamID), err.Error())
+ return
+ }
+}
+
+func (r *rs) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
+ importID := req.ID
+ ok, parts := conversion.ImportSplit(req.ID, 3)
+ if !ok {
+ resp.Diagnostics.AddError(invalidImportID, "expected 'org_id/team_id/user_id' or 'org_id/team_id/username', got: "+importID)
+ return
+ }
+ orgID, teamID, user := parts[0], parts[1], parts[2]
+
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("org_id"), orgID)...)
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("team_id"), teamID)...)
+
+ emailRegex := regexp.MustCompile(`^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$`)
+
+ if emailRegex.MatchString(user) {
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("username"), user)...)
+ } else {
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("user_id"), user)...)
+ }
+}
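`ImportState` above accepts either `org_id/team_id/user_id` or `org_id/team_id/username` and uses an email regex to decide whether the third segment should populate `username` or `user_id`. A standalone sketch of that classification, with the regex copied from the resource and a hypothetical `classify` helper:

```go
package main

import (
	"fmt"
	"regexp"
)

// emailRegex mirrors the pattern used in ImportState to detect whether
// the third import-ID segment is an email address (username).
var emailRegex = regexp.MustCompile(`^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$`)

// classify reports which state attribute an import-ID segment would be
// stored under: "username" for email-shaped input, "user_id" otherwise.
func classify(segment string) string {
	if emailRegex.MatchString(segment) {
		return "username"
	}
	return "user_id"
}

func main() {
	fmt.Println(classify("jane.doe@example.com"))      // username
	fmt.Println(classify("64a7f2c1e4b0d93a1b2c3d4e")) // user_id
}
```

Because a 24-hexadecimal user ID can never contain `@`, the regex check is sufficient to disambiguate the two import formats without a separate flag.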
diff --git a/internal/service/clouduserteamassignment/resource_migration_test.go b/internal/service/clouduserteamassignment/resource_migration_test.go
new file mode 100644
index 0000000000..e651ecd9f0
--- /dev/null
+++ b/internal/service/clouduserteamassignment/resource_migration_test.go
@@ -0,0 +1,95 @@
+package clouduserteamassignment_test
+
+import (
+ "fmt"
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+const (
+ resourceTeamName = "mongodbatlas_team.test"
+ resourceTeamAssignmentName = "mongodbatlas_cloud_user_team_assignment.test"
+)
+
+func TestMigCloudUserTeamAssignmentRS_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0") // version in which this resource was first released
+ mig.CreateAndRunTest(t, basicTestCase(t))
+}
+
+func TestMigCloudUserTeamAssignmentRS_migrationJourney(t *testing.T) {
+ var (
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ teamName = fmt.Sprintf("team-test-%s", acc.RandomName())
+ username = os.Getenv("MONGODB_ATLAS_USERNAME")
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckAtlasUsername(t) },
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ // NOTE: This step uses the team resource's 'usernames' attribute (added in v1.39.0, deprecated in v2.0.0),
+ // which may be removed in a future version and would then break this test. Keep this note for version tracking.
+ ExternalProviders: mig.ExternalProviders(),
+ Config: configTeamWithUsernamesFirst(orgID, teamName, username),
+ },
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configWithTeamAssignmentsSecond(orgID, teamName, username), // expected to see 1 import in the plan
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr(resourceTeamName, "name", teamName),
+
+ resource.TestCheckResourceAttrSet(resourceTeamAssignmentName, "user_id"),
+ resource.TestCheckResourceAttr(resourceTeamAssignmentName, "username", username),
+ ),
+ },
+ mig.TestStepCheckEmptyPlan(configWithTeamAssignmentsSecond(orgID, teamName, username)),
+ },
+ })
+}
+
+// Step 1: Original configuration with usernames attribute
+func configTeamWithUsernamesFirst(orgID, teamName, username string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "test" {
+ org_id = %[2]q
+ name = %[3]q
+ usernames = [%[1]q]
+ }
+ `, username, orgID, teamName)
+}
+
+// Step 2: Configuration with team assignments using import blocks
+
+// NOTE: Using static resource assignment instead of for_each with multiple usernames
+// due to a known limitation in Terraform's acceptance testing framework with indexed resources.
+// The actual migration using for_each works correctly (verified locally).
+func configWithTeamAssignmentsSecond(orgID, teamName, username string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "test" {
+ org_id = %[1]q
+ name = %[2]q
+ }
+
+ data "mongodbatlas_team" "test" {
+ org_id = %[1]q
+ team_id = mongodbatlas_team.test.team_id
+ }
+
+ resource "mongodbatlas_cloud_user_team_assignment" "test" {
+ org_id = %[1]q
+ team_id = mongodbatlas_team.test.team_id
+ user_id = data.mongodbatlas_team.test.users[0].id
+ }
+
+ import {
+ to = mongodbatlas_cloud_user_team_assignment.test
+ id = "%[1]s/${mongodbatlas_team.test.team_id}/${data.mongodbatlas_team.test.users[0].id}"
+ }
+
+ `, orgID, teamName)
+}
diff --git a/internal/service/clouduserteamassignment/resource_test.go b/internal/service/clouduserteamassignment/resource_test.go
new file mode 100644
index 0000000000..84ffdf50cb
--- /dev/null
+++ b/internal/service/clouduserteamassignment/resource_test.go
@@ -0,0 +1,138 @@
+package clouduserteamassignment_test
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+var resourceName = "mongodbatlas_cloud_user_team_assignment.test"
+var dataSourceName1 = "data.mongodbatlas_cloud_user_team_assignment.test1"
+var dataSourceName2 = "data.mongodbatlas_cloud_user_team_assignment.test2"
+
+func TestAccCloudUserTeamAssignment_basic(t *testing.T) {
+ resource.Test(t, *basicTestCase(t))
+}
+
+func basicTestCase(t *testing.T) *resource.TestCase {
+ t.Helper()
+
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ userID := os.Getenv("MONGODB_ATLAS_PROJECT_OWNER_ID")
+ teamName := acc.RandomName()
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckBasicOwnerID(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(orgID, userID, teamName),
+ Check: checks(orgID, userID),
+ },
+ {
+ ResourceName: resourceName,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceName].Primary.Attributes
+ orgID := attrs["org_id"]
+ teamID := attrs["team_id"]
+ username := attrs["username"]
+ return orgID + "/" + teamID + "/" + username, nil
+ },
+ },
+ {
+ ResourceName: resourceName,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "user_id",
+ ImportStateIdFunc: func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceName].Primary.Attributes
+ orgID := attrs["org_id"]
+ teamID := attrs["team_id"]
+ userID := attrs["user_id"]
+ return orgID + "/" + teamID + "/" + userID, nil
+ },
+ },
+ },
+ }
+}
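The two import steps above accept either `org_id/team_id/user_id` or `org_id/team_id/username` as the import ID. A stdlib-only sketch of composing and splitting that three-part ID (the helper names here are illustrative, not part of the provider):

```go
package main

import (
	"fmt"
	"strings"
)

// buildImportID composes the three-part import ID used by the test steps.
func buildImportID(orgID, teamID, userIDOrUsername string) string {
	return orgID + "/" + teamID + "/" + userIDOrUsername
}

// splitImportID reverses buildImportID; the last segment may be either
// a 24-hexadecimal user ID or a username (email), neither of which contains "/".
func splitImportID(id string) (orgID, teamID, userIDOrUsername string, err error) {
	parts := strings.SplitN(id, "/", 3)
	if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" {
		return "", "", "", fmt.Errorf("expected import ID of the form org_id/team_id/user_id, got %q", id)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	id := buildImportID("5f3a000000000000000000aa", "1c2d000000000000000000bb", "user@example.com")
	org, team, user, err := splitImportID(id)
	fmt.Println(org, team, user, err)
}
```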
+
+func configBasic(orgID, userID, teamName string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "test" {
+ org_id = %[1]q
+ name = %[3]q
+ }
+ resource "mongodbatlas_cloud_user_team_assignment" "test" {
+ org_id = %[1]q
+ team_id = mongodbatlas_team.test.team_id
+ user_id = %[2]q
+ }
+ data "mongodbatlas_cloud_user_team_assignment" "test1" {
+ org_id = %[1]q
+ team_id = mongodbatlas_team.test.team_id
+ user_id = mongodbatlas_cloud_user_team_assignment.test.user_id
+ }
+
+ data "mongodbatlas_cloud_user_team_assignment" "test2" {
+ org_id = %[1]q
+ team_id = mongodbatlas_team.test.team_id
+ username = mongodbatlas_cloud_user_team_assignment.test.username
+ }
+ `,
+ orgID, userID, teamName)
+}
+
+func checks(orgID, userID string) resource.TestCheckFunc {
+ return resource.ComposeAggregateTestCheckFunc(
+ resource.TestCheckResourceAttr(resourceName, "org_id", orgID),
+ resource.TestCheckResourceAttr(resourceName, "user_id", userID),
+ resource.TestCheckResourceAttrSet(resourceName, "username"),
+ resource.TestCheckResourceAttrWith(resourceName, "username", acc.IsUsername()),
+ resource.TestCheckResourceAttrWith(resourceName, "created_at", acc.IsTimestamp()),
+ resource.TestCheckResourceAttrWith(resourceName, "team_ids.#", acc.IntGreatThan(0)),
+
+ resource.TestCheckResourceAttr(dataSourceName1, "user_id", userID),
+ resource.TestCheckResourceAttrWith(dataSourceName1, "username", acc.IsUsername()),
+ resource.TestCheckResourceAttr(dataSourceName1, "org_id", orgID),
+
+ resource.TestCheckResourceAttr(dataSourceName2, "user_id", userID),
+ resource.TestCheckResourceAttrWith(dataSourceName2, "username", acc.IsUsername()),
+ resource.TestCheckResourceAttr(dataSourceName2, "org_id", orgID),
+ )
+}
+
+func checkDestroy(s *terraform.State) error {
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "mongodbatlas_cloud_user_team_assignment" {
+ continue
+ }
+ orgID := rs.Primary.Attributes["org_id"]
+ teamID := rs.Primary.Attributes["team_id"]
+ userID := rs.Primary.Attributes["user_id"]
+ conn := acc.ConnV2()
+ ctx := context.Background()
+
+ if userID != "" {
+ userIDParams := &admin.ListTeamUsersApiParams{
+ UserId: &userID,
+ OrgId: orgID,
+ TeamId: teamID,
+ }
+ userList, _, err := conn.MongoDBCloudUsersApi.ListTeamUsersWithParams(ctx, userIDParams).Execute()
+ if err != nil {
+ return fmt.Errorf("error listing users for team (%s): %s", teamID, err)
+ }
+ if userList.HasResults() {
+ return fmt.Errorf("cloud user team assignment for user (%s) in team (%s) still exists", userID, teamID)
+ }
+ }
+ }
+ return nil
+}
diff --git a/internal/service/clouduserteamassignment/schema.go b/internal/service/clouduserteamassignment/schema.go
new file mode 100644
index 0000000000..9b2259993f
--- /dev/null
+++ b/internal/service/clouduserteamassignment/schema.go
@@ -0,0 +1,191 @@
+package clouduserteamassignment
+
+import (
+ "github.com/hashicorp/terraform-plugin-framework/attr"
+ dsschema "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+)
+
+func resourceSchema() schema.Schema {
+ return schema.Schema{
+ Attributes: map[string]schema.Attribute{
+ "org_id": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the organization that contains your projects. Use the [/orgs](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-organizations) endpoint to retrieve all organizations to which the authenticated user has access.",
+ },
+ "team_id": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the team to which you want to assign the MongoDB Cloud user. Use the [/teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/group/endpoint-teams) endpoint to retrieve all teams to which the authenticated user has access.",
+ },
+ "user_id": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.",
+ },
+ "username": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Email address that represents the username of the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "org_membership_status": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "String enum that indicates whether the MongoDB Cloud user has a pending invitation to join the organization or is already active in the organization.",
+ },
+ "roles": schema.SingleNestedAttribute{
+ Computed: true,
+ MarkdownDescription: "Organization and project level roles to assign the MongoDB Cloud user within one organization.",
+ Attributes: map[string]schema.Attribute{
+ "project_role_assignments": schema.SetNestedAttribute{
+ Computed: true,
+ MarkdownDescription: "List of project level role assignments to assign the MongoDB Cloud user.",
+ NestedObject: schema.NestedAttributeObject{
+ Attributes: map[string]schema.Attribute{
+ "project_id": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the project to which these roles belong.",
+ },
+ "project_roles": schema.SetAttribute{
+ ElementType: types.StringType,
+ Computed: true,
+ MarkdownDescription: "One or more project-level roles assigned to the MongoDB Cloud user.",
+ },
+ },
+ },
+ },
+ "org_roles": schema.SetAttribute{
+ ElementType: types.StringType,
+ Computed: true,
+ MarkdownDescription: "One or more organization level roles to assign the MongoDB Cloud user.",
+ },
+ },
+ },
+ "team_ids": schema.SetAttribute{
+ ElementType: types.StringType,
+ Computed: true,
+ MarkdownDescription: "List of unique 24-hexadecimal digit strings that identify the teams to which this MongoDB Cloud user belongs.",
+ },
+ "invitation_created_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when MongoDB Cloud sent the invitation. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "invitation_expires_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when the invitation from MongoDB Cloud expires. MongoDB Cloud represents this timestamp in ISO 8601 format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "inviter_username": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Username of the MongoDB Cloud user who sent the invitation to join the organization.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "country": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Two-character alphabetical string that identifies the MongoDB Cloud user's geographic location. This parameter uses the ISO 3166-1 alpha-2 code format.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "first_name": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "First or given name that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "last_name": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Last name, family name, or surname that belongs to the MongoDB Cloud user.",
+ },
+ "created_at": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when MongoDB Cloud created the current account. This value is in the ISO 8601 timestamp format in UTC.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ "last_auth": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Date and time when the current account last authenticated. This value is in the ISO 8601 timestamp format in UTC.",
+ },
+ "mobile_number": schema.StringAttribute{
+ Computed: true,
+ MarkdownDescription: "Mobile phone number that belongs to the MongoDB Cloud user.",
+ PlanModifiers: []planmodifier.String{
+ stringplanmodifier.UseStateForUnknown(),
+ },
+ },
+ },
+ }
+}
+
+func dataSourceSchema() dsschema.Schema {
+ return conversion.DataSourceSchemaFromResource(resourceSchema(), &conversion.DataSourceSchemaRequest{
+ RequiredFields: []string{"org_id", "team_id"},
+ OverridenFields: dataSourceOverridenFields(),
+ })
+}
+
+func dataSourceOverridenFields() map[string]dsschema.Attribute {
+ return map[string]dsschema.Attribute{
+ "user_id": dsschema.StringAttribute{
+ Optional: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies the MongoDB Cloud user.",
+ },
+ "username": dsschema.StringAttribute{
+ Optional: true,
+ MarkdownDescription: "Email address that represents the username of the MongoDB Cloud user.",
+ },
+ }
+}
+
+type TFUserTeamAssignmentModel struct {
+ OrgId types.String `tfsdk:"org_id"`
+ TeamId types.String `tfsdk:"team_id"`
+ UserId types.String `tfsdk:"user_id"`
+ Username types.String `tfsdk:"username"`
+ OrgMembershipStatus types.String `tfsdk:"org_membership_status"`
+ Roles types.Object `tfsdk:"roles"`
+ TeamIds types.Set `tfsdk:"team_ids"`
+ InvitationCreatedAt types.String `tfsdk:"invitation_created_at"`
+ InvitationExpiresAt types.String `tfsdk:"invitation_expires_at"`
+ InviterUsername types.String `tfsdk:"inviter_username"`
+ Country types.String `tfsdk:"country"`
+ FirstName types.String `tfsdk:"first_name"`
+ LastName types.String `tfsdk:"last_name"`
+ CreatedAt types.String `tfsdk:"created_at"`
+ LastAuth types.String `tfsdk:"last_auth"`
+ MobileNumber types.String `tfsdk:"mobile_number"`
+}
+
+type TFRolesModel struct {
+ ProjectRoleAssignments types.Set `tfsdk:"project_role_assignments"`
+ OrgRoles types.Set `tfsdk:"org_roles"`
+}
+
+type TFProjectRoleAssignmentsModel struct {
+ ProjectId types.String `tfsdk:"project_id"`
+ ProjectRoles types.Set `tfsdk:"project_roles"`
+}
+
+var ProjectRoleAssignmentsAttrType = types.SetType{ElemType: types.ObjectType{AttrTypes: map[string]attr.Type{
+ "project_id": types.StringType,
+ "project_roles": types.SetType{ElemType: types.StringType},
+}}}
+
+var RolesObjectAttrTypes = map[string]attr.Type{
+ "org_roles": types.SetType{ElemType: types.StringType},
+ "project_role_assignments": ProjectRoleAssignmentsAttrType,
+}
diff --git a/internal/service/cluster/data_source_cluster.go b/internal/service/cluster/data_source_cluster.go
index 48204e6d86..f14ed26b6e 100644
--- a/internal/service/cluster/data_source_cluster.go
+++ b/internal/service/cluster/data_source_cluster.go
@@ -10,13 +10,15 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
)
func DataSource() *schema.Resource {
return &schema.Resource{
- ReadContext: dataSourceRead,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "datasource", "mongodbatlas_advanced_cluster", clusterToAdvancedClusterGuide),
+ ReadContext: dataSourceRead,
Schema: map[string]*schema.Schema{
"project_id": {
Type: schema.TypeString,
diff --git a/internal/service/cluster/data_source_clusters.go b/internal/service/cluster/data_source_clusters.go
index 9fb73d41b2..1b450e0d88 100644
--- a/internal/service/cluster/data_source_clusters.go
+++ b/internal/service/cluster/data_source_clusters.go
@@ -14,13 +14,15 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedcluster"
)
func PluralDataSource() *schema.Resource {
return &schema.Resource{
- ReadContext: dataSourcePluralRead,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "datasource", "mongodbatlas_advanced_clusters", clusterToAdvancedClusterGuide),
+ ReadContext: dataSourcePluralRead,
Schema: map[string]*schema.Schema{
"project_id": {
Type: schema.TypeString,
diff --git a/internal/service/cluster/resource_cluster.go b/internal/service/cluster/resource_cluster.go
index 235759e422..9a8c426734 100644
--- a/internal/service/cluster/resource_cluster.go
+++ b/internal/service/cluster/resource_cluster.go
@@ -21,6 +21,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
"github.com/spf13/cast"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -35,12 +36,14 @@ const (
errorClusterUpdate = "error updating MongoDB Cluster (%s): %s"
errorAdvancedConfUpdate = "error updating Advanced Configuration Option %s for MongoDB Cluster (%s): %s"
ErrorSnapshotBackupPolicyRead = "error getting a Cloud Provider Snapshot Backup Policy for the cluster(%s): %s"
+ clusterToAdvancedClusterGuide = "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide"
)
var defaultLabel = matlas.Label{Key: advancedclustertpf.LegacyIgnoredLabelKey, Value: "MongoDB Atlas Terraform Provider"}
func Resource() *schema.Resource {
return &schema.Resource{
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "resource", "mongodbatlas_advanced_cluster", clusterToAdvancedClusterGuide),
CreateWithoutTimeout: resourceCreate,
ReadWithoutTimeout: resourceRead,
UpdateWithoutTimeout: resourceUpdate,
diff --git a/internal/service/cluster/resource_cluster_state_upgrader_test.go b/internal/service/cluster/resource_cluster_state_upgrader_test.go
index cc9fabc7c4..edbc0d6e04 100644
--- a/internal/service/cluster/resource_cluster_state_upgrader_test.go
+++ b/internal/service/cluster/resource_cluster_state_upgrader_test.go
@@ -2,14 +2,20 @@ package cluster_test
import (
"fmt"
+ "strings"
"testing"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/cluster"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
+const (
+ deprecatedResourceDiagSummary = "Deprecated Resource"
+)
+
func TestAccClusterRSClusterMigrateState_empty_advancedConfig(t *testing.T) {
acc.SkipInUnitTest(t)
v0State := map[string]any{
@@ -39,7 +45,7 @@ func TestAccClusterRSClusterMigrateState_empty_advancedConfig(t *testing.T) {
v1Config := terraform.NewResourceConfigRaw(v1State)
diags = cluster.Resource().Validate(v1Config)
- if len(diags) > 0 {
+ if isErrorDiags(diags) {
fmt.Println(diags)
t.Error("migrated cluster advanced config is invalid")
@@ -83,7 +89,7 @@ func TestAccClusterRSClusterMigrateState_with_advancedConfig(t *testing.T) {
v1Config := terraform.NewResourceConfigRaw(v1State)
diags = cluster.Resource().Validate(v1Config)
- if len(diags) > 0 {
+ if isErrorDiags(diags) {
fmt.Println(diags)
t.Error("migrated cluster advanced config is invalid")
@@ -127,10 +133,14 @@ func TestAccClusterRSClusterMigrateState_with_defaultAdvancedConfig_v0_5_1(t *te
v1Config := terraform.NewResourceConfigRaw(v1State)
diags = cluster.Resource().Validate(v1Config)
- if len(diags) > 0 {
+ if isErrorDiags(diags) {
fmt.Println(diags)
t.Error("migrated cluster advanced config is invalid")
return
}
}
+
+func isErrorDiags(diags diag.Diagnostics) bool {
+ return len(diags) > 0 && !strings.Contains(diags[0].Summary, deprecatedResourceDiagSummary)
+}
diff --git a/internal/service/cluster/resource_cluster_test.go b/internal/service/cluster/resource_cluster_test.go
index 885cd3bbba..c79d177cbf 100644
--- a/internal/service/cluster/resource_cluster_test.go
+++ b/internal/service/cluster/resource_cluster_test.go
@@ -1402,11 +1402,11 @@ func TestAccCluster_pinnedFCVWithVersionUpgradeAndDowngrade(t *testing.T) {
Steps: []resource.TestStep{
{
Config: configFCVPinning(orgID, projectName, clusterName, nil, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
+ Check: checkFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
},
{ // pins fcv
Config: configFCVPinning(orgID, projectName, clusterName, &firstExpirationDate, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, conversion.Pointer(firstExpirationDate), conversion.Pointer(7)),
+ Check: checkFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, conversion.Pointer(firstExpirationDate), conversion.Pointer(7)),
},
{ // using incorrect format
Config: configFCVPinning(orgID, projectName, clusterName, &invalidDateFormat, "7.0"),
@@ -1414,24 +1414,44 @@ func TestAccCluster_pinnedFCVWithVersionUpgradeAndDowngrade(t *testing.T) {
},
{ // updates expiration date of fcv
Config: configFCVPinning(orgID, projectName, clusterName, &updatedExpirationDate, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, conversion.Pointer(updatedExpirationDate), conversion.Pointer(7)),
+ Check: checkFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, conversion.Pointer(updatedExpirationDate), conversion.Pointer(7)),
},
{ // upgrade mongodb version with fcv pinned
Config: configFCVPinning(orgID, projectName, clusterName, &updatedExpirationDate, "8.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 8, conversion.Pointer(updatedExpirationDate), conversion.Pointer(7)),
+ Check: checkFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 8, conversion.Pointer(updatedExpirationDate), conversion.Pointer(7)),
},
{ // downgrade mongodb version with fcv pinned
Config: configFCVPinning(orgID, projectName, clusterName, &updatedExpirationDate, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, conversion.Pointer(updatedExpirationDate), conversion.Pointer(7)),
+ Check: checkFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, conversion.Pointer(updatedExpirationDate), conversion.Pointer(7)),
},
{ // unpins fcv
Config: configFCVPinning(orgID, projectName, clusterName, nil, "7.0"),
- Check: acc.CheckFCVPinningConfig(true, resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
+ Check: checkFCVPinningConfig(resourceName, dataSourceName, dataSourcePluralName, 7, nil, nil),
},
},
})
}
+func checkFCVPinningConfig(resourceName, dataSourceName, pluralDataSourceName string, mongoDBMajorVersion int, pinningExpirationDate *string, fcvVersion *int) resource.TestCheckFunc {
+ mapChecks := map[string]string{
+ "mongo_db_major_version": fmt.Sprintf("%d.0", mongoDBMajorVersion),
+ }
+
+ if pinningExpirationDate != nil {
+ mapChecks["pinned_fcv.0.expiration_date"] = *pinningExpirationDate
+ } else {
+ mapChecks["pinned_fcv.#"] = "0"
+ }
+
+ if fcvVersion != nil {
+ mapChecks["pinned_fcv.0.version"] = fmt.Sprintf("%d.0", *fcvVersion)
+ }
+
+ additionalCheck := resource.TestCheckResourceAttrWith(resourceName, "mongo_db_version", acc.MatchesExpression(fmt.Sprintf("%d..*", mongoDBMajorVersion)))
+
+ return acc.CheckRSAndDS(resourceName, admin.PtrString(dataSourceName), admin.PtrString(pluralDataSourceName), []string{}, mapChecks, additionalCheck)
+}
+
func configAWS(projectID, name string, backupEnabled, autoDiskGBEnabled bool) string {
return fmt.Sprintf(`
resource "mongodbatlas_cluster" "test" {
diff --git a/internal/service/clusteroutagesimulation/data_source_cluster_outage_simulation.go b/internal/service/clusteroutagesimulation/data_source.go
similarity index 100%
rename from internal/service/clusteroutagesimulation/data_source_cluster_outage_simulation.go
rename to internal/service/clusteroutagesimulation/data_source.go
diff --git a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go b/internal/service/clusteroutagesimulation/resource.go
similarity index 64%
rename from internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go
rename to internal/service/clusteroutagesimulation/resource.go
index 23f824eccb..fa8655930e 100644
--- a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation.go
+++ b/internal/service/clusteroutagesimulation/resource.go
@@ -9,6 +9,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -21,16 +22,18 @@ const (
errorClusterOutageSimulationDelete = "error ending MongoDB Atlas Cluster Outage Simulation for Project (%s), Cluster (%s): %s"
errorClusterOutageSimulationSetting = "error setting `%s` for MongoDB Atlas Cluster Outage Simulation: %s"
defaultOutageFilterType = "REGION"
+ oneMinute = 1 * time.Minute
)
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- UpdateContext: resourceUpdate,
- DeleteContext: resourceDelete,
+ CreateWithoutTimeout: resourceCreate,
+ ReadWithoutTimeout: resourceRead,
+ UpdateWithoutTimeout: resourceUpdate,
+ DeleteWithoutTimeout: resourceDelete,
Timeouts: &schema.ResourceTimeout{
Delete: schema.DefaultTimeout(25 * time.Minute),
+ Create: schema.DefaultTimeout(25 * time.Minute),
},
Schema: map[string]*schema.Schema{
"project_id": {
@@ -74,6 +77,11 @@ func Resource() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
+ "delete_on_create_timeout": { // Don't use Default: true to avoid unplanned changes when upgrading from previous versions.
+ Type: schema.TypeBool,
+ Optional: true,
+ Description: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, the deletion is triggered and the operation returns immediately, without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
+ },
},
}
}
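As a user-facing illustration of the new attribute (project ID, cluster name, and region below are placeholders): setting `delete_on_create_timeout = false` keeps a timed-out simulation around for inspection instead of triggering cleanup.

```terraform
resource "mongodbatlas_cluster_outage_simulation" "outage" {
  project_id               = var.project_id
  cluster_name             = "my-cluster"
  delete_on_create_timeout = false # keep the simulation if the create times out

  outage_filters {
    cloud_provider = "AWS"
    region_name    = "US_EAST_1"
  }

  timeouts {
    create = "30m"
  }
}
```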
@@ -93,19 +101,32 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(fmt.Errorf(errorClusterOutageSimulationCreate, projectID, clusterName, err))
}
- timeout := d.Timeout(schema.TimeoutCreate)
stateConf := &retry.StateChangeConf{
Pending: []string{"START_REQUESTED", "STARTING"},
Target: []string{"SIMULATING"},
Refresh: resourceRefreshFunc(ctx, clusterName, projectID, connV2),
- Timeout: timeout,
- MinTimeout: 1 * time.Minute,
- Delay: 3 * time.Minute,
+ Timeout: d.Timeout(schema.TimeoutCreate) - oneMinute, // A StateChangeConf timeout inside a CRUD function that has its own timeout must stay below that duration, so the retry logic error is returned instead of the SDK's "context: deadline exceeded" error.
+ MinTimeout: oneMinute,
+ Delay: oneMinute,
}
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(errorClusterOutageSimulationCreate, projectID, clusterName, err))
+ _, errWait := stateConf.WaitForStateContext(ctx)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
+ }
+ errWait = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, errWait, func(ctxCleanup context.Context) error {
+ return deleteOutageSimulationWithCleanup(
+ ctxCleanup,
+ connV2,
+ projectID,
+ clusterName,
+ 20*time.Minute, // wait timeout for reaching SIMULATING before trying to delete
+ d.Timeout(schema.TimeoutDelete),
+ )
+ })
+ if errWait != nil {
+ return diag.FromErr(fmt.Errorf(errorClusterOutageSimulationCreate, projectID, clusterName, errWait))
}
d.SetId(conversion.EncodeStateID(map[string]string{
@@ -159,16 +180,53 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di
return nil
}
-func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
+// waitForDeletableState waits for the outage simulation to reach a deletable state
+func waitForDeletableState(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, timeout time.Duration) (*admin.ClusterOutageSimulation, error) {
+ stateConf := &retry.StateChangeConf{
+ Pending: []string{"START_REQUESTED", "STARTING"},
+ Target: []string{"SIMULATING", "FAILED", "DELETED"},
+ Refresh: resourceRefreshFunc(ctx, clusterName, projectID, connV2),
+ Timeout: timeout,
+ MinTimeout: oneMinute,
+ Delay: oneMinute,
+ }
- ids := conversion.DecodeStateID(d.Id())
- projectID := ids["project_id"]
- clusterName := ids["cluster_name"]
+ result, err := stateConf.WaitForStateContext(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ if result == nil {
+ return nil, fmt.Errorf("no result returned from state change")
+ }
+
+ simulation := result.(*admin.ClusterOutageSimulation)
+ return simulation, nil
+}
+// deleteOutageSimulationWithCleanup waits for SIMULATING state and then deletes the simulation
+func deleteOutageSimulationWithCleanup(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, waitTimeout, deleteTimeout time.Duration) error {
+ simulation, err := waitForDeletableState(ctx, connV2, projectID, clusterName, waitTimeout)
+ if err != nil {
+ return nil // Don't fail cleanup if we can't reach a deletable state
+ }
+
+ finalState := simulation.GetState()
+ switch finalState {
+ case "SIMULATING": // If delete is triggered in any other state, the API returns a 400 error ("INVALID_CLUSTER_OUTAGE_SIMULATION_STATE"): Invalid cluster outage simulation state: START_REQUESTED, expected state: SIMULATING
+ return endOutageSimulationAndWait(ctx, connV2, projectID, clusterName, deleteTimeout)
+ case "FAILED", "DELETED":
+ return nil
+ default:
+ return nil
+ }
+}
+
+// endOutageSimulationAndWait ends the outage simulation and waits for it to complete
+func endOutageSimulationAndWait(ctx context.Context, connV2 *admin.APIClient, projectID, clusterName string, timeout time.Duration) error {
_, _, err := connV2.ClusterOutageSimulationApi.EndOutageSimulation(ctx, projectID, clusterName).Execute()
if err != nil {
- return diag.FromErr(fmt.Errorf(errorClusterOutageSimulationDelete, projectID, clusterName, err))
+ return fmt.Errorf(errorClusterOutageSimulationDelete, projectID, clusterName, err)
}
log.Println("[INFO] Waiting for MongoDB Cluster Outage Simulation to end")
@@ -177,14 +235,29 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
Pending: []string{"RECOVERY_REQUESTED", "RECOVERING", "COMPLETE"},
Target: []string{"DELETED"},
Refresh: resourceRefreshFunc(ctx, clusterName, projectID, connV2),
- Timeout: d.Timeout(schema.TimeoutDelete),
- MinTimeout: 30 * time.Second,
- Delay: 1 * time.Minute,
+ Timeout: timeout,
+ MinTimeout: oneMinute,
+ Delay: oneMinute,
}
_, err = stateConf.WaitForStateContext(ctx)
if err != nil {
- return diag.FromErr(fmt.Errorf(errorClusterOutageSimulationDelete, projectID, clusterName, err))
+ return fmt.Errorf(errorClusterOutageSimulationDelete, projectID, clusterName, err)
+ }
+
+ return nil
+}
+
+func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
+ connV2 := meta.(*config.MongoDBClient).AtlasV2
+
+ ids := conversion.DecodeStateID(d.Id())
+ projectID := ids["project_id"]
+ clusterName := ids["cluster_name"]
+
+ err := endOutageSimulationAndWait(ctx, connV2, projectID, clusterName, d.Timeout(schema.TimeoutDelete))
+ if err != nil {
+ return diag.FromErr(err)
}
return nil
diff --git a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go b/internal/service/clusteroutagesimulation/resource_migration_test.go
similarity index 70%
rename from internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go
rename to internal/service/clusteroutagesimulation/resource_migration_test.go
index 3c75ffa36a..8209182dfb 100644
--- a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_migration_test.go
+++ b/internal/service/clusteroutagesimulation/resource_migration_test.go
@@ -7,9 +7,11 @@ import (
)
func TestMigOutageSimulationCluster_SingleRegion_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0") // version where advanced_cluster TPF was GA
mig.CreateAndRunTest(t, singleRegionTestCase(t))
}
func TestMigOutageSimulationCluster_MultiRegion_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0") // version where advanced_cluster TPF was GA
mig.CreateAndRunTest(t, multiRegionTestCase(t))
}
diff --git a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go b/internal/service/clusteroutagesimulation/resource_test.go
similarity index 82%
rename from internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go
rename to internal/service/clusteroutagesimulation/resource_test.go
index 95bfada97c..2de5037056 100644
--- a/internal/service/clusteroutagesimulation/resource_cluster_outage_simulation_test.go
+++ b/internal/service/clusteroutagesimulation/resource_test.go
@@ -3,10 +3,12 @@ package clusteroutagesimulation_test
import (
"context"
"fmt"
+ "regexp"
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -153,6 +155,50 @@ func configMultiRegion(info *acc.ClusterInfo) string {
`, info.TerraformStr, info.ProjectID, info.Name, info.ResourceName)
}
+func TestAccClusterOutageSimulation_deleteOnCreateTimeout(t *testing.T) {
+ var (
+ singleRegionRequest = acc.ClusterRequest{
+ ReplicationSpecs: []acc.ReplicationSpecRequest{
+ {Region: "US_WEST_2", InstanceSize: "M10"},
+ },
+ }
+ clusterInfo = acc.GetClusterInfo(t, &singleRegionRequest)
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: acc.PreCheckBasicSleep(t, &clusterInfo, "", ""),
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configDeleteOnCreateTimeout(&clusterInfo, "1s", true),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
+func configDeleteOnCreateTimeout(info *acc.ClusterInfo, timeout string, deleteOnTimeout bool) string {
+ return fmt.Sprintf(`
+ %[1]s
+ resource "mongodbatlas_cluster_outage_simulation" "test_outage" {
+ project_id = %[2]q
+ cluster_name = %[3]q
+ delete_on_create_timeout = %[5]t
+
+ timeouts {
+ create = %[4]q
+ }
+
+ outage_filters {
+ cloud_provider = "AWS"
+ region_name = "US_WEST_2"
+ }
+
+ depends_on = [%[6]s]
+ }
+ `, info.TerraformStr, info.ProjectID, info.Name, timeout, deleteOnTimeout, info.ResourceName)
+}
+
func checkDestroy(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "mongodbatlas_cluster_outage_simulation" {
diff --git a/internal/service/customdbrole/data_source_custom_db_role.go b/internal/service/customdbrole/data_source.go
similarity index 100%
rename from internal/service/customdbrole/data_source_custom_db_role.go
rename to internal/service/customdbrole/data_source.go
diff --git a/internal/service/customdbrole/data_source_custom_db_roles.go b/internal/service/customdbrole/plural_data_source.go
similarity index 100%
rename from internal/service/customdbrole/data_source_custom_db_roles.go
rename to internal/service/customdbrole/plural_data_source.go
diff --git a/internal/service/customdbrole/resource_custom_db_role.go b/internal/service/customdbrole/resource.go
similarity index 98%
rename from internal/service/customdbrole/resource_custom_db_role.go
rename to internal/service/customdbrole/resource.go
index 610bbeb59b..7baf7d47fa 100644
--- a/internal/service/customdbrole/resource_custom_db_role.go
+++ b/internal/service/customdbrole/resource.go
@@ -51,7 +51,7 @@ func Resource() *schema.Resource {
),
},
"actions": {
- Type: schema.TypeList,
+ Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
@@ -269,8 +269,8 @@ func resourceImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*s
}
func expandActions(d *schema.ResourceData) *[]admin.DatabasePrivilegeAction {
- actions := make([]admin.DatabasePrivilegeAction, len(d.Get("actions").([]any)))
- for k, v := range d.Get("actions").([]any) {
+ actions := make([]admin.DatabasePrivilegeAction, len(d.Get("actions").(*schema.Set).List()))
+ for k, v := range d.Get("actions").(*schema.Set).List() {
a := v.(map[string]any)
actions[k] = admin.DatabasePrivilegeAction{
Action: a["action"].(string),
diff --git a/internal/service/customdbrole/resource_custom_db_role_migration_test.go b/internal/service/customdbrole/resource_migration_test.go
similarity index 100%
rename from internal/service/customdbrole/resource_custom_db_role_migration_test.go
rename to internal/service/customdbrole/resource_migration_test.go
diff --git a/internal/service/customdbrole/resource_custom_db_role_test.go b/internal/service/customdbrole/resource_test.go
similarity index 82%
rename from internal/service/customdbrole/resource_custom_db_role_test.go
rename to internal/service/customdbrole/resource_test.go
index f9b028187d..3e064c2bcb 100644
--- a/internal/service/customdbrole/resource_custom_db_role_test.go
+++ b/internal/service/customdbrole/resource_test.go
@@ -8,6 +8,7 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/knownvalue"
+ "github.com/hashicorp/terraform-plugin-testing/plancheck"
"github.com/hashicorp/terraform-plugin-testing/statecheck"
"github.com/hashicorp/terraform-plugin-testing/terraform"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
@@ -38,6 +39,10 @@ func TestAccCustomDBRoles_Basic(t *testing.T) {
resource.ParallelTest(t, *basicTestCase(t))
}
+func TestAccCustomDBRoles_BasicWithTwoActions(t *testing.T) {
+ resource.ParallelTest(t, *basicTestCaseWithTwoActions(t))
+}
+
func basicTestCase(t *testing.T) *resource.TestCase {
t.Helper()
var (
@@ -84,6 +89,39 @@ func basicTestCase(t *testing.T) *resource.TestCase {
}
}
+func basicTestCaseWithTwoActions(t *testing.T) *resource.TestCase {
+ t.Helper()
+ var (
+ projectID = acc.ProjectIDExecution(t)
+ roleName = acc.RandomName()
+ action1 = "INSERT"
+ action2 = "UPDATE"
+ databaseName1 = acc.RandomClusterName()
+ databaseName2 = acc.RandomClusterName()
+ )
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasicWithTwoActions(projectID, roleName, action1, databaseName1, action2, databaseName2),
+ Check: checkExists(resourceName),
+ },
+ {
+ Config: configBasicWithTwoActions(projectID, roleName, action2, databaseName2, action1, databaseName1), // reverse the actions order
+ Check: checkExists(resourceName),
+ ConfigPlanChecks: resource.ConfigPlanChecks{
+ PreApply: []plancheck.PlanCheck{
+ plancheck.ExpectEmptyPlan(),
+ },
+ },
+ },
+ },
+ }
+}
+
func checkAttrs(projectID, roleName, action, databaseName string) resource.TestCheckFunc {
return acc.CheckRSAndDS(
resourceName,
@@ -378,16 +416,28 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) {
resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRole.RoleName),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRole.GetActions()))),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRole.GetActions()[0].Action),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRole.GetActions()[0].GetResources()))),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRole.GetActions()[0].Action,
+ "resources.#": cast.ToString(len(inheritRole.GetActions()[0].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRole.GetActions()[1].Action,
+ "resources.#": cast.ToString(len(inheritRole.GetActions()[1].GetResources())),
+ }),
// For Test Role
checkExists(testRoleResourceName),
resource.TestCheckResourceAttrSet(testRoleResourceName, "project_id"),
resource.TestCheckResourceAttr(testRoleResourceName, "role_name", testRole.RoleName),
resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRole.GetActions()))),
- resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRole.GetActions()[0].Action),
- resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRole.GetActions()[0].GetResources()))),
+ resource.TestCheckTypeSetElemNestedAttrs(testRoleResourceName, "actions.*", map[string]string{
+ "action": testRole.GetActions()[0].Action,
+ "resources.#": cast.ToString(len(testRole.GetActions()[0].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(testRoleResourceName, "actions.*", map[string]string{
+ "action": testRole.GetActions()[1].Action,
+ "resources.#": cast.ToString(len(testRole.GetActions()[1].GetResources())),
+ }),
),
},
{
@@ -399,16 +449,28 @@ func TestAccConfigRSCustomDBRoles_MultipleCustomRoles(t *testing.T) {
resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRoleUpdated.RoleName),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRoleUpdated.GetActions()))),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRoleUpdated.GetActions()[0].Action),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated.GetActions()[0].GetResources()))),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRoleUpdated.GetActions()[0].Action,
+ "resources.#": cast.ToString(len(inheritRoleUpdated.GetActions()[0].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRoleUpdated.GetActions()[1].Action,
+ "resources.#": cast.ToString(len(inheritRoleUpdated.GetActions()[1].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRoleUpdated.GetActions()[2].Action,
+ "resources.#": cast.ToString(len(inheritRoleUpdated.GetActions()[2].GetResources())),
+ }),
// For Test Role
checkExists(testRoleResourceName),
resource.TestCheckResourceAttrSet(testRoleResourceName, "project_id"),
resource.TestCheckResourceAttr(testRoleResourceName, "role_name", testRoleUpdated.RoleName),
resource.TestCheckResourceAttr(testRoleResourceName, "actions.#", cast.ToString(len(testRoleUpdated.GetActions()))),
- resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.action", testRoleUpdated.GetActions()[0].Action),
- resource.TestCheckResourceAttr(testRoleResourceName, "actions.0.resources.#", cast.ToString(len(testRoleUpdated.GetActions()[0].GetResources()))),
+ resource.TestCheckTypeSetElemNestedAttrs(testRoleResourceName, "actions.*", map[string]string{
+ "action": testRoleUpdated.GetActions()[0].Action,
+ "resources.#": cast.ToString(len(testRoleUpdated.GetActions()[0].GetResources())),
+ }),
resource.TestCheckResourceAttr(testRoleResourceName, "inherited_roles.#", "1"),
),
},
@@ -509,8 +571,14 @@ func TestAccConfigRSCustomDBRoles_UpdatedInheritRoles(t *testing.T) {
resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRole.RoleName),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRole.GetActions()))),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRole.GetActions()[0].Action),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRole.GetActions()[0].GetResources()))),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRole.GetActions()[0].Action,
+ "resources.#": cast.ToString(len(inheritRole.GetActions()[0].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRole.GetActions()[1].Action,
+ "resources.#": cast.ToString(len(inheritRole.GetActions()[1].GetResources())),
+ }),
// For Test Role
checkExists(testRoleResourceName),
@@ -529,8 +597,18 @@ func TestAccConfigRSCustomDBRoles_UpdatedInheritRoles(t *testing.T) {
resource.TestCheckResourceAttrSet(InheritedRoleResourceName, "project_id"),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "role_name", inheritRoleUpdated.RoleName),
resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.#", cast.ToString(len(inheritRoleUpdated.GetActions()))),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.action", inheritRoleUpdated.GetActions()[0].Action),
- resource.TestCheckResourceAttr(InheritedRoleResourceName, "actions.0.resources.#", cast.ToString(len(inheritRoleUpdated.GetActions()[0].GetResources()))),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRoleUpdated.GetActions()[0].Action,
+ "resources.#": cast.ToString(len(inheritRoleUpdated.GetActions()[0].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRoleUpdated.GetActions()[1].Action,
+ "resources.#": cast.ToString(len(inheritRoleUpdated.GetActions()[1].GetResources())),
+ }),
+ resource.TestCheckTypeSetElemNestedAttrs(InheritedRoleResourceName, "actions.*", map[string]string{
+ "action": inheritRoleUpdated.GetActions()[2].Action,
+ "resources.#": cast.ToString(len(inheritRoleUpdated.GetActions()[2].GetResources())),
+ }),
// For Test Role
checkExists(testRoleResourceName),
@@ -603,6 +681,30 @@ func configBasic(projectID, roleName, action, databaseName string) string {
`, projectID, roleName, action, databaseName)
}
+func generateActionConfig(action, databaseName string) string {
+ return fmt.Sprintf(`
+ actions {
+ action = %q
+ resources {
+ collection_name = ""
+ database_name = %q
+ }
+ }
+ `, action, databaseName)
+}
+
+func configBasicWithTwoActions(projectID, roleName, action1, databaseName1, action2, databaseName2 string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_custom_db_role" "test" {
+ project_id = %[1]q
+ role_name = %[2]q
+
+ %[3]s
+ %[4]s
+ }
+ `, projectID, roleName, generateActionConfig(action1, databaseName1), generateActionConfig(action2, databaseName2))
+}
+
func configWithInheritedRoles(orgID, projectName string, inheritedRole []admin.UserCustomDBRole, testRole *admin.UserCustomDBRole) string {
return fmt.Sprintf(`
diff --git a/internal/service/encryptionatrestprivateendpoint/data_source.go b/internal/service/encryptionatrestprivateendpoint/data_source.go
index e8b0eacaa8..8e45964bdd 100644
--- a/internal/service/encryptionatrestprivateendpoint/data_source.go
+++ b/internal/service/encryptionatrestprivateendpoint/data_source.go
@@ -31,7 +31,7 @@ func (d *encryptionAtRestPrivateEndpointDS) Schema(ctx context.Context, req data
}
func (d *encryptionAtRestPrivateEndpointDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
- var earPrivateEndpointConfig TFEarPrivateEndpointModel
+ var earPrivateEndpointConfig TFEarPrivateEndpointModelDS
resp.Diagnostics.Append(req.Config.Get(ctx, &earPrivateEndpointConfig)...)
if resp.Diagnostics.HasError() {
return
@@ -48,5 +48,5 @@ func (d *encryptionAtRestPrivateEndpointDS) Read(ctx context.Context, req dataso
return
}
- resp.Diagnostics.Append(resp.State.Set(ctx, NewTFEarPrivateEndpoint(*endpointModel, projectID))...)
+ resp.Diagnostics.Append(resp.State.Set(ctx, NewTFEarPrivateEndpointDS(*endpointModel, projectID))...)
}
diff --git a/internal/service/encryptionatrestprivateendpoint/data_source_schema.go b/internal/service/encryptionatrestprivateendpoint/data_source_schema.go
index 841373c896..77222b4355 100644
--- a/internal/service/encryptionatrestprivateendpoint/data_source_schema.go
+++ b/internal/service/encryptionatrestprivateendpoint/data_source_schema.go
@@ -41,8 +41,19 @@ func DSAttributes(withArguments bool) map[string]schema.Attribute {
}
}
+// TFEarPrivateEndpointModelDS represents the model for data sources (without timeout fields)
+type TFEarPrivateEndpointModelDS struct {
+ CloudProvider types.String `tfsdk:"cloud_provider"`
+ ErrorMessage types.String `tfsdk:"error_message"`
+ ProjectID types.String `tfsdk:"project_id"`
+ ID types.String `tfsdk:"id"`
+ PrivateEndpointConnectionName types.String `tfsdk:"private_endpoint_connection_name"`
+ RegionName types.String `tfsdk:"region_name"`
+ Status types.String `tfsdk:"status"`
+}
+
type TFEncryptionAtRestPrivateEndpointsDSModel struct {
- CloudProvider types.String `tfsdk:"cloud_provider"`
- ProjectID types.String `tfsdk:"project_id"`
- Results []TFEarPrivateEndpointModel `tfsdk:"results"`
+ CloudProvider types.String `tfsdk:"cloud_provider"`
+ ProjectID types.String `tfsdk:"project_id"`
+ Results []TFEarPrivateEndpointModelDS `tfsdk:"results"`
}
diff --git a/internal/service/encryptionatrestprivateendpoint/model.go b/internal/service/encryptionatrestprivateendpoint/model.go
index 0ee368a1bb..837e79abb3 100644
--- a/internal/service/encryptionatrestprivateendpoint/model.go
+++ b/internal/service/encryptionatrestprivateendpoint/model.go
@@ -31,9 +31,9 @@ func NewEarPrivateEndpointReq(tfPlan *TFEarPrivateEndpointModel) *admin.EARPriva
}
func NewTFEarPrivateEndpoints(projectID, cloudProvider string, sdkResults []admin.EARPrivateEndpoint) *TFEncryptionAtRestPrivateEndpointsDSModel {
- results := make([]TFEarPrivateEndpointModel, len(sdkResults))
+ results := make([]TFEarPrivateEndpointModelDS, len(sdkResults))
for i := range sdkResults {
- result := NewTFEarPrivateEndpoint(sdkResults[i], projectID)
+ result := NewTFEarPrivateEndpointDS(sdkResults[i], projectID)
results[i] = result
}
return &TFEncryptionAtRestPrivateEndpointsDSModel{
@@ -42,3 +42,16 @@ func NewTFEarPrivateEndpoints(projectID, cloudProvider string, sdkResults []admi
Results: results,
}
}
+
+// NewTFEarPrivateEndpointDS creates a new data source model without timeout fields
+func NewTFEarPrivateEndpointDS(apiResp admin.EARPrivateEndpoint, projectID string) TFEarPrivateEndpointModelDS {
+ return TFEarPrivateEndpointModelDS{
+ ProjectID: types.StringValue(projectID),
+ CloudProvider: conversion.StringNullIfEmpty(apiResp.GetCloudProvider()),
+ ErrorMessage: conversion.StringNullIfEmpty(apiResp.GetErrorMessage()),
+ ID: conversion.StringNullIfEmpty(apiResp.GetId()),
+ RegionName: conversion.StringNullIfEmpty(apiResp.GetRegionName()),
+ Status: conversion.StringNullIfEmpty(apiResp.GetStatus()),
+ PrivateEndpointConnectionName: conversion.StringNullIfEmpty(apiResp.GetPrivateEndpointConnectionName()),
+ }
+}
diff --git a/internal/service/encryptionatrestprivateendpoint/model_test.go b/internal/service/encryptionatrestprivateendpoint/model_test.go
index 4c2606facf..9dc4a7e178 100644
--- a/internal/service/encryptionatrestprivateendpoint/model_test.go
+++ b/internal/service/encryptionatrestprivateendpoint/model_test.go
@@ -174,7 +174,7 @@ func TestEncryptionAtRestPrivateEndpointPluralDSSDKToTFModel(t *testing.T) {
expectedTFModel: &encryptionatrestprivateendpoint.TFEncryptionAtRestPrivateEndpointsDSModel{
CloudProvider: types.StringValue(testCloudProvider),
ProjectID: types.StringValue(testProjectID),
- Results: []encryptionatrestprivateendpoint.TFEarPrivateEndpointModel{
+ Results: []encryptionatrestprivateendpoint.TFEarPrivateEndpointModelDS{
{
CloudProvider: types.StringValue(testCloudProvider),
ErrorMessage: types.StringNull(),
diff --git a/internal/service/encryptionatrestprivateendpoint/resource.go b/internal/service/encryptionatrestprivateendpoint/resource.go
index 032c7f7604..acbfdecd5e 100644
--- a/internal/service/encryptionatrestprivateendpoint/resource.go
+++ b/internal/service/encryptionatrestprivateendpoint/resource.go
@@ -11,6 +11,7 @@ import (
"github.com/hashicorp/terraform-plugin-framework/path"
"github.com/hashicorp/terraform-plugin-framework/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
@@ -63,13 +64,28 @@ func (r *encryptionAtRestPrivateEndpointRS) Create(ctx context.Context, req reso
return
}
- finalResp, err := waitStateTransition(ctx, projectID, cloudProvider, createResp.GetId(), connV2.EncryptionAtRestUsingCustomerKeyManagementApi)
+ createTimeout := cleanup.ResolveTimeout(ctx, &earPrivateEndpointPlan.Timeouts, cleanup.OperationCreate, &resp.Diagnostics)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ finalResp, err := waitStateTransition(ctx, projectID, cloudProvider, createResp.GetId(), connV2.EncryptionAtRestUsingCustomerKeyManagementApi, createTimeout)
+ err = cleanup.HandleCreateTimeout(cleanup.ResolveDeleteOnCreateTimeout(earPrivateEndpointPlan.DeleteOnCreateTimeout), err, func(ctxCleanup context.Context) error {
+ cleanResp, cleanErr := connV2.EncryptionAtRestUsingCustomerKeyManagementApi.RequestPrivateEndpointDeletion(ctxCleanup, projectID, cloudProvider, createResp.GetId()).Execute()
+ if validate.StatusNotFound(cleanResp) {
+ return nil
+ }
+ return cleanErr
+ })
+
if err != nil {
resp.Diagnostics.AddError("error when waiting for status transition in creation", err.Error())
return
}
privateEndpointModel := NewTFEarPrivateEndpoint(*finalResp, projectID)
+ privateEndpointModel.Timeouts = earPrivateEndpointPlan.Timeouts
+ privateEndpointModel.DeleteOnCreateTimeout = earPrivateEndpointPlan.DeleteOnCreateTimeout
resp.Diagnostics.Append(resp.State.Set(ctx, privateEndpointModel)...)
diags := CheckErrorMessageAndStatus(finalResp)
@@ -98,7 +114,10 @@ func (r *encryptionAtRestPrivateEndpointRS) Read(ctx context.Context, req resour
return
}
- resp.Diagnostics.Append(resp.State.Set(ctx, NewTFEarPrivateEndpoint(*endpointModel, projectID))...)
+ privateEndpointModel := NewTFEarPrivateEndpoint(*endpointModel, projectID)
+ privateEndpointModel.Timeouts = earPrivateEndpointState.Timeouts
+ privateEndpointModel.DeleteOnCreateTimeout = earPrivateEndpointState.DeleteOnCreateTimeout
+ resp.Diagnostics.Append(resp.State.Set(ctx, privateEndpointModel)...)
diags := CheckErrorMessageAndStatus(endpointModel)
resp.Diagnostics.Append(diags...)
@@ -124,7 +143,12 @@ func (r *encryptionAtRestPrivateEndpointRS) Delete(ctx context.Context, req reso
return
}
- model, err := WaitDeleteStateTransition(ctx, projectID, cloudProvider, endpointID, connV2.EncryptionAtRestUsingCustomerKeyManagementApi)
+ deleteTimeout := cleanup.ResolveTimeout(ctx, &earPrivateEndpointState.Timeouts, cleanup.OperationDelete, &resp.Diagnostics)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ model, err := WaitDeleteStateTransition(ctx, projectID, cloudProvider, endpointID, connV2.EncryptionAtRestUsingCustomerKeyManagementApi, deleteTimeout)
if err != nil {
resp.Diagnostics.AddError("error when waiting for status transition in delete", err.Error())
return
diff --git a/internal/service/encryptionatrestprivateendpoint/resource_schema.go b/internal/service/encryptionatrestprivateendpoint/resource_schema.go
index fe8ad6fdc6..e04b0ee1fa 100644
--- a/internal/service/encryptionatrestprivateendpoint/resource_schema.go
+++ b/internal/service/encryptionatrestprivateendpoint/resource_schema.go
@@ -3,8 +3,11 @@ package encryptionatrestprivateendpoint
import (
"context"
+ "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework/resource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
"github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
)
func ResourceSchema(ctx context.Context) schema.Schema {
@@ -38,16 +41,29 @@ func ResourceSchema(ctx context.Context) schema.Schema {
Computed: true,
MarkdownDescription: "State of the Encryption At Rest private endpoint.",
},
+ "timeouts": timeouts.Attributes(ctx, timeouts.Opts{
+ Create: true,
+ Delete: true,
+ }),
+ "delete_on_create_timeout": schema.BoolAttribute{
+ Optional: true,
+ PlanModifiers: []planmodifier.Bool{
+ customplanmodifier.CreateOnlyBoolPlanModifier(),
+ },
+ MarkdownDescription: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, the create operation triggers the deletion and returns immediately without waiting for the deletion to complete. When set to `false`, the timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.",
+ },
},
}
}
type TFEarPrivateEndpointModel struct {
- CloudProvider types.String `tfsdk:"cloud_provider"`
- ErrorMessage types.String `tfsdk:"error_message"`
- ProjectID types.String `tfsdk:"project_id"`
- ID types.String `tfsdk:"id"`
- PrivateEndpointConnectionName types.String `tfsdk:"private_endpoint_connection_name"`
- RegionName types.String `tfsdk:"region_name"`
- Status types.String `tfsdk:"status"`
+ CloudProvider types.String `tfsdk:"cloud_provider"`
+ ErrorMessage types.String `tfsdk:"error_message"`
+ ProjectID types.String `tfsdk:"project_id"`
+ ID types.String `tfsdk:"id"`
+ PrivateEndpointConnectionName types.String `tfsdk:"private_endpoint_connection_name"`
+ RegionName types.String `tfsdk:"region_name"`
+ Status types.String `tfsdk:"status"`
+ Timeouts timeouts.Value `tfsdk:"timeouts"`
+ DeleteOnCreateTimeout types.Bool `tfsdk:"delete_on_create_timeout"`
}
diff --git a/internal/service/encryptionatrestprivateendpoint/resource_test.go b/internal/service/encryptionatrestprivateendpoint/resource_test.go
index 87a65cdc34..3154ed2b2d 100644
--- a/internal/service/encryptionatrestprivateendpoint/resource_test.go
+++ b/internal/service/encryptionatrestprivateendpoint/resource_test.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"os"
+ "regexp"
"testing"
"time"
@@ -32,6 +33,34 @@ func TestAccEncryptionAtRestPrivateEndpoint_Azure_basic(t *testing.T) {
resource.Test(t, *basicTestCaseAzure(t))
}
+func TestAccEncryptionAtRestPrivateEndpoint_createTimeoutWithDeleteOnCreate(t *testing.T) {
+ // This test is skipped because it creates a race condition with other tests:
+ // 1. This test creates an encryption at rest private endpoint with a 1s timeout, causing it to fail and trigger cleanup
+ // 2. The private endpoint deletion doesn't complete immediately
+ // 3. Other tests share the same project and attempt to disable encryption at rest during cleanup
+ // 4. MongoDB Atlas returns "CANNOT_DISABLE_ENCRYPTION_AT_REST_REQUIRE_PRIVATE_NETWORKING_WHILE_PRIVATE_ENDPOINTS_EXIST"
+ // because the private endpoint from this test is still being deleted
+ // This race condition occurs even when tests don't run in parallel due to the async nature of private endpoint deletion.
+ acc.SkipTestForCI(t)
+ var (
+ createTimeout = "1s"
+ deleteOnCreateTimeout = true
+ region = conversion.AWSRegionToMongoDBRegion(os.Getenv("AWS_REGION"))
+ // Create encryption at rest configuration outside of test configuration to avoid cleanup issues
+ projectID = acc.EncryptionAtRestExecution(t)
+ )
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckEncryptionAtRestEnvAWS(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configEARPrivateEndpointWithTimeout(projectID, region, acc.TimeoutConfig(&createTimeout, nil, nil), &deleteOnCreateTimeout),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
func basicTestCaseAzure(tb testing.TB) *resource.TestCase {
tb.Helper()
var (
@@ -316,7 +345,19 @@ func checkBasic(projectID, cloudProvider, region string, expectApproved bool) re
}
func configAWSBasic(projectID string, awsKms *admin.AWSKMSConfiguration, region string) string {
+ return configAWSBasicWithTimeout(projectID, awsKms, region, "", nil)
+}
+
+func configAWSBasicWithTimeout(projectID string, awsKms *admin.AWSKMSConfiguration, region, timeoutConfig string, deleteOnCreateTimeout *bool) string {
encryptionAtRestConfig := acc.ConfigAwsKms(projectID, awsKms, false, true, false)
+
+ deleteOnCreateTimeoutConfig := ""
+ if deleteOnCreateTimeout != nil {
+ deleteOnCreateTimeoutConfig = fmt.Sprintf(`
+ delete_on_create_timeout = %[1]t
+ `, *deleteOnCreateTimeout)
+ }
+
config := fmt.Sprintf(`
%[1]s
@@ -324,11 +365,37 @@ func configAWSBasic(projectID string, awsKms *admin.AWSKMSConfiguration, region
project_id = mongodbatlas_encryption_at_rest.test.project_id
cloud_provider = "AWS"
region_name = %[2]q
+ %[3]s
+ %[4]s
}
- %[3]s
+ %[5]s
- `, encryptionAtRestConfig, region, configDS())
+ `, encryptionAtRestConfig, region, deleteOnCreateTimeoutConfig, timeoutConfig, configDS())
+
+ return config
+}
+
+func configEARPrivateEndpointWithTimeout(projectID, region, timeoutConfig string, deleteOnCreateTimeout *bool) string {
+ deleteOnCreateTimeoutConfig := ""
+ if deleteOnCreateTimeout != nil {
+ deleteOnCreateTimeoutConfig = fmt.Sprintf(`
+ delete_on_create_timeout = %[1]t
+ `, *deleteOnCreateTimeout)
+ }
+
+ config := fmt.Sprintf(`
+ resource "mongodbatlas_encryption_at_rest_private_endpoint" "test" {
+ project_id = %[1]q
+ cloud_provider = "AWS"
+ region_name = %[2]q
+ %[3]s
+ %[4]s
+ }
+
+ %[5]s
+
+ `, projectID, region, deleteOnCreateTimeoutConfig, timeoutConfig, configDS())
return config
}
diff --git a/internal/service/encryptionatrestprivateendpoint/state_transition.go b/internal/service/encryptionatrestprivateendpoint/state_transition.go
index 8198457674..e824c2274c 100644
--- a/internal/service/encryptionatrestprivateendpoint/state_transition.go
+++ b/internal/service/encryptionatrestprivateendpoint/state_transition.go
@@ -18,36 +18,36 @@ const (
defaultMinTimeout = 30 * time.Second // Smallest time to wait before refreshes
)
-func waitStateTransition(ctx context.Context, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
- return WaitStateTransitionWithMinTimeout(ctx, defaultMinTimeout, projectID, cloudProvider, endpointID, client)
+func waitStateTransition(ctx context.Context, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi, timeout time.Duration) (*admin.EARPrivateEndpoint, error) {
+ return WaitStateTransitionWithMinTimeoutAndTimeout(ctx, defaultMinTimeout, timeout, projectID, cloudProvider, endpointID, client)
}
-func WaitStateTransitionWithMinTimeout(ctx context.Context, minTimeout time.Duration, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
+func WaitStateTransitionWithMinTimeoutAndTimeout(ctx context.Context, minTimeout, timeout time.Duration, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
return waitStateTransitionForStates(
ctx,
[]string{retrystrategy.RetryStrategyInitiatingState},
[]string{retrystrategy.RetryStrategyPendingAcceptanceState, retrystrategy.RetryStrategyActiveState, retrystrategy.RetryStrategyFailedState},
- minTimeout, projectID, cloudProvider, endpointID, client)
+ minTimeout, timeout, projectID, cloudProvider, endpointID, client)
}
-func WaitDeleteStateTransition(ctx context.Context, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
- return WaitDeleteStateTransitionWithMinTimeout(ctx, defaultMinTimeout, projectID, cloudProvider, endpointID, client)
+func WaitDeleteStateTransition(ctx context.Context, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi, timeout time.Duration) (*admin.EARPrivateEndpoint, error) {
+ return WaitDeleteStateTransitionWithMinTimeoutAndTimeout(ctx, defaultMinTimeout, timeout, projectID, cloudProvider, endpointID, client)
}
-func WaitDeleteStateTransitionWithMinTimeout(ctx context.Context, minTimeout time.Duration, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
+func WaitDeleteStateTransitionWithMinTimeoutAndTimeout(ctx context.Context, minTimeout, timeout time.Duration, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
return waitStateTransitionForStates(
ctx,
[]string{retrystrategy.RetryStrategyDeletingState},
[]string{retrystrategy.RetryStrategyDeletedState, retrystrategy.RetryStrategyFailedState},
- minTimeout, projectID, cloudProvider, endpointID, client)
+ minTimeout, timeout, projectID, cloudProvider, endpointID, client)
}
-func waitStateTransitionForStates(ctx context.Context, pending, target []string, minTimeout time.Duration, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
+func waitStateTransitionForStates(ctx context.Context, pending, target []string, minTimeout, timeout time.Duration, projectID, cloudProvider, endpointID string, client admin.EncryptionAtRestUsingCustomerKeyManagementApi) (*admin.EARPrivateEndpoint, error) {
stateConf := &retry.StateChangeConf{
Pending: pending,
Target: target,
Refresh: refreshFunc(ctx, projectID, cloudProvider, endpointID, client),
- Timeout: defaultTimeout,
+ Timeout: timeout,
MinTimeout: minTimeout,
Delay: 0,
}
diff --git a/internal/service/encryptionatrestprivateendpoint/state_transition_test.go b/internal/service/encryptionatrestprivateendpoint/state_transition_test.go
index 244a6d5b39..ffd466c4f3 100644
--- a/internal/service/encryptionatrestprivateendpoint/state_transition_test.go
+++ b/internal/service/encryptionatrestprivateendpoint/state_transition_test.go
@@ -67,7 +67,7 @@ func TestStateTransition(t *testing.T) {
modelResp, httpResp, err := resp.get()
m.EXPECT().GetRestPrivateEndpointExecute(mock.Anything).Return(modelResp, httpResp, err).Once()
}
- resp, err := encryptionatrestprivateendpoint.WaitStateTransitionWithMinTimeout(t.Context(), 1*time.Second, "project-id", "cloud-provider", "endpoint-id", m)
+ resp, err := encryptionatrestprivateendpoint.WaitStateTransitionWithMinTimeoutAndTimeout(t.Context(), 1*time.Second, 20*time.Minute, "project-id", "cloud-provider", "endpoint-id", m)
assert.Equal(t, tc.expectedError, err != nil)
if resp != nil {
assert.Equal(t, tc.expectedState, resp.Status)
@@ -111,7 +111,7 @@ func TestDeleteStateTransition(t *testing.T) {
modelResp, httpResp, err := resp.get()
m.EXPECT().GetRestPrivateEndpointExecute(mock.Anything).Return(modelResp, httpResp, err).Once()
}
- resp, err := encryptionatrestprivateendpoint.WaitDeleteStateTransitionWithMinTimeout(t.Context(), 1*time.Second, "project-id", "cloud-provider", "endpoint-id", m)
+ resp, err := encryptionatrestprivateendpoint.WaitDeleteStateTransitionWithMinTimeoutAndTimeout(t.Context(), 1*time.Second, 20*time.Minute, "project-id", "cloud-provider", "endpoint-id", m)
assert.Equal(t, tc.expectedError, err != nil)
if resp != nil {
assert.Equal(t, tc.expectedState, resp.Status)
diff --git a/internal/service/flexcluster/data_source.go b/internal/service/flexcluster/data_source.go
index b8d75e5e9f..eb2b66f9e2 100644
--- a/internal/service/flexcluster/data_source.go
+++ b/internal/service/flexcluster/data_source.go
@@ -30,7 +30,7 @@ func (d *ds) Schema(ctx context.Context, req datasource.SchemaRequest, resp *dat
}
func (d *ds) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
- var tfModel TFModel
+ var tfModel TFModelDS
resp.Diagnostics.Append(req.Config.Get(ctx, &tfModel)...)
if resp.Diagnostics.HasError() {
return
@@ -48,5 +48,6 @@ func (d *ds) Read(ctx context.Context, req datasource.ReadRequest, resp *datasou
resp.Diagnostics.Append(diags...)
return
}
- resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClusterModel)...)
+ newFlexClusterModelDS := conversion.CopyModel[TFModelDS](newFlexClusterModel)
+ resp.Diagnostics.Append(resp.State.Set(ctx, newFlexClusterModelDS)...)
}
diff --git a/internal/service/flexcluster/model.go b/internal/service/flexcluster/model.go
index 7a1ee3cf82..1682fcdf08 100644
--- a/internal/service/flexcluster/model.go
+++ b/internal/service/flexcluster/model.go
@@ -44,13 +44,13 @@ func NewTFModel(ctx context.Context, apiResp *admin.FlexClusterDescription202411
func NewTFModelDSP(ctx context.Context, projectID string, input []admin.FlexClusterDescription20241113) (*TFModelDSP, diag.Diagnostics) {
diags := &diag.Diagnostics{}
- tfModels := make([]TFModel, len(input))
+ tfModels := make([]TFModelDS, len(input))
for i := range input {
item := &input[i]
tfModel, diagsLocal := NewTFModel(ctx, item)
diags.Append(diagsLocal...)
if tfModel != nil {
- tfModels[i] = *tfModel
+ tfModels[i] = *conversion.CopyModel[TFModelDS](tfModel)
}
}
if diags.HasError() {
diff --git a/internal/service/flexcluster/model_test.go b/internal/service/flexcluster/model_test.go
index c50b0269f8..9bb7e06d60 100644
--- a/internal/service/flexcluster/model_test.go
+++ b/internal/service/flexcluster/model_test.go
@@ -174,7 +174,7 @@ func TestNewTFModelDSP(t *testing.T) {
"Complete TF state": {
expectedTFModelDSP: &flexcluster.TFModelDSP{
ProjectId: types.StringValue(projectID),
- Results: []flexcluster.TFModel{
+ Results: []flexcluster.TFModelDS{
{
ProjectId: types.StringValue(projectID),
Id: types.StringValue(id),
@@ -277,7 +277,7 @@ func TestNewTFModelDSP(t *testing.T) {
"No Flex Clusters": {
expectedTFModelDSP: &flexcluster.TFModelDSP{
ProjectId: types.StringValue(projectID),
- Results: []flexcluster.TFModel{},
+ Results: []flexcluster.TFModelDS{},
},
input: []admin.FlexClusterDescription20241113{},
},
diff --git a/internal/service/flexcluster/resource.go b/internal/service/flexcluster/resource.go
index 1f70c8530f..f81318e60b 100644
--- a/internal/service/flexcluster/resource.go
+++ b/internal/service/flexcluster/resource.go
@@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"regexp"
+ "time"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
@@ -13,6 +14,7 @@ import (
"github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy"
@@ -68,8 +70,25 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
projectID := tfModel.ProjectId.ValueString()
clusterName := tfModel.Name.ValueString()
+ // Resolve timeout for create operation
+ createTimeout := cleanup.ResolveTimeout(ctx, &tfModel.Timeouts, cleanup.OperationCreate, &resp.Diagnostics)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
connV2 := r.Client.AtlasV2
- flexClusterResp, err := CreateFlexCluster(ctx, projectID, clusterName, flexClusterReq, connV2.FlexClustersApi)
+ flexClusterResp, err := CreateFlexCluster(ctx, projectID, clusterName, flexClusterReq, connV2.FlexClustersApi, &createTimeout)
+
+ // Handle timeout with cleanup logic
+ deleteOnCreateTimeout := cleanup.ResolveDeleteOnCreateTimeout(tfModel.DeleteOnCreateTimeout)
+ err = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, err, func(ctxCleanup context.Context) error {
+ cleanResp, cleanErr := r.Client.AtlasV2.FlexClustersApi.DeleteFlexCluster(ctxCleanup, projectID, clusterName).Execute()
+ if validate.StatusNotFound(cleanResp) {
+ return nil
+ }
+ return cleanErr
+ })
+
if err != nil {
resp.Diagnostics.AddError(fmt.Sprintf(ErrorCreateFlex, err.Error()), fmt.Sprintf("Name: %s, Project ID: %s", clusterName, projectID))
return
@@ -80,6 +99,8 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
resp.Diagnostics.Append(diags...)
return
}
+ newFlexClusterModel.Timeouts = tfModel.Timeouts
+ newFlexClusterModel.DeleteOnCreateTimeout = tfModel.DeleteOnCreateTimeout
if conversion.UseNilForEmpty(tfModel.Tags, newFlexClusterModel.Tags) {
newFlexClusterModel.Tags = types.MapNull(types.StringType)
@@ -113,6 +134,8 @@ func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.
resp.Diagnostics.Append(diags...)
return
}
+ newFlexClusterModel.Timeouts = flexClusterState.Timeouts
+ newFlexClusterModel.DeleteOnCreateTimeout = flexClusterState.DeleteOnCreateTimeout
if conversion.UseNilForEmpty(flexClusterState.Tags, newFlexClusterModel.Tags) {
newFlexClusterModel.Tags = types.MapNull(types.StringType)
@@ -137,9 +160,15 @@ func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resou
projectID := plan.ProjectId.ValueString()
clusterName := plan.Name.ValueString()
+ // Resolve timeout for update operation
+ updateTimeout := cleanup.ResolveTimeout(ctx, &plan.Timeouts, cleanup.OperationUpdate, &resp.Diagnostics)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
connV2 := r.Client.AtlasV2
- flexClusterResp, err := UpdateFlexCluster(ctx, projectID, clusterName, flexClusterReq, connV2.FlexClustersApi)
+ flexClusterResp, err := UpdateFlexCluster(ctx, projectID, clusterName, flexClusterReq, connV2.FlexClustersApi, updateTimeout)
if err != nil {
resp.Diagnostics.AddError(fmt.Sprintf(ErrorUpdateFlex, clusterName), err.Error())
return
@@ -150,6 +179,8 @@ func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resou
resp.Diagnostics.Append(diags...)
return
}
+ newFlexClusterModel.Timeouts = plan.Timeouts
+ newFlexClusterModel.DeleteOnCreateTimeout = plan.DeleteOnCreateTimeout
if conversion.UseNilForEmpty(plan.Tags, newFlexClusterModel.Tags) {
newFlexClusterModel.Tags = types.MapNull(types.StringType)
@@ -169,7 +200,14 @@ func (r *rs) Delete(ctx context.Context, req resource.DeleteRequest, resp *resou
projectID := flexClusterState.ProjectId.ValueString()
clusterName := flexClusterState.Name.ValueString()
- err := DeleteFlexCluster(ctx, projectID, clusterName, connV2.FlexClustersApi)
+
+ // Resolve timeout for delete operation
+ deleteTimeout := cleanup.ResolveTimeout(ctx, &flexClusterState.Timeouts, cleanup.OperationDelete, &resp.Diagnostics)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ err := DeleteFlexCluster(ctx, projectID, clusterName, connV2.FlexClustersApi, deleteTimeout)
if err != nil {
resp.Diagnostics.AddError(fmt.Sprintf(ErrorDeleteFlex, projectID, clusterName), err.Error())
@@ -203,7 +241,7 @@ func splitFlexClusterImportID(id string) (projectID, clusterName *string, err er
return
}
-func CreateFlexCluster(ctx context.Context, projectID, clusterName string, flexClusterReq *admin.FlexClusterDescriptionCreate20241113, client admin.FlexClustersApi) (*admin.FlexClusterDescription20241113, error) {
+func CreateFlexCluster(ctx context.Context, projectID, clusterName string, flexClusterReq *admin.FlexClusterDescriptionCreate20241113, client admin.FlexClustersApi, timeout *time.Duration) (*admin.FlexClusterDescription20241113, error) {
_, _, err := client.CreateFlexCluster(ctx, projectID, flexClusterReq).Execute()
if err != nil {
return nil, err
@@ -214,7 +252,7 @@ func CreateFlexCluster(ctx context.Context, projectID, clusterName string, flexC
Name: clusterName,
}
- flexClusterResp, err := WaitStateTransition(ctx, flexClusterParams, client, []string{retrystrategy.RetryStrategyCreatingState, retrystrategy.RetryStrategyUpdatingState, retrystrategy.RetryStrategyRepairingState}, []string{retrystrategy.RetryStrategyIdleState}, false, nil)
+ flexClusterResp, err := WaitStateTransition(ctx, flexClusterParams, client, []string{retrystrategy.RetryStrategyCreatingState, retrystrategy.RetryStrategyUpdatingState, retrystrategy.RetryStrategyRepairingState}, []string{retrystrategy.RetryStrategyIdleState}, false, *timeout)
if err != nil {
return nil, err
}
@@ -229,7 +267,7 @@ func GetFlexCluster(ctx context.Context, projectID, clusterName string, client a
return flexCluster, nil
}
-func UpdateFlexCluster(ctx context.Context, projectID, clusterName string, flexClusterReq *admin.FlexClusterDescriptionUpdate20241113, client admin.FlexClustersApi) (*admin.FlexClusterDescription20241113, error) {
+func UpdateFlexCluster(ctx context.Context, projectID, clusterName string, flexClusterReq *admin.FlexClusterDescriptionUpdate20241113, client admin.FlexClustersApi, timeout time.Duration) (*admin.FlexClusterDescription20241113, error) {
_, _, err := client.UpdateFlexCluster(ctx, projectID, clusterName, flexClusterReq).Execute()
if err != nil {
return nil, err
@@ -240,14 +278,14 @@ func UpdateFlexCluster(ctx context.Context, projectID, clusterName string, flexC
Name: clusterName,
}
- flexClusterResp, err := WaitStateTransition(ctx, flexClusterParams, client, []string{retrystrategy.RetryStrategyUpdatingState, retrystrategy.RetryStrategyUpdatingState, retrystrategy.RetryStrategyRepairingState}, []string{retrystrategy.RetryStrategyIdleState}, false, nil)
+ flexClusterResp, err := WaitStateTransition(ctx, flexClusterParams, client, []string{retrystrategy.RetryStrategyUpdatingState, retrystrategy.RetryStrategyRepairingState}, []string{retrystrategy.RetryStrategyIdleState}, false, timeout)
if err != nil {
return nil, err
}
return flexClusterResp, nil
}
-func DeleteFlexCluster(ctx context.Context, projectID, clusterName string, client admin.FlexClustersApi) error {
+func DeleteFlexCluster(ctx context.Context, projectID, clusterName string, client admin.FlexClustersApi, timeout time.Duration) error {
if _, err := client.DeleteFlexCluster(ctx, projectID, clusterName).Execute(); err != nil {
return err
}
@@ -257,7 +295,7 @@ func DeleteFlexCluster(ctx context.Context, projectID, clusterName string, clien
Name: clusterName,
}
- return WaitStateTransitionDelete(ctx, flexClusterParams, client)
+ return WaitStateTransitionDelete(ctx, flexClusterParams, client, timeout)
}
func ListFlexClusters(ctx context.Context, projectID string, client admin.FlexClustersApi) (*[]admin.FlexClusterDescription20241113, error) {
diff --git a/internal/service/flexcluster/resource_schema.go b/internal/service/flexcluster/resource_schema.go
index fc3afe6430..a382145ca2 100644
--- a/internal/service/flexcluster/resource_schema.go
+++ b/internal/service/flexcluster/resource_schema.go
@@ -3,6 +3,7 @@ package flexcluster
import (
"context"
+ "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework/attr"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
@@ -21,14 +22,14 @@ func ResourceSchema(ctx context.Context) schema.Schema {
"project_id": schema.StringAttribute{
Required: true,
PlanModifiers: []planmodifier.String{
- customplanmodifier.CreateOnlyAttributePlanModifier(),
+ customplanmodifier.CreateOnlyStringPlanModifier(),
},
MarkdownDescription: "Unique 24-hexadecimal character string that identifies the project.",
},
"name": schema.StringAttribute{
Required: true,
PlanModifiers: []planmodifier.String{
- customplanmodifier.CreateOnlyAttributePlanModifier(),
+ customplanmodifier.CreateOnlyStringPlanModifier(),
},
MarkdownDescription: "Human-readable label that identifies the instance.",
},
@@ -37,7 +38,7 @@ func ResourceSchema(ctx context.Context) schema.Schema {
"backing_provider_name": schema.StringAttribute{
Required: true,
PlanModifiers: []planmodifier.String{
- customplanmodifier.CreateOnlyAttributePlanModifier(),
+ customplanmodifier.CreateOnlyStringPlanModifier(),
},
MarkdownDescription: "Cloud service provider on which MongoDB Cloud provisioned the flex cluster.",
},
@@ -58,7 +59,7 @@ func ResourceSchema(ctx context.Context) schema.Schema {
"region_name": schema.StringAttribute{
Required: true,
PlanModifiers: []planmodifier.String{
- customplanmodifier.CreateOnlyAttributePlanModifier(),
+ customplanmodifier.CreateOnlyStringPlanModifier(),
},
MarkdownDescription: "Human-readable label that identifies the geographic location of your MongoDB flex cluster. The region you choose can affect network latency for clients accessing your databases. For a complete list of region names, see [AWS](https://docs.atlas.mongodb.com/reference/amazon-aws/#std-label-amazon-aws), [GCP](https://docs.atlas.mongodb.com/reference/google-gcp/), and [Azure](https://docs.atlas.mongodb.com/reference/microsoft-azure/).",
},
@@ -145,20 +146,51 @@ func ResourceSchema(ctx context.Context) schema.Schema {
},
MarkdownDescription: "Method by which the cluster maintains the MongoDB versions.",
},
+ "delete_on_create_timeout": schema.BoolAttribute{
+ Optional: true,
+ PlanModifiers: []planmodifier.Bool{
+ customplanmodifier.CreateOnlyBoolPlanModifier(),
+ },
+ MarkdownDescription: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true`, a timeout triggers the deletion and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
+ },
+ "timeouts": timeouts.Attributes(ctx, timeouts.Opts{
+ Create: true,
+ Update: true,
+ Delete: true,
+ }),
},
}
}
type TFModel struct {
- ProviderSettings types.Object `tfsdk:"provider_settings"`
- ConnectionStrings types.Object `tfsdk:"connection_strings"`
+ Tags types.Map `tfsdk:"tags"`
+ MongoDbversion types.String `tfsdk:"mongo_db_version"`
+ ClusterType types.String `tfsdk:"cluster_type"`
+ CreateDate types.String `tfsdk:"create_date"`
+ ProjectId types.String `tfsdk:"project_id"`
+ Id types.String `tfsdk:"id"`
+ ProviderSettings types.Object `tfsdk:"provider_settings"`
+ Name types.String `tfsdk:"name"`
+ ConnectionStrings types.Object `tfsdk:"connection_strings"`
+ StateName types.String `tfsdk:"state_name"`
+ VersionReleaseSystem types.String `tfsdk:"version_release_system"`
+ BackupSettings types.Object `tfsdk:"backup_settings"`
+ Timeouts timeouts.Value `tfsdk:"timeouts"`
+ DeleteOnCreateTimeout types.Bool `tfsdk:"delete_on_create_timeout"`
+ TerminationProtectionEnabled types.Bool `tfsdk:"termination_protection_enabled"`
+}
+
+// TFModelDS mirrors TFModel but omits `timeouts` and `delete_on_create_timeout`, which only apply to the resource.
+type TFModelDS struct {
Tags types.Map `tfsdk:"tags"`
+ MongoDbversion types.String `tfsdk:"mongo_db_version"`
+ ClusterType types.String `tfsdk:"cluster_type"`
CreateDate types.String `tfsdk:"create_date"`
ProjectId types.String `tfsdk:"project_id"`
Id types.String `tfsdk:"id"`
- MongoDbversion types.String `tfsdk:"mongo_db_version"`
+ ProviderSettings types.Object `tfsdk:"provider_settings"`
Name types.String `tfsdk:"name"`
- ClusterType types.String `tfsdk:"cluster_type"`
+ ConnectionStrings types.Object `tfsdk:"connection_strings"`
StateName types.String `tfsdk:"state_name"`
VersionReleaseSystem types.String `tfsdk:"version_release_system"`
BackupSettings types.Object `tfsdk:"backup_settings"`
@@ -199,5 +231,5 @@ var ProviderSettingsType = types.ObjectType{AttrTypes: map[string]attr.Type{
type TFModelDSP struct {
ProjectId types.String `tfsdk:"project_id"`
- Results []TFModel `tfsdk:"results"`
+ Results []TFModelDS `tfsdk:"results"`
}
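The data-source model `TFModelDS` shares every remaining field with `TFModel`, which is what lets `conversion.CopyModel[TFModelDS]` translate between them by field name. A minimal reflection-based sketch of that kind of helper (the actual `conversion.CopyModel` semantics may differ; `copyModel`, `model`, and `modelDS` here are illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

// copyModel copies exported fields from src into a new *Dst wherever a field
// with the same name and type exists; fields only present on the source
// (e.g. Timeouts, DeleteOnCreateTimeout) are simply dropped.
func copyModel[Dst any](src any) *Dst {
	dst := new(Dst)
	dv := reflect.ValueOf(dst).Elem()
	sv := reflect.ValueOf(src)
	if sv.Kind() == reflect.Pointer {
		sv = sv.Elem()
	}
	for i := 0; i < dv.NumField(); i++ {
		f := dv.Type().Field(i)
		if s := sv.FieldByName(f.Name); s.IsValid() && s.Type() == f.Type {
			dv.Field(i).Set(s)
		}
	}
	return dst
}

type model struct{ Name, Extra string }
type modelDS struct{ Name string }

func main() {
	ds := copyModel[modelDS](&model{Name: "flex", Extra: "dropped"})
	fmt.Println(ds.Name) // prints: flex
}
```

Keeping the two structs field-for-field compatible (same names, same `tfsdk` tags) is the invariant that makes this copy safe in both `Read` and `NewTFModelDSP`.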
diff --git a/internal/service/flexcluster/resource_test.go b/internal/service/flexcluster/resource_test.go
index 9ef1ed55ac..2cef4ca1cc 100644
--- a/internal/service/flexcluster/resource_test.go
+++ b/internal/service/flexcluster/resource_test.go
@@ -6,6 +6,7 @@ import (
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -26,13 +27,68 @@ func TestAccFlexClusterRS_failedUpdate(t *testing.T) {
resource.Test(t, *tc)
}
+func TestAccFlexClusterRS_createTimeoutWithDeleteOnCreateFlex(t *testing.T) {
+ var (
+ projectID = acc.ProjectIDExecution(t)
+ clusterName = acc.RandomName()
+ provider = "AWS"
+ region = "US_EAST_1"
+ createTimeout = "1s"
+ deleteOnCreateTimeout = true
+ )
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(projectID, clusterName, provider, region, acc.TimeoutConfig(&createTimeout, nil, nil), true, false, &deleteOnCreateTimeout),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
+func TestAccFlexClusterRS_updateDeleteTimeout(t *testing.T) {
+ acc.SkipTestForCI(t) // Update consistently completes too quickly to time out, making the test flaky
+ var (
+ projectID = acc.ProjectIDExecution(t)
+ clusterName = acc.RandomName()
+ provider = "AWS"
+ region = "US_EAST_1"
+ updateTimeout = "1s"
+ deleteTimeout = "1s"
+ )
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(projectID, clusterName, provider, region, acc.TimeoutConfig(nil, &updateTimeout, &deleteTimeout), false, false, nil),
+ },
+ {
+ Config: configBasic(projectID, clusterName, provider, region, acc.TimeoutConfig(nil, &updateTimeout, &deleteTimeout), false, true, nil),
+ ExpectError: regexp.MustCompile("timeout while waiting for state to become 'IDLE'"),
+ },
+ {
+ Config: acc.ConfigEmpty(), // triggers delete; because the delete timeout is 1s, it times out
+ ExpectError: regexp.MustCompile("timeout while waiting for state to become 'DELETED'"),
+ },
+ {
+ // deletion of the flex cluster was triggered but timed out in the previous step; this step is needed to avoid "Error running post-test destroy, there may be dangling resource [...] Cluster already requested to be deleted"
+ Config: acc.ConfigRemove(resourceName),
+ },
+ },
+ })
+}
+
func basicTestCase(t *testing.T) *resource.TestCase {
t.Helper()
var (
- projectID = acc.ProjectIDExecution(t)
- clusterName = acc.RandomName()
- provider = "AWS"
- region = "US_EAST_1"
+ projectID = acc.ProjectIDExecution(t)
+ clusterName = acc.RandomName()
+ provider = "AWS"
+ region = "US_EAST_1"
+ emptyTimeoutConfig = ""
)
return &resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
@@ -40,15 +96,14 @@ func basicTestCase(t *testing.T) *resource.TestCase {
CheckDestroy: acc.CheckDestroyFlexCluster,
Steps: []resource.TestStep{
{
- Config: configBasic(projectID, clusterName, provider, region, true, false),
+ Config: configBasic(projectID, clusterName, provider, region, emptyTimeoutConfig, true, false, nil),
Check: checksFlexCluster(projectID, clusterName, true, false),
},
{
- Config: configBasic(projectID, clusterName, provider, region, false, true),
+ Config: configBasic(projectID, clusterName, provider, region, emptyTimeoutConfig, false, true, nil),
Check: checksFlexCluster(projectID, clusterName, false, true),
},
{
- Config: configBasic(projectID, clusterName, provider, region, true, true),
ResourceName: resourceName,
ImportStateIdFunc: acc.ImportStateIDFuncProjectIDClusterName(resourceName, "project_id", "name"),
ImportState: true,
@@ -69,6 +124,7 @@ func failedUpdateTestCase(t *testing.T) *resource.TestCase {
providerUpdated = "GCP"
region = "US_EAST_1"
regionUpdated = "US_EAST_2"
+ emptyTimeoutConfig = ""
)
return &resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
@@ -76,30 +132,30 @@ func failedUpdateTestCase(t *testing.T) *resource.TestCase {
CheckDestroy: acc.CheckDestroyFlexCluster,
Steps: []resource.TestStep{
{
- Config: configBasic(projectID, clusterName, provider, region, false, false),
+ Config: configBasic(projectID, clusterName, provider, region, emptyTimeoutConfig, false, false, nil),
Check: checksFlexCluster(projectID, clusterName, false, false),
},
{
- Config: configBasic(projectID, clusterNameUpdated, provider, region, false, false),
+ Config: configBasic(projectID, clusterNameUpdated, provider, region, emptyTimeoutConfig, false, false, nil),
ExpectError: regexp.MustCompile("name cannot be updated"),
},
{
- Config: configBasic(projectIDUpdated, clusterName, provider, region, false, false),
+ Config: configBasic(projectIDUpdated, clusterName, provider, region, emptyTimeoutConfig, false, false, nil),
ExpectError: regexp.MustCompile("project_id cannot be updated"),
},
{
- Config: configBasic(projectID, clusterName, providerUpdated, region, false, false),
+ Config: configBasic(projectID, clusterName, providerUpdated, region, emptyTimeoutConfig, false, false, nil),
ExpectError: regexp.MustCompile("provider_settings.backing_provider_name cannot be updated"),
},
{
- Config: configBasic(projectID, clusterName, provider, regionUpdated, false, false),
+ Config: configBasic(projectID, clusterName, provider, regionUpdated, emptyTimeoutConfig, false, false, nil),
ExpectError: regexp.MustCompile("provider_settings.region_name cannot be updated"),
},
},
}
}
-func configBasic(projectID, clusterName, provider, region string, terminationProtectionEnabled, tags bool) string {
+func configBasic(projectID, clusterName, provider, region, timeoutConfig string, terminationProtectionEnabled, tags bool, deleteOnCreateTimeout *bool) string {
tagsConfig := ""
if tags {
tagsConfig = `
@@ -107,6 +163,12 @@ func configBasic(projectID, clusterName, provider, region string, terminationPro
testKey = "testValue"
}`
}
+ deleteOnCreateTimeoutConfig := ""
+ if deleteOnCreateTimeout != nil {
+ deleteOnCreateTimeoutConfig = fmt.Sprintf(`
+ delete_on_create_timeout = %[1]t
+ `, *deleteOnCreateTimeout)
+ }
return fmt.Sprintf(`
resource "mongodbatlas_flex_cluster" "test" {
project_id = %[1]q
@@ -117,9 +179,11 @@ func configBasic(projectID, clusterName, provider, region string, terminationPro
}
termination_protection_enabled = %[5]t
%[6]s
+ %[7]s
+ %[8]s
}
- %[7]s
- `, projectID, clusterName, provider, region, terminationProtectionEnabled, tagsConfig, acc.FlexDataSource)
+ %[9]s
+ `, projectID, clusterName, provider, region, terminationProtectionEnabled, deleteOnCreateTimeoutConfig, tagsConfig, timeoutConfig, acc.FlexDataSource)
}
func checksFlexCluster(projectID, clusterName string, terminationProtectionEnabled, tagsCheck bool) resource.TestCheckFunc {
diff --git a/internal/service/flexcluster/state_transition.go b/internal/service/flexcluster/state_transition.go
index 85d8346943..2c6fc9d326 100644
--- a/internal/service/flexcluster/state_transition.go
+++ b/internal/service/flexcluster/state_transition.go
@@ -9,21 +9,16 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
)
-func WaitStateTransition(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi, pendingStates, desiredStates []string, isUpgradeFromM0 bool, timeout *time.Duration) (*admin.FlexClusterDescription20241113, error) {
- if timeout == nil {
- timeout = conversion.Pointer(constant.DefaultTimeout)
- }
+func WaitStateTransition(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi, pendingStates, desiredStates []string, isUpgradeFromM0 bool, timeout time.Duration) (*admin.FlexClusterDescription20241113, error) {
stateConf := &retry.StateChangeConf{
Pending: pendingStates,
Target: desiredStates,
Refresh: refreshFunc(ctx, requestParams, client, isUpgradeFromM0),
- Timeout: *timeout,
+ Timeout: timeout,
MinTimeout: 3 * time.Second,
Delay: 0,
}
@@ -40,12 +35,12 @@ func WaitStateTransition(ctx context.Context, requestParams *admin.GetFlexCluste
return nil, errors.New("did not obtain valid result when waiting for flex cluster state transition")
}
-func WaitStateTransitionDelete(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi) error {
+func WaitStateTransitionDelete(ctx context.Context, requestParams *admin.GetFlexClusterApiParams, client admin.FlexClustersApi, timeout time.Duration) error {
stateConf := &retry.StateChangeConf{
Pending: []string{retrystrategy.RetryStrategyDeletingState},
Target: []string{retrystrategy.RetryStrategyDeletedState},
Refresh: refreshFunc(ctx, requestParams, client, false),
- Timeout: 3 * time.Hour,
+ Timeout: timeout,
MinTimeout: 3 * time.Second,
Delay: 0,
}
diff --git a/internal/service/flexcluster/state_transition_test.go b/internal/service/flexcluster/state_transition_test.go
index 2638011265..45444a1150 100644
--- a/internal/service/flexcluster/state_transition_test.go
+++ b/internal/service/flexcluster/state_transition_test.go
@@ -11,6 +11,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster"
)
@@ -102,7 +103,7 @@ func TestFlexClusterStateTransition(t *testing.T) {
modelResp, httpResp, err := resp.get()
m.EXPECT().GetFlexClusterExecute(mock.Anything).Return(modelResp, httpResp, err).Once()
}
- resp, err := flexcluster.WaitStateTransition(t.Context(), requestParams, m, tc.pendingStates, tc.desiredStates, tc.isUpgradeFromM0, nil)
+ resp, err := flexcluster.WaitStateTransition(t.Context(), requestParams, m, tc.pendingStates, tc.desiredStates, tc.isUpgradeFromM0, constant.DefaultTimeout)
assert.Equal(t, tc.expectedError, err != nil)
if resp != nil {
assert.Equal(t, *tc.expectedState, *resp.StateName)
@@ -147,7 +148,7 @@ func TestFlexClusterStateTransitionForDelete(t *testing.T) {
modelResp, httpResp, err := resp.get()
m.EXPECT().GetFlexClusterExecute(mock.Anything).Return(modelResp, httpResp, err).Once()
}
- err := flexcluster.WaitStateTransitionDelete(t.Context(), requestParams, m)
+ err := flexcluster.WaitStateTransitionDelete(t.Context(), requestParams, m, constant.DefaultTimeout)
assert.Equal(t, tc.expectedError, err != nil)
})
}
diff --git a/internal/service/globalclusterconfig/data_source_global_cluster_config.go b/internal/service/globalclusterconfig/data_source_global_cluster_config.go
index f71803ac4a..9e94eada22 100644
--- a/internal/service/globalclusterconfig/data_source_global_cluster_config.go
+++ b/internal/service/globalclusterconfig/data_source_global_cluster_config.go
@@ -50,11 +50,6 @@ func DataSource() *schema.Resource {
},
},
},
- "custom_zone_mapping": {
- Deprecated: deprecationMsgOldSchema,
- Type: schema.TypeMap,
- Computed: true,
- },
"custom_zone_mapping_zone_id": {
Type: schema.TypeMap,
Computed: true,
diff --git a/internal/service/globalclusterconfig/resource_global_cluster_config.go b/internal/service/globalclusterconfig/resource_global_cluster_config.go
index db7363d1c2..7e50571f7f 100644
--- a/internal/service/globalclusterconfig/resource_global_cluster_config.go
+++ b/internal/service/globalclusterconfig/resource_global_cluster_config.go
@@ -14,7 +14,6 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -28,8 +27,6 @@ const (
deprecationOldShardingSchemaAction = "To learn more, see our examples, documentation, and 1.18.0 migration guide at https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/1.18.0-upgrade-guide"
)
-var deprecationMsgOldSchema = fmt.Sprintf("%s %s", fmt.Sprintf(constant.DeprecationParamWithReplacement, "`custom_zone_mapping_zone_id`"), deprecationOldShardingSchemaAction)
-
func Resource() *schema.Resource {
return &schema.Resource{
CreateContext: resourceCreate,
@@ -99,11 +96,6 @@ func Resource() *schema.Resource {
},
},
},
- "custom_zone_mapping": {
- Deprecated: deprecationMsgOldSchema,
- Type: schema.TypeMap,
- Computed: true,
- },
"custom_zone_mapping_zone_id": {
Type: schema.TypeMap,
Computed: true,
@@ -194,7 +186,6 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di
func readGlobalClusterConfig(ctx context.Context, meta any, projectID, clusterName string, d *schema.ResourceData) (notFound bool, err error) {
connV2 := meta.(*config.MongoDBClient).AtlasV2
- connV220240530 := meta.(*config.MongoDBClient).AtlasV220240530
resp, httpResp, err := connV2.GlobalClustersApi.GetClusterGlobalWrites(ctx, projectID, clusterName).Execute()
if err != nil {
if validate.StatusNotFound(httpResp) {
@@ -208,24 +199,6 @@ func readGlobalClusterConfig(ctx context.Context, meta any, projectID, clusterNa
if err := d.Set("custom_zone_mapping_zone_id", resp.GetCustomZoneMapping()); err != nil {
return false, fmt.Errorf(errorGlobalClusterRead, clusterName, err)
}
-
- oldResp, httpResp, err := connV220240530.GlobalClustersApi.GetManagedNamespace(ctx, projectID, clusterName).Execute()
- if err != nil {
- if validate.StatusNotFound(httpResp) {
- return true, nil
- }
- if validate.ErrorClusterIsAsymmetrics(err) {
- // Avoid non-empty plan by setting an empty custom_zone_mapping.
- if err := d.Set("custom_zone_mapping", map[string]string{}); err != nil {
- return false, fmt.Errorf(errorGlobalClusterRead, clusterName, err)
- }
- return false, nil
- }
- return false, fmt.Errorf(errorGlobalClusterRead, clusterName, err)
- }
- if err := d.Set("custom_zone_mapping", oldResp.GetCustomZoneMapping()); err != nil {
- return false, fmt.Errorf(errorGlobalClusterRead, clusterName, err)
- }
return false, nil
}
diff --git a/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go b/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go
index 8ad7fbf039..877e1d03c9 100644
--- a/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go
+++ b/internal/service/globalclusterconfig/resource_global_cluster_config_migration_test.go
@@ -7,6 +7,6 @@ import (
)
func TestMigGlobalClusterConfig_basic(t *testing.T) {
- checkZoneID := mig.IsProviderVersionAtLeast("1.21.0")
- mig.CreateAndRunTest(t, basicTestCase(t, checkZoneID, false))
+ mig.SkipIfVersionBelow(t, "2.0.0")
+ mig.CreateAndRunTest(t, basicTestCase(t, false))
}
diff --git a/internal/service/globalclusterconfig/resource_global_cluster_config_test.go b/internal/service/globalclusterconfig/resource_global_cluster_config_test.go
index be1d07ec7a..30640d8670 100644
--- a/internal/service/globalclusterconfig/resource_global_cluster_config_test.go
+++ b/internal/service/globalclusterconfig/resource_global_cluster_config_test.go
@@ -9,6 +9,7 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -26,11 +27,11 @@ const (
)
func TestAccGlobalClusterConfig_basic(t *testing.T) {
- resource.ParallelTest(t, *basicTestCase(t, true, false))
+ resource.ParallelTest(t, *basicTestCase(t, false))
}
func TestAccGlobalClusterConfig_withBackup(t *testing.T) {
- resource.ParallelTest(t, *basicTestCase(t, true, true))
+ resource.ParallelTest(t, *basicTestCase(t, true))
}
func TestAccGlobalClusterConfig_iss(t *testing.T) {
@@ -55,7 +56,6 @@ func TestAccGlobalClusterConfig_iss(t *testing.T) {
"managed_namespaces.#": "1",
"managed_namespaces.0.is_custom_shard_key_hashed": "false",
"managed_namespaces.0.is_shard_key_unique": "false",
- "custom_zone_mapping.%": "0",
"custom_zone_mapping_zone_id.%": "2",
}
)
@@ -75,7 +75,7 @@ func TestAccGlobalClusterConfig_iss(t *testing.T) {
})
}
-func basicTestCase(tb testing.TB, checkZoneID, withBackup bool) *resource.TestCase {
+func basicTestCase(tb testing.TB, withBackup bool) *resource.TestCase {
tb.Helper()
clusterInfo := acc.GetClusterInfo(tb, &acc.ClusterRequest{Geosharded: true, CloudBackup: withBackup})
attrsMap := map[string]string{
@@ -83,10 +83,7 @@ func basicTestCase(tb testing.TB, checkZoneID, withBackup bool) *resource.TestCa
"managed_namespaces.#": "1",
"managed_namespaces.0.is_custom_shard_key_hashed": "false",
"managed_namespaces.0.is_shard_key_unique": "false",
- "custom_zone_mapping.%": "1",
- }
- if checkZoneID {
- attrsMap["custom_zone_mapping_zone_id.%"] = "1"
+ "custom_zone_mapping_zone_id.%": "1",
}
return &resource.TestCase{
@@ -98,7 +95,7 @@ func basicTestCase(tb testing.TB, checkZoneID, withBackup bool) *resource.TestCa
Config: configBasic(&clusterInfo, false, false),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
- checkZone(0, "CA", clusterInfo.ResourceName, checkZoneID),
+ checkZone(0, "CA", clusterInfo.ResourceName),
acc.CheckRSAndDS(resourceName, conversion.Pointer(dataSourceName), nil, []string{"project_id"}, attrsMap)),
},
{
@@ -170,9 +167,9 @@ func TestAccGlobalClusterConfig_database(t *testing.T) {
Config: configWithDBConfig(&clusterInfo, customZone),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
- checkZone(0, "US", clusterInfo.ResourceName, true),
- checkZone(1, "IE", clusterInfo.ResourceName, true),
- checkZone(2, "DE", clusterInfo.ResourceName, true),
+ checkZone(0, "US", clusterInfo.ResourceName),
+ checkZone(1, "IE", clusterInfo.ResourceName),
+ checkZone(2, "DE", clusterInfo.ResourceName),
acc.CheckRSAndDS(resourceName, conversion.Pointer(dataSourceName), nil,
[]string{"project_id"},
map[string]string{
@@ -181,7 +178,6 @@ func TestAccGlobalClusterConfig_database(t *testing.T) {
"managed_namespaces.0.is_custom_shard_key_hashed": "false",
"managed_namespaces.0.is_shard_key_unique": "false",
"custom_zone_mapping_zone_id.%": "3",
- "custom_zone_mapping.%": "3",
}),
),
},
@@ -189,10 +185,10 @@ func TestAccGlobalClusterConfig_database(t *testing.T) {
Config: configWithDBConfig(&clusterInfo, customZoneUpdated),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
- checkZone(0, "US", clusterInfo.ResourceName, true),
- checkZone(1, "IE", clusterInfo.ResourceName, true),
- checkZone(2, "DE", clusterInfo.ResourceName, true),
- checkZone(3, "JP", clusterInfo.ResourceName, true),
+ checkZone(0, "US", clusterInfo.ResourceName),
+ checkZone(1, "IE", clusterInfo.ResourceName),
+ checkZone(2, "DE", clusterInfo.ResourceName),
+ checkZone(3, "JP", clusterInfo.ResourceName),
acc.CheckRSAndDS(resourceName, conversion.Pointer(dataSourceName), nil,
[]string{"project_id"},
map[string]string{
@@ -201,7 +197,6 @@ func TestAccGlobalClusterConfig_database(t *testing.T) {
"managed_namespaces.0.is_custom_shard_key_hashed": "false",
"managed_namespaces.0.is_shard_key_unique": "false",
"custom_zone_mapping_zone_id.%": "4",
- "custom_zone_mapping.%": "4",
}),
),
},
@@ -221,7 +216,6 @@ func TestAccGlobalClusterConfig_database(t *testing.T) {
"managed_namespaces.0.is_custom_shard_key_hashed": "false",
"managed_namespaces.0.is_shard_key_unique": "false",
"custom_zone_mapping_zone_id.%": "0",
- "custom_zone_mapping.%": "0",
}),
),
},
@@ -236,21 +230,14 @@ func TestAccGlobalClusterConfig_database(t *testing.T) {
})
}
-func checkZone(pos int, zone, clusterName string, checkZoneID bool) resource.TestCheckFunc {
- firstID := fmt.Sprintf("custom_zone_mapping.%s", zone)
- secondID := fmt.Sprintf("replication_specs.%d.id", pos)
+func checkZone(pos int, zone, clusterName string) resource.TestCheckFunc {
+ firstZoneID := fmt.Sprintf("custom_zone_mapping_zone_id.%s", zone)
+ secondZoneID := fmt.Sprintf("replication_specs.%d.zone_id", pos)
checks := []resource.TestCheckFunc{
- resource.TestCheckResourceAttrPair(resourceName, firstID, clusterName, secondID),
- resource.TestCheckResourceAttrPair(dataSourceName, firstID, clusterName, secondID),
- }
- if checkZoneID {
- firstZoneID := fmt.Sprintf("custom_zone_mapping_zone_id.%s", zone)
- secondZoneID := fmt.Sprintf("replication_specs.%d.zone_id", pos)
- checks = append(checks,
- resource.TestCheckResourceAttrPair(resourceName, firstZoneID, clusterName, secondZoneID),
- resource.TestCheckResourceAttrPair(dataSourceName, firstZoneID, clusterName, secondZoneID),
- )
+ resource.TestCheckResourceAttrPair(resourceName, firstZoneID, clusterName, secondZoneID),
+ resource.TestCheckResourceAttrPair(dataSourceName, firstZoneID, clusterName, secondZoneID),
}
+
return resource.ComposeAggregateTestCheckFunc(checks...)
}
diff --git a/internal/service/maintenancewindow/resource_maintenance_window.go b/internal/service/maintenancewindow/resource_maintenance_window.go
index 84dcad881d..cc4d3592a5 100644
--- a/internal/service/maintenancewindow/resource_maintenance_window.go
+++ b/internal/service/maintenancewindow/resource_maintenance_window.go
@@ -49,10 +49,8 @@ func Resource() *schema.Resource {
},
},
"hour_of_day": {
- Type: schema.TypeInt,
- Optional: true,
- Computed: true,
- ConflictsWith: []string{"start_asap"},
+ Type: schema.TypeInt,
+ Required: true,
ValidateFunc: func(val any, key string) (warns []string, errs []error) {
v := val.(int)
if v < 0 || v > 23 {
@@ -63,7 +61,6 @@ func Resource() *schema.Resource {
},
"start_asap": {
Type: schema.TypeBool,
- Optional: true,
Computed: true,
},
"number_of_deferrals": {
@@ -124,9 +121,7 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
params := new(admin.GroupMaintenanceWindow)
params.DayOfWeek = cast.ToInt(d.Get("day_of_week"))
-
- hourOfDay := d.Get("hour_of_day")
- params.HourOfDay = conversion.Pointer(cast.ToInt(hourOfDay)) // during creation of maintenance window hourOfDay needs to be set in PATCH to avoid errors, 0 value is sent when absent
+ params.HourOfDay = conversion.Pointer(cast.ToInt(d.Get("hour_of_day")))
if autoDeferOnceEnabled, ok := d.GetOk("auto_defer_once_enabled"); ok {
params.AutoDeferOnceEnabled = conversion.Pointer(autoDeferOnceEnabled.(bool))
diff --git a/internal/service/maintenancewindow/resource_maintenance_window_migration_test.go b/internal/service/maintenancewindow/resource_maintenance_window_migration_test.go
index a6705dd74f..b1fa1c8240 100644
--- a/internal/service/maintenancewindow/resource_maintenance_window_migration_test.go
+++ b/internal/service/maintenancewindow/resource_maintenance_window_migration_test.go
@@ -5,7 +5,6 @@ import (
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
"github.com/spf13/cast"
@@ -17,7 +16,7 @@ func TestMigConfigMaintenanceWindow_basic(t *testing.T) {
projectName = acc.RandomProjectName()
dayOfWeek = 7
hourOfDay = 3
- config = configBasic(orgID, projectName, dayOfWeek, conversion.Pointer(hourOfDay), nil)
+ config = configBasic(orgID, projectName, dayOfWeek, hourOfDay, nil)
)
resource.ParallelTest(t, resource.TestCase{
diff --git a/internal/service/maintenancewindow/resource_maintenance_window_test.go b/internal/service/maintenancewindow/resource_maintenance_window_test.go
index 6466a1f665..13a1cc3397 100644
--- a/internal/service/maintenancewindow/resource_maintenance_window_test.go
+++ b/internal/service/maintenancewindow/resource_maintenance_window_test.go
@@ -45,20 +45,19 @@ func TestAccConfigRSMaintenanceWindow_basic(t *testing.T) {
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
- // testing hour_of_day set to 0 during creation phase does not return errors
- Config: configBasic(orgID, projectName, dayOfWeek, conversion.Pointer(hourOfDay), defaultProtectedHours),
+ Config: configBasic(orgID, projectName, dayOfWeek, hourOfDay, defaultProtectedHours),
Check: checkBasic(dayOfWeek, hourOfDay, defaultProtectedHours),
},
{
- Config: configBasic(orgID, projectName, dayOfWeek, conversion.Pointer(hourOfDayUpdated), updatedProtectedHours),
+ Config: configBasic(orgID, projectName, dayOfWeek, hourOfDayUpdated, updatedProtectedHours),
Check: checkBasic(dayOfWeek, hourOfDayUpdated, updatedProtectedHours),
},
{
- Config: configBasic(orgID, projectName, dayOfWeekUpdated, conversion.Pointer(hourOfDay), nil),
+ Config: configBasic(orgID, projectName, dayOfWeekUpdated, hourOfDay, nil),
Check: checkBasic(dayOfWeekUpdated, hourOfDay, nil),
},
{
- Config: configBasic(orgID, projectName, dayOfWeek, conversion.Pointer(hourOfDay), defaultProtectedHours),
+ Config: configBasic(orgID, projectName, dayOfWeek, hourOfDay, defaultProtectedHours),
Check: checkBasic(dayOfWeek, hourOfDay, defaultProtectedHours),
},
{
@@ -71,26 +70,6 @@ func TestAccConfigRSMaintenanceWindow_basic(t *testing.T) {
})
}
-func TestAccConfigRSMaintenanceWindow_emptyHourOfDay(t *testing.T) {
- var (
- orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
- projectName = acc.RandomProjectName()
- dayOfWeek = 7
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- Config: configBasic(orgID, projectName, dayOfWeek, nil, defaultProtectedHours),
- Check: checkBasic(dayOfWeek, 0, defaultProtectedHours),
- },
- },
- })
-}
-
func TestAccConfigRSMaintenanceWindow_autoDeferActivated(t *testing.T) {
var (
orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
@@ -167,11 +146,7 @@ func importStateIDFunc(resourceName string) resource.ImportStateIdFunc {
}
}
-func configBasic(orgID, projectName string, dayOfWeek int, hourOfDay *int, protectedHours *admin.ProtectedHours) string {
- hourOfDayAttr := ""
- if hourOfDay != nil {
- hourOfDayAttr = fmt.Sprintf("hour_of_day = %d", *hourOfDay)
- }
+func configBasic(orgID, projectName string, dayOfWeek, hourOfDay int, protectedHours *admin.ProtectedHours) string {
protectedHoursStr := ""
if protectedHours != nil {
protectedHoursStr = fmt.Sprintf(`
@@ -189,10 +164,10 @@ func configBasic(orgID, projectName string, dayOfWeek int, hourOfDay *int, prote
resource "mongodbatlas_maintenance_window" "test" {
project_id = mongodbatlas_project.test.id
day_of_week = %[3]d
- %[4]s
+ hour_of_day = %[4]d
%[5]s
- }`, orgID, projectName, dayOfWeek, hourOfDayAttr, protectedHoursStr)
+ }`, orgID, projectName, dayOfWeek, hourOfDay, protectedHoursStr)
}
func configWithAutoDeferEnabled(orgID, projectName string, dayOfWeek, hourOfDay int) string {
diff --git a/internal/service/networkpeering/data_source_network_peering.go b/internal/service/networkpeering/data_source.go
similarity index 100%
rename from internal/service/networkpeering/data_source_network_peering.go
rename to internal/service/networkpeering/data_source.go
diff --git a/internal/service/networkpeering/data_source_network_peerings.go b/internal/service/networkpeering/plural_data_source.go
similarity index 100%
rename from internal/service/networkpeering/data_source_network_peerings.go
rename to internal/service/networkpeering/plural_data_source.go
diff --git a/internal/service/networkpeering/resource_network_peering.go b/internal/service/networkpeering/resource.go
similarity index 87%
rename from internal/service/networkpeering/resource_network_peering.go
rename to internal/service/networkpeering/resource.go
index afd2084e66..e12eff4845 100644
--- a/internal/service/networkpeering/resource_network_peering.go
+++ b/internal/service/networkpeering/resource.go
@@ -4,7 +4,6 @@ import (
"context"
"errors"
"fmt"
- "log"
"strings"
"time"
@@ -12,6 +11,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -24,17 +24,24 @@ const (
errorPeersRead = "error reading MongoDB Network Peering Connection (%s): %s"
errorPeersDelete = "error deleting MongoDB Network Peering Connection (%s): %s"
errorPeersUpdate = "error updating MongoDB Network Peering Connection (%s): %s"
+
+ minTimeout = 10 * time.Second
)
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- UpdateContext: resourceUpdate,
- DeleteContext: resourceDelete,
+ CreateWithoutTimeout: resourceCreate,
+ ReadWithoutTimeout: resourceRead,
+ UpdateWithoutTimeout: resourceUpdate,
+ DeleteWithoutTimeout: resourceDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceImportState,
},
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(1 * time.Hour),
+ Update: schema.DefaultTimeout(1 * time.Hour),
+ Delete: schema.DefaultTimeout(1 * time.Hour),
+ },
Schema: map[string]*schema.Schema{
"project_id": {
Type: schema.TypeString,
@@ -92,7 +99,6 @@ func Resource() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
-
"atlas_cidr_block": {
Type: schema.TypeString,
Optional: true,
@@ -157,6 +163,11 @@ func Resource() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
+ "delete_on_create_timeout": { // Don't use Default: true to avoid unplanned changes when upgrading from previous versions.
+ Type: schema.TypeBool,
+ Optional: true,
+ Description: "Flag that indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
+ },
},
}
}
@@ -244,19 +255,27 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
if err != nil {
return diag.FromErr(fmt.Errorf(errorPeersCreate, err))
}
+ peerID := peer.GetId()
stateConf := &retry.StateChangeConf{
Pending: []string{"INITIATING", "FINALIZING", "ADDING_PEER", "WAITING_FOR_USER"},
Target: []string{"FAILED", "AVAILABLE", "PENDING_ACCEPTANCE"},
- Refresh: resourceRefreshFunc(ctx, peer.GetId(), projectID, peerRequest.GetContainerId(), conn.NetworkPeeringApi),
- Timeout: 1 * time.Hour,
- MinTimeout: 10 * time.Second,
- Delay: 30 * time.Second,
+ Refresh: resourceRefreshFunc(ctx, peerID, projectID, peerRequest.GetContainerId(), conn.NetworkPeeringApi),
+ Timeout: d.Timeout(schema.TimeoutCreate) - time.Minute, // When a CRUD function has a timeout, any StateChangeConf timeout must stay below that duration so the wait returns the retry logic's error instead of the SDK's "context deadline exceeded" error.
+ MinTimeout: minTimeout,
+ Delay: minTimeout,
}
-
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(errorPeersCreate, err))
+ _, errWait := stateConf.WaitForStateContext(ctx)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
+ }
+ errWait = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, errWait, func(ctxCleanup context.Context) error {
+ _, _, errCleanup := conn.NetworkPeeringApi.DeleteGroupPeer(ctxCleanup, projectID, peerID).Execute()
+ return errCleanup
+ })
+ if errWait != nil {
+ return diag.Errorf(errorPeersCreate, errWait)
}
d.SetId(conversion.EncodeStateID(map[string]string{
@@ -459,9 +478,9 @@ func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.
Pending: []string{"INITIATING", "FINALIZING", "ADDING_PEER", "WAITING_FOR_USER"},
Target: []string{"FAILED", "AVAILABLE", "PENDING_ACCEPTANCE"},
Refresh: resourceRefreshFunc(ctx, peerID, projectID, "", conn.NetworkPeeringApi),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 30 * time.Second,
- Delay: 1 * time.Minute,
+ Timeout: d.Timeout(schema.TimeoutUpdate),
+ MinTimeout: minTimeout,
+ Delay: minTimeout,
}
_, err = stateConf.WaitForStateContext(ctx)
@@ -483,15 +502,13 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(fmt.Errorf(errorPeersDelete, peerID, err))
}
- log.Println("[INFO] Waiting for MongoDB Network Peering Connection to be destroyed")
-
stateConf := &retry.StateChangeConf{
Pending: []string{"AVAILABLE", "INITIATING", "PENDING_ACCEPTANCE", "FINALIZING", "ADDING_PEER", "WAITING_FOR_USER", "TERMINATING", "DELETING"},
Target: []string{"DELETED"},
Refresh: resourceRefreshFunc(ctx, peerID, projectID, "", conn.NetworkPeeringApi),
- Timeout: 1 * time.Hour,
- MinTimeout: 30 * time.Second,
- Delay: 10 * time.Second, // Wait 10 secs before starting
+ Timeout: d.Timeout(schema.TimeoutDelete),
+ MinTimeout: minTimeout,
+ Delay: minTimeout,
}
_, err = stateConf.WaitForStateContext(ctx)
@@ -516,19 +533,19 @@ func resourceImportState(ctx context.Context, d *schema.ResourceData, meta any)
peer, _, err := conn.NetworkPeeringApi.GetGroupPeer(ctx, projectID, peerID).Execute()
if err != nil {
- return nil, fmt.Errorf("couldn't import peer %s in project %s, error: %s", peerID, projectID, err)
+ return nil, fmt.Errorf("couldn't import peer %s in project %s, error: %w", peerID, projectID, err)
}
if err := d.Set("project_id", projectID); err != nil {
- log.Printf("[WARN] Error setting project_id for (%s): %s", peerID, err)
+ return nil, fmt.Errorf("error setting project_id while importing peer %s in project %s, error: %w", peerID, projectID, err)
}
if err := d.Set("container_id", peer.GetContainerId()); err != nil {
- log.Printf("[WARN] Error setting container_id for (%s): %s", peerID, err)
+ return nil, fmt.Errorf("error setting container_id while importing peer %s in project %s, error: %w", peerID, projectID, err)
}
if err := d.Set("provider_name", providerName); err != nil {
- log.Printf("[WARN] Error setting provider_name for (%s): %s", peerID, err)
+ return nil, fmt.Errorf("error setting provider_name while importing peer %s in project %s, error: %w", peerID, projectID, err)
}
d.SetId(conversion.EncodeStateID(map[string]string{
@@ -547,9 +564,6 @@ func resourceRefreshFunc(ctx context.Context, peerID, projectID, containerID str
if validate.StatusNotFound(resp) {
return "", "DELETED", nil
}
-
- log.Printf("error reading MongoDB Network Peering Connection %s: %s", peerID, err)
-
return nil, "", err
}
@@ -559,8 +573,6 @@ func resourceRefreshFunc(ctx context.Context, peerID, projectID, containerID str
status = c.GetStatusName()
}
- log.Printf("[DEBUG] status for MongoDB Network Peering Connection: %s: %s", peerID, status)
-
/* We need to get the provisioned status from Mongo container that contains the peering connection
* to validate if it has changed to true. This means that the reciprocal connection in Mongo side
* is right, and the Mongo parameters used on the Google side to configure the reciprocal connection
diff --git a/internal/service/networkpeering/resource_network_peering_migration_test.go b/internal/service/networkpeering/resource_migration_test.go
similarity index 100%
rename from internal/service/networkpeering/resource_network_peering_migration_test.go
rename to internal/service/networkpeering/resource_migration_test.go
diff --git a/internal/service/networkpeering/resource_network_peering_test.go b/internal/service/networkpeering/resource_test.go
similarity index 90%
rename from internal/service/networkpeering/resource_network_peering_test.go
rename to internal/service/networkpeering/resource_test.go
index 2a3d264b47..52a485028f 100644
--- a/internal/service/networkpeering/resource_network_peering_test.go
+++ b/internal/service/networkpeering/resource_test.go
@@ -3,7 +3,6 @@ package networkpeering_test
import (
"context"
"fmt"
- "log"
"os"
"regexp"
"strings"
@@ -191,13 +190,37 @@ func TestAccNetworkRSNetworkPeering_AWSDifferentRegionName(t *testing.T) {
CheckDestroy: acc.CheckDestroyNetworkPeering,
Steps: []resource.TestStep{
{
- Config: configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, containerRegion, peerRegion),
+ Config: configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, containerRegion, peerRegion, false),
Check: resource.ComposeAggregateTestCheckFunc(checks...),
},
},
})
}
+func TestAccNetworkNetworkPeering_timeouts(t *testing.T) {
+ var (
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ vpcID = os.Getenv("AWS_VPC_ID")
+ vpcCIDRBlock = os.Getenv("AWS_VPC_CIDR_BLOCK")
+ awsAccountID = os.Getenv("AWS_ACCOUNT_ID")
+ containerRegion = os.Getenv("AWS_REGION")
+ peerRegion = conversion.MongoDBRegionToAWSRegion(containerRegion)
+ providerName = "AWS"
+ projectName = acc.RandomProjectName()
+ )
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckPeeringEnvAWS(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: acc.CheckDestroyNetworkPeering, // resource is deleted when creation times out
+ Steps: []resource.TestStep{
+ {
+ Config: configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, containerRegion, peerRegion, true),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
func basicAWSTestCase(tb testing.TB) *resource.TestCase {
tb.Helper()
var (
@@ -213,12 +236,12 @@ func basicAWSTestCase(tb testing.TB) *resource.TestCase {
checks := commonChecksAWS(vpcID, providerName, awsAccountID, vpcCIDRBlock, peerRegion)
return &resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(tb); acc.PreCheckPeeringEnvAWS(tb) },
+ PreCheck: func() { acc.PreCheckPeeringEnvAWS(tb) },
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: acc.CheckDestroyNetworkPeering,
Steps: []resource.TestStep{
{
- Config: configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, containerRegion, peerRegion),
+ Config: configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, containerRegion, peerRegion, false),
Check: resource.ComposeAggregateTestCheckFunc(checks...),
},
{
@@ -272,7 +295,6 @@ func checkExists(resourceName string) resource.TestCheckFunc {
return fmt.Errorf("no ID is set")
}
ids := conversion.DecodeStateID(rs.Primary.ID)
- log.Printf("[DEBUG] projectID: %s", ids["project_id"])
if _, _, err := acc.ConnV2().NetworkPeeringApi.GetGroupPeer(context.Background(), ids["project_id"], ids["peer_id"]).Execute(); err == nil {
return nil
}
@@ -280,7 +302,18 @@ func checkExists(resourceName string) resource.TestCheckFunc {
}
}
-func configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, awsRegionContainer, awsRegionPeer string) string {
+func configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, awsRegionContainer, awsRegionPeer string, forceTimeout bool) string {
+ var extraConfig string
+ if forceTimeout {
+ extraConfig = `
+ delete_on_create_timeout = true # default value
+ timeouts {
+ create = "10s"
+ update = "10s"
+ delete = "10s"
+ }
+ `
+ }
return fmt.Sprintf(`
resource "mongodbatlas_project" "my_project" {
name = %[2]q
@@ -301,6 +334,7 @@ func configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlo
route_table_cidr_block = %[6]q
vpc_id = %[4]q
aws_account_id = %[5]q
+ %[9]s
}
data "mongodbatlas_network_peering" "test" {
@@ -311,7 +345,7 @@ func configAWS(orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlo
data "mongodbatlas_network_peerings" "test" {
project_id = mongodbatlas_network_peering.test.project_id
}
-`, orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, awsRegionContainer, awsRegionPeer)
+`, orgID, projectName, providerName, vpcID, awsAccountID, vpcCIDRBlock, awsRegionContainer, awsRegionPeer, extraConfig)
}
func configAzure(projectID, providerName, directoryID, subscriptionID, resourceGroupName, vNetName string) string {
diff --git a/internal/service/onlinearchive/data_source_online_archive.go b/internal/service/onlinearchive/data_source.go
similarity index 100%
rename from internal/service/onlinearchive/data_source_online_archive.go
rename to internal/service/onlinearchive/data_source.go
diff --git a/internal/service/onlinearchive/resource_online_archive.go b/internal/service/onlinearchive/resource.go
similarity index 90%
rename from internal/service/onlinearchive/resource_online_archive.go
rename to internal/service/onlinearchive/resource.go
index c00dba3979..02c75a7108 100644
--- a/internal/service/onlinearchive/resource_online_archive.go
+++ b/internal/service/onlinearchive/resource.go
@@ -12,6 +12,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -22,18 +23,22 @@ const (
errorOnlineArchivesCreate = "error creating MongoDB Atlas Online Archive:: %s"
errorOnlineArchivesDelete = "error deleting MongoDB Atlas Online Archive: %s archive_id (%s)"
scheduleTypeDefault = "DEFAULT"
+ oneMinute = 1 * time.Minute
)
func Resource() *schema.Resource {
return &schema.Resource{
- Schema: resourceSchema(),
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- DeleteContext: resourceDelete,
- UpdateContext: resourceUpdate,
+ Schema: resourceSchema(),
+ CreateWithoutTimeout: resourceCreate,
+ ReadWithoutTimeout: resourceRead,
+ UpdateWithoutTimeout: resourceUpdate,
+ DeleteWithoutTimeout: resourceDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceImport,
},
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(3 * time.Hour),
+ },
}
}
@@ -207,6 +212,11 @@ func resourceSchema() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
+ "delete_on_create_timeout": { // Don't use Default: true to avoid unplanned changes when upgrading from previous versions.
+ Type: schema.TypeBool,
+ Optional: true,
+ Description: "Flag that indicates whether to delete the resource being created if a timeout is reached while waiting for completion. When set to `true` and a timeout occurs, deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout does not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying so that resource deletion can finish. Default is `true`.",
+ },
"sync_creation": {
Type: schema.TypeBool,
Optional: true,
@@ -240,15 +250,23 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
Pending: []string{"PENDING", "ARCHIVING", "PAUSING", "PAUSED", "ORPHANED", "REPEATING"},
Target: []string{"IDLE", "ACTIVE"},
Refresh: resourceOnlineRefreshFunc(ctx, projectID, clusterName, archiveID, connV2),
- Timeout: 3 * time.Hour,
- MinTimeout: 1 * time.Minute,
- Delay: 3 * time.Minute,
+ Timeout: d.Timeout(schema.TimeoutCreate) - oneMinute, // When a CRUD function has a timeout, any StateChangeConf timeout must stay below that duration so the wait returns the retry logic's error instead of the SDK's "context deadline exceeded" error.
+ MinTimeout: oneMinute,
+ Delay: oneMinute,
}
// Wait, catching any errors
- _, err := stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf("error updating the online archive status %s for cluster %s", clusterName, archiveID))
+ _, errWait := stateConf.WaitForStateContext(ctx)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
+ }
+ errWait = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, errWait, func(ctxCleanup context.Context) error {
+ _, errCleanup := connV2.OnlineArchiveApi.DeleteOnlineArchive(ctxCleanup, projectID, archiveID, clusterName).Execute()
+ return errCleanup
+ })
+ if errWait != nil {
+ return diag.FromErr(fmt.Errorf("error waiting for online archive %s in cluster %s: %w", archiveID, clusterName, errWait))
}
}
@@ -311,13 +329,13 @@ func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Di
}
func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- conn := meta.(*config.MongoDBClient).Atlas
+ connV2 := meta.(*config.MongoDBClient).AtlasV2
ids := conversion.DecodeStateID(d.Id())
- atlasID := ids["archive_id"]
+ archiveID := ids["archive_id"]
projectID := ids["project_id"]
clusterName := ids["cluster_name"]
- _, err := conn.OnlineArchives.Delete(ctx, projectID, clusterName, atlasID)
+ _, err := connV2.OnlineArchiveApi.DeleteOnlineArchive(ctx, projectID, archiveID, clusterName).Execute()
if err != nil {
alreadyDeleted := strings.Contains(err.Error(), "404") && !d.IsNewResource()
@@ -325,7 +343,7 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
return nil
}
- return diag.FromErr(fmt.Errorf(errorOnlineArchivesDelete, err, atlasID))
+ return diag.FromErr(fmt.Errorf(errorOnlineArchivesDelete, err, archiveID))
}
return nil
}
diff --git a/internal/service/onlinearchive/resource_online_archive_migration_test.go b/internal/service/onlinearchive/resource_migration_test.go
similarity index 84%
rename from internal/service/onlinearchive/resource_online_archive_migration_test.go
rename to internal/service/onlinearchive/resource_migration_test.go
index 01ed31fb8e..1a0a38b55d 100644
--- a/internal/service/onlinearchive/resource_online_archive_migration_test.go
+++ b/internal/service/onlinearchive/resource_migration_test.go
@@ -10,6 +10,7 @@ import (
)
func TestMigBackupRSOnlineArchiveWithNoChangeBetweenVersions(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "1.29.0") // version when advanced cluster TPF was introduced
var (
onlineArchiveResourceName = "mongodbatlas_online_archive.users_archive"
clusterInfo = acc.GetClusterInfo(t, clusterRequest())
@@ -17,15 +18,13 @@ func TestMigBackupRSOnlineArchiveWithNoChangeBetweenVersions(t *testing.T) {
projectID = clusterInfo.ProjectID
clusterTerraformStr = clusterInfo.TerraformStr
clusterResourceName = clusterInfo.ResourceName
- deleteExpirationDays = 0
+ deleteExpirationDays = 7
)
- if mig.IsProviderVersionAtLeast("1.12.2") {
- deleteExpirationDays = 7
- }
- config := configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, deleteExpirationDays)
+
+ config := configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, deleteExpirationDays, false)
resource.ParallelTest(t, resource.TestCase{
- PreCheck: mig.PreCheckBasicSleep(t),
+ PreCheck: func() { mig.PreCheckBasicSleep(t); mig.PreCheckOldPreviewEnv(t) },
CheckDestroy: acc.CheckDestroyFederatedDatabaseInstance,
Steps: []resource.TestStep{
{
diff --git a/internal/service/onlinearchive/resource_online_archive_test.go b/internal/service/onlinearchive/resource_test.go
similarity index 94%
rename from internal/service/onlinearchive/resource_online_archive_test.go
rename to internal/service/onlinearchive/resource_test.go
index 831c78a425..2a1fc0625a 100644
--- a/internal/service/onlinearchive/resource_online_archive_test.go
+++ b/internal/service/onlinearchive/resource_test.go
@@ -42,7 +42,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
Check: acc.PopulateWithSampleDataTestCheck(projectID, clusterName),
},
{
- Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, 7),
+ Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, 7, false),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -58,7 +58,7 @@ func TestAccBackupRSOnlineArchive(t *testing.T) {
),
},
{
- Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 2, 8),
+ Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 2, 8, false),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -146,7 +146,7 @@ func TestAccBackupRSOnlineArchiveBasic(t *testing.T) {
),
},
{
- Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, 1),
+ Config: configWithDailySchedule(clusterTerraformStr, clusterResourceName, 1, 1, false),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "state"),
resource.TestCheckResourceAttrSet(onlineArchiveResourceName, "archive_id"),
@@ -229,7 +229,7 @@ func TestAccBackupRSOnlineArchiveInvalidProcessRegion(t *testing.T) {
})
}
-func configWithDailySchedule(clusterTerraformStr, clusterResourceName string, startHour, deleteExpirationDays int) string {
+func configWithDailySchedule(clusterTerraformStr, clusterResourceName string, startHour, deleteExpirationDays int, deleteOnTimeout bool) string {
var dataExpirationRuleBlock string
if deleteExpirationDays > 0 {
dataExpirationRuleBlock = fmt.Sprintf(`
@@ -238,6 +238,15 @@ func configWithDailySchedule(clusterTerraformStr, clusterResourceName string, st
}
`, deleteExpirationDays)
}
+ deleteOnCreateTimeoutStr := ""
+ if deleteOnTimeout {
+ deleteOnCreateTimeoutStr = `
+ delete_on_create_timeout = true
+ timeouts {
+ create = "1s"
+ }
+ `
+ }
return fmt.Sprintf(`
%[1]s
@@ -281,6 +290,8 @@ func configWithDailySchedule(clusterTerraformStr, clusterResourceName string, st
}
sync_creation = true
+
+ %[5]s
}
data "mongodbatlas_online_archive" "read_archive" {
@@ -293,7 +304,7 @@ func configWithDailySchedule(clusterTerraformStr, clusterResourceName string, st
project_id = mongodbatlas_online_archive.users_archive.project_id
cluster_name = mongodbatlas_online_archive.users_archive.cluster_name
}
- `, clusterTerraformStr, startHour, dataExpirationRuleBlock, clusterResourceName)
+ `, clusterTerraformStr, startHour, dataExpirationRuleBlock, clusterResourceName, deleteOnCreateTimeoutStr)
}
func configWithoutSchedule(clusterTerraformStr, clusterResourceName string) string {
@@ -512,3 +523,25 @@ func testAccBackupRSOnlineArchiveConfigWithMonthlySchedule(clusterTerraformStr,
}
`, clusterTerraformStr, startHour, clusterResourceName)
}
+
+func TestAccOnlineArchive_deleteOnCreateTimeout(t *testing.T) {
+ var (
+ clusterInfo = acc.GetClusterInfo(t, clusterRequest())
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: acc.PreCheckBasicSleep(t, &clusterInfo, "", ""),
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: acc.CheckDestroyCluster,
+ Steps: []resource.TestStep{
+ {
+ Config: clusterInfo.TerraformStr,
+ Check: acc.PopulateWithSampleDataTestCheck(clusterInfo.ProjectID, clusterInfo.Name),
+ },
+ {
+ Config: configWithDailySchedule(clusterInfo.TerraformStr, clusterInfo.ResourceName, 1, 7, true),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
diff --git a/internal/service/organization/data_source_organization.go b/internal/service/organization/data_source_organization.go
index ff01fb017d..c230ca34bf 100644
--- a/internal/service/organization/data_source_organization.go
+++ b/internal/service/organization/data_source_organization.go
@@ -3,11 +3,14 @@ package organization
import (
"context"
"fmt"
+ "net/http"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
@@ -43,6 +46,7 @@ func DataSource() *schema.Resource {
},
},
},
+ "users": dsschema.DSOrgUsersSchema(),
"api_access_list_required": {
Type: schema.TypeBool,
Computed: true,
@@ -97,6 +101,14 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(fmt.Errorf("error setting `is_deleted`: %s", err))
}
+ users, err := listAllOrganizationUsers(ctx, orgID, conn)
+ if err != nil {
+ return diag.FromErr(fmt.Errorf("error getting organization users: %s", err))
+ }
+ if err := d.Set("users", conversion.FlattenUsers(users)); err != nil {
+ return diag.FromErr(fmt.Errorf("error setting `users`: %s", err))
+ }
+
settings, _, err := conn.OrganizationsApi.GetOrgSettings(ctx, orgID).Execute()
if err != nil {
return diag.FromErr(fmt.Errorf("error getting organization settings: %s", err))
@@ -121,3 +133,11 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
return nil
}
+
+func listAllOrganizationUsers(ctx context.Context, orgID string, conn *admin.APIClient) ([]admin.OrgUserResponse, error) {
+ return dsschema.AllPages(ctx, func(ctx context.Context, pageNum int) (dsschema.PaginateResponse[admin.OrgUserResponse], *http.Response, error) {
+ request := conn.MongoDBCloudUsersApi.ListOrgUsers(ctx, orgID)
+ request = request.PageNum(pageNum)
+ return request.Execute()
+ })
+}
diff --git a/internal/service/organization/data_source_organizations.go b/internal/service/organization/data_source_organizations.go
index 1a78fab415..2059324799 100644
--- a/internal/service/organization/data_source_organizations.go
+++ b/internal/service/organization/data_source_organizations.go
@@ -11,6 +11,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
@@ -63,6 +64,7 @@ func PluralDataSource() *schema.Resource {
},
},
},
+ "users": dsschema.DSOrgUsersSchema(),
"api_access_list_required": {
Type: schema.TypeBool,
Computed: true,
@@ -138,9 +140,13 @@ func flattenOrganizations(ctx context.Context, conn *admin.APIClient, organizati
results = make([]map[string]any, len(organizations))
for k, organization := range organizations {
+ users, err := listAllOrganizationUsers(ctx, *organization.Id, conn)
+ if err != nil {
+ return nil, fmt.Errorf("error getting organization users (orgID: %s, name: %s): %s", organization.GetId(), organization.GetName(), err)
+ }
settings, _, err := conn.OrganizationsApi.GetOrgSettings(ctx, *organization.Id).Execute()
if err != nil {
- return nil, fmt.Errorf("error getting organization settings (orgID: %s, org Name: %s): %s", organization.GetId(), organization.GetName(), err)
+ return nil, fmt.Errorf("error getting organization settings (orgID: %s, name: %s): %s", organization.GetId(), organization.GetName(), err)
}
results[k] = map[string]any{
"id": organization.Id,
@@ -148,6 +154,7 @@ func flattenOrganizations(ctx context.Context, conn *admin.APIClient, organizati
"skip_default_alerts_settings": organization.SkipDefaultAlertsSettings,
"is_deleted": organization.IsDeleted,
"links": conversion.FlattenLinks(organization.GetLinks()),
+ "users": conversion.FlattenUsers(users),
"api_access_list_required": settings.ApiAccessListRequired,
"multi_factor_auth_required": settings.MultiFactorAuthRequired,
"restrict_employee_access": settings.RestrictEmployeeAccess,
diff --git a/internal/service/organization/resource_organization_test.go b/internal/service/organization/resource_organization_test.go
index d1586afca5..6f900c6677 100644
--- a/internal/service/organization/resource_organization_test.go
+++ b/internal/service/organization/resource_organization_test.go
@@ -185,7 +185,40 @@ func TestAccConfigDSOrganization_basic(t *testing.T) {
{
Config: configWithPluralDS(orgID),
Check: checkAggrDS(resource.TestCheckResourceAttr(datasourceName, "gen_ai_features_enabled", "true"),
- resource.TestCheckResourceAttr(pluralDSName, "results.0.gen_ai_features_enabled", "true")),
+ resource.TestCheckResourceAttr(pluralDSName, "results.0.gen_ai_features_enabled", "true"),
+ resource.TestCheckResourceAttrSet(datasourceName, "users.#"),
+ resource.TestCheckResourceAttrSet(datasourceName, "users.0.id")),
+ },
+ },
+ })
+}
+
+func TestAccConfigDSOrganization_users(t *testing.T) {
+ var (
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configWithPluralDS(orgID),
+ Check: checkAggrDS(
+ resource.TestCheckResourceAttrWith(datasourceName, "users.#", acc.IntGreatThan(0)),
+ resource.TestCheckResourceAttrSet(datasourceName, "users.0.id"),
+ resource.TestCheckResourceAttrSet(datasourceName, "users.0.roles.0.org_roles.#"),
+ resource.TestCheckResourceAttrSet(datasourceName, "users.0.roles.0.project_role_assignments.#"),
+ resource.TestCheckResourceAttrWith(datasourceName, "users.0.username", acc.IsUsername()),
+ resource.TestCheckResourceAttrWith(datasourceName, "users.0.last_auth", acc.IsTimestamp()),
+ resource.TestCheckResourceAttrWith(datasourceName, "users.0.created_at", acc.IsTimestamp()),
+
+ resource.TestCheckResourceAttrWith(pluralDSName, "results.0.users.#", acc.IntGreatThan(0)),
+ resource.TestCheckResourceAttrSet(pluralDSName, "results.0.users.0.id"),
+ resource.TestCheckResourceAttrSet(pluralDSName, "results.0.users.0.roles.0.org_roles.#"),
+ resource.TestCheckResourceAttrSet(pluralDSName, "results.0.users.0.roles.0.project_role_assignments.#"),
+ resource.TestCheckResourceAttrWith(pluralDSName, "results.0.users.0.username", acc.IsUsername()),
+ resource.TestCheckResourceAttrWith(pluralDSName, "results.0.users.0.last_auth", acc.IsTimestamp()),
+ ),
},
},
})
@@ -229,7 +262,7 @@ func TestAccConfigRSOrganization_import(t *testing.T) {
{
// Use removed block so the organization is not deleted.
// Even if something goes wrong, the organization wouldn't be deleted if it has some projects, it would return ORG_NOT_EMPTY error.
- Config: configImportRemove(),
+ Config: acc.ConfigRemove(resourceName),
},
},
})
@@ -275,17 +308,6 @@ func configImportSet(orgID, orgName string) string {
`, orgID, orgName)
}
-func configImportRemove() string {
- return `
- removed {
- from = mongodbatlas_organization.test
- lifecycle {
- destroy = false
- }
- }
- `
-}
-
func checkExists(resourceName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[resourceName]
diff --git a/internal/service/orginvitation/data_source_org_invitation.go b/internal/service/orginvitation/data_source_org_invitation.go
index 9939cf133d..b2e05ac7c3 100644
--- a/internal/service/orginvitation/data_source_org_invitation.go
+++ b/internal/service/orginvitation/data_source_org_invitation.go
@@ -6,13 +6,16 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
func DataSource() *schema.Resource {
return &schema.Resource{
- ReadContext: dataSourceRead,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "data source", "mongodbatlas_cloud_user_org_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ ReadContext: dataSourceRead,
Schema: map[string]*schema.Schema{
"org_id": {
Type: schema.TypeString,
diff --git a/internal/service/orginvitation/resource_org_invitation.go b/internal/service/orginvitation/resource_org_invitation.go
index 97900fcae2..802daa49c5 100644
--- a/internal/service/orginvitation/resource_org_invitation.go
+++ b/internal/service/orginvitation/resource_org_invitation.go
@@ -6,19 +6,23 @@ import (
"regexp"
"strings"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- DeleteContext: resourceDelete,
- UpdateContext: resourceUpdate,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "resource", "mongodbatlas_cloud_user_org_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ CreateContext: resourceCreate,
+ ReadContext: resourceRead,
+ DeleteContext: resourceDelete,
+ UpdateContext: resourceUpdate,
Importer: &schema.ResourceImporter{
StateContext: resourceImport,
},
diff --git a/internal/service/privatelinkendpoint/data_source_privatelink_endpoint.go b/internal/service/privatelinkendpoint/data_source.go
similarity index 100%
rename from internal/service/privatelinkendpoint/data_source_privatelink_endpoint.go
rename to internal/service/privatelinkendpoint/data_source.go
diff --git a/internal/service/privatelinkendpoint/data_source_privatelink_endpoint_test.go b/internal/service/privatelinkendpoint/data_source_test.go
similarity index 100%
rename from internal/service/privatelinkendpoint/data_source_privatelink_endpoint_test.go
rename to internal/service/privatelinkendpoint/data_source_test.go
diff --git a/internal/service/privatelinkendpoint/resource_privatelink_endpoint.go b/internal/service/privatelinkendpoint/resource.go
similarity index 81%
rename from internal/service/privatelinkendpoint/resource_privatelink_endpoint.go
rename to internal/service/privatelinkendpoint/resource.go
index e7bef49edf..4673f3b8b6 100644
--- a/internal/service/privatelinkendpoint/resource_privatelink_endpoint.go
+++ b/internal/service/privatelinkendpoint/resource.go
@@ -12,6 +12,7 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -23,13 +24,14 @@ const (
errorPrivateLinkEndpointsRead = "error reading MongoDB Private Endpoints Connection(%s): %s"
errorPrivateLinkEndpointsDelete = "error deleting MongoDB Private Endpoints Connection(%s): %s"
ErrorPrivateLinkEndpointsSetting = "error setting `%s` for MongoDB Private Endpoints Connection(%s): %s"
+ delayAndMinTimeout = 5 * time.Second
)
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- DeleteContext: resourceDelete,
+ CreateWithoutTimeout: resourceCreate,
+ ReadWithoutTimeout: resourceRead,
+ DeleteWithoutTimeout: resourceDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceImport,
},
@@ -106,6 +108,12 @@ func Resource() *schema.Resource {
Type: schema.TypeString,
},
},
+ "delete_on_create_timeout": { // Don't use Default: true to avoid unplanned changes when upgrading from previous versions.
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ Description: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, it triggers the deletion and returns immediately without waiting for the deletion to complete. When set to `false`, a timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.",
+ },
},
Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(1 * time.Hour),
@@ -130,19 +138,18 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(fmt.Errorf(errorPrivateLinkEndpointsCreate, err))
}
- stateConf := &retry.StateChangeConf{
- Pending: []string{"INITIATING", "DELETING"},
- Target: []string{"WAITING_FOR_USER", "FAILED", "DELETED", "AVAILABLE"},
- Refresh: refreshFunc(ctx, connV2, projectID, providerName, privateEndpoint.GetId()),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 5 * time.Second,
- Delay: 3 * time.Second,
+ stateConf := CreateStateChangeConfig(ctx, connV2, projectID, providerName, privateEndpoint.GetId(), d.Timeout(schema.TimeoutCreate))
+ _, errWait := stateConf.WaitForStateContext(ctx)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
}
-
- // Wait, catching any errors
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(errorPrivateLinkEndpointsCreate, err))
+ errWait = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, errWait, func(ctxCleanup context.Context) error {
+ _, errCleanup := connV2.PrivateEndpointServicesApi.DeletePrivateEndpointService(ctxCleanup, projectID, providerName, privateEndpoint.GetId()).Execute()
+ return errCleanup
+ })
+ if errWait != nil {
+ return diag.FromErr(fmt.Errorf(errorPrivateLinkEndpointsCreate, errWait))
}
d.SetId(conversion.EncodeStateID(map[string]string{
@@ -250,15 +257,7 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
log.Println("[INFO] Waiting for MongoDB Private Endpoints Connection to be destroyed")
- stateConf := &retry.StateChangeConf{
- Pending: []string{"DELETING"},
- Target: []string{"DELETED", "FAILED"},
- Refresh: refreshFunc(ctx, connV2, projectID, providerName, privateLinkID),
- Timeout: d.Timeout(schema.TimeoutDelete),
- MinTimeout: 5 * time.Second,
- Delay: 3 * time.Second,
- }
- // Wait, catching any errors
+ stateConf := DeleteStateChangeConfig(ctx, connV2, projectID, providerName, privateLinkID, d.Timeout(schema.TimeoutDelete))
_, err = stateConf.WaitForStateContext(ctx)
if err != nil {
return diag.FromErr(fmt.Errorf(errorPrivateLinkEndpointsDelete, privateLinkID, err))
@@ -329,3 +328,25 @@ func refreshFunc(ctx context.Context, client *admin.APIClient, projectID, provid
return p, status, nil
}
}
+
+func CreateStateChangeConfig(ctx context.Context, connV2 *admin.APIClient, projectID, providerName, privateLinkID string, timeout time.Duration) retry.StateChangeConf {
+ return retry.StateChangeConf{
+ Pending: []string{"INITIATING", "DELETING"},
+ Target: []string{"WAITING_FOR_USER", "FAILED", "DELETED", "AVAILABLE"},
+ Refresh: refreshFunc(ctx, connV2, projectID, providerName, privateLinkID),
+ Timeout: timeout,
+ MinTimeout: delayAndMinTimeout,
+ Delay: delayAndMinTimeout,
+ }
+}
+
+func DeleteStateChangeConfig(ctx context.Context, connV2 *admin.APIClient, projectID, providerName, privateLinkID string, timeout time.Duration) retry.StateChangeConf {
+ return retry.StateChangeConf{
+ Pending: []string{"DELETING"},
+ Target: []string{"DELETED", "FAILED"},
+ Refresh: refreshFunc(ctx, connV2, projectID, providerName, privateLinkID),
+ Timeout: timeout,
+ MinTimeout: delayAndMinTimeout,
+ Delay: delayAndMinTimeout,
+ }
+}
diff --git a/internal/service/privatelinkendpoint/resource_privatelink_endpoint_migration_test.go b/internal/service/privatelinkendpoint/resource_migration_test.go
similarity index 100%
rename from internal/service/privatelinkendpoint/resource_privatelink_endpoint_migration_test.go
rename to internal/service/privatelinkendpoint/resource_migration_test.go
diff --git a/internal/service/privatelinkendpoint/resource_privatelink_endpoint_test.go b/internal/service/privatelinkendpoint/resource_test.go
similarity index 85%
rename from internal/service/privatelinkendpoint/resource_privatelink_endpoint_test.go
rename to internal/service/privatelinkendpoint/resource_test.go
index c260939d7b..cd2c26472c 100644
--- a/internal/service/privatelinkendpoint/resource_privatelink_endpoint_test.go
+++ b/internal/service/privatelinkendpoint/resource_test.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"os"
+ "regexp"
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
@@ -117,6 +118,41 @@ func TestAccNetworkRSPrivateLinkEndpointGCP_basic(t *testing.T) {
})
}
+func TestAccPrivateLinkEndpoint_deleteOnCreateTimeout(t *testing.T) {
+ var (
+ projectID = acc.ProjectIDExecution(t)
+ region = "us-east-1"
+ providerName = "AWS"
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configDeleteOnCreateTimeout(projectID, providerName, region, "1s", true),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
+func configDeleteOnCreateTimeout(projectID, providerName, region, timeout string, deleteOnTimeout bool) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_privatelink_endpoint" "test" {
+ project_id = %[1]q
+ provider_name = %[2]q
+ region = %[3]q
+ delete_on_create_timeout = %[5]t
+
+ timeouts {
+ create = %[4]q
+ }
+ }
+ `, projectID, providerName, region, timeout, deleteOnTimeout)
+}
+
func importStateIDFunc(resourceName string) resource.ImportStateIdFunc {
return func(s *terraform.State) (string, error) {
rs, ok := s.RootModule().Resources[resourceName]
diff --git a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go
deleted file mode 100644
index 50eaa25d85..0000000000
--- a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless.go
+++ /dev/null
@@ -1,264 +0,0 @@
-package privatelinkendpointserverless
-
-import (
- "context"
- "errors"
- "fmt"
- "log"
- "strings"
- "time"
-
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-
- "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpoint"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointserviceserverless"
-)
-
-const (
- errorServerlessEndpointAdd = "error adding MongoDB Serverless PrivateLink Endpoint Connection(%s): %s"
- errorServerlessEndpointDelete = "error deleting MongoDB Serverless PrivateLink Endpoint Connection(%s): %s"
-)
-
-func Resource() *schema.Resource {
- return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- DeleteContext: resourceDelete,
- Importer: &schema.ResourceImporter{
- StateContext: resourceImport,
- },
- DeprecationMessage: fmt.Sprintf(constant.DeprecationResourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"),
- Schema: map[string]*schema.Schema{
- "project_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "instance_name": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "endpoint_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "provider_name": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- ValidateFunc: validation.StringInSlice([]string{"AWS", "AZURE"}, false),
- },
- "endpoint_service_name": {
- Type: schema.TypeString,
- Computed: true,
- },
- "private_link_service_resource_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "status": {
- Type: schema.TypeString,
- Computed: true,
- },
- },
- Timeouts: &schema.ResourceTimeout{
- Create: schema.DefaultTimeout(2 * time.Hour),
- Delete: schema.DefaultTimeout(2 * time.Hour),
- },
- }
-}
-
-func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- projectID := d.Get("project_id").(string)
- instanceName := d.Get("instance_name").(string)
-
- endPoint, _, err := connV2.ServerlessPrivateEndpointsApi.CreateServerlessPrivateEndpoint(ctx, projectID, instanceName, &admin.ServerlessTenantCreateRequest{}).Execute()
- if err != nil {
- return diag.Errorf(privatelinkendpointserviceserverless.ErrorServerlessServiceEndpointAdd, endPoint.GetCloudProviderEndpointId(), err)
- }
-
- stateConf := &retry.StateChangeConf{
- Pending: []string{"RESERVATION_REQUESTED", "INITIATING", "DELETING"},
- Target: []string{"RESERVED", "FAILED", "DELETED", "AVAILABLE"},
- Refresh: resourceRefreshFunc(ctx, connV2, projectID, instanceName, endPoint.GetId()),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 5 * time.Second,
- Delay: 5 * time.Second,
- }
-
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(errorServerlessEndpointAdd, err, endPoint.GetId()))
- }
-
- d.SetId(conversion.EncodeStateID(map[string]string{
- "project_id": projectID,
- "instance_name": instanceName,
- "endpoint_id": endPoint.GetId(),
- }))
-
- return resourceRead(ctx, d, meta)
-}
-
-func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- ids := conversion.DecodeStateID(d.Id())
- projectID := ids["project_id"]
- instanceName := ids["instance_name"]
- endpointID := ids["endpoint_id"]
-
- privateLinkResponse, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- // case 404/400: deleted in the backend case
- if strings.Contains(err.Error(), "404") || strings.Contains(err.Error(), "400") {
- d.SetId("")
- return nil
- }
-
- return diag.Errorf("error getting Serverless private link endpoint information: %s", err)
- }
-
- if err := d.Set("endpoint_id", privateLinkResponse.GetId()); err != nil {
- return diag.Errorf("error setting `endpoint_id` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("instance_name", instanceName); err != nil {
- return diag.Errorf("error setting `instance Name` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("endpoint_service_name", privateLinkResponse.GetEndpointServiceName()); err != nil {
- return diag.Errorf("error setting `endpoint_service_name Name` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("private_link_service_resource_id", privateLinkResponse.GetPrivateLinkServiceResourceId()); err != nil {
- return diag.Errorf("error setting `private_link_service_resource_id Name` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("status", privateLinkResponse.GetStatus()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpoint.ErrorPrivateLinkEndpointsSetting, "status", d.Id(), err))
- }
-
- return nil
-}
-
-func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- ids := conversion.DecodeStateID(d.Id())
- projectID := ids["project_id"]
- instanceName := ids["instance_name"]
- endpointID := ids["endpoint_id"]
-
- _, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- // case 404/400: deleted in the backend case
- if strings.Contains(err.Error(), "404") || strings.Contains(err.Error(), "400") {
- d.SetId("")
- return nil
- }
-
- return diag.Errorf("error getting Serverless private link endpoint information: %s", err)
- }
-
- _, err = connV2.ServerlessPrivateEndpointsApi.DeleteServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- return diag.Errorf("error deleting serverless private link endpoint(%s): %s", endpointID, err)
- }
-
- stateConf := &retry.StateChangeConf{
- Pending: []string{"DELETING"},
- Target: []string{"DELETED", "FAILED"},
- Refresh: resourceRefreshFunc(ctx, connV2, projectID, instanceName, endpointID),
- Timeout: d.Timeout(schema.TimeoutDelete),
- MinTimeout: 5 * time.Second,
- Delay: 5 * time.Second,
- }
- // Wait, catching any errors
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(errorServerlessEndpointDelete, endpointID, err))
- }
-
- return nil
-}
-
-func resourceImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
-
- parts := strings.SplitN(d.Id(), "--", 3)
- if len(parts) != 3 {
- return nil, errors.New("import format error: to import a search index, use the format {project_id}--{instance_name}--{endpoint_id}")
- }
-
- projectID := parts[0]
- instanceName := parts[1]
- endpointID := parts[2]
-
- privateLinkResponse, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
-		return nil, fmt.Errorf("couldn't import serverless private link endpoint (%s) in project (%s), error: %s", endpointID, projectID, err)
- }
-
- if err := d.Set("project_id", projectID); err != nil {
- log.Printf("[WARN] Error setting project_id for (%s): %s", projectID, err)
- }
-
- if err := d.Set("endpoint_id", endpointID); err != nil {
- log.Printf("[WARN] Error setting endpoint_id for (%s): %s", endpointID, err)
- }
- if err := d.Set("instance_name", instanceName); err != nil {
- log.Printf("[WARN] Error setting instance_name for (%s): %s", endpointID, err)
- }
-
- if err := d.Set("endpoint_service_name", privateLinkResponse.GetEndpointServiceName()); err != nil {
- log.Printf("[WARN] Error setting endpoint_service_name for (%s): %s", endpointID, err)
- }
-
- if privateLinkResponse.GetPrivateLinkServiceResourceId() != "" {
- if err := d.Set("provider_name", "AZURE"); err != nil {
- log.Printf("[WARN] Error setting provider_name for (%s): %s", endpointID, err)
- }
- } else {
- if err := d.Set("provider_name", "AWS"); err != nil {
- log.Printf("[WARN] Error setting provider_name for (%s): %s", endpointID, err)
- }
- }
-
- d.SetId(conversion.EncodeStateID(map[string]string{
- "project_id": projectID,
- "instance_name": instanceName,
- "endpoint_id": endpointID,
- }))
-
- return []*schema.ResourceData{d}, nil
-}
-
-func resourceRefreshFunc(ctx context.Context, client *admin.APIClient, projectID, instanceName, privateLinkID string) retry.StateRefreshFunc {
- return func() (any, string, error) {
- p, resp, err := client.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, privateLinkID).Execute()
- if err != nil {
- if validate.StatusNotFound(resp) || validate.StatusBadRequest(resp) {
- return "", "DELETED", nil
- }
- return nil, "REJECTED", err
- }
-
- status := p.GetStatus()
-
- if status != "WAITING_FOR_USER" {
- return "", status, nil
- }
-
- return p, status, nil
- }
-}
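The deleted `resourceRefreshFunc` above maps HTTP outcomes onto the statuses the `StateChangeConf` state machine consumes: a 404/400 lookup means the endpoint is gone (`DELETED`), any other error is surfaced as `REJECTED`, and otherwise the API status passes through. A minimal standalone sketch of that mapping (the `classify` helper and its `failed` flag are hypothetical stand-ins for the API call and response check):

```go
package main

import "fmt"

// classify mirrors the status mapping in resourceRefreshFunc:
// 404/400 on a failed lookup means the endpoint no longer exists,
// any other failure is treated as REJECTED, and on success the
// API-reported status is passed through unchanged.
func classify(status string, httpCode int, failed bool) string {
	if failed {
		if httpCode == 404 || httpCode == 400 {
			return "DELETED"
		}
		return "REJECTED"
	}
	return status
}

func main() {
	fmt.Println(classify("", 404, true))           // endpoint already gone
	fmt.Println(classify("", 500, true))           // unexpected failure
	fmt.Println(classify("AVAILABLE", 200, false)) // normal pass-through
}
```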
diff --git a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless_migration_test.go b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless_migration_test.go
deleted file mode 100644
index c659f9cd15..0000000000
--- a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless_migration_test.go
+++ /dev/null
@@ -1,13 +0,0 @@
-package privatelinkendpointserverless_test
-
-import (
- "testing"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
-)
-
-func TestMigServerlessPrivateLinkEndpoint_basic(t *testing.T) {
-	acc.SkipTestForCI(t) // mongodbatlas_serverless_instance now creates Flex clusters
- mig.CreateAndRunTest(t, basicTestCase(t))
-}
diff --git a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless_test.go b/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless_test.go
deleted file mode 100644
index d444b2982a..0000000000
--- a/internal/service/privatelinkendpointserverless/resource_privatelink_endpoint_serverless_test.go
+++ /dev/null
@@ -1,109 +0,0 @@
-package privatelinkendpointserverless_test
-
-import (
- "context"
- "fmt"
- "testing"
-
- "github.com/hashicorp/terraform-plugin-testing/helper/resource"
- "github.com/hashicorp/terraform-plugin-testing/terraform"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
-)
-
-const (
- resourceName = "mongodbatlas_privatelink_endpoint_serverless.test"
-)
-
-func TestAccServerlessPrivateLinkEndpoint_basic(t *testing.T) {
-	acc.SkipTestForCI(t) // mongodbatlas_serverless_instance now creates Flex clusters
- resource.ParallelTest(t, *basicTestCase(t))
-}
-
-func basicTestCase(tb testing.TB) *resource.TestCase {
- tb.Helper()
-
- var (
- projectID = acc.ProjectIDExecution(tb)
- instanceName = acc.RandomClusterName()
- )
-
- return &resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(tb) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- Config: configBasic(projectID, instanceName, true),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "instance_name", instanceName),
- ),
- },
- {
- Config: configBasic(projectID, instanceName, false),
- ResourceName: resourceName,
- ImportStateIdFunc: importStateIDFunc(resourceName),
- ImportState: true,
- ImportStateVerify: true,
- ImportStateVerifyIgnore: []string{"connection_strings_private_endpoint_srv"},
- },
- },
- }
-}
-
-func checkDestroy(state *terraform.State) error {
- for _, rs := range state.RootModule().Resources {
- if rs.Type != "mongodbatlas_privatelink_endpoint_serverless" {
- continue
- }
- ids := conversion.DecodeStateID(rs.Primary.ID)
- privateLink, _, err := acc.ConnV2().ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(context.Background(), ids["project_id"], ids["instance_name"], ids["endpoint_id"]).Execute()
- if err == nil && privateLink != nil {
- return fmt.Errorf("endpoint_id (%s) still exists", ids["endpoint_id"])
- }
- }
- return nil
-}
-
-func configBasic(projectID, instanceName string, ignoreConnectionStrings bool) string {
- return fmt.Sprintf(`
-
- resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = mongodbatlas_serverless_instance.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AWS"
- }
-
- %s
- `, acc.ConfigServerlessInstance(projectID, instanceName, ignoreConnectionStrings, nil, nil))
-}
-
-func checkExists(resourceName string) resource.TestCheckFunc {
- return func(s *terraform.State) error {
- rs, ok := s.RootModule().Resources[resourceName]
- if !ok {
- return fmt.Errorf("not found: %s", resourceName)
- }
- if rs.Primary.ID == "" {
- return fmt.Errorf("no ID is set")
- }
- ids := conversion.DecodeStateID(rs.Primary.ID)
- _, _, err := acc.ConnV2().ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(context.Background(), ids["project_id"], ids["instance_name"], ids["endpoint_id"]).Execute()
- if err == nil {
- return nil
- }
- return fmt.Errorf("endpoint_id (%s) does not exist", ids["endpoint_id"])
- }
-}
-
-func importStateIDFunc(resourceName string) resource.ImportStateIdFunc {
- return func(s *terraform.State) (string, error) {
- rs, ok := s.RootModule().Resources[resourceName]
- if !ok {
- return "", fmt.Errorf("not found: %s", resourceName)
- }
- ids := conversion.DecodeStateID(rs.Primary.ID)
- return fmt.Sprintf("%s--%s--%s", ids["project_id"], ids["instance_name"], ids["endpoint_id"]), nil
- }
-}
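Both the deleted `resourceImport` and `importStateIDFunc` above round-trip an import ID of the form `{project_id}--{instance_name}--{endpoint_id}` via `strings.SplitN`. A self-contained sketch of that parsing (the `parseImportID` helper name is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseImportID splits a Terraform import ID of the form
// {project_id}--{instance_name}--{endpoint_id} into its parts,
// mirroring the SplitN logic in resourceImport above.
func parseImportID(id string) (projectID, instanceName, endpointID string, err error) {
	parts := strings.SplitN(id, "--", 3)
	if len(parts) != 3 {
		return "", "", "", errors.New("import format error: use {project_id}--{instance_name}--{endpoint_id}")
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	p, i, e, err := parseImportID("5f3dproj--my-instance--vpce-123")
	if err != nil {
		panic(err)
	}
	fmt.Println(p, i, e)
}
```

Because `SplitN` with a limit of 3 stops splitting after two separators, an endpoint ID that itself contains `--` would survive intact in the third part.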
diff --git a/internal/service/privatelinkendpointservice/data_source_privatelink_endpoint_service.go b/internal/service/privatelinkendpointservice/data_source.go
similarity index 100%
rename from internal/service/privatelinkendpointservice/data_source_privatelink_endpoint_service.go
rename to internal/service/privatelinkendpointservice/data_source.go
diff --git a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go b/internal/service/privatelinkendpointservice/resource.go
similarity index 86%
rename from internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go
rename to internal/service/privatelinkendpointservice/resource.go
index edd81f28a2..53983a5bfc 100644
--- a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service.go
+++ b/internal/service/privatelinkendpointservice/resource.go
@@ -12,6 +12,8 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -24,13 +26,15 @@ const (
ErrorServiceEndpointRead = "error reading MongoDB Private Service Endpoint Connection(%s): %s"
errorEndpointDelete = "error deleting MongoDB Private Service Endpoint Connection(%s): %s"
ErrorEndpointSetting = "error setting `%s` for MongoDB Private Service Endpoint Connection(%s): %s"
+ oneMinute = 1 * time.Minute
+ delayAndMinTimeout = 10 * time.Second
)
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadWithoutTimeout: resourceRead,
- DeleteContext: resourceDelete,
+ CreateWithoutTimeout: resourceCreate,
+ ReadWithoutTimeout: resourceRead,
+ DeleteWithoutTimeout: resourceDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceImportState,
},
@@ -100,6 +104,12 @@ func Resource() *schema.Resource {
ForceNew: true,
ConflictsWith: []string{"private_endpoint_ip_address"},
},
+ "delete_on_create_timeout": { // Don't use Default: true to avoid unplanned changes when upgrading from previous versions.
+ Type: schema.TypeBool,
+ Optional: true,
+ ForceNew: true,
+ Description: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and timeout occurs, it triggers the deletion and returns immediately without waiting for deletion to complete. When set to `false`, the timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.",
+ },
"endpoints": {
Type: schema.TypeList,
Optional: true,
@@ -173,23 +183,31 @@ func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.
Pending: []string{"NONE", "INITIATING", "PENDING_ACCEPTANCE", "PENDING", "DELETING", "VERIFIED"},
Target: []string{"AVAILABLE", "REJECTED", "DELETED", "FAILED"},
Refresh: resourceRefreshFunc(ctx, connV2, projectID, providerName, privateLinkID, endpointServiceID),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 5 * time.Second,
- Delay: 1 * time.Minute,
+		Timeout:    d.Timeout(schema.TimeoutCreate) - oneMinute, // When a CRUD function has a timeout, every StateChangeConf timeout must be set below that duration so the retry logic's error is returned instead of the SDK's `context deadline exceeded` error.
+ MinTimeout: delayAndMinTimeout,
+ Delay: delayAndMinTimeout,
}
// Wait, catching any errors
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(errorServiceEndpointAdd, endpointServiceID, privateLinkID, err))
+ _, errWait := stateConf.WaitForStateContext(ctx)
+ deleteOnCreateTimeout := true // default value when not set
+ if v, ok := d.GetOkExists("delete_on_create_timeout"); ok {
+ deleteOnCreateTimeout = v.(bool)
+ }
+ errWait = cleanup.HandleCreateTimeout(deleteOnCreateTimeout, errWait, func(ctxCleanup context.Context) error {
+ _, errCleanup := connV2.PrivateEndpointServicesApi.DeletePrivateEndpoint(ctxCleanup, projectID, providerName, endpointServiceID, privateLinkID).Execute()
+ return errCleanup
+ })
+ if errWait != nil {
+ return diag.FromErr(fmt.Errorf(errorServiceEndpointAdd, endpointServiceID, privateLinkID, errWait))
}
clusterConf := &retry.StateChangeConf{
Pending: []string{"REPEATING", "PENDING"},
Target: []string{"IDLE", "DELETED"},
Refresh: advancedcluster.ResourceClusterListAdvancedRefreshFunc(ctx, projectID, connV2.ClustersApi),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 5 * time.Second,
- Delay: 1 * time.Minute,
+		Timeout:    d.Timeout(schema.TimeoutCreate) - oneMinute, // When a CRUD function has a timeout, every StateChangeConf timeout must be set below that duration so the retry logic's error is returned instead of the SDK's `context deadline exceeded` error.
+ MinTimeout: delayAndMinTimeout,
+ Delay: delayAndMinTimeout,
}
if _, err = clusterConf.WaitForStateContext(ctx); err != nil {
@@ -299,8 +317,8 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
Target: []string{"REJECTED", "DELETED", "FAILED"},
Refresh: resourceRefreshFunc(ctx, connV2, projectID, providerName, privateLinkID, endpointServiceID),
Timeout: d.Timeout(schema.TimeoutDelete),
- MinTimeout: 5 * time.Second,
- Delay: 3 * time.Second,
+ MinTimeout: delayAndMinTimeout,
+ Delay: delayAndMinTimeout,
}
// Wait, catching any errors
@@ -314,8 +332,8 @@ func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.
Target: []string{"IDLE", "DELETED"},
Refresh: advancedcluster.ResourceClusterListAdvancedRefreshFunc(ctx, projectID, connV2.ClustersApi),
Timeout: d.Timeout(schema.TimeoutDelete),
- MinTimeout: 5 * time.Second,
- Delay: 1 * time.Minute,
+ MinTimeout: delayAndMinTimeout,
+ Delay: delayAndMinTimeout,
}
if _, err = clusterConf.WaitForStateContext(ctx); err != nil {
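The `delete_on_create_timeout` handling added above defaults to `true` when the attribute is unset, because `Default: true` in the schema would cause unplanned changes for users upgrading from older provider versions. The resolution pattern can be sketched independently of the SDK (the `resolveBool` helper is hypothetical, standing in for the `GetOkExists` branch):

```go
package main

import "fmt"

// resolveBool returns the configured value when the attribute was set,
// otherwise the fallback default. This mirrors the GetOkExists pattern
// used for delete_on_create_timeout, where a schema-level Default: true
// is avoided to prevent spurious diffs on provider upgrade.
func resolveBool(value any, set bool, def bool) bool {
	if set {
		return value.(bool)
	}
	return def
}

func main() {
	fmt.Println(resolveBool(false, true, true)) // user explicitly disabled cleanup
	fmt.Println(resolveBool(nil, false, true))  // attribute unset: default true
}
```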
diff --git a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service_migration_test.go b/internal/service/privatelinkendpointservice/resource_migration_test.go
similarity index 100%
rename from internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service_migration_test.go
rename to internal/service/privatelinkendpointservice/resource_migration_test.go
diff --git a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service_test.go b/internal/service/privatelinkendpointservice/resource_test.go
similarity index 82%
rename from internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service_test.go
rename to internal/service/privatelinkendpointservice/resource_test.go
index 80dcec94df..e02a0fd119 100644
--- a/internal/service/privatelinkendpointservice/resource_privatelink_endpoint_service_test.go
+++ b/internal/service/privatelinkendpointservice/resource_test.go
@@ -42,6 +42,30 @@ func TestAccNetworkRSPrivateLinkEndpointServiceAWS_Failed(t *testing.T) {
})
}
+func TestAccNetworkRSPrivateLinkEndpointService_deleteOnCreateTimeout(t *testing.T) {
+ var (
+ resourceSuffix = "test"
+ providerName = "AWS"
+ region = os.Getenv("AWS_REGION")
+ // Create private link endpoint outside of test configuration to avoid cleanup issues
+ projectID, privateLinkEndpointID = acc.PrivateLinkEndpointIDExecution(t, providerName, region)
+ )
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ CheckDestroy: checkDestroy,
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Steps: []resource.TestStep{
+ {
+ Config: configDeleteOnCreateTimeoutWithExistingEndpoint(
+ projectID, providerName, privateLinkEndpointID, resourceSuffix, "1s", true,
+ ),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ })
+}
+
func basicAWSTestCase(tb testing.TB) *resource.TestCase {
tb.Helper()
acc.SkipTestForCI(tb) // needs AWS configuration
@@ -189,3 +213,19 @@ func configFailAWS(projectID, providerName, region, resourceSuffix string) strin
}
`, projectID, providerName, region, resourceSuffix)
}
+
+func configDeleteOnCreateTimeoutWithExistingEndpoint(projectID, providerName, privateLinkEndpointID, resourceSuffix, timeout string, deleteOnTimeout bool) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_privatelink_endpoint_service" %[4]q {
+ project_id = %[1]q
+ private_link_id = %[3]q
+ endpoint_service_id = "vpce-11111111111111111"
+ provider_name = %[2]q
+ delete_on_create_timeout = %[6]t
+
+ timeouts {
+ create = %[5]q
+ }
+ }
+ `, projectID, providerName, privateLinkEndpointID, resourceSuffix, timeout, deleteOnTimeout)
+}
diff --git a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go
deleted file mode 100644
index e69fbe840b..0000000000
--- a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoint_service_serverless.go
+++ /dev/null
@@ -1,114 +0,0 @@
-package privatelinkendpointserviceserverless
-
-import (
- "context"
- "fmt"
-
- "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpointservice"
-)
-
-func DataSource() *schema.Resource {
- return &schema.Resource{
- ReadContext: dataSourceRead,
- DeprecationMessage: fmt.Sprintf(constant.DeprecationDataSourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"),
- Schema: map[string]*schema.Schema{
- "project_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "instance_name": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "endpoint_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "comment": {
- Type: schema.TypeString,
- Computed: true,
- },
- "endpoint_service_name": {
- Type: schema.TypeString,
- Computed: true,
- },
- "cloud_provider_endpoint_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "private_link_service_resource_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "private_endpoint_ip_address": {
- Type: schema.TypeString,
- Computed: true,
- },
- "error_message": {
- Type: schema.TypeString,
- Computed: true,
- },
- "status": {
- Type: schema.TypeString,
- Computed: true,
- },
- },
- }
-}
-
-func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
-
- projectID := d.Get("project_id").(string)
- instanceName := d.Get("instance_name").(string)
- endpointID := d.Get("endpoint_id").(string)
-
- serviceEndpoint, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorServiceEndpointRead, endpointID, err))
- }
-
- if err := d.Set("error_message", serviceEndpoint.GetErrorMessage()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "error_message", endpointID, err))
- }
-
- if err := d.Set("status", serviceEndpoint.GetStatus()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "status", endpointID, err))
- }
-
- if err := d.Set("comment", serviceEndpoint.GetComment()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "comment", endpointID, err))
- }
-
- if err := d.Set("endpoint_service_name", serviceEndpoint.GetEndpointServiceName()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "endpoint_service_name", endpointID, err))
- }
-
- if err := d.Set("cloud_provider_endpoint_id", serviceEndpoint.GetCloudProviderEndpointId()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "cloud_provider_endpoint_id", endpointID, err))
- }
-
- if err := d.Set("private_link_service_resource_id", serviceEndpoint.GetPrivateLinkServiceResourceId()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "private_link_service_resource_id", endpointID, err))
- }
-
- if err := d.Set("private_endpoint_ip_address", serviceEndpoint.GetPrivateEndpointIpAddress()); err != nil {
- return diag.FromErr(fmt.Errorf(privatelinkendpointservice.ErrorEndpointSetting, "private_endpoint_ip_address", endpointID, err))
- }
-
- d.SetId(conversion.EncodeStateID(map[string]string{
- "project_id": projectID,
- "instance_name": instanceName,
- "endpoint_id": endpointID,
- }))
-
- return nil
-}
diff --git a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go
deleted file mode 100644
index 12e34648b8..0000000000
--- a/internal/service/privatelinkendpointserviceserverless/data_source_privatelink_endpoints_service_serverless.go
+++ /dev/null
@@ -1,114 +0,0 @@
-package privatelinkendpointserviceserverless
-
-import (
- "context"
- "fmt"
-
- "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-)
-
-func PluralDataSource() *schema.Resource {
- return &schema.Resource{
- ReadContext: dataSourcePluralRead,
- DeprecationMessage: fmt.Sprintf(constant.DeprecationDataSourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"),
- Schema: map[string]*schema.Schema{
- "project_id": {
- Type: schema.TypeString,
- Required: true,
- },
- "instance_name": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "results": {
- Type: schema.TypeList,
- Computed: true,
- Elem: &schema.Resource{
- Schema: map[string]*schema.Schema{
- "cloud_provider_endpoint_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "comment": {
- Type: schema.TypeString,
- Computed: true,
- },
- "endpoint_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "endpoint_service_name": {
- Type: schema.TypeString,
- Computed: true,
- },
- "private_link_service_resource_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "private_endpoint_ip_address": {
- Type: schema.TypeString,
- Computed: true,
- },
- "error_message": {
- Type: schema.TypeString,
- Computed: true,
- },
- "status": {
- Type: schema.TypeString,
- Computed: true,
- },
- },
- },
- },
- },
- }
-}
-
-func dataSourcePluralRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- projectID := d.Get("project_id").(string)
- instanceName := d.Get("instance_name").(string)
-
- privateLinkEndpoints, _, err := connV2.ServerlessPrivateEndpointsApi.ListServerlessPrivateEndpoint(ctx, projectID, instanceName).Execute()
- if err != nil {
- return diag.Errorf("error getting Serverless PrivateLink Endpoints Information: %s", err)
- }
-
- if err := d.Set("results", flattenServerlessPrivateLinkEndpoints(privateLinkEndpoints)); err != nil {
- return diag.Errorf("error setting `results`: %s", err)
- }
-
- d.SetId(id.UniqueId())
-
- return nil
-}
-
-func flattenServerlessPrivateLinkEndpoints(privateLinks []admin.ServerlessTenantEndpoint) []map[string]any {
- if len(privateLinks) == 0 {
- return nil
- }
-
- results := make([]map[string]any, len(privateLinks))
-
- for k := range privateLinks {
- results[k] = map[string]any{
- "endpoint_id": privateLinks[k].GetId(),
- "endpoint_service_name": privateLinks[k].GetEndpointServiceName(),
- "cloud_provider_endpoint_id": privateLinks[k].GetCloudProviderEndpointId(),
- "private_link_service_resource_id": privateLinks[k].GetPrivateLinkServiceResourceId(),
- "private_endpoint_ip_address": privateLinks[k].GetPrivateEndpointIpAddress(),
- "comment": privateLinks[k].GetComment(),
- "error_message": privateLinks[k].GetErrorMessage(),
- "status": privateLinks[k].GetStatus(),
- }
- }
-
- return results
-}
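The deleted `flattenServerlessPrivateLinkEndpoints` above converts SDK objects into the `[]map[string]any` shape that `schema.TypeList` expects, returning `nil` for empty input. A self-contained sketch of the same pattern with a stand-in struct (the local `endpoint` type is hypothetical, standing in for `admin.ServerlessTenantEndpoint`):

```go
package main

import "fmt"

// endpoint is a stand-in for admin.ServerlessTenantEndpoint.
type endpoint struct {
	ID     string
	Status string
}

// flatten converts SDK objects into the []map[string]any shape Terraform's
// schema.TypeList expects; returning nil for empty input matches the
// original helper, so an absent list is stored as unset rather than empty.
func flatten(eps []endpoint) []map[string]any {
	if len(eps) == 0 {
		return nil
	}
	results := make([]map[string]any, len(eps))
	for k, e := range eps {
		results[k] = map[string]any{
			"endpoint_id": e.ID,
			"status":      e.Status,
		}
	}
	return results
}

func main() {
	fmt.Println(flatten([]endpoint{{ID: "ep-1", Status: "AVAILABLE"}}))
}
```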
diff --git a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go
deleted file mode 100644
index e88042fe81..0000000000
--- a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless.go
+++ /dev/null
@@ -1,356 +0,0 @@
-package privatelinkendpointserviceserverless
-
-import (
- "context"
- "errors"
- "fmt"
- "log"
- "strings"
- "time"
-
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
-
- "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-)
-
-const (
- ErrorServerlessServiceEndpointAdd = "error adding MongoDB Serverless PrivateLink Endpoint Connection(%s): %s"
- errorServerlessServiceEndpointDelete = "error deleting MongoDB Serverless PrivateLink Endpoint Connection(%s): %s"
- errorServerlessInstanceListStatus = "error awaiting serverless instance list status IDLE: %s"
-)
-
-func Resource() *schema.Resource {
- return &schema.Resource{
- CreateWithoutTimeout: resourceCreate,
- ReadWithoutTimeout: resourceRead,
- DeleteWithoutTimeout: resourceDelete,
- UpdateWithoutTimeout: resourceUpdate,
- Importer: &schema.ResourceImporter{
- StateContext: resourceImport,
- },
- DeprecationMessage: fmt.Sprintf(constant.DeprecationResourceByDateWithExternalLink, "March 2025", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/serverless-shared-migration-guide"),
- Schema: map[string]*schema.Schema{
- "project_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "instance_name": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "endpoint_id": {
- Type: schema.TypeString,
- Required: true,
- ForceNew: true,
- },
- "comment": {
- Type: schema.TypeString,
- Optional: true,
- },
- "provider_name": {
- Type: schema.TypeString,
- Required: true,
- ValidateFunc: validation.StringInSlice([]string{"AWS", "AZURE"}, false),
- ForceNew: true,
- },
- "cloud_provider_endpoint_id": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- Computed: true,
- },
- "private_link_service_resource_id": {
- Type: schema.TypeString,
- Computed: true,
- },
- "private_endpoint_ip_address": {
- Type: schema.TypeString,
- Optional: true,
- ForceNew: true,
- Computed: true,
- },
- "status": {
- Type: schema.TypeString,
- Computed: true,
- },
- },
- Timeouts: &schema.ResourceTimeout{
- Create: schema.DefaultTimeout(2 * time.Hour),
- Delete: schema.DefaultTimeout(2 * time.Hour),
- },
- }
-}
-
-func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- projectID := d.Get("project_id").(string)
- instanceName := d.Get("instance_name").(string)
- endpointID := d.Get("endpoint_id").(string)
-
- _, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- return diag.Errorf("error getting Serverless PrivateLink Endpoint Information: %s", err)
- }
-
- updateRequest := admin.ServerlessTenantEndpointUpdate{
- Comment: conversion.StringPtr(d.Get("comment").(string)),
- ProviderName: d.Get("provider_name").(string),
- CloudProviderEndpointId: conversion.StringPtr(d.Get("cloud_provider_endpoint_id").(string)),
- PrivateEndpointIpAddress: conversion.StringPtr(d.Get("private_endpoint_ip_address").(string)),
- }
-
- endPoint, _, err := connV2.ServerlessPrivateEndpointsApi.UpdateServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID, &updateRequest).Execute()
- if err != nil {
- return diag.Errorf(ErrorServerlessServiceEndpointAdd, endpointID, err)
- }
-
- stateConf := &retry.StateChangeConf{
- Pending: []string{"RESERVATION_REQUESTED", "INITIATING", "DELETING"},
- Target: []string{"RESERVED", "FAILED", "DELETED", "AVAILABLE"},
- Refresh: resourceRefreshFunc(ctx, connV2, projectID, instanceName, endpointID),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 5 * time.Second,
- Delay: 5 * time.Minute,
- }
-
- _, err = stateConf.WaitForStateContext(ctx)
- if err != nil {
- return diag.FromErr(fmt.Errorf(ErrorServerlessServiceEndpointAdd, endpointID, err))
- }
-
- clusterConf := &retry.StateChangeConf{
- Pending: []string{"REPEATING", "PENDING"},
- Target: []string{"IDLE", "DELETED"},
- Refresh: resourceListRefreshFunc(ctx, projectID, connV2),
- Timeout: d.Timeout(schema.TimeoutCreate),
- MinTimeout: 5 * time.Second,
- Delay: 5 * time.Minute,
- }
-
- if _, err = clusterConf.WaitForStateContext(ctx); err != nil {
- // error awaiting serverless instances to IDLE should not result in failure to apply changes to this resource
- log.Printf(errorServerlessInstanceListStatus, err)
- }
-
- d.SetId(conversion.EncodeStateID(map[string]string{
- "project_id": projectID,
- "instance_name": instanceName,
- "endpoint_id": endPoint.GetId(),
- }))
-
- return resourceRead(ctx, d, meta)
-}
-
-func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- if !d.HasChange("comment") {
- return resourceRead(ctx, d, meta)
- }
-
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- projectID := d.Get("project_id").(string)
- instanceName := d.Get("instance_name").(string)
- endpointID := d.Get("endpoint_id").(string)
-
- // only "comment" attribute update is supported, updating other attributes forces replacement of this resource
- updateRequest := admin.ServerlessTenantEndpointUpdate{
- Comment: conversion.StringPtr(d.Get("comment").(string)),
- ProviderName: d.Get("provider_name").(string),
- }
-
- endPoint, _, err := connV2.ServerlessPrivateEndpointsApi.UpdateServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID, &updateRequest).Execute()
- if err != nil {
- return diag.Errorf(ErrorServerlessServiceEndpointAdd, endpointID, err)
- }
-
- d.SetId(conversion.EncodeStateID(map[string]string{
- "project_id": projectID,
- "instance_name": instanceName,
- "endpoint_id": endPoint.GetId(),
- }))
-
- return resourceRead(ctx, d, meta)
-}
-
-func resourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- ids := conversion.DecodeStateID(d.Id())
- projectID := ids["project_id"]
- instanceName := ids["instance_name"]
- endpointID := ids["endpoint_id"]
-
- privateLinkResponse, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
-		// 404/400 means the endpoint was already deleted in the backend
- if strings.Contains(err.Error(), "404") || strings.Contains(err.Error(), "400") {
- d.SetId("")
- return nil
- }
- return diag.Errorf("error getting Serverless private link endpoint information: %s", err)
- }
-
- privateLinkResponse.ProviderName = conversion.StringPtr(d.Get("provider_name").(string))
-
- if err := d.Set("endpoint_id", privateLinkResponse.GetId()); err != nil {
- return diag.Errorf("error setting `endpoint_id` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("instance_name", instanceName); err != nil {
- return diag.Errorf("error setting `instance_name` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("comment", privateLinkResponse.GetComment()); err != nil {
- return diag.Errorf("error setting `comment` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("provider_name", privateLinkResponse.GetProviderName()); err != nil {
- return diag.Errorf("error setting `provider_name` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("status", privateLinkResponse.GetStatus()); err != nil {
- return diag.Errorf("error setting `status` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("cloud_provider_endpoint_id", privateLinkResponse.GetCloudProviderEndpointId()); err != nil {
- return diag.Errorf("error setting `cloud_provider_endpoint_id` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("private_link_service_resource_id", privateLinkResponse.GetPrivateLinkServiceResourceId()); err != nil {
- return diag.Errorf("error setting `private_link_service_resource_id` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- if err := d.Set("private_endpoint_ip_address", privateLinkResponse.GetPrivateEndpointIpAddress()); err != nil {
- return diag.Errorf("error setting `private_endpoint_ip_address` for endpoint_id (%s): %s", d.Id(), err)
- }
-
- return nil
-}
-
-func resourceDelete(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
- ids := conversion.DecodeStateID(d.Id())
- projectID := ids["project_id"]
- instanceName := ids["instance_name"]
- endpointID := ids["endpoint_id"]
-
- _, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- // 404 (or 400): the endpoint was deleted in the backend
- if strings.Contains(err.Error(), "404") || strings.Contains(err.Error(), "400") {
- d.SetId("")
- return nil
- }
- return diag.Errorf(errorServerlessServiceEndpointDelete, endpointID, err)
- }
-
- d.SetId("") // Set to null as linked resource will delete serverless endpoint
-
- return nil
-}
-
-func resourceImport(ctx context.Context, d *schema.ResourceData, meta any) ([]*schema.ResourceData, error) {
- connV2 := meta.(*config.MongoDBClient).AtlasV2
-
- parts := strings.SplitN(d.Id(), "--", 3)
- if len(parts) != 3 {
- return nil, errors.New("import format error: to import a serverless private link endpoint, use the format {project_id}--{instance_name}--{endpoint_id}")
- }
-
- projectID := parts[0]
- instanceName := parts[1]
- endpointID := parts[2]
-
- privateLinkResponse, _, err := connV2.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointID).Execute()
- if err != nil {
- return nil, fmt.Errorf("couldn't import serverless private link endpoint (%s) in projectID (%s), error: %s", endpointID, projectID, err)
- }
-
- if err := d.Set("project_id", projectID); err != nil {
- log.Printf("[WARN] Error setting project_id for (%s): %s", projectID, err)
- }
-
- if err := d.Set("endpoint_id", endpointID); err != nil {
- log.Printf("[WARN] Error setting endpoint_id for (%s): %s", endpointID, err)
- }
-
- if err := d.Set("instance_name", instanceName); err != nil {
- log.Printf("[WARN] Error setting instance_name for (%s): %s", endpointID, err)
- }
-
- if privateLinkResponse.GetPrivateLinkServiceResourceId() != "" {
- if err := d.Set("provider_name", "AZURE"); err != nil {
- log.Printf("[WARN] Error setting provider_name for (%s): %s", endpointID, err)
- }
- } else {
- if err := d.Set("provider_name", "AWS"); err != nil {
- log.Printf("[WARN] Error setting provider_name for (%s): %s", endpointID, err)
- }
- }
-
- d.SetId(conversion.EncodeStateID(map[string]string{
- "project_id": projectID,
- "instance_name": instanceName,
- "endpoint_id": endpointID,
- }))
-
- return []*schema.ResourceData{d}, nil
-}
-
-func resourceRefreshFunc(ctx context.Context, client *admin.APIClient, projectID, instanceName, endpointServiceID string) retry.StateRefreshFunc {
- return func() (any, string, error) {
- serverlessTenantEndpoint, resp, err := client.ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(ctx, projectID, instanceName, endpointServiceID).Execute()
- if err != nil {
- if validate.StatusNotFound(resp) || validate.StatusBadRequest(resp) {
- return "", "DELETED", nil
- }
-
- return nil, "", err
- }
-
- if serverlessTenantEndpoint.GetStatus() != "AVAILABLE" {
- return "", serverlessTenantEndpoint.GetStatus(), nil
- }
- resultStatus := serverlessTenantEndpoint.GetStatus()
-
- return serverlessTenantEndpoint, resultStatus, nil
- }
-}
-
-func resourceListRefreshFunc(ctx context.Context, projectID string, client *admin.APIClient) retry.StateRefreshFunc {
- return func() (any, string, error) {
- serverlessInstances, resp, err := client.ServerlessInstancesApi.ListServerlessInstances(ctx, projectID).Execute()
-
- if err != nil && strings.Contains(err.Error(), "reset by peer") {
- return nil, "REPEATING", nil
- }
-
- if err != nil && serverlessInstances == nil && resp == nil {
- return nil, "", err
- } else if err != nil {
- if validate.StatusNotFound(resp) {
- return "", "DELETED", nil
- }
- if validate.StatusServiceUnavailable(resp) {
- return "", "PENDING", nil
- }
- return nil, "", err
- }
-
- for i := range serverlessInstances.GetResults() {
- if serverlessInstances.GetResults()[i].GetStateName() != "IDLE" {
- return serverlessInstances.GetResults()[i], "PENDING", nil
- }
- }
-
- return serverlessInstances, "IDLE", nil
- }
-}
diff --git a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless_migration_test.go b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless_migration_test.go
deleted file mode 100644
index 116c1d3038..0000000000
--- a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless_migration_test.go
+++ /dev/null
@@ -1,69 +0,0 @@
-package privatelinkendpointserviceserverless_test
-
-import (
- "os"
- "testing"
-
- "github.com/hashicorp/terraform-plugin-testing/helper/resource"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
-)
-
-func TestMigServerlessPrivateLinkEndpointService_basic(t *testing.T) {
- acc.SkipTestForCI(t) // mongodbatlas_serverless_instance now creates Flex clusters
- var (
- resourceName = "mongodbatlas_privatelink_endpoint_service_serverless.test"
- datasourceName = "data.mongodbatlas_privatelink_endpoint_service_serverless.test"
- projectID = acc.ProjectIDExecution(t)
- instanceName = acc.RandomClusterName()
- commentOrigin = "this is a comment for serverless private link endpoint"
- config = configBasic(projectID, instanceName, commentOrigin)
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { mig.PreCheckBasic(t) },
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- ExternalProviders: mig.ExternalProviders(),
- Config: config,
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "provider_name", "AWS"),
- resource.TestCheckResourceAttr(resourceName, "comment", commentOrigin),
- resource.TestCheckResourceAttr(datasourceName, "comment", commentOrigin),
- ),
- },
- mig.TestStepCheckEmptyPlan(config),
- },
- })
-}
-
-func TestMigServerlessPrivateLinkEndpointService_AWSVPC(t *testing.T) {
- acc.SkipTestForCI(t) // mongodbatlas_serverless_instance now creates Flex clusters
- var (
- resourceName = "mongodbatlas_privatelink_endpoint_service_serverless.test"
- projectID = acc.ProjectIDExecution(t)
- instanceName = acc.RandomClusterName()
- awsAccessKey = os.Getenv("AWS_ACCESS_KEY_ID")
- awsSecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
- config = configAWSEndpoint(projectID, instanceName, awsAccessKey, awsSecretKey, true, "test comment")
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { mig.PreCheckBasic(t) },
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- ExternalProviders: mig.ExternalProvidersWithAWS(),
- Config: config,
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "provider_name", "AWS"),
- ),
- },
- mig.TestStepCheckEmptyPlan(config),
- },
- })
-}
diff --git a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless_test.go b/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless_test.go
deleted file mode 100644
index 92c0f0795f..0000000000
--- a/internal/service/privatelinkendpointserviceserverless/resource_privatelink_endpoint_service_serverless_test.go
+++ /dev/null
@@ -1,335 +0,0 @@
-package privatelinkendpointserviceserverless_test
-
-import (
- "context"
- "fmt"
- "os"
- "testing"
-
- "github.com/hashicorp/terraform-plugin-testing/helper/resource"
- "github.com/hashicorp/terraform-plugin-testing/terraform"
-
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
-)
-
-func TestAccServerlessPrivateLinkEndpointService_basic(t *testing.T) {
- acc.SkipTestForCI(t) // mongodbatlas_serverless_instance now creates Flex clusters
- var (
- resourceName = "mongodbatlas_privatelink_endpoint_service_serverless.test"
- datasourceName = "data.mongodbatlas_privatelink_endpoint_service_serverless.test"
- datasourceEndpointsName = "data.mongodbatlas_privatelink_endpoints_service_serverless.test"
- projectID = acc.ProjectIDExecution(t)
- instanceName = acc.RandomClusterName()
- commentOrigin = "this is a comment for serverless private link endpoint"
- commentUpdated = "this is an updated comment for serverless private link endpoint"
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- Config: configBasic(projectID, instanceName, commentOrigin),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "provider_name", "AWS"),
- resource.TestCheckResourceAttr(resourceName, "comment", commentOrigin),
- resource.TestCheckResourceAttr(datasourceName, "comment", commentOrigin),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "project_id"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "results.#"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "instance_name"),
- ),
- },
- {
- Config: configBasic(projectID, instanceName, commentUpdated),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "provider_name", "AWS"),
- resource.TestCheckResourceAttr(resourceName, "comment", commentUpdated),
- resource.TestCheckResourceAttr(datasourceName, "comment", commentUpdated),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "project_id"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "results.#"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "instance_name"),
- ),
- },
- {
- Config: configBasic(projectID, instanceName, commentOrigin),
- ResourceName: resourceName,
- ImportStateIdFunc: importStateIDFunc(resourceName),
- ImportState: true,
- ImportStateVerify: true,
- },
- },
- })
-}
-
-func TestAccServerlessPrivateLinkEndpointService_AWSEndpointCommentUpdate(t *testing.T) {
- acc.SkipTestForCI(t) // mongodbatlas_serverless_instance now creates Flex clusters
- var (
- resourceName = "mongodbatlas_privatelink_endpoint_service_serverless.test"
- datasourceEndpointsName = "data.mongodbatlas_privatelink_endpoints_service_serverless.test"
- projectID = acc.ProjectIDExecution(t)
- instanceName = acc.RandomClusterName()
- commentUpdated = "this is an updated comment for serverless private link endpoint"
- awsAccessKey = os.Getenv("AWS_ACCESS_KEY_ID")
- awsSecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckBasic(t) },
- ExternalProviders: acc.ExternalProvidersOnlyAWS(),
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: checkDestroy,
- Steps: []resource.TestStep{
- {
- Config: configAWSEndpoint(projectID, instanceName, awsAccessKey, awsSecretKey, false, ""),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "provider_name", "AWS"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "project_id"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "results.#"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "instance_name"),
- ),
- },
- {
- Config: configAWSEndpoint(projectID, instanceName, awsAccessKey, awsSecretKey, true, commentUpdated),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttr(resourceName, "provider_name", "AWS"),
- resource.TestCheckResourceAttr(resourceName, "comment", commentUpdated),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "project_id"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "results.#"),
- resource.TestCheckResourceAttrSet(datasourceEndpointsName, "instance_name"),
- ),
- },
- {
- Config: configAWSEndpoint(projectID, instanceName, awsAccessKey, awsSecretKey, true, commentUpdated),
- ResourceName: resourceName,
- ImportStateIdFunc: importStateIDFunc(resourceName),
- ImportState: true,
- ImportStateVerify: true,
- },
- },
- })
-}
-
-func checkDestroy(state *terraform.State) error {
- for _, rs := range state.RootModule().Resources {
- if rs.Type != "mongodbatlas_privatelink_endpoint_service_serverless" {
- continue
- }
- ids := conversion.DecodeStateID(rs.Primary.ID)
- privateLink, _, err := acc.ConnV2().ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(context.Background(), ids["project_id"], ids["instance_name"], ids["endpoint_id"]).Execute()
- if err == nil && privateLink != nil {
- return fmt.Errorf("endpoint_id (%s) still exists", ids["endpoint_id"])
- }
- }
- return nil
-}
-
-func configBasic(projectID, instanceName, comment string) string {
- return fmt.Sprintf(`
- resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = %[1]q
- instance_name = mongodbatlas_serverless_instance.test.name
- provider_name = "AWS"
- }
-
-
- resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_serverless.test.project_id
- instance_name = mongodbatlas_privatelink_endpoint_serverless.test.instance_name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- provider_name = "AWS"
- comment = %[3]q
- }
-
- resource "mongodbatlas_serverless_instance" "test" {
- project_id = %[1]q
- name = %[2]q
- provider_settings_backing_provider_name = "AWS"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST_1"
- continuous_backup_enabled = true
-
- lifecycle {
- ignore_changes = [connection_strings_private_endpoint_srv]
- }
- }
-
- data "mongodbatlas_serverless_instance" "test" {
- project_id = mongodbatlas_privatelink_endpoint_service_serverless.test.project_id
- name = mongodbatlas_serverless_instance.test.name
- }
-
- data "mongodbatlas_privatelink_endpoints_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_service_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- }
-
- data "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_service_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_service_serverless.test.endpoint_id
- }
-
- `, projectID, instanceName, comment)
-}
-
-func configAWSVPCEndpoint() string {
- return `
-
- # Create Primary VPC
-resource "aws_vpc" "primary" {
- cidr_block = "10.0.0.0/16"
- enable_dns_hostnames = true
- enable_dns_support = true
-}
-
-# Create IGW
-resource "aws_internet_gateway" "primary" {
- vpc_id = aws_vpc.primary.id
-}
-
-# Route Table
-resource "aws_route" "primary-internet_access" {
- route_table_id = aws_vpc.primary.main_route_table_id
- destination_cidr_block = "0.0.0.0/0"
- gateway_id = aws_internet_gateway.primary.id
-}
-
-# Subnet-A
-resource "aws_subnet" "primary-az1" {
- vpc_id = aws_vpc.primary.id
- cidr_block = "10.0.1.0/24"
- map_public_ip_on_launch = true
- availability_zone = "us-east-1a"
-}
-
-# Subnet-B
-resource "aws_subnet" "primary-az2" {
- vpc_id = aws_vpc.primary.id
- cidr_block = "10.0.2.0/24"
- map_public_ip_on_launch = false
- availability_zone = "us-east-1b"
-}
-
-resource "aws_security_group" "primary_default" {
- name_prefix = "default-"
- description = "Default security group for all instances in ${aws_vpc.primary.id}"
- vpc_id = aws_vpc.primary.id
- ingress {
- from_port = 0
- to_port = 0
- protocol = "tcp"
- cidr_blocks = [
- "0.0.0.0/0",
- ]
- }
- egress {
- from_port = 0
- to_port = 0
- protocol = "-1"
- cidr_blocks = ["0.0.0.0/0"]
- }
-}`
-}
-
-func configAWSEndpoint(projectID, instanceName, awsAccessKey, awsSecretKey string, updateComment bool, comment string) string {
- peServiceServerless := `resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- cloud_provider_endpoint_id = aws_vpc_endpoint.test.id
- provider_name = "AWS"
- }`
- if updateComment {
- peServiceServerless = fmt.Sprintf(`resource "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_id
- cloud_provider_endpoint_id = aws_vpc_endpoint.test.id
- provider_name = "AWS"
- comment = %[1]q
- }`, comment)
- }
-
- return fmt.Sprintf(`
- provider "aws" {
- region = "us-east-1"
- access_key = "%[5]s"
- secret_key = "%[6]s"
- }
-
- resource "mongodbatlas_serverless_instance" "test" {
- project_id = %[1]q
- name = %[2]q
- provider_settings_backing_provider_name = "AWS"
- provider_settings_provider_name = "SERVERLESS"
- provider_settings_region_name = "US_EAST_1"
- continuous_backup_enabled = true
- }
-
- resource "mongodbatlas_privatelink_endpoint_serverless" "test" {
- project_id = %[1]q
- provider_name = "AWS"
- instance_name = mongodbatlas_serverless_instance.test.name
- }
-
- # aws_vpc config
- %[3]s
-
- resource "aws_vpc_endpoint" "test" {
- vpc_id = aws_vpc.primary.id
- service_name = mongodbatlas_privatelink_endpoint_serverless.test.endpoint_service_name
- vpc_endpoint_type = "Interface"
- subnet_ids = [aws_subnet.primary-az1.id, aws_subnet.primary-az2.id]
- security_group_ids = [aws_security_group.primary_default.id]
- }
-
- %[4]s
-
- data "mongodbatlas_privatelink_endpoints_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_service_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- }
-
- data "mongodbatlas_privatelink_endpoint_service_serverless" "test" {
- project_id = mongodbatlas_privatelink_endpoint_service_serverless.test.project_id
- instance_name = mongodbatlas_serverless_instance.test.name
- endpoint_id = mongodbatlas_privatelink_endpoint_service_serverless.test.endpoint_id
- }
-
- `, projectID, instanceName, configAWSVPCEndpoint(), peServiceServerless, awsAccessKey, awsSecretKey)
-}
-
-func checkExists(resourceName string) resource.TestCheckFunc {
- return func(s *terraform.State) error {
- rs, ok := s.RootModule().Resources[resourceName]
- if !ok {
- return fmt.Errorf("not found: %s", resourceName)
- }
- if rs.Primary.ID == "" {
- return fmt.Errorf("no ID is set")
- }
- ids := conversion.DecodeStateID(rs.Primary.ID)
- _, _, err := acc.ConnV2().ServerlessPrivateEndpointsApi.GetServerlessPrivateEndpoint(context.Background(), ids["project_id"], ids["instance_name"], ids["endpoint_id"]).Execute()
- if err == nil {
- return nil
- }
- return fmt.Errorf("endpoint_id (%s) does not exist", ids["endpoint_id"])
- }
-}
-
-func importStateIDFunc(resourceName string) resource.ImportStateIdFunc {
- return func(s *terraform.State) (string, error) {
- rs, ok := s.RootModule().Resources[resourceName]
- if !ok {
- return "", fmt.Errorf("not found: %s", resourceName)
- }
- ids := conversion.DecodeStateID(rs.Primary.ID)
- return fmt.Sprintf("%s--%s--%s", ids["project_id"], ids["instance_name"], ids["endpoint_id"]), nil
- }
-}
diff --git a/internal/service/project/data_source_project.go b/internal/service/project/data_source_project.go
index 00521616b1..07e0c6d438 100644
--- a/internal/service/project/data_source_project.go
+++ b/internal/service/project/data_source_project.go
@@ -3,17 +3,13 @@ package project
import (
"context"
"fmt"
+ "net/http"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
- "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
"github.com/hashicorp/terraform-plugin-framework/datasource"
- "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
- "github.com/hashicorp/terraform-plugin-framework/path"
- "github.com/hashicorp/terraform-plugin-framework/schema/validator"
- "github.com/hashicorp/terraform-plugin-framework/types"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
@@ -32,156 +28,10 @@ type projectDS struct {
config.DSCommon
}
-type TFProjectDSModel struct {
- IPAddresses types.Object `tfsdk:"ip_addresses"`
- Created types.String `tfsdk:"created"`
- OrgID types.String `tfsdk:"org_id"`
- RegionUsageRestrictions types.String `tfsdk:"region_usage_restrictions"`
- ID types.String `tfsdk:"id"`
- Name types.String `tfsdk:"name"`
- ProjectID types.String `tfsdk:"project_id"`
- Tags types.Map `tfsdk:"tags"`
- Teams []*TFTeamDSModel `tfsdk:"teams"`
- Limits []*TFLimitModel `tfsdk:"limits"`
- ClusterCount types.Int64 `tfsdk:"cluster_count"`
- IsCollectDatabaseSpecificsStatisticsEnabled types.Bool `tfsdk:"is_collect_database_specifics_statistics_enabled"`
- IsRealtimePerformancePanelEnabled types.Bool `tfsdk:"is_realtime_performance_panel_enabled"`
- IsSchemaAdvisorEnabled types.Bool `tfsdk:"is_schema_advisor_enabled"`
- IsPerformanceAdvisorEnabled types.Bool `tfsdk:"is_performance_advisor_enabled"`
- IsExtendedStorageSizesEnabled types.Bool `tfsdk:"is_extended_storage_sizes_enabled"`
- IsDataExplorerEnabled types.Bool `tfsdk:"is_data_explorer_enabled"`
- IsSlowOperationThresholdingEnabled types.Bool `tfsdk:"is_slow_operation_thresholding_enabled"`
-}
-
-type TFTeamDSModel struct {
- TeamID types.String `tfsdk:"team_id"`
- RoleNames types.List `tfsdk:"role_names"`
-}
-
func (d *projectDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
- resp.Schema = schema.Schema{
- Attributes: map[string]schema.Attribute{
- "id": schema.StringAttribute{
- Computed: true,
- },
- "project_id": schema.StringAttribute{
- Optional: true,
- Validators: []validator.String{
- stringvalidator.ConflictsWith(path.MatchRoot("name")),
- },
- },
- "name": schema.StringAttribute{
- Optional: true,
- Validators: []validator.String{
- stringvalidator.ConflictsWith(path.MatchRoot("project_id")),
- },
- },
- "org_id": schema.StringAttribute{
- Computed: true,
- },
- "cluster_count": schema.Int64Attribute{
- Computed: true,
- },
- "created": schema.StringAttribute{
- Computed: true,
- },
- "is_collect_database_specifics_statistics_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_data_explorer_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_extended_storage_sizes_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_performance_advisor_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_realtime_performance_panel_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_schema_advisor_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_slow_operation_thresholding_enabled": schema.BoolAttribute{
- DeprecationMessage: constant.DeprecationParam, // added deprecation in CLOUDP-293855 because was deprecated in the doc
- Computed: true,
- },
- "region_usage_restrictions": schema.StringAttribute{
- Computed: true,
- },
- "teams": schema.ListNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "team_id": schema.StringAttribute{
- Computed: true,
- },
- "role_names": schema.ListAttribute{
- Computed: true,
- ElementType: types.StringType,
- },
- },
- },
- },
- "limits": schema.SetNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "name": schema.StringAttribute{
- Computed: true,
- },
- "value": schema.Int64Attribute{
- Computed: true,
- },
- "current_usage": schema.Int64Attribute{
- Computed: true,
- },
- "default_limit": schema.Int64Attribute{
- Computed: true,
- },
- "maximum_limit": schema.Int64Attribute{
- Computed: true,
- },
- },
- },
- },
- "ip_addresses": schema.SingleNestedAttribute{
- Computed: true,
- DeprecationMessage: fmt.Sprintf(constant.DeprecationParamWithReplacement, "mongodbatlas_project_ip_addresses data source"),
- Attributes: map[string]schema.Attribute{
- "services": schema.SingleNestedAttribute{
- Computed: true,
- Attributes: map[string]schema.Attribute{
- "clusters": schema.ListNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "cluster_name": schema.StringAttribute{
- Computed: true,
- },
- "inbound": schema.ListAttribute{
- ElementType: types.StringType,
- Computed: true,
- },
- "outbound": schema.ListAttribute{
- ElementType: types.StringType,
- Computed: true,
- },
- },
- },
- },
- },
- },
- },
- },
- "tags": schema.MapAttribute{
- ElementType: types.StringType,
- Computed: true,
- },
- },
- }
- conversion.UpdateSchemaDescription(&resp.Schema)
+ resp.Schema = conversion.DataSourceSchemaFromResource(ResourceSchema(ctx), &conversion.DataSourceSchemaRequest{
+ OverridenFields: dataSourceOverridenFields(),
+ })
}
func (d *projectDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
@@ -215,14 +65,22 @@ func (d *projectDS) Read(ctx context.Context, req datasource.ReadRequest, resp *
return
}
}
+ projectPropsParams := &PropsParams{
+ ProjectID: project.GetId(),
+ IsDataSource: true,
+ ProjectsAPI: connV2.ProjectsApi,
+ TeamsAPI: connV2.TeamsApi,
+ PerformanceAdvisorAPI: connV2.PerformanceAdvisorApi,
+ MongoDBCloudUsersAPI: connV2.MongoDBCloudUsersApi,
+ }
- projectProps, err := GetProjectPropsFromAPI(ctx, connV2.ProjectsApi, connV2.TeamsApi, connV2.PerformanceAdvisorApi, project.GetId(), &resp.Diagnostics)
+ projectProps, err := GetProjectPropsFromAPI(ctx, projectPropsParams, &resp.Diagnostics)
if err != nil {
resp.Diagnostics.AddError("error when getting project properties", fmt.Sprintf(ErrorProjectRead, project.GetId(), err.Error()))
return
}
- newProjectState, diags := NewTFProjectDataSourceModel(ctx, project, *projectProps)
+ newProjectState, diags := NewTFProjectDataSourceModel(ctx, project, projectProps)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
@@ -233,3 +91,11 @@ func (d *projectDS) Read(ctx context.Context, req datasource.ReadRequest, resp *
return
}
}
+
+func ListAllProjectUsers(ctx context.Context, projectID string, mongoDBCloudUsersAPI admin.MongoDBCloudUsersApi) ([]admin.GroupUserResponse, error) {
+ return dsschema.AllPages(ctx, func(ctx context.Context, pageNum int) (dsschema.PaginateResponse[admin.GroupUserResponse], *http.Response, error) {
+ request := mongoDBCloudUsersAPI.ListGroupUsers(ctx, projectID)
+ request = request.PageNum(pageNum)
+ return request.Execute()
+ })
+}
diff --git a/internal/service/project/data_source_projects.go b/internal/service/project/data_source_projects.go
index 24953d9713..5fb5bf1aaa 100644
--- a/internal/service/project/data_source_projects.go
+++ b/internal/service/project/data_source_projects.go
@@ -5,11 +5,11 @@ import (
"fmt"
"github.com/hashicorp/terraform-plugin-framework/datasource"
- "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
+
"github.com/hashicorp/terraform-plugin-framework/diag"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/id"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
@@ -32,153 +32,11 @@ type ProjectsDS struct {
config.DSCommon
}
-type tfProjectsDSModel struct {
- ID types.String `tfsdk:"id"`
- Results []*TFProjectDSModel `tfsdk:"results"`
- PageNum types.Int64 `tfsdk:"page_num"`
- ItemsPerPage types.Int64 `tfsdk:"items_per_page"`
- TotalCount types.Int64 `tfsdk:"total_count"`
-}
-
func (d *ProjectsDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
- resp.Schema = schema.Schema{
- Attributes: map[string]schema.Attribute{
- // https://github.com/hashicorp/terraform-plugin-testing/issues/84#issuecomment-1480006432
- "id": schema.StringAttribute{ // required by hashicorps terraform plugin testing framework
- DeprecationMessage: "Please use each project's id attribute instead",
- Computed: true,
- },
- "page_num": schema.Int64Attribute{
- Optional: true,
- },
- "items_per_page": schema.Int64Attribute{
- Optional: true,
- },
- "total_count": schema.Int64Attribute{
- Computed: true,
- },
- "results": schema.ListNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "id": schema.StringAttribute{
- Computed: true,
- },
- "org_id": schema.StringAttribute{
- Computed: true,
- },
- "project_id": schema.StringAttribute{
- Computed: true,
- },
- "name": schema.StringAttribute{
- Computed: true,
- },
- "cluster_count": schema.Int64Attribute{
- Computed: true,
- },
- "created": schema.StringAttribute{
- Computed: true,
- },
- "is_collect_database_specifics_statistics_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_data_explorer_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_extended_storage_sizes_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_performance_advisor_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_realtime_performance_panel_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_schema_advisor_enabled": schema.BoolAttribute{
- Computed: true,
- },
- "is_slow_operation_thresholding_enabled": schema.BoolAttribute{
- DeprecationMessage: constant.DeprecationParam, // added deprecation in CLOUDP-293855 because was deprecated in the doc
- Computed: true,
- },
- "region_usage_restrictions": schema.StringAttribute{
- Computed: true,
- },
- "teams": schema.ListNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "team_id": schema.StringAttribute{
- Computed: true,
- },
- "role_names": schema.ListAttribute{
- Computed: true,
- ElementType: types.StringType,
- },
- },
- },
- },
- "limits": schema.SetNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "name": schema.StringAttribute{
- Computed: true,
- },
- "value": schema.Int64Attribute{
- Computed: true,
- },
- "current_usage": schema.Int64Attribute{
- Computed: true,
- },
- "default_limit": schema.Int64Attribute{
- Computed: true,
- },
- "maximum_limit": schema.Int64Attribute{
- Computed: true,
- },
- },
- },
- },
- "ip_addresses": schema.SingleNestedAttribute{
- Computed: true,
- DeprecationMessage: fmt.Sprintf(constant.DeprecationParamWithReplacement, "mongodbatlas_project_ip_addresses data source"),
- Attributes: map[string]schema.Attribute{
- "services": schema.SingleNestedAttribute{
- Computed: true,
- Attributes: map[string]schema.Attribute{
- "clusters": schema.ListNestedAttribute{
- Computed: true,
- NestedObject: schema.NestedAttributeObject{
- Attributes: map[string]schema.Attribute{
- "cluster_name": schema.StringAttribute{
- Computed: true,
- },
- "inbound": schema.ListAttribute{
- ElementType: types.StringType,
- Computed: true,
- },
- "outbound": schema.ListAttribute{
- ElementType: types.StringType,
- Computed: true,
- },
- },
- },
- },
- },
- },
- },
- },
- "tags": schema.MapAttribute{
- ElementType: types.StringType,
- Computed: true,
- },
- },
- },
- },
- },
- }
- conversion.UpdateSchemaDescription(&resp.Schema)
+ resp.Schema = conversion.PluralDataSourceSchemaFromResource(ResourceSchema(ctx), &conversion.PluralDataSourceSchemaRequest{
+ OverridenFields: dataSourceOverridenFields(),
+ HasLegacyFields: true,
+ })
}
func (d *ProjectsDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
@@ -215,9 +73,19 @@ func populateProjectsDataSourceModel(ctx context.Context, connV2 *admin.APIClien
results := make([]*TFProjectDSModel, 0, len(input))
for i := range input {
project := input[i]
- projectProps, err := GetProjectPropsFromAPI(ctx, connV2.ProjectsApi, connV2.TeamsApi, connV2.PerformanceAdvisorApi, project.GetId(), &diagnostics)
+
+ projectPropsParams := &PropsParams{
+ ProjectID: project.GetId(),
+ IsDataSource: true,
+ ProjectsAPI: connV2.ProjectsApi,
+ TeamsAPI: connV2.TeamsApi,
+ PerformanceAdvisorAPI: connV2.PerformanceAdvisorApi,
+ MongoDBCloudUsersAPI: connV2.MongoDBCloudUsersApi,
+ }
+
+ projectProps, err := GetProjectPropsFromAPI(ctx, projectPropsParams, &diagnostics)
if err == nil { // if the project is still valid, e.g. could have just been deleted
- projectModel, diags := NewTFProjectDataSourceModel(ctx, &project, *projectProps)
+ projectModel, diags := NewTFProjectDataSourceModel(ctx, &project, projectProps)
diagnostics = append(diagnostics, diags...)
if projectModel != nil {
results = append(results, projectModel)
diff --git a/internal/service/project/model_project.go b/internal/service/project/model_project.go
index c2f668e344..d6e03a3afd 100644
--- a/internal/service/project/model_project.go
+++ b/internal/service/project/model_project.go
@@ -2,17 +2,169 @@ package project
import (
"context"
+ "fmt"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
+ "github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
+ "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
"github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/path"
+ "github.com/hashicorp/terraform-plugin-framework/schema/validator"
"github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
)
-func NewTFProjectDataSourceModel(ctx context.Context, project *admin.Group, projectProps AdditionalProperties) (*TFProjectDSModel, diag.Diagnostics) {
- ipAddressesModel, diags := NewTFIPAddressesModel(ctx, projectProps.IPAddresses)
+func UsersProjectSchema() schema.ListNestedAttribute {
+ return schema.ListNestedAttribute{
+ Computed: true,
+ NestedObject: schema.NestedAttributeObject{
+ Attributes: map[string]schema.Attribute{
+ "id": schema.StringAttribute{
+ Computed: true,
+ },
+ "org_membership_status": schema.StringAttribute{
+ Computed: true,
+ },
+ "roles": schema.SetAttribute{
+ Computed: true,
+ ElementType: types.StringType,
+ },
+ "username": schema.StringAttribute{
+ Computed: true,
+ },
+ "invitation_created_at": schema.StringAttribute{
+ Computed: true,
+ },
+ "invitation_expires_at": schema.StringAttribute{
+ Computed: true,
+ },
+ "inviter_username": schema.StringAttribute{
+ Computed: true,
+ },
+ "country": schema.StringAttribute{
+ Computed: true,
+ },
+ "created_at": schema.StringAttribute{
+ Computed: true,
+ },
+ "first_name": schema.StringAttribute{
+ Computed: true,
+ },
+ "last_auth": schema.StringAttribute{
+ Computed: true,
+ },
+ "last_name": schema.StringAttribute{
+ Computed: true,
+ },
+ "mobile_number": schema.StringAttribute{
+ Computed: true,
+ },
+ },
+ },
+ }
+}
+
+func dataSourceOverridenFields() map[string]schema.Attribute {
+ return map[string]schema.Attribute{
+ "name": schema.StringAttribute{
+ Optional: true,
+ Validators: []validator.String{
+ stringvalidator.ConflictsWith(path.MatchRoot("project_id")),
+ },
+ },
+ "project_id": schema.StringAttribute{
+ Optional: true,
+ Validators: []validator.String{
+ stringvalidator.ConflictsWith(path.MatchRoot("name")),
+ },
+ },
+ "users": UsersProjectSchema(),
+ "teams": schema.ListNestedAttribute{
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "parameter", "mongodbatlas_team_project_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ Computed: true,
+ NestedObject: schema.NestedAttributeObject{
+ Attributes: map[string]schema.Attribute{
+ "team_id": schema.StringAttribute{
+ Computed: true,
+ },
+ "role_names": schema.ListAttribute{
+ Computed: true,
+ ElementType: types.StringType,
+ },
+ },
+ },
+ },
+ "project_owner_id": nil,
+ "with_default_alerts_settings": nil,
+ }
+}
+
+type tfProjectsDSModel struct {
+ ID types.String `tfsdk:"id"`
+ Results []*TFProjectDSModel `tfsdk:"results"`
+ PageNum types.Int64 `tfsdk:"page_num"`
+ ItemsPerPage types.Int64 `tfsdk:"items_per_page"`
+ TotalCount types.Int64 `tfsdk:"total_count"`
+}
+
+type TFProjectDSModel struct {
+ Tags types.Map `tfsdk:"tags"`
+ IPAddresses types.Object `tfsdk:"ip_addresses"`
+ Created types.String `tfsdk:"created"`
+ OrgID types.String `tfsdk:"org_id"`
+ RegionUsageRestrictions types.String `tfsdk:"region_usage_restrictions"`
+ ID types.String `tfsdk:"id"`
+ Name types.String `tfsdk:"name"`
+ ProjectID types.String `tfsdk:"project_id"`
+ Teams []*TFTeamDSModel `tfsdk:"teams"`
+ Limits []*TFLimitModel `tfsdk:"limits"`
+ Users []*TFCloudUsersDSModel `tfsdk:"users"`
+ ClusterCount types.Int64 `tfsdk:"cluster_count"`
+ IsCollectDatabaseSpecificsStatisticsEnabled types.Bool `tfsdk:"is_collect_database_specifics_statistics_enabled"`
+ IsRealtimePerformancePanelEnabled types.Bool `tfsdk:"is_realtime_performance_panel_enabled"`
+ IsSchemaAdvisorEnabled types.Bool `tfsdk:"is_schema_advisor_enabled"`
+ IsPerformanceAdvisorEnabled types.Bool `tfsdk:"is_performance_advisor_enabled"`
+ IsExtendedStorageSizesEnabled types.Bool `tfsdk:"is_extended_storage_sizes_enabled"`
+ IsDataExplorerEnabled types.Bool `tfsdk:"is_data_explorer_enabled"`
+ IsSlowOperationThresholdingEnabled types.Bool `tfsdk:"is_slow_operation_thresholding_enabled"`
+}
+
+type TFTeamDSModel struct {
+ TeamID types.String `tfsdk:"team_id"`
+ RoleNames types.List `tfsdk:"role_names"`
+}
+
+type TFCloudUsersDSModel struct {
+ ID types.String `tfsdk:"id"`
+ OrgMembershipStatus types.String `tfsdk:"org_membership_status"`
+ Roles types.Set `tfsdk:"roles"`
+ Username types.String `tfsdk:"username"`
+ InvitationCreatedAt types.String `tfsdk:"invitation_created_at"`
+ InvitationExpiresAt types.String `tfsdk:"invitation_expires_at"`
+ InviterUsername types.String `tfsdk:"inviter_username"`
+ Country types.String `tfsdk:"country"`
+ CreatedAt types.String `tfsdk:"created_at"`
+ FirstName types.String `tfsdk:"first_name"`
+ LastAuth types.String `tfsdk:"last_auth"`
+ LastName types.String `tfsdk:"last_name"`
+ MobileNumber types.String `tfsdk:"mobile_number"`
+}
+
+func NewTFProjectDataSourceModel(ctx context.Context, project *admin.Group, projectProps *AdditionalProperties) (*TFProjectDSModel, diag.Diagnostics) {
+ var diags diag.Diagnostics
+ if project == nil {
+ diags.AddError("Invalid Project Data", "Project data is nil and cannot be processed")
+ return nil, diags
+ }
+ if projectProps == nil {
+ diags.AddError("Invalid Project Properties", "Project properties data is nil and cannot be processed")
+ return nil, diags
+ }
+ ipAddressesModel, ipDiags := NewTFIPAddressesModel(ctx, projectProps.IPAddresses)
+ diags.Append(ipDiags...)
if diags.HasError() {
return nil, diags
}
@@ -36,11 +188,12 @@ func NewTFProjectDataSourceModel(ctx context.Context, project *admin.Group, proj
IPAddresses: ipAddressesModel,
Tags: conversion.NewTFTags(project.GetTags()),
IsSlowOperationThresholdingEnabled: types.BoolValue(projectProps.IsSlowOperationThresholdingEnabled),
+ Users: NewTFCloudUsersDataSourceModel(ctx, projectProps.Users),
}, nil
}
func NewTFTeamsDataSourceModel(ctx context.Context, atlasTeams *admin.PaginatedTeamRole) []*TFTeamDSModel {
- if atlasTeams.GetTotalCount() == 0 {
+ if atlasTeams == nil || atlasTeams.GetTotalCount() == 0 {
return nil
}
results := atlasTeams.GetResults()
@@ -71,6 +224,33 @@ func NewTFLimitsDataSourceModel(ctx context.Context, dataFederationLimits []admi
return limits
}
+func NewTFCloudUsersDataSourceModel(ctx context.Context, cloudUsers []admin.GroupUserResponse) []*TFCloudUsersDSModel {
+ if len(cloudUsers) == 0 {
+ return []*TFCloudUsersDSModel{}
+ }
+ users := make([]*TFCloudUsersDSModel, len(cloudUsers))
+ for i := range cloudUsers {
+ cloudUser := &cloudUsers[i]
+ roles, _ := types.SetValueFrom(ctx, types.StringType, cloudUser.Roles)
+ users[i] = &TFCloudUsersDSModel{
+ ID: types.StringValue(cloudUser.Id),
+ OrgMembershipStatus: types.StringValue(cloudUser.OrgMembershipStatus),
+ Roles: roles,
+ Username: types.StringValue(cloudUser.Username),
+ InvitationCreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(cloudUser.InvitationCreatedAt)),
+ InvitationExpiresAt: types.StringPointerValue(conversion.TimePtrToStringPtr(cloudUser.InvitationExpiresAt)),
+ InviterUsername: types.StringPointerValue(cloudUser.InviterUsername),
+ Country: types.StringPointerValue(cloudUser.Country),
+ CreatedAt: types.StringPointerValue(conversion.TimePtrToStringPtr(cloudUser.CreatedAt)),
+ FirstName: types.StringPointerValue(cloudUser.FirstName),
+ LastAuth: types.StringPointerValue(conversion.TimePtrToStringPtr(cloudUser.LastAuth)),
+ LastName: types.StringPointerValue(cloudUser.LastName),
+ MobileNumber: types.StringPointerValue(cloudUser.MobileNumber),
+ }
+ }
+ return users
+}
+
func NewTFIPAddressesModel(ctx context.Context, ipAddresses *admin.GroupIPAddresses) (types.Object, diag.Diagnostics) {
clusterIPs := []TFClusterIPsModel{}
if ipAddresses != nil && ipAddresses.Services != nil {
@@ -94,8 +274,18 @@ func NewTFIPAddressesModel(ctx context.Context, ipAddresses *admin.GroupIPAddres
return obj, diags
}
-func NewTFProjectResourceModel(ctx context.Context, projectRes *admin.Group, projectProps AdditionalProperties) (*TFProjectRSModel, diag.Diagnostics) {
- ipAddressesModel, diags := NewTFIPAddressesModel(ctx, projectProps.IPAddresses)
+func NewTFProjectResourceModel(ctx context.Context, projectRes *admin.Group, projectProps *AdditionalProperties) (*TFProjectRSModel, diag.Diagnostics) {
+ var diags diag.Diagnostics
+ if projectRes == nil {
+ diags.AddError("Invalid Project Data", "Project data is nil and cannot be processed")
+ return nil, diags
+ }
+ if projectProps == nil {
+ diags.AddError("Invalid Project Properties", "Project properties data is nil and cannot be processed")
+ return nil, diags
+ }
+ ipAddressesModel, ipDiags := NewTFIPAddressesModel(ctx, projectProps.IPAddresses)
+ diags.Append(ipDiags...)
if diags.HasError() {
return nil, diags
}
@@ -145,6 +335,9 @@ func newTFLimitsResourceModel(ctx context.Context, dataFederationLimits []admin.
}
func newTFTeamsResourceModel(ctx context.Context, atlasTeams *admin.PaginatedTeamRole) types.Set {
+ if atlasTeams == nil || atlasTeams.GetTotalCount() == 0 {
+ return types.SetNull(TfTeamObjectType)
+ }
results := atlasTeams.GetResults()
teams := make([]TFTeamModel, len(results))
for i, atlasTeam := range results {
diff --git a/internal/service/project/model_project_test.go b/internal/service/project/model_project_test.go
index 0c46975f21..3071eb3681 100644
--- a/internal/service/project/model_project_test.go
+++ b/internal/service/project/model_project_test.go
@@ -3,6 +3,7 @@ package project_test
import (
"context"
"testing"
+ "time"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
@@ -26,6 +27,10 @@ const (
projectClusterCount = int64(1)
clusterCount = 1
regionUsageRestrictions = "GOV_REGIONS_ONLY"
+ userOrgMembershipStatus = "ACTIVE"
+ country = "US"
+ inviterUsername = ""
+ mobileNumber = ""
)
var (
@@ -74,6 +79,72 @@ var (
limitsTFSet, _ = types.SetValueFrom(context.Background(), project.TfLimitObjectType, []project.TFLimitModel{
*limitsTF[0],
})
+
+ usersSDK = []admin.GroupUserResponse{
+ {
+ Id: "user-id-1",
+ Username: "user1@example.com",
+ FirstName: admin.PtrString("FirstName1"),
+ LastName: admin.PtrString("LastName1"),
+ Roles: roles,
+ InvitationCreatedAt: nil,
+ InvitationExpiresAt: nil,
+ InviterUsername: admin.PtrString(inviterUsername),
+ OrgMembershipStatus: userOrgMembershipStatus,
+ Country: admin.PtrString(country),
+ CreatedAt: admin.PtrTime(time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)),
+ LastAuth: admin.PtrTime(time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)),
+ MobileNumber: admin.PtrString(mobileNumber),
+ },
+ {
+ Id: "user-id-2",
+ Username: "user2@example.com",
+ FirstName: admin.PtrString("FirstName2"),
+ LastName: admin.PtrString("LastName2"),
+ Roles: roles,
+ InvitationCreatedAt: nil,
+ InvitationExpiresAt: nil,
+ InviterUsername: admin.PtrString(inviterUsername),
+ OrgMembershipStatus: userOrgMembershipStatus,
+ Country: admin.PtrString(country),
+ CreatedAt: admin.PtrTime(time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)),
+ LastAuth: admin.PtrTime(time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)),
+ MobileNumber: admin.PtrString(mobileNumber),
+ },
+ }
+ usersTF = []*project.TFCloudUsersDSModel{
+ {
+ ID: types.StringValue("user-id-1"),
+ Username: types.StringValue("user1@example.com"),
+ FirstName: types.StringValue("FirstName1"),
+ LastName: types.StringValue("LastName1"),
+ Roles: roleSet,
+ InvitationCreatedAt: types.StringNull(),
+ InvitationExpiresAt: types.StringNull(),
+ InviterUsername: types.StringValue(inviterUsername),
+ OrgMembershipStatus: types.StringValue(userOrgMembershipStatus),
+ Country: types.StringValue(country),
+ CreatedAt: types.StringValue("2025-01-01T00:00:00Z"),
+ LastAuth: types.StringValue("2025-01-02T00:00:00Z"),
+ MobileNumber: types.StringValue(mobileNumber),
+ },
+ {
+ ID: types.StringValue("user-id-2"),
+ Username: types.StringValue("user2@example.com"),
+ FirstName: types.StringValue("FirstName2"),
+ LastName: types.StringValue("LastName2"),
+ Roles: roleSet,
+ InvitationCreatedAt: types.StringNull(),
+ InvitationExpiresAt: types.StringNull(),
+ InviterUsername: types.StringValue(inviterUsername),
+ OrgMembershipStatus: types.StringValue(userOrgMembershipStatus),
+ Country: types.StringValue(country),
+ CreatedAt: types.StringValue("2025-01-01T00:00:00Z"),
+ LastAuth: types.StringValue("2025-01-02T00:00:00Z"),
+ MobileNumber: types.StringValue(mobileNumber),
+ },
+ }
+
ipAddressesTF, _ = types.ObjectValueFrom(context.Background(), project.IPAddressesObjectType.AttrTypes, project.TFIPAddressesModel{
Services: project.TFServicesModel{
Clusters: []project.TFClusterIPsModel{
@@ -199,6 +270,36 @@ func TestLimitsDataSourceSDKToTFModel(t *testing.T) {
}
}
+func TestUsersDataSourceSDKToDataSourceTFModel(t *testing.T) {
+ testCases := []struct {
+ name string
+ users []admin.GroupUserResponse
+ expectedTFModel []*project.TFCloudUsersDSModel
+ }{
+ {
+ name: "Users",
+ users: usersSDK,
+ expectedTFModel: usersTF,
+ },
+ {
+ name: "Empty Users",
+ users: []admin.GroupUserResponse{},
+ expectedTFModel: []*project.TFCloudUsersDSModel{},
+ },
+ {
+ name: "Nil Users",
+ users: nil,
+ expectedTFModel: []*project.TFCloudUsersDSModel{},
+ },
+ }
+ for _, tc := range testCases {
+ t.Run(tc.name, func(t *testing.T) {
+ resultModel := project.NewTFCloudUsersDataSourceModel(t.Context(), tc.users)
+ assert.Equal(t, tc.expectedTFModel, resultModel)
+ })
+ }
+}
+
func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) {
testCases := []struct {
name string
@@ -217,6 +318,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) {
Settings: &projectSettingsSDK,
IPAddresses: &IPAddressesSDK,
Limits: limitsSDK,
+ Users: usersSDK,
},
expectedTFModel: project.TFProjectDSModel{
@@ -234,6 +336,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) {
IsSlowOperationThresholdingEnabled: types.BoolValue(false),
Teams: teamsDSTF,
Limits: limitsTF,
+ Users: usersTF,
IPAddresses: ipAddressesTF,
Created: types.StringValue("0001-01-01T00:00:00Z"),
Tags: types.MapValueMust(types.StringType, map[string]attr.Value{}),
@@ -250,6 +353,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) {
Settings: &projectSettingsSDK,
IPAddresses: &IPAddressesSDK,
Limits: limitsSDK,
+ Users: usersSDK,
IsSlowOperationThresholdingEnabled: true,
},
expectedTFModel: project.TFProjectDSModel{
@@ -269,6 +373,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) {
IsSlowOperationThresholdingEnabled: types.BoolValue(true),
Teams: teamsDSTF,
Limits: limitsTF,
+ Users: usersTF,
IPAddresses: ipAddressesTF,
Created: types.StringValue("0001-01-01T00:00:00Z"),
Tags: types.MapValueMust(types.StringType, map[string]attr.Value{}),
@@ -278,7 +383,7 @@ func TestProjectDataSourceSDKToDataSourceTFModel(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
- resultModel, diags := project.NewTFProjectDataSourceModel(t.Context(), tc.project, tc.projectProps)
+ resultModel, diags := project.NewTFProjectDataSourceModel(t.Context(), tc.project, &tc.projectProps)
if diags.HasError() {
t.Errorf("unexpected errors found: %s", diags.Errors()[0].Summary())
}
@@ -363,7 +468,7 @@ func TestProjectDataSourceSDKToResourceTFModel(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
- resultModel, diags := project.NewTFProjectResourceModel(t.Context(), tc.project, tc.projectProps)
+ resultModel, diags := project.NewTFProjectResourceModel(t.Context(), tc.project, &tc.projectProps)
if diags.HasError() {
t.Errorf("unexpected errors found: %s", diags.Errors()[0].Summary())
}
diff --git a/internal/service/project/resource_project.go b/internal/service/project/resource_project.go
index efa920220a..b84ce78c04 100644
--- a/internal/service/project/resource_project.go
+++ b/internal/service/project/resource_project.go
@@ -164,8 +164,17 @@ func (r *projectRS) Create(ctx context.Context, req resource.CreateRequest, resp
return
}
+ projectPropsParams := &PropsParams{
+ ProjectID: projectID,
+ IsDataSource: false,
+ ProjectsAPI: connV2.ProjectsApi,
+ TeamsAPI: connV2.TeamsApi,
+ PerformanceAdvisorAPI: connV2.PerformanceAdvisorApi,
+ MongoDBCloudUsersAPI: connV2.MongoDBCloudUsersApi,
+ }
+
// get project props
- projectProps, err := GetProjectPropsFromAPI(ctx, connV2.ProjectsApi, connV2.TeamsApi, connV2.PerformanceAdvisorApi, projectID, &resp.Diagnostics)
+ projectProps, err := GetProjectPropsFromAPI(ctx, projectPropsParams, &resp.Diagnostics)
if err != nil {
resp.Diagnostics.AddError("error when getting project properties after create", fmt.Sprintf(ErrorProjectRead, projectID, err.Error()))
return
@@ -174,7 +183,7 @@ func (r *projectRS) Create(ctx context.Context, req resource.CreateRequest, resp
filteredLimits := FilterUserDefinedLimits(projectProps.Limits, limits)
projectProps.Limits = filteredLimits
- projectPlanNew, diags := NewTFProjectResourceModel(ctx, projectRes, *projectProps)
+ projectPlanNew, diags := NewTFProjectResourceModel(ctx, projectRes, projectProps)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
@@ -216,8 +225,17 @@ func (r *projectRS) Read(ctx context.Context, req resource.ReadRequest, resp *re
return
}
+ projectPropsParams := &PropsParams{
+ ProjectID: projectID,
+ IsDataSource: false,
+ ProjectsAPI: connV2.ProjectsApi,
+ TeamsAPI: connV2.TeamsApi,
+ PerformanceAdvisorAPI: connV2.PerformanceAdvisorApi,
+ MongoDBCloudUsersAPI: connV2.MongoDBCloudUsersApi,
+ }
+
// get project props
- projectProps, err := GetProjectPropsFromAPI(ctx, connV2.ProjectsApi, connV2.TeamsApi, connV2.PerformanceAdvisorApi, projectID, &resp.Diagnostics)
+ projectProps, err := GetProjectPropsFromAPI(ctx, projectPropsParams, &resp.Diagnostics)
if err != nil {
resp.Diagnostics.AddError("error when getting project properties after create", fmt.Sprintf(ErrorProjectRead, projectID, err.Error()))
return
@@ -226,7 +244,7 @@ func (r *projectRS) Read(ctx context.Context, req resource.ReadRequest, resp *re
filteredLimits := FilterUserDefinedLimits(projectProps.Limits, limits)
projectProps.Limits = filteredLimits
- projectStateNew, diags := NewTFProjectResourceModel(ctx, projectRes, *projectProps)
+ projectStateNew, diags := NewTFProjectResourceModel(ctx, projectRes, projectProps)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
@@ -287,8 +305,17 @@ func (r *projectRS) Update(ctx context.Context, req resource.UpdateRequest, resp
return
}
+ projectPropsParams := &PropsParams{
+ ProjectID: projectID,
+ IsDataSource: false,
+ ProjectsAPI: connV2.ProjectsApi,
+ TeamsAPI: connV2.TeamsApi,
+ PerformanceAdvisorAPI: connV2.PerformanceAdvisorApi,
+ MongoDBCloudUsersAPI: connV2.MongoDBCloudUsersApi,
+ }
+
// get project props
- projectProps, err := GetProjectPropsFromAPI(ctx, connV2.ProjectsApi, connV2.TeamsApi, connV2.PerformanceAdvisorApi, projectID, &resp.Diagnostics)
+ projectProps, err := GetProjectPropsFromAPI(ctx, projectPropsParams, &resp.Diagnostics)
if err != nil {
resp.Diagnostics.AddError("error when getting project properties after create", fmt.Sprintf(ErrorProjectRead, projectID, err.Error()))
return
@@ -299,7 +326,7 @@ func (r *projectRS) Update(ctx context.Context, req resource.UpdateRequest, resp
filteredLimits := FilterUserDefinedLimits(projectProps.Limits, planLimits)
projectProps.Limits = filteredLimits
- projectPlanNew, diags := NewTFProjectResourceModel(ctx, projectRes, *projectProps)
+ projectPlanNew, diags := NewTFProjectResourceModel(ctx, projectRes, projectProps)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
@@ -363,41 +390,59 @@ type AdditionalProperties struct {
Settings *admin.GroupSettings
IPAddresses *admin.GroupIPAddresses
Limits []admin.DataFederationLimit
+ Users []admin.GroupUserResponse
IsSlowOperationThresholdingEnabled bool
}
+type PropsParams struct {
+ ProjectsAPI admin.ProjectsApi
+ TeamsAPI admin.TeamsApi
+ PerformanceAdvisorAPI admin.PerformanceAdvisorApi
+ MongoDBCloudUsersAPI admin.MongoDBCloudUsersApi
+ ProjectID string
+ IsDataSource bool
+}
+
// GetProjectPropsFromAPI fetches properties obtained from complementary endpoints associated with a project.
-func GetProjectPropsFromAPI(ctx context.Context, projectsAPI admin.ProjectsApi, teamsAPI admin.TeamsApi, performanceAdvisorAPI admin.PerformanceAdvisorApi, projectID string, warnings *diag.Diagnostics) (*AdditionalProperties, error) {
- teams, _, err := teamsAPI.ListGroupTeams(ctx, projectID).Execute()
+func GetProjectPropsFromAPI(ctx context.Context, params *PropsParams, warnings *diag.Diagnostics) (*AdditionalProperties, error) {
+ teams, _, err := params.TeamsAPI.ListGroupTeams(ctx, params.ProjectID).Execute()
if err != nil {
- return nil, fmt.Errorf("error getting project's teams assigned (%s): %v", projectID, err.Error())
+ return nil, fmt.Errorf("error getting project's teams assigned (%s): %v", params.ProjectID, err.Error())
}
- limits, _, err := projectsAPI.ListGroupLimits(ctx, projectID).Execute()
+ limits, _, err := params.ProjectsAPI.ListGroupLimits(ctx, params.ProjectID).Execute()
if err != nil {
- return nil, fmt.Errorf("error getting project's limits (%s): %s", projectID, err.Error())
+ return nil, fmt.Errorf("error getting project's limits (%s): %s", params.ProjectID, err.Error())
}
- projectSettings, _, err := projectsAPI.GetGroupSettings(ctx, projectID).Execute()
+ projectSettings, _, err := params.ProjectsAPI.GetGroupSettings(ctx, params.ProjectID).Execute()
if err != nil {
- return nil, fmt.Errorf("error getting project's settings assigned (%s): %v", projectID, err.Error())
+ return nil, fmt.Errorf("error getting project's settings assigned (%s): %v", params.ProjectID, err.Error())
}
- ipAddresses, _, err := projectsAPI.GetGroupIpAddresses(ctx, projectID).Execute()
+ ipAddresses, _, err := params.ProjectsAPI.GetGroupIpAddresses(ctx, params.ProjectID).Execute()
if err != nil {
- return nil, fmt.Errorf("error getting project's IP addresses (%s): %v", projectID, err.Error())
+ return nil, fmt.Errorf("error getting project's IP addresses (%s): %v", params.ProjectID, err.Error())
}
- isSlowOperationThresholdingEnabled, err := ReadIsSlowMsThresholdingEnabled(ctx, performanceAdvisorAPI, projectID, warnings)
+ isSlowOperationThresholdingEnabled, err := ReadIsSlowMsThresholdingEnabled(ctx, params.PerformanceAdvisorAPI, params.ProjectID, warnings)
if err != nil {
- return nil, fmt.Errorf("error getting project's slow operation thresholding enabled (%s): %v", projectID, err.Error())
+ return nil, fmt.Errorf("error getting project's slow operation thresholding enabled (%s): %v", params.ProjectID, err.Error())
}
+ var users []admin.GroupUserResponse
+ if params.IsDataSource {
+ users, err = ListAllProjectUsers(ctx, params.ProjectID, params.MongoDBCloudUsersAPI)
+ if err != nil {
+ return nil, fmt.Errorf("error getting project's users (%s): %v", params.ProjectID, err.Error())
+ }
+ }
return &AdditionalProperties{
Teams: teams,
Limits: limits,
Settings: projectSettings,
IPAddresses: ipAddresses,
IsSlowOperationThresholdingEnabled: isSlowOperationThresholdingEnabled,
+ Users: users,
}, nil
}
diff --git a/internal/service/project/resource_project_schema.go b/internal/service/project/resource_project_schema.go
index 08424d5dcd..c62f576731 100644
--- a/internal/service/project/resource_project_schema.go
+++ b/internal/service/project/resource_project_schema.go
@@ -4,6 +4,8 @@ import (
"context"
"fmt"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/terraform-plugin-framework/attr"
"github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/boolplanmodifier"
@@ -13,9 +15,9 @@ import (
"github.com/hashicorp/terraform-plugin-framework/resource/schema/setplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/types"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
func ResourceSchema(ctx context.Context) schema.Schema {
@@ -51,7 +53,7 @@ func ResourceSchema(ctx context.Context) schema.Schema {
"project_owner_id": schema.StringAttribute{
Optional: true,
PlanModifiers: []planmodifier.String{
- customplanmodifier.CreateOnlyAttributePlanModifier(),
+ customplanmodifier.CreateOnlyStringPlanModifier(),
},
},
"with_default_alerts_settings": schema.BoolAttribute{
@@ -154,6 +156,7 @@ func ResourceSchema(ctx context.Context) schema.Schema {
},
Blocks: map[string]schema.Block{
"teams": schema.SetNestedBlock{
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "parameter", "mongodbatlas_team_project_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
NestedObject: schema.NestedBlockObject{
Attributes: map[string]schema.Attribute{
"team_id": schema.StringAttribute{
diff --git a/internal/service/project/resource_project_test.go b/internal/service/project/resource_project_test.go
index 6911c0cb85..25cb269447 100644
--- a/internal/service/project/resource_project_test.go
+++ b/internal/service/project/resource_project_test.go
@@ -113,6 +113,7 @@ func TestGetProjectPropsFromAPI(t *testing.T) {
teamsMock := mockadmin.NewTeamsApi(t)
projectsMock := mockadmin.NewProjectsApi(t)
perfMock := mockadmin.NewPerformanceAdvisorApi(t)
+ cloudUsersMock := mockadmin.NewMongoDBCloudUsersApi(t)
teamsMock.EXPECT().ListGroupTeams(mock.Anything, mock.Anything).Return(admin.ListGroupTeamsApiRequest{ApiService: teamsMock})
teamsMock.EXPECT().ListGroupTeamsExecute(mock.Anything).Return(tc.teamRoleReponse.TeamRole, tc.teamRoleReponse.HTTPResponse, tc.teamRoleReponse.Err)
@@ -129,7 +130,16 @@ func TestGetProjectPropsFromAPI(t *testing.T) {
perfMock.EXPECT().GetManagedSlowMs(mock.Anything, mock.Anything).Return(admin.GetManagedSlowMsApiRequest{ApiService: perfMock}).Maybe()
perfMock.EXPECT().GetManagedSlowMsExecute(mock.Anything).Return(true, nil, nil).Maybe()
- _, err := project.GetProjectPropsFromAPI(t.Context(), projectsMock, teamsMock, perfMock, dummyProjectID, nil)
+ projectPropsParams := &project.PropsParams{
+ ProjectID: dummyProjectID,
+ IsDataSource: false,
+ ProjectsAPI: projectsMock,
+ TeamsAPI: teamsMock,
+ PerformanceAdvisorAPI: perfMock,
+ MongoDBCloudUsersAPI: cloudUsersMock,
+ }
+
+ _, err := project.GetProjectPropsFromAPI(t.Context(), projectPropsParams, nil)
if (err != nil) != tc.expectedError {
t.Errorf("Case %s: Received unexpected error: %v", tc.name, err)
@@ -528,14 +538,22 @@ func TestAccProject_basic(t *testing.T) {
"is_realtime_performance_panel_enabled",
"is_schema_advisor_enabled",
}
+
+ dataSourceChecks := map[string]string{
+ "users.#": "1",
+ }
+
checks := acc.AddAttrChecks(resourceName, nil, commonChecks)
checks = acc.AddAttrChecks(dataSourceNameByID, checks, commonChecks)
checks = acc.AddAttrChecks(dataSourceNameByName, checks, commonChecks)
+ checks = acc.AddAttrChecks(dataSourceNameByID, checks, dataSourceChecks)
+ checks = acc.AddAttrChecks(dataSourceNameByName, checks, dataSourceChecks)
checks = acc.AddAttrSetChecks(resourceName, checks, commonSetChecks...)
checks = acc.AddAttrSetChecks(dataSourceNameByID, checks, commonSetChecks...)
checks = acc.AddAttrSetChecks(dataSourceNameByName, checks, commonSetChecks...)
checks = append(checks, checkExists(resourceName), checkExists(dataSourceNameByID), checkExists(dataSourceNameByName))
checks = acc.AddAttrSetChecks(dataSourcePluralName, checks, "total_count", "results.#", "results.0.is_slow_operation_thresholding_enabled")
+ checks = append(checks, resource.TestCheckResourceAttrWith(dataSourcePluralName, "results.0.users.#", acc.IntGreatThan(0)))
resource.ParallelTest(t, resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t); acc.PreCheckProjectTeamsIDsWithMinCount(t, 3) },
@@ -1210,8 +1228,7 @@ func configBasic(orgID, projectName, projectOwnerID string, includeDataSource bo
data "mongodbatlas_project" "test2" {
name = mongodbatlas_project.test.name
}
-
- data "mongodbatlas_projects" "test" {
+ data "mongodbatlas_projects" "test" {
}
`
}
diff --git a/internal/service/projectinvitation/data_source_project_invitation.go b/internal/service/projectinvitation/data_source_project_invitation.go
index 526edc2de7..763d74a86b 100644
--- a/internal/service/projectinvitation/data_source_project_invitation.go
+++ b/internal/service/projectinvitation/data_source_project_invitation.go
@@ -6,13 +6,16 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
func DataSource() *schema.Resource {
return &schema.Resource{
- ReadContext: dataSourceRead,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "data source", "mongodbatlas_cloud_user_project_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ ReadContext: dataSourceRead,
Schema: map[string]*schema.Schema{
"project_id": {
Type: schema.TypeString,
diff --git a/internal/service/projectinvitation/resource_project_invitation.go b/internal/service/projectinvitation/resource_project_invitation.go
index 642e8477e3..e8f3f22152 100644
--- a/internal/service/projectinvitation/resource_project_invitation.go
+++ b/internal/service/projectinvitation/resource_project_invitation.go
@@ -5,21 +5,24 @@ import (
"fmt"
"regexp"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
-
- "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
func Resource() *schema.Resource {
return &schema.Resource{
- CreateContext: resourceCreate,
- ReadContext: resourceRead,
- UpdateContext: resourceUpdate,
- DeleteContext: resourceDelete,
+ DeprecationMessage: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "resource", "mongodbatlas_cloud_user_project_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
+ CreateContext: resourceCreate,
+ ReadContext: resourceRead,
+ UpdateContext: resourceUpdate,
+ DeleteContext: resourceDelete,
Importer: &schema.ResourceImporter{
StateContext: resourceImport,
},
diff --git a/internal/service/pushbasedlogexport/data_source.go b/internal/service/pushbasedlogexport/data_source.go
index aeffa62683..3bab17bce1 100644
--- a/internal/service/pushbasedlogexport/data_source.go
+++ b/internal/service/pushbasedlogexport/data_source.go
@@ -45,23 +45,13 @@ func (d *pushBasedLogExportDS) Read(ctx context.Context, req datasource.ReadRequ
return
}
- newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logConfig, nil)
+ newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logConfig, nil, nil)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
}
-
- dsModel := convertToDSModel(newTFModel)
- resp.Diagnostics.Append(resp.State.Set(ctx, dsModel)...)
-}
-
-func convertToDSModel(inputModel *TFPushBasedLogExportRSModel) TFPushBasedLogExportDSModel {
- return TFPushBasedLogExportDSModel{
- BucketName: inputModel.BucketName,
- CreateDate: inputModel.CreateDate,
- ProjectID: inputModel.ProjectID,
- IamRoleID: inputModel.IamRoleID,
- PrefixPath: inputModel.PrefixPath,
- State: inputModel.State,
+ dsModel := TFPushBasedLogExportDSModel{
+ TFPushBasedLogExportCommonModel: newTFModel.TFPushBasedLogExportCommonModel,
}
+ resp.Diagnostics.Append(resp.State.Set(ctx, dsModel)...)
}
diff --git a/internal/service/pushbasedlogexport/data_source_schema.go b/internal/service/pushbasedlogexport/data_source_schema.go
index fd7e109333..9981a05ec8 100644
--- a/internal/service/pushbasedlogexport/data_source_schema.go
+++ b/internal/service/pushbasedlogexport/data_source_schema.go
@@ -1,14 +1,5 @@
package pushbasedlogexport
-import (
- "github.com/hashicorp/terraform-plugin-framework/types"
-)
-
type TFPushBasedLogExportDSModel struct {
- BucketName types.String `tfsdk:"bucket_name"`
- CreateDate types.String `tfsdk:"create_date"`
- ProjectID types.String `tfsdk:"project_id"`
- IamRoleID types.String `tfsdk:"iam_role_id"`
- PrefixPath types.String `tfsdk:"prefix_path"`
- State types.String `tfsdk:"state"`
+ TFPushBasedLogExportCommonModel
}
diff --git a/internal/service/pushbasedlogexport/model.go b/internal/service/pushbasedlogexport/model.go
index e5e73ae175..21b912c84b 100644
--- a/internal/service/pushbasedlogexport/model.go
+++ b/internal/service/pushbasedlogexport/model.go
@@ -12,19 +12,24 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
)
-func NewTFPushBasedLogExport(ctx context.Context, projectID string, apiResp *admin.PushBasedLogExportProject, timeout *timeouts.Value) (*TFPushBasedLogExportRSModel, diag.Diagnostics) {
+func NewTFPushBasedLogExport(ctx context.Context, projectID string, apiResp *admin.PushBasedLogExportProject, timeout *timeouts.Value, deleteOnCreateTimeout *types.Bool) (*TFPushBasedLogExportRSModel, diag.Diagnostics) {
tfModel := &TFPushBasedLogExportRSModel{
- ProjectID: types.StringPointerValue(&projectID),
- BucketName: types.StringPointerValue(apiResp.BucketName),
- IamRoleID: types.StringPointerValue(apiResp.IamRoleId),
- PrefixPath: types.StringPointerValue(apiResp.PrefixPath),
- CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.CreateDate)),
- State: types.StringPointerValue(apiResp.State),
+ TFPushBasedLogExportCommonModel: TFPushBasedLogExportCommonModel{
+ ProjectID: types.StringPointerValue(&projectID),
+ BucketName: types.StringPointerValue(apiResp.BucketName),
+ IamRoleID: types.StringPointerValue(apiResp.IamRoleId),
+ PrefixPath: types.StringPointerValue(apiResp.PrefixPath),
+ CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(apiResp.CreateDate)),
+ State: types.StringPointerValue(apiResp.State),
+ },
}
if timeout != nil {
tfModel.Timeouts = *timeout
}
+ if deleteOnCreateTimeout != nil {
+ tfModel.DeleteOnCreateTimeout = *deleteOnCreateTimeout
+ }
return tfModel, nil
}
diff --git a/internal/service/pushbasedlogexport/model_test.go b/internal/service/pushbasedlogexport/model_test.go
index 96e7cc0c72..ce0ed58d8c 100644
--- a/internal/service/pushbasedlogexport/model_test.go
+++ b/internal/service/pushbasedlogexport/model_test.go
@@ -23,11 +23,12 @@ var (
)
type sdkToTFModelTestCase struct {
- apiResp *admin.PushBasedLogExportProject
- timeout *timeouts.Value
- expectedTFModel *pushbasedlogexport.TFPushBasedLogExportRSModel
- name string
- projectID string
+ apiResp *admin.PushBasedLogExportProject
+ timeout *timeouts.Value
+ deleteOnCreateTimeout *types.Bool
+ expectedTFModel *pushbasedlogexport.TFPushBasedLogExportRSModel
+ name string
+ projectID string
}
func TestNewTFPushBasedLogExport(t *testing.T) {
@@ -45,12 +46,14 @@ func TestNewTFPushBasedLogExport(t *testing.T) {
State: admin.PtrString(activeState),
},
expectedTFModel: &pushbasedlogexport.TFPushBasedLogExportRSModel{
- ProjectID: types.StringValue(testProjectID),
- BucketName: types.StringValue(testBucketName),
- IamRoleID: types.StringValue(testIAMRoleID),
- PrefixPath: types.StringValue(testPrefixPath),
- State: types.StringValue(activeState),
- CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(&currentTime)),
+ TFPushBasedLogExportCommonModel: pushbasedlogexport.TFPushBasedLogExportCommonModel{
+ ProjectID: types.StringValue(testProjectID),
+ BucketName: types.StringValue(testBucketName),
+ IamRoleID: types.StringValue(testIAMRoleID),
+ PrefixPath: types.StringValue(testPrefixPath),
+ State: types.StringValue(activeState),
+ CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(&currentTime)),
+ },
},
},
{
@@ -64,19 +67,21 @@ func TestNewTFPushBasedLogExport(t *testing.T) {
State: admin.PtrString(activeState),
},
expectedTFModel: &pushbasedlogexport.TFPushBasedLogExportRSModel{
- ProjectID: types.StringValue(testProjectID),
- BucketName: types.StringValue(testBucketName),
- IamRoleID: types.StringValue(testIAMRoleID),
- PrefixPath: types.StringValue(prefixPathEmpty),
- State: types.StringValue(activeState),
- CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(&currentTime)),
+ TFPushBasedLogExportCommonModel: pushbasedlogexport.TFPushBasedLogExportCommonModel{
+ ProjectID: types.StringValue(testProjectID),
+ BucketName: types.StringValue(testBucketName),
+ IamRoleID: types.StringValue(testIAMRoleID),
+ PrefixPath: types.StringValue(prefixPathEmpty),
+ State: types.StringValue(activeState),
+ CreateDate: types.StringPointerValue(conversion.TimePtrToStringPtr(&currentTime)),
+ },
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
- resultModel, _ := pushbasedlogexport.NewTFPushBasedLogExport(t.Context(), tc.projectID, tc.apiResp, tc.timeout)
+ resultModel, _ := pushbasedlogexport.NewTFPushBasedLogExport(t.Context(), tc.projectID, tc.apiResp, tc.timeout, tc.deleteOnCreateTimeout)
if !assert.Equal(t, tc.expectedTFModel, resultModel) {
t.Errorf("result model does not match expected output: expected %+v, got %+v", tc.expectedTFModel, resultModel)
}
@@ -96,9 +101,11 @@ func TestNewPushBasedLogExportReq(t *testing.T) {
{
name: "Valid TF state",
input: &pushbasedlogexport.TFPushBasedLogExportRSModel{
- BucketName: types.StringValue(testBucketName),
- IamRoleID: types.StringValue(testIAMRoleID),
- PrefixPath: types.StringValue(testPrefixPath),
+ TFPushBasedLogExportCommonModel: pushbasedlogexport.TFPushBasedLogExportCommonModel{
+ BucketName: types.StringValue(testBucketName),
+ IamRoleID: types.StringValue(testIAMRoleID),
+ PrefixPath: types.StringValue(testPrefixPath),
+ },
},
expectedCreateReq: &admin.CreatePushBasedLogExportProjectRequest{
BucketName: testBucketName,
@@ -114,9 +121,11 @@ func TestNewPushBasedLogExportReq(t *testing.T) {
{
name: "Valid TF state with empty prefix path",
input: &pushbasedlogexport.TFPushBasedLogExportRSModel{
- BucketName: types.StringValue(testBucketName),
- IamRoleID: types.StringValue(testIAMRoleID),
- PrefixPath: types.StringValue(prefixPathEmpty),
+ TFPushBasedLogExportCommonModel: pushbasedlogexport.TFPushBasedLogExportCommonModel{
+ BucketName: types.StringValue(testBucketName),
+ IamRoleID: types.StringValue(testIAMRoleID),
+ PrefixPath: types.StringValue(prefixPathEmpty),
+ },
},
expectedCreateReq: &admin.CreatePushBasedLogExportProjectRequest{
BucketName: testBucketName,
diff --git a/internal/service/pushbasedlogexport/resource.go b/internal/service/pushbasedlogexport/resource.go
index c0dd084c60..882296019c 100644
--- a/internal/service/pushbasedlogexport/resource.go
+++ b/internal/service/pushbasedlogexport/resource.go
@@ -11,6 +11,7 @@ import (
"github.com/hashicorp/terraform-plugin-framework/path"
"github.com/hashicorp/terraform-plugin-framework/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
@@ -74,6 +75,15 @@ func (r *pushBasedLogExportRS) Create(ctx context.Context, req resource.CreateRe
logExportConfigResp, err := WaitStateTransition(ctx, projectID, connV2.PushBasedLogExportApi,
retryTimeConfig(timeout, minTimeoutCreateUpdate))
+
+ err = cleanup.HandleCreateTimeout(cleanup.ResolveDeleteOnCreateTimeout(tfPlan.DeleteOnCreateTimeout), err, func(ctx context.Context) error {
+ cleanResp, cleanErr := connV2.PushBasedLogExportApi.DeleteLogExport(ctx, projectID).Execute()
+ if validate.StatusNotFound(cleanResp) {
+ return nil
+ }
+ return cleanErr
+ })
+
if err != nil {
resp.Diagnostics.AddError("Error when creating push-based log export configuration", err.Error())
@@ -84,7 +94,7 @@ func (r *pushBasedLogExportRS) Create(ctx context.Context, req resource.CreateRe
return
}
- newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logExportConfigResp, &tfPlan.Timeouts)
+ newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logExportConfigResp, &tfPlan.Timeouts, &tfPlan.DeleteOnCreateTimeout)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
@@ -111,7 +121,7 @@ func (r *pushBasedLogExportRS) Read(ctx context.Context, req resource.ReadReques
return
}
- newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logConfig, &tfState.Timeouts)
+ newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logConfig, &tfState.Timeouts, &tfState.DeleteOnCreateTimeout)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
@@ -148,7 +158,7 @@ func (r *pushBasedLogExportRS) Update(ctx context.Context, req resource.UpdateRe
return
}
- newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logExportConfigResp, &tfPlan.Timeouts)
+ newTFModel, diags := NewTFPushBasedLogExport(ctx, projectID, logExportConfigResp, &tfPlan.Timeouts, &tfPlan.DeleteOnCreateTimeout)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
diff --git a/internal/service/pushbasedlogexport/resource_schema.go b/internal/service/pushbasedlogexport/resource_schema.go
index c67933a8f4..f809a80b7d 100644
--- a/internal/service/pushbasedlogexport/resource_schema.go
+++ b/internal/service/pushbasedlogexport/resource_schema.go
@@ -12,6 +12,7 @@ import (
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/schema/validator"
"github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
)
func ResourceSchema(ctx context.Context) schema.Schema {
@@ -55,16 +56,28 @@ func ResourceSchema(ctx context.Context) schema.Schema {
Update: true,
Delete: true,
}),
+ "delete_on_create_timeout": schema.BoolAttribute{
+ Optional: true,
+ PlanModifiers: []planmodifier.Bool{
+ customplanmodifier.CreateOnlyBoolPlanModifier(),
+ },
+ MarkdownDescription: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, it triggers the deletion and returns immediately without waiting for deletion to complete. When set to `false`, the timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.",
+ },
},
}
}
+type TFPushBasedLogExportCommonModel struct {
+ BucketName types.String `tfsdk:"bucket_name"`
+ CreateDate types.String `tfsdk:"create_date"`
+ ProjectID types.String `tfsdk:"project_id"`
+ IamRoleID types.String `tfsdk:"iam_role_id"`
+ PrefixPath types.String `tfsdk:"prefix_path"`
+ State types.String `tfsdk:"state"`
+}
+
type TFPushBasedLogExportRSModel struct {
- BucketName types.String `tfsdk:"bucket_name"`
- CreateDate types.String `tfsdk:"create_date"`
- ProjectID types.String `tfsdk:"project_id"`
- IamRoleID types.String `tfsdk:"iam_role_id"`
- PrefixPath types.String `tfsdk:"prefix_path"`
- State types.String `tfsdk:"state"`
- Timeouts timeouts.Value `tfsdk:"timeouts"`
+ TFPushBasedLogExportCommonModel
+ Timeouts timeouts.Value `tfsdk:"timeouts"`
+ DeleteOnCreateTimeout types.Bool `tfsdk:"delete_on_create_timeout"`
}
diff --git a/internal/service/pushbasedlogexport/resource_test.go b/internal/service/pushbasedlogexport/resource_test.go
index c4d398c0b9..af60f5ef1f 100644
--- a/internal/service/pushbasedlogexport/resource_test.go
+++ b/internal/service/pushbasedlogexport/resource_test.go
@@ -44,7 +44,7 @@ func basicTestCase(tb testing.TB) *resource.TestCase {
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
- Config: configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, nonEmptyPrefixPath, true),
+ Config: configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, nonEmptyPrefixPath, true, "", nil),
Check: resource.ComposeAggregateTestCheckFunc(commonChecks(s3BucketName1, nonEmptyPrefixPath)...),
},
{
@@ -86,7 +86,7 @@ func noPrefixPathTestCase(tb testing.TB) *resource.TestCase {
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
- Config: configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, defaultPrefixPath, false),
+ Config: configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, defaultPrefixPath, false, "", nil),
Check: resource.ComposeAggregateTestCheckFunc(commonChecks(s3BucketName1, defaultPrefixPath)...),
},
},
@@ -116,6 +116,39 @@ func createFailure(tb testing.TB) *resource.TestCase {
}
}
+func TestAccPushBasedLogExport_createTimeoutWithDeleteOnCreateTimeout(t *testing.T) {
+ resource.Test(t, *createTimeoutWithDeleteOnCreateTimeout(t))
+}
+
+func createTimeoutWithDeleteOnCreateTimeout(tb testing.TB) *resource.TestCase {
+ tb.Helper()
+
+ var (
+ projectID = acc.ProjectIDExecution(tb)
+ s3BucketNamePrefix = acc.RandomS3BucketName()
+ s3BucketName1 = fmt.Sprintf("%s-1", s3BucketNamePrefix)
+ s3BucketName2 = fmt.Sprintf("%s-2", s3BucketNamePrefix)
+ s3BucketPolicyName = fmt.Sprintf("%s-s3-policy", s3BucketNamePrefix)
+ awsIAMRoleName = acc.RandomIAMRole()
+ awsIAMRolePolicyName = fmt.Sprintf("%s-policy", awsIAMRoleName)
+ createTimeout = "1s"
+ deleteOnCreateTimeout = true
+ )
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(tb) },
+ ExternalProviders: acc.ExternalProvidersOnlyAWS(),
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, nonEmptyPrefixPath, true, acc.TimeoutConfig(&createTimeout, nil, nil), &deleteOnCreateTimeout),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ },
+ }
+}
+
func pushBasedLogExportInvalidConfig(projectID string) string {
return fmt.Sprintf(`resource "mongodbatlas_push_based_log_export" "test" {
project_id = %[1]q
@@ -140,7 +173,7 @@ func addAttrChecks(checks []resource.TestCheckFunc, mapChecks map[string]string)
return acc.AddAttrChecks(datasourceName, checks, mapChecks)
}
-func configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, prefixPath string, usePrefixPath bool) string {
+func configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName, prefixPath string, usePrefixPath bool, timeoutConfig string, deleteOnCreateTimeout *bool) string {
test := fmt.Sprintf(`
locals {
project_id = %[1]q
@@ -155,7 +188,7 @@ func configBasic(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, aw
%[8]s
`, projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName,
- awsIAMroleAuthAndS3Config(s3BucketName1, s3BucketName2), pushBasedLogExportConfig(false, usePrefixPath, prefixPath))
+ awsIAMroleAuthAndS3Config(s3BucketName1, s3BucketName2), pushBasedLogExportConfig(false, usePrefixPath, prefixPath, timeoutConfig, deleteOnCreateTimeout))
return test
}
@@ -174,13 +207,17 @@ func configBasicUpdated(projectID, s3BucketName1, s3BucketName2, s3BucketPolicyN
%[8]s
`, projectID, s3BucketName1, s3BucketName2, s3BucketPolicyName, awsIAMRoleName, awsIAMRolePolicyName,
- awsIAMroleAuthAndS3Config(s3BucketName1, s3BucketName2), pushBasedLogExportConfig(true, usePrefixPath, prefixPath)) // updating the S3 bucket to use for push-based log config
+ awsIAMroleAuthAndS3Config(s3BucketName1, s3BucketName2), pushBasedLogExportConfig(true, usePrefixPath, prefixPath, "", nil)) // updating the S3 bucket to use for push-based log config
return test
}
// pushBasedLogExportConfig returns config for mongodbatlas_push_based_log_export resource and data source.
// This method uses the project and S3 bucket created in awsIAMroleAuthAndS3Config()
-func pushBasedLogExportConfig(useBucket2, usePrefixPath bool, prefixPath string) string {
+func pushBasedLogExportConfig(useBucket2, usePrefixPath bool, prefixPath, timeoutConfig string, deleteOnCreateTimeout *bool) string {
+ deleteOnCreateTimeoutAttr := ""
+ if deleteOnCreateTimeout != nil {
+ deleteOnCreateTimeoutAttr = fmt.Sprintf("delete_on_create_timeout = %[1]t", *deleteOnCreateTimeout)
+ }
bucketNameAttr := "bucket_name = aws_s3_bucket.log_bucket_1.bucket"
if useBucket2 {
bucketNameAttr = "bucket_name = aws_s3_bucket.log_bucket_2.bucket"
@@ -191,10 +228,12 @@ func pushBasedLogExportConfig(useBucket2, usePrefixPath bool, prefixPath string)
%[1]s
iam_role_id = mongodbatlas_cloud_provider_access_authorization.auth_role.role_id
prefix_path = %[2]q
+ %[4]s
+ %[5]s
}
%[3]s
- `, bucketNameAttr, prefixPath, pushBasedLogExportDataSourceConfig())
+ `, bucketNameAttr, prefixPath, pushBasedLogExportDataSourceConfig(), deleteOnCreateTimeoutAttr, timeoutConfig)
}
return fmt.Sprintf(`resource "mongodbatlas_push_based_log_export" "test" {
diff --git a/internal/service/searchdeployment/resource.go b/internal/service/searchdeployment/resource.go
index 408accc9aa..089a9eb021 100644
--- a/internal/service/searchdeployment/resource.go
+++ b/internal/service/searchdeployment/resource.go
@@ -3,7 +3,6 @@ package searchdeployment
import (
"context"
"errors"
- "fmt"
"regexp"
"time"
@@ -68,18 +67,6 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
if diags.HasError() {
return
}
- if plan.DeleteOnCreateTimeout.ValueBool() {
- var deferCall func()
- deleteOnTimeout := func(newCtx context.Context) error {
- cleanup.ReplaceContextDeadlineExceededDiags(diags, createTimeout)
- _, err := connV2.AtlasSearchApi.DeleteClusterSearchDeployment(newCtx, projectID, clusterName).Execute()
- return err
- }
- ctx, deferCall = cleanup.OnTimeout(
- ctx, createTimeout, diags.AddWarning, fmt.Sprintf("Search Deployment %s, (%s)", clusterName, projectID), deleteOnTimeout,
- )
- defer deferCall()
- }
if _, _, err := connV2.AtlasSearchApi.CreateClusterSearchDeployment(ctx, projectID, clusterName, &createReq).Execute(); err != nil {
diags.AddError("error during search deployment creation", err.Error())
return
@@ -87,6 +74,10 @@ func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resou
deploymentResp, err := WaitSearchNodeStateTransition(ctx, projectID, clusterName, connV2.AtlasSearchApi,
RetryTimeConfig(createTimeout, minTimeoutCreateUpdate))
+ err = cleanup.HandleCreateTimeout(cleanup.ResolveDeleteOnCreateTimeout(plan.DeleteOnCreateTimeout), err, func(ctxCleanup context.Context) error {
+ _, err := connV2.AtlasSearchApi.DeleteClusterSearchDeployment(ctxCleanup, projectID, clusterName).Execute()
+ return err
+ })
if err != nil {
diags.AddError("error during search deployment creation", err.Error())
return
diff --git a/internal/service/searchdeployment/resource_migration_test.go b/internal/service/searchdeployment/resource_migration_test.go
index d9b7fc0210..7cf713c8f3 100644
--- a/internal/service/searchdeployment/resource_migration_test.go
+++ b/internal/service/searchdeployment/resource_migration_test.go
@@ -4,6 +4,7 @@ import (
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
)
@@ -18,7 +19,7 @@ func TestMigSearchDeployment_basic(t *testing.T) {
)
mig.SkipIfVersionBelow(t, "1.32.0") // enabled_for_search_nodes introduced in this version
resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { mig.PreCheckBasic(t) },
+ PreCheck: func() { mig.PreCheckBasic(t); mig.PreCheckOldPreviewEnv(t) },
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
diff --git a/internal/service/searchdeployment/resource_schema.go b/internal/service/searchdeployment/resource_schema.go
index 50c9fe3739..56fc0f5be8 100644
--- a/internal/service/searchdeployment/resource_schema.go
+++ b/internal/service/searchdeployment/resource_schema.go
@@ -69,7 +69,7 @@ func ResourceSchema(ctx context.Context) schema.Schema {
},
"delete_on_create_timeout": schema.BoolAttribute{
Optional: true,
- MarkdownDescription: "Flag that indicates whether to delete the search deployment if the creation times out, default is false.",
+ MarkdownDescription: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, it triggers the deletion and returns immediately without waiting for deletion to complete. When set to `false`, the timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.",
},
"encryption_at_rest_provider": schema.StringAttribute{
Computed: true,
diff --git a/internal/service/searchdeployment/resource_test.go b/internal/service/searchdeployment/resource_test.go
index e38e744daa..f210160def 100644
--- a/internal/service/searchdeployment/resource_test.go
+++ b/internal/service/searchdeployment/resource_test.go
@@ -10,11 +10,10 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+ "github.com/stretchr/testify/require"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/searchdeployment"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
- "github.com/stretchr/testify/require"
)
const (
@@ -60,6 +59,11 @@ const deleteTimeout = 30 * time.Minute
func TestAccSearchDeployment_timeoutTest(t *testing.T) {
var (
+ timeoutStrNoDeleteOnCreate = `
+ timeouts = {
+ create = "90s"
+ }
+ `
timeoutsStrShort = `
timeouts = {
create = "90s"
@@ -71,8 +75,7 @@ func TestAccSearchDeployment_timeoutTest(t *testing.T) {
projectID, clusterName = acc.ProjectIDExecutionWithCluster(t, 6)
configWithTimeout = func(timeoutsStr string) string {
normalConfig := configBasic(projectID, clusterName, "S20_HIGHCPU_NVME", 3, false)
- configWithTimeout := acc.ConfigAddResourceStr(t, normalConfig, resourceID, timeoutsStr)
- return acc.ConvertAdvancedClusterToPreviewProviderV2(t, config.PreviewProviderV2AdvancedCluster(), configWithTimeout)
+ return acc.ConfigAddResourceStr(t, normalConfig, resourceID, timeoutsStr)
}
)
resource.ParallelTest(t, resource.TestCase{
@@ -81,8 +84,17 @@ func TestAccSearchDeployment_timeoutTest(t *testing.T) {
CheckDestroy: checkDestroy,
Steps: []resource.TestStep{
{
+ Config: configWithTimeout(timeoutStrNoDeleteOnCreate),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ {
+ PreConfig: func() {
+ timeoutConfig := searchdeployment.RetryTimeConfig(deleteTimeout, 30*time.Second)
+ err := searchdeployment.WaitSearchNodeDelete(t.Context(), projectID, clusterName, acc.ConnV2().AtlasSearchApi, timeoutConfig)
+ require.NoError(t, err)
+ },
Config: configWithTimeout(timeoutsStrShort),
- ExpectError: regexp.MustCompile(cleanup.TimeoutReachedPrefix),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
},
{
PreConfig: func() {
diff --git a/internal/service/streamprocessor/model.go b/internal/service/streamprocessor/model.go
index a0bc3b2fab..f2e69afe73 100644
--- a/internal/service/streamprocessor/model.go
+++ b/internal/service/streamprocessor/model.go
@@ -5,6 +5,7 @@ import (
"encoding/json"
"github.com/hashicorp/terraform-plugin-framework-jsontypes/jsontypes"
+ "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework/diag"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/hashicorp/terraform-plugin-framework/types/basetypes"
@@ -79,7 +80,7 @@ func NewStreamProcessorUpdateReq(ctx context.Context, plan *TFStreamProcessorRSM
return streamProcessorAPIParams, nil
}
-func NewStreamProcessorWithStats(ctx context.Context, projectID, instanceName string, apiResp *admin.StreamsProcessorWithStats) (*TFStreamProcessorRSModel, diag.Diagnostics) {
+func NewStreamProcessorWithStats(ctx context.Context, projectID, instanceName string, apiResp *admin.StreamsProcessorWithStats, timeout *timeouts.Value, deleteOnCreateTimeout *types.Bool) (*TFStreamProcessorRSModel, diag.Diagnostics) {
if apiResp == nil {
return nil, diag.Diagnostics{diag.NewErrorDiagnostic("streamProcessor API response is nil", "")}
}
@@ -105,6 +106,12 @@ func NewStreamProcessorWithStats(ctx context.Context, projectID, instanceName st
State: types.StringPointerValue(&apiResp.State),
Stats: statsTF,
}
+ if timeout != nil {
+ tfModel.Timeouts = *timeout
+ }
+ if deleteOnCreateTimeout != nil {
+ tfModel.DeleteOnCreateTimeout = *deleteOnCreateTimeout
+ }
return tfModel, nil
}
diff --git a/internal/service/streamprocessor/model_test.go b/internal/service/streamprocessor/model_test.go
index 1fd71ade48..2985624b95 100644
--- a/internal/service/streamprocessor/model_test.go
+++ b/internal/service/streamprocessor/model_test.go
@@ -230,7 +230,7 @@ func TestSDKToTFModel(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
sdkModel := tc.sdkModel
- resultModel, diags := streamprocessor.NewStreamProcessorWithStats(t.Context(), projectID, instanceName, sdkModel)
+ resultModel, diags := streamprocessor.NewStreamProcessorWithStats(t.Context(), projectID, instanceName, sdkModel, nil, nil)
if diags.HasError() {
t.Fatalf("unexpected errors found: %s", diags.Errors()[0].Summary())
}
diff --git a/internal/service/streamprocessor/resource.go b/internal/service/streamprocessor/resource.go
index bc722555fe..667bca7e2d 100644
--- a/internal/service/streamprocessor/resource.go
+++ b/internal/service/streamprocessor/resource.go
@@ -10,6 +10,7 @@ import (
"github.com/hashicorp/terraform-plugin-framework/path"
"github.com/hashicorp/terraform-plugin-framework/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/cleanup"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
@@ -86,7 +87,19 @@ func (r *streamProcessorRS) Create(ctx context.Context, req resource.CreateReque
ProcessorName: processorName,
}
- streamProcessorResp, err := WaitStateTransition(ctx, streamProcessorParams, connV2.StreamsApi, []string{InitiatingState, CreatingState}, []string{CreatedState})
+ createTimeout := cleanup.ResolveTimeout(ctx, &plan.Timeouts, cleanup.OperationCreate, &resp.Diagnostics)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ streamProcessorResp, err := WaitStateTransitionWithTimeout(ctx, streamProcessorParams, connV2.StreamsApi, []string{InitiatingState, CreatingState}, []string{CreatedState}, createTimeout)
+ err = cleanup.HandleCreateTimeout(cleanup.ResolveDeleteOnCreateTimeout(plan.DeleteOnCreateTimeout), err, func(ctxCleanup context.Context) error {
+ _, err := connV2.StreamsApi.DeleteStreamProcessor(ctxCleanup, projectID, instanceName, processorName).Execute()
+ return err
+ })
if err != nil {
resp.Diagnostics.AddError("Error creating stream processor", err.Error())
return
@@ -111,7 +124,7 @@ func (r *streamProcessorRS) Create(ctx context.Context, req resource.CreateReque
}
}
- newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessorResp)
+ newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessorResp, &plan.Timeouts, &plan.DeleteOnCreateTimeout)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
@@ -140,7 +153,7 @@ func (r *streamProcessorRS) Read(ctx context.Context, req resource.ReadRequest,
return
}
- newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessor)
+ newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessor, &state.Timeouts, &state.DeleteOnCreateTimeout)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
@@ -238,7 +251,7 @@ func (r *streamProcessorRS) Update(ctx context.Context, req resource.UpdateReque
}
}
- newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessorResp)
+ newStreamProcessorModel, diags := NewStreamProcessorWithStats(ctx, projectID, instanceName, streamProcessorResp, &plan.Timeouts, &plan.DeleteOnCreateTimeout)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
diff --git a/internal/service/streamprocessor/resource_schema.go b/internal/service/streamprocessor/resource_schema.go
index 4f9fb9a54a..a180be509a 100644
--- a/internal/service/streamprocessor/resource_schema.go
+++ b/internal/service/streamprocessor/resource_schema.go
@@ -4,11 +4,13 @@ import (
"context"
"github.com/hashicorp/terraform-plugin-framework-jsontypes/jsontypes"
+ "github.com/hashicorp/terraform-plugin-framework-timeouts/resource/timeouts"
"github.com/hashicorp/terraform-plugin-framework/attr"
"github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/customplanmodifier"
)
func ResourceSchema(ctx context.Context) schema.Schema {
@@ -73,19 +75,31 @@ func ResourceSchema(ctx context.Context) schema.Schema {
Computed: true,
MarkdownDescription: "The stats associated with the stream processor. Refer to the [MongoDB Atlas Docs](https://www.mongodb.com/docs/atlas/atlas-stream-processing/manage-stream-processor/#view-statistics-of-a-stream-processor) for more information.",
},
+ "timeouts": timeouts.Attributes(ctx, timeouts.Opts{
+ Create: true,
+ }),
+ "delete_on_create_timeout": schema.BoolAttribute{
+ Optional: true,
+ PlanModifiers: []planmodifier.Bool{
+ customplanmodifier.CreateOnlyBoolPlanModifier(),
+ },
+ MarkdownDescription: "Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and a timeout occurs, the deletion is triggered and the operation returns immediately without waiting for the deletion to complete. When set to `false`, a timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.",
+ },
},
}
}
type TFStreamProcessorRSModel struct {
- InstanceName types.String `tfsdk:"instance_name"`
- Options types.Object `tfsdk:"options"`
- Pipeline jsontypes.Normalized `tfsdk:"pipeline"`
- ProcessorID types.String `tfsdk:"id"`
- ProcessorName types.String `tfsdk:"processor_name"`
- ProjectID types.String `tfsdk:"project_id"`
- State types.String `tfsdk:"state"`
- Stats types.String `tfsdk:"stats"`
+ InstanceName types.String `tfsdk:"instance_name"`
+ Options types.Object `tfsdk:"options"`
+ Pipeline jsontypes.Normalized `tfsdk:"pipeline"`
+ ProcessorID types.String `tfsdk:"id"`
+ ProcessorName types.String `tfsdk:"processor_name"`
+ ProjectID types.String `tfsdk:"project_id"`
+ State types.String `tfsdk:"state"`
+ Stats types.String `tfsdk:"stats"`
+ Timeouts timeouts.Value `tfsdk:"timeouts"`
+ DeleteOnCreateTimeout types.Bool `tfsdk:"delete_on_create_timeout"`
}
type TFOptionsModel struct {
diff --git a/internal/service/streamprocessor/resource_test.go b/internal/service/streamprocessor/resource_test.go
index 0fc8be2757..650e8cb94e 100644
--- a/internal/service/streamprocessor/resource_test.go
+++ b/internal/service/streamprocessor/resource_test.go
@@ -7,13 +7,13 @@ import (
"strings"
"testing"
+ "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/knownvalue"
"github.com/hashicorp/terraform-plugin-testing/statecheck"
"github.com/hashicorp/terraform-plugin-testing/terraform"
"github.com/stretchr/testify/assert"
- "github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -57,12 +57,12 @@ func basicTestCase(t *testing.T) *resource.TestCase {
CheckDestroy: checkDestroyStreamProcessor,
Steps: []resource.TestStep{
{
- Config: config(t, projectID, instanceName, processorName, "", randomSuffix, sampleSrcConfig, testLogDestConfig),
+ Config: config(t, projectID, instanceName, processorName, "", randomSuffix, sampleSrcConfig, testLogDestConfig, "", nil),
Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, false),
ConfigStateChecks: pluralConfigStateChecks(processorName, streamprocessor.CreatedState, instanceName, false, false),
},
{
- Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, randomSuffix, sampleSrcConfig, testLogDestConfig),
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, randomSuffix, sampleSrcConfig, testLogDestConfig, "", nil),
Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.StartedState, true, false),
ConfigStateChecks: pluralConfigStateChecks(processorName, streamprocessor.StartedState, instanceName, true, false),
},
@@ -89,7 +89,7 @@ func TestAccStreamProcessor_JSONWhiteSpaceFormat(t *testing.T) {
CheckDestroy: checkDestroyStreamProcessor,
Steps: []resource.TestStep{
{
- Config: config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, randomSuffix, sampleSrcConfigExtraSpaces, testLogDestConfig),
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, randomSuffix, sampleSrcConfigExtraSpaces, testLogDestConfig, "", nil),
Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, false),
ConfigStateChecks: pluralConfigStateChecks(processorName, streamprocessor.CreatedState, instanceName, false, false),
},
@@ -112,7 +112,7 @@ func TestAccStreamProcessor_withOptions(t *testing.T) {
CheckDestroy: checkDestroyStreamProcessor,
Steps: []resource.TestStep{
{
- Config: config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, randomSuffix, src, dest),
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.CreatedState, randomSuffix, src, dest, "", nil),
Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.CreatedState, false, true),
ConfigStateChecks: pluralConfigStateChecks(processorName, streamprocessor.CreatedState, instanceName, false, true),
},
@@ -276,7 +276,7 @@ func TestAccStreamProcessor_clusterType(t *testing.T) {
CheckDestroy: checkDestroyStreamProcessor,
Steps: []resource.TestStep{
{
- Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, randomSuffix, srcConfig, testLogDestConfig),
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, randomSuffix, srcConfig, testLogDestConfig, "", nil),
Check: composeStreamProcessorChecks(projectID, instanceName, processorName, streamprocessor.StartedState, true, false),
ConfigStateChecks: pluralConfigStateChecks(processorName, streamprocessor.StartedState, instanceName, true, false),
},
@@ -297,16 +297,38 @@ func TestAccStreamProcessor_createErrors(t *testing.T) {
CheckDestroy: checkDestroyStreamProcessor,
Steps: []resource.TestStep{
{
- Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, randomSuffix, invalidJSONConfig, testLogDestConfig),
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, randomSuffix, invalidJSONConfig, testLogDestConfig, "", nil),
ExpectError: regexp.MustCompile("Invalid JSON String Value"),
},
{
- Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, randomSuffix, sampleSrcConfig, testLogDestConfig),
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.StoppedState, randomSuffix, sampleSrcConfig, testLogDestConfig, "", nil),
ExpectError: regexp.MustCompile("When creating a stream processor, the only valid states are CREATED and STARTED"),
},
}})
}
+func TestAccStreamProcessor_createTimeoutWithDeleteOnCreate(t *testing.T) {
+ acc.SkipTestForCI(t) // Stream processor creation in tests completes too quickly to reliably force the create timeout
+ var (
+ projectID, instanceName = acc.ProjectIDExecutionWithStreamInstance(t)
+ processorName = "new-processor"
+ randomSuffix = acctest.RandString(5)
+ createTimeout = "1s"
+ deleteOnCreateTimeout = true
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroyStreamProcessor,
+ Steps: []resource.TestStep{
+ {
+ Config: config(t, projectID, instanceName, processorName, streamprocessor.StartedState, randomSuffix, sampleSrcConfig, testLogDestConfig, acc.TimeoutConfig(&createTimeout, nil, nil), &deleteOnCreateTimeout),
+ ExpectError: regexp.MustCompile("will run cleanup because delete_on_create_timeout is true"),
+ },
+ }})
+}
+
func checkExists(resourceName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[resourceName]
@@ -526,12 +548,16 @@ func composeStreamProcessorChecks(projectID, instanceName, processorName, state
return resource.ComposeAggregateTestCheckFunc(checks...)
}
-func config(t *testing.T, projectID, instanceName, processorName, state, nameSuffix string, src, dest connectionConfig) string {
+func config(t *testing.T, projectID, instanceName, processorName, state, nameSuffix string, src, dest connectionConfig, timeoutConfig string, deleteOnCreateTimeout *bool) string {
t.Helper()
stateConfig := ""
if state != "" {
stateConfig = fmt.Sprintf(`state = %[1]q`, state)
}
+ deleteOnCreateTimeoutConfig := ""
+ if deleteOnCreateTimeout != nil {
+ deleteOnCreateTimeoutConfig = fmt.Sprintf(`delete_on_create_timeout = %[1]t`, *deleteOnCreateTimeout)
+ }
connectionConfigSrc, connectionIDSrc, pipelineStepSrc := configConnection(t, projectID, instanceName, src, nameSuffix)
connectionConfigDest, connectionIDDest, pipelineStepDest := configConnection(t, projectID, instanceName, dest, nameSuffix)
@@ -581,9 +607,11 @@ func config(t *testing.T, projectID, instanceName, processorName, state, nameSuf
%[5]s
%[6]s
depends_on = [%[7]s]
+ %[8]s
+ %[9]s
}
- `, projectID, instanceName, processorName, pipeline, stateConfig, optionsStr, dependsOnStr) + otherConfig
+ `, projectID, instanceName, processorName, pipeline, stateConfig, optionsStr, dependsOnStr, timeoutConfig, deleteOnCreateTimeoutConfig) + otherConfig
}
func configConnection(t *testing.T, projectID, instanceName string, config connectionConfig, nameSuffix string) (connectionConfig, resourceID, pipelineStep string) {
diff --git a/internal/service/streamprocessor/state_transition.go b/internal/service/streamprocessor/state_transition.go
index 2bfc6c47ab..7ab2d75f4d 100644
--- a/internal/service/streamprocessor/state_transition.go
+++ b/internal/service/streamprocessor/state_transition.go
@@ -24,15 +24,21 @@ const (
const (
ErrorUpdateStateTransition = "Stream Processor must be in %s state to transition to %s state"
ErrorUpdateToCreatedState = "Stream Processor cannot transition from %s to CREATED"
+ defaultTimeout = 5 * time.Minute // big pipelines can take a while to stop due to checkpointing. By default, we prefer letting the API raise the error (~3 min) rather than exposing custom timeouts.
+ minTimeout = 3 * time.Second
)
func WaitStateTransition(ctx context.Context, requestParams *admin.GetStreamProcessorApiParams, client admin.StreamsApi, pendingStates, desiredStates []string) (*admin.StreamsProcessorWithStats, error) {
+ return WaitStateTransitionWithTimeout(ctx, requestParams, client, pendingStates, desiredStates, defaultTimeout)
+}
+
+func WaitStateTransitionWithTimeout(ctx context.Context, requestParams *admin.GetStreamProcessorApiParams, client admin.StreamsApi, pendingStates, desiredStates []string, timeout time.Duration) (*admin.StreamsProcessorWithStats, error) {
stateConf := &retry.StateChangeConf{
Pending: pendingStates,
Target: desiredStates,
Refresh: refreshFunc(ctx, requestParams, client),
- Timeout: 5 * time.Minute, // big pipelines can take a while to stop due to checkpointing. We prefer the API to raise the error (~ 3min) than having to expose custom timeouts.
- MinTimeout: 3 * time.Second,
+ Timeout: timeout,
+ MinTimeout: minTimeout,
Delay: 0,
}
diff --git a/internal/service/team/data_source_team.go b/internal/service/team/data_source_team.go
index 927a4c5eed..7a8a720d3f 100644
--- a/internal/service/team/data_source_team.go
+++ b/internal/service/team/data_source_team.go
@@ -4,14 +4,17 @@ import (
"context"
"errors"
"fmt"
+ "net/http"
admin20241113 "go.mongodb.org/atlas-sdk/v20241113005/admin"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/dsschema"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
@@ -36,25 +39,26 @@ func DataSource() *schema.Resource {
ConflictsWith: []string{"team_id"},
},
"usernames": {
- Type: schema.TypeSet,
- Computed: true,
+ Type: schema.TypeSet,
+ Computed: true,
+ Deprecated: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "parameter", "data.mongodbatlas_team.users", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
+ "users": dsschema.DSOrgUsersSchema(),
},
}
}
-func LegacyTeamsDataSource() *schema.Resource {
- res := DataSource()
- res.DeprecationMessage = fmt.Sprintf(constant.DeprecationDataSourceByDateWithReplacement, "November 2024", "mongodbatlas_team")
- return res
-}
-
func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
var (
- connV2 = meta.(*config.MongoDBClient).AtlasV220241113
+ /* Note: We continue using the legacy API for the usernames endpoint due to behavioral differences
+ between API versions: the newer SDK returns both pending and active users,
+ while the legacy API returns only active users. */
+
+ connV220241113 = meta.(*config.MongoDBClient).AtlasV220241113
+ connV2 = meta.(*config.MongoDBClient).AtlasV2
orgID = d.Get("org_id").(string)
teamID, teamIDOk = d.GetOk("team_id")
name, nameOk = d.GetOk("name")
@@ -68,9 +72,9 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
}
if teamIDOk {
- team, _, err = connV2.TeamsApi.GetTeamById(ctx, orgID, teamID.(string)).Execute()
+ team, _, err = connV220241113.TeamsApi.GetTeamById(ctx, orgID, teamID.(string)).Execute()
} else {
- team, _, err = connV2.TeamsApi.GetTeamByName(ctx, orgID, name.(string)).Execute()
+ team, _, err = connV220241113.TeamsApi.GetTeamByName(ctx, orgID, name.(string)).Execute()
}
if err != nil {
@@ -85,7 +89,7 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(fmt.Errorf(errorTeamSetting, "name", d.Id(), err))
}
- teamUsers, err := listAllTeamUsers(ctx, connV2, orgID, team.GetId())
+ teamUsers, err := listAllTeamUsers(ctx, connV220241113, orgID, team.GetId())
if err != nil {
return diag.FromErr(fmt.Errorf(errorTeamRead, err))
@@ -100,6 +104,15 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
return diag.FromErr(fmt.Errorf(errorTeamSetting, "usernames", d.Id(), err))
}
+ users, err := listAllTeamUsersDS(ctx, connV2, orgID, team.GetId())
+ if err != nil {
+ return diag.FromErr(fmt.Errorf(errorTeamRead, err))
+ }
+
+ if err := d.Set("users", conversion.FlattenUsers(users)); err != nil {
+ return diag.FromErr(fmt.Errorf("error setting `users`: %s", err))
+ }
+
d.SetId(conversion.EncodeStateID(map[string]string{
"org_id": orgID,
"id": team.GetId(),
@@ -107,3 +120,11 @@ func dataSourceRead(ctx context.Context, d *schema.ResourceData, meta any) diag.
return nil
}
+
+func listAllTeamUsersDS(ctx context.Context, conn *admin.APIClient, orgID, teamID string) ([]admin.OrgUserResponse, error) {
+ return dsschema.AllPages(ctx, func(ctx context.Context, pageNum int) (dsschema.PaginateResponse[admin.OrgUserResponse], *http.Response, error) {
+ request := conn.MongoDBCloudUsersApi.ListTeamUsers(ctx, orgID, teamID)
+ request = request.PageNum(pageNum)
+ return request.Execute()
+ })
+}
diff --git a/internal/service/team/data_source_team_test.go b/internal/service/team/data_source_team_test.go
index 0eac53e4a5..c95da30d7e 100644
--- a/internal/service/team/data_source_team_test.go
+++ b/internal/service/team/data_source_team_test.go
@@ -29,6 +29,11 @@ func TestAccConfigDSTeam_basic(t *testing.T) {
resource.TestCheckResourceAttrSet(dataSourceName, "team_id"),
resource.TestCheckResourceAttr(dataSourceName, "name", name),
resource.TestCheckResourceAttr(dataSourceName, "usernames.#", "1"),
+ resource.TestCheckResourceAttrSet(dataSourceName, "users.0.team_ids.0"),
+ resource.TestCheckResourceAttrSet(dataSourceName, "users.0.roles.0.project_role_assignments.#"),
+ resource.TestCheckResourceAttrWith(dataSourceName, "users.0.username", acc.IsUsername()),
+ resource.TestCheckResourceAttrWith(dataSourceName, "users.0.last_auth", acc.IsTimestamp()),
+ resource.TestCheckResourceAttrWith(dataSourceName, "users.0.created_at", acc.IsTimestamp()),
),
},
},
@@ -61,12 +66,38 @@ func TestAccConfigDSTeamByName_basic(t *testing.T) {
})
}
+func TestAccConfigDSTeam_NoUsers(t *testing.T) {
+ var (
+ dataSourceName = "data.mongodbatlas_team.test3"
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ name = acc.RandomName()
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckAtlasUsername(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: acc.CheckDestroyTeam,
+ Steps: []resource.TestStep{
+ {
+ Config: dataSourceConfigNoUsers(orgID, name),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ resource.TestCheckResourceAttrSet(dataSourceName, "org_id"),
+ resource.TestCheckResourceAttrSet(dataSourceName, "team_id"),
+ resource.TestCheckResourceAttr(dataSourceName, "name", name),
+ resource.TestCheckResourceAttr(dataSourceName, "usernames.#", "0"),
+ resource.TestCheckResourceAttr(dataSourceName, "users.#", "0"),
+ ),
+ },
+ },
+ })
+}
+
func dataSourceConfigBasic(orgID, name, username string) string {
return fmt.Sprintf(`
resource "mongodbatlas_team" "test" {
- org_id = "%s"
- name = "%s"
- usernames = ["%s"]
+ org_id = %[1]q
+ name = %[2]q
+ usernames = [%[3]q]
}
data "mongodbatlas_team" "test" {
@@ -80,9 +111,9 @@ func dataSourceConfigBasic(orgID, name, username string) string {
func dataSourceConfigBasicByName(orgID, name, username string) string {
return fmt.Sprintf(`
resource "mongodbatlas_team" "test" {
- org_id = "%s"
- name = "%s"
- usernames = ["%s"]
+ org_id = %[1]q
+ name = %[2]q
+ usernames = [%[3]q]
}
data "mongodbatlas_team" "test2" {
@@ -91,3 +122,19 @@ func dataSourceConfigBasicByName(orgID, name, username string) string {
}
`, orgID, name, username)
}
+
+func dataSourceConfigNoUsers(orgID, name string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "test" {
+ org_id = %[1]q
+ name = %[2]q
+ usernames = []
+ }
+
+ data "mongodbatlas_team" "test3" {
+ org_id = mongodbatlas_team.test.org_id
+ team_id = mongodbatlas_team.test.team_id
+ }
+
+ `, orgID, name)
+}
diff --git a/internal/service/team/resource_team.go b/internal/service/team/resource_team.go
index 56366a2e3e..9aa6cb96a6 100644
--- a/internal/service/team/resource_team.go
+++ b/internal/service/team/resource_team.go
@@ -55,8 +55,10 @@ func Resource() *schema.Resource {
Required: true,
},
"usernames": {
- Type: schema.TypeSet,
- Required: true,
+ Type: schema.TypeSet,
+ Optional: true,
+ Computed: true,
+ Deprecated: fmt.Sprintf(constant.DeprecationNextMajorWithReplacementGuide, "parameter", "mongodbatlas_cloud_user_team_assignment", "https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/atlas-user-management"),
Elem: &schema.Schema{
Type: schema.TypeString,
},
@@ -65,22 +67,20 @@ func Resource() *schema.Resource {
}
}
-func LegacyTeamsResource() *schema.Resource {
- res := Resource()
- res.DeprecationMessage = fmt.Sprintf(constant.DeprecationResourceByDateWithReplacement, "November 2024", "mongodbatlas_team")
- return res
-}
-
func resourceCreate(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
connV2 := meta.(*config.MongoDBClient).AtlasV220241113
orgID := d.Get("org_id").(string)
usernames := conversion.ExpandStringListFromSetSchema(d.Get("usernames").(*schema.Set))
- teamsResp, _, err := connV2.TeamsApi.CreateTeam(ctx, orgID,
- &admin20241113.Team{
- Name: d.Get("name").(string),
- Usernames: usernames,
- }).Execute()
+ createTeamReq := &admin20241113.Team{
+ Name: d.Get("name").(string),
+ }
+
+ if len(usernames) > 0 {
+ createTeamReq.Usernames = usernames
+ }
+
+ teamsResp, _, err := connV2.TeamsApi.CreateTeam(ctx, orgID, createTeamReq).Execute()
if err != nil {
return diag.FromErr(fmt.Errorf(errorTeamCreate, err))
}
diff --git a/internal/service/team/resource_team_migration_test.go b/internal/service/team/resource_team_migration_test.go
index b1bb03d412..0867cb5c7d 100644
--- a/internal/service/team/resource_team_migration_test.go
+++ b/internal/service/team/resource_team_migration_test.go
@@ -5,6 +5,7 @@ import (
"testing"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
)
@@ -15,7 +16,8 @@ func TestMigConfigTeams_basic(t *testing.T) {
orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
username = os.Getenv("MONGODB_ATLAS_USERNAME")
name = acc.RandomName()
- config = configBasic(orgID, name, []string{username})
+ usernames = []string{username}
+ config = configBasic(orgID, name, &usernames)
)
resource.Test(t, resource.TestCase{
@@ -36,3 +38,45 @@ func TestMigConfigTeams_basic(t *testing.T) {
},
})
}
+
+func TestMigConfigTeams_usernamesDeprecation(t *testing.T) {
+ var (
+ resourceName = "mongodbatlas_team.test"
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ username = os.Getenv("MONGODB_ATLAS_USERNAME")
+ name = acc.RandomName()
+ usernames = []string{username}
+ )
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { mig.PreCheckAtlasUsername(t) },
+ CheckDestroy: acc.CheckDestroyTeam,
+ Steps: []resource.TestStep{
+ {
+ ExternalProviders: mig.ExternalProviders(),
+ Config: configBasic(orgID, name, &usernames),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+ resource.TestCheckResourceAttr(resourceName, "name", name),
+ resource.TestCheckResourceAttr(resourceName, "usernames.#", "1"),
+ resource.TestCheckTypeSetElemAttr(resourceName, "usernames.*", username),
+ ),
+ },
+ mig.TestStepCheckEmptyPlan(configBasic(orgID, name, &usernames)),
+ {
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: configBasic(orgID, name, nil),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+ resource.TestCheckResourceAttr(resourceName, "name", name),
+ // usernames should still be present in state (computed) but not in config
+ resource.TestCheckResourceAttr(resourceName, "usernames.#", "1"),
+ resource.TestCheckTypeSetElemAttr(resourceName, "usernames.*", username),
+ ),
+ },
+ mig.TestStepCheckEmptyPlan(configBasic(orgID, name, nil)),
+ },
+ })
+}
diff --git a/internal/service/team/resource_team_test.go b/internal/service/team/resource_team_test.go
index 5e8dc98514..40bc2f6657 100644
--- a/internal/service/team/resource_team_test.go
+++ b/internal/service/team/resource_team_test.go
@@ -10,10 +10,58 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
+func TestAccConfigRSTeam_basicNoUsernames(t *testing.T) {
+ var (
+ resourceName = "mongodbatlas_team.test"
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ name = acc.RandomName()
+ updatedName = acc.RandomName()
+ )
+
+ resource.ParallelTest(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: acc.CheckDestroyTeam,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(orgID, name, nil),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+ resource.TestCheckResourceAttr(resourceName, "name", name),
+ ),
+ },
+ {
+ Config: configBasic(orgID, updatedName, nil),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+ resource.TestCheckResourceAttr(resourceName, "name", updatedName),
+ ),
+ },
+ {
+ Config: configBasic(orgID, updatedName, nil),
+ Check: resource.ComposeAggregateTestCheckFunc(
+ checkExists(resourceName),
+ resource.TestCheckResourceAttrSet(resourceName, "org_id"),
+ resource.TestCheckResourceAttr(resourceName, "name", updatedName),
+ ),
+ },
+ {
+ ResourceName: resourceName,
+ ImportStateIdFunc: importStateIDFunc(resourceName),
+ ImportState: true,
+ ImportStateVerify: true,
+ },
+ },
+ })
+}
+
func TestAccConfigRSTeam_basic(t *testing.T) {
var (
resourceName = "mongodbatlas_team.test"
@@ -29,7 +77,7 @@ func TestAccConfigRSTeam_basic(t *testing.T) {
CheckDestroy: acc.CheckDestroyTeam,
Steps: []resource.TestStep{
{
- Config: configBasic(orgID, name, usernames),
+ Config: configBasic(orgID, name, &usernames),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttrSet(resourceName, "org_id"),
@@ -38,7 +86,7 @@ func TestAccConfigRSTeam_basic(t *testing.T) {
),
},
{
- Config: configBasic(orgID, updatedName, usernames),
+ Config: configBasic(orgID, updatedName, &usernames),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttrSet(resourceName, "org_id"),
@@ -47,7 +95,7 @@ func TestAccConfigRSTeam_basic(t *testing.T) {
),
},
{
- Config: configBasic(orgID, updatedName, usernames),
+ Config: configBasic(orgID, updatedName, &usernames),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttrSet(resourceName, "org_id"),
@@ -83,7 +131,7 @@ func TestAccConfigRSTeam_updatingUsernames(t *testing.T) {
CheckDestroy: acc.CheckDestroyTeam,
Steps: []resource.TestStep{
{
- Config: configBasic(orgID, name, usernames),
+ Config: configBasic(orgID, name, &usernames),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttrSet(resourceName, "org_id"),
@@ -93,7 +141,7 @@ func TestAccConfigRSTeam_updatingUsernames(t *testing.T) {
),
},
{
- Config: configBasic(orgID, name, updatedSingleUsername),
+ Config: configBasic(orgID, name, &updatedSingleUsername),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttrSet(resourceName, "org_id"),
@@ -103,7 +151,7 @@ func TestAccConfigRSTeam_updatingUsernames(t *testing.T) {
),
},
{
- Config: configBasic(orgID, name, updatedBothUsername),
+ Config: configBasic(orgID, name, &updatedBothUsername),
Check: resource.ComposeAggregateTestCheckFunc(
checkExists(resourceName),
resource.TestCheckResourceAttrSet(resourceName, "org_id"),
@@ -117,36 +165,6 @@ func TestAccConfigRSTeam_updatingUsernames(t *testing.T) {
})
}
-func TestAccConfigRSTeam_legacyName(t *testing.T) {
- var (
- resourceName = "mongodbatlas_teams.test"
- dataSourceName = "data.mongodbatlas_teams.test"
- orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
- usernames = []string{os.Getenv("MONGODB_ATLAS_USERNAME")}
- name = acc.RandomName()
- )
-
- resource.ParallelTest(t, resource.TestCase{
- PreCheck: func() { acc.PreCheckAtlasUsername(t) },
- ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
- CheckDestroy: acc.CheckDestroyTeam,
- Steps: []resource.TestStep{
- {
- Config: configBasicLegacyNames(orgID, name, usernames),
- Check: resource.ComposeAggregateTestCheckFunc(
- checkExists(resourceName),
- resource.TestCheckResourceAttrSet(resourceName, "org_id"),
- resource.TestCheckResourceAttr(resourceName, "name", name),
- resource.TestCheckResourceAttr(resourceName, "usernames.#", "1"),
- resource.TestCheckResourceAttrSet(dataSourceName, "org_id"),
- resource.TestCheckResourceAttr(dataSourceName, "name", name),
- resource.TestCheckResourceAttr(dataSourceName, "usernames.#", "1"),
- ),
- },
- },
- })
-}
-
func checkExists(resourceName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[resourceName]
@@ -179,30 +197,19 @@ func importStateIDFunc(resourceName string) resource.ImportStateIdFunc {
}
}
-func configBasic(orgID, name string, usernames []string) string {
- return fmt.Sprintf(`
- resource "mongodbatlas_team" "test" {
- org_id = "%s"
- name = "%s"
- usernames = %s
- }`, orgID, name,
- strings.ReplaceAll(fmt.Sprintf("%+q", usernames), " ", ","),
- )
-}
+func configBasic(orgID, name string, usernames *[]string) string {
+ var usernamesAttr string
+ if usernames != nil && len(*usernames) > 0 {
+ usernamesStr := `"` + strings.Join(*usernames, `", "`) + `"`
+ usernamesAttr = fmt.Sprintf(`
+ usernames = [%s]`, usernamesStr)
+ }
-func configBasicLegacyNames(orgID, name string, usernames []string) string {
return fmt.Sprintf(`
- resource "mongodbatlas_teams" "test" {
- org_id = %[1]q
- name = %[2]q
- usernames = %[3]s
- }
-
- data "mongodbatlas_teams" "test" {
- org_id = %[1]q
- name = mongodbatlas_teams.test.name
- }
- `, orgID, name,
- strings.ReplaceAll(fmt.Sprintf("%+q", usernames), " ", ","),
- )
+resource "mongodbatlas_team" "test" {
+ org_id = "%s"
+ name = "%s"
+
+ %s
+}`, orgID, name, usernamesAttr)
}
diff --git a/internal/service/teamprojectassignment/data_source.go b/internal/service/teamprojectassignment/data_source.go
new file mode 100644
index 0000000000..3e3bc25c2e
--- /dev/null
+++ b/internal/service/teamprojectassignment/data_source.go
@@ -0,0 +1,57 @@
+package teamprojectassignment
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/datasource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+var _ datasource.DataSource = &teamProjectAssignmentDS{}
+var _ datasource.DataSourceWithConfigure = &teamProjectAssignmentDS{}
+
+func DataSource() datasource.DataSource {
+ return &teamProjectAssignmentDS{
+ DSCommon: config.DSCommon{
+ DataSourceName: resourceName,
+ },
+ }
+}
+
+type teamProjectAssignmentDS struct {
+ config.DSCommon
+}
+
+func (d *teamProjectAssignmentDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
+ resp.Schema = dataSourceSchema()
+}
+
+func (d *teamProjectAssignmentDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
+ var state TFModel
+ resp.Diagnostics.Append(req.Config.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := d.Client.AtlasV2
+ projectID := state.ProjectId.ValueString()
+ teamID := state.TeamId.ValueString()
+
+ apiResp, httpResp, err := connV2.TeamsApi.GetGroupTeam(ctx, projectID, teamID).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ // data sources have no prior state to remove; surface not-found as an error
+ resp.Diagnostics.AddError(errorFetchingResource, err.Error())
+ return
+ }
+ resp.Diagnostics.AddError(errorFetchingResource, err.Error())
+ return
+ }
+
+ newTeamProjectAssignmentModel, diags := NewTFModel(ctx, apiResp, projectID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newTeamProjectAssignmentModel)...)
+}
diff --git a/internal/service/teamprojectassignment/main_test.go b/internal/service/teamprojectassignment/main_test.go
new file mode 100644
index 0000000000..4f2017b904
--- /dev/null
+++ b/internal/service/teamprojectassignment/main_test.go
@@ -0,0 +1,15 @@
+package teamprojectassignment_test
+
+import (
+ "os"
+ "testing"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+)
+
+func TestMain(m *testing.M) {
+ cleanup := acc.SetupSharedResources()
+ exitCode := m.Run()
+ cleanup()
+ os.Exit(exitCode)
+}
diff --git a/internal/service/teamprojectassignment/model.go b/internal/service/teamprojectassignment/model.go
new file mode 100644
index 0000000000..fae3a72d85
--- /dev/null
+++ b/internal/service/teamprojectassignment/model.go
@@ -0,0 +1,47 @@
+package teamprojectassignment
+
+import (
+ "context"
+
+ "github.com/hashicorp/terraform-plugin-framework/diag"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+func NewTFModel(ctx context.Context, apiResp *admin.TeamRole, projectID string) (*TFModel, diag.Diagnostics) {
+ diags := diag.Diagnostics{}
+
+ if apiResp == nil {
+ return nil, diags
+ }
+
+ roleNames := conversion.TFSetValueOrNull(ctx, apiResp.RoleNames, types.StringType)
+
+ return &TFModel{
+ ProjectId: types.StringValue(projectID),
+ TeamId: types.StringPointerValue(apiResp.TeamId),
+ RoleNames: roleNames,
+ }, diags
+}
+
+func buildTeamRole(ctx context.Context, plan *TFModel) *admin.TeamRole {
+ roleNames := []string{}
+ if !plan.RoleNames.IsNull() && !plan.RoleNames.IsUnknown() {
+ roleNames = conversion.TypesSetToString(ctx, plan.RoleNames)
+ }
+
+ return &admin.TeamRole{
+ TeamId: plan.TeamId.ValueStringPointer(),
+ RoleNames: &roleNames,
+ }
+}
+
+func NewAtlasReq(ctx context.Context, plan *TFModel) (*[]admin.TeamRole, diag.Diagnostics) {
+ teamRole := buildTeamRole(ctx, plan)
+ return &[]admin.TeamRole{*teamRole}, nil
+}
+
+func NewAtlasUpdateReq(ctx context.Context, plan *TFModel) (*admin.TeamRole, diag.Diagnostics) {
+ return buildTeamRole(ctx, plan), nil
+}
diff --git a/internal/service/teamprojectassignment/model_test.go b/internal/service/teamprojectassignment/model_test.go
new file mode 100644
index 0000000000..af12d99521
--- /dev/null
+++ b/internal/service/teamprojectassignment/model_test.go
@@ -0,0 +1,181 @@
+package teamprojectassignment_test
+
+import (
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-framework/types"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/teamprojectassignment"
+ "github.com/stretchr/testify/assert"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+const (
+ testProjectID = "project-123"
+ testTeamID = "team-123"
+)
+
+var (
+ testProjectRoles = []string{"PROJECT_OWNER", "PROJECT_READ_ONLY", "PROJECT_MEMBER"}
+)
+
+type sdkToTFModelTestCase struct {
+ SDKResp *admin.TeamRole
+ expectedTFModel *teamprojectassignment.TFModel
+}
+
+func TestTeamProjectAssignmentSDKToTFModel(t *testing.T) {
+ ctx := t.Context()
+
+ fullResp := &admin.TeamRole{
+ TeamId: admin.PtrString(testTeamID),
+ RoleNames: &testProjectRoles,
+ }
+
+ expectedRoles, _ := types.SetValueFrom(ctx, types.StringType, testProjectRoles)
+ expectedFullModel := &teamprojectassignment.TFModel{
+ ProjectId: types.StringValue(testProjectID),
+ TeamId: types.StringValue(testTeamID),
+ RoleNames: expectedRoles,
+ }
+
+ fullNilResp := &admin.TeamRole{
+ TeamId: admin.PtrString(""),
+ RoleNames: nil,
+ }
+
+ expectedNilModel := &teamprojectassignment.TFModel{
+ ProjectId: types.StringValue(testProjectID),
+ TeamId: types.StringValue(""),
+ RoleNames: types.SetNull(types.StringType),
+ }
+
+ testCases := map[string]sdkToTFModelTestCase{
+ "Complete SDK response": {
+ SDKResp: fullResp,
+ expectedTFModel: expectedFullModel,
+ },
+ "nil SDK response": {
+ SDKResp: nil,
+ expectedTFModel: nil,
+ },
+ "Empty SDK response": {
+ SDKResp: fullNilResp,
+ expectedTFModel: expectedNilModel,
+ },
+ }
+
+ for testName, tc := range testCases {
+ t.Run(testName, func(t *testing.T) {
+ resultModel, diags := teamprojectassignment.NewTFModel(t.Context(), tc.SDKResp, testProjectID)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+ assert.Equal(t, tc.expectedTFModel, resultModel, "TFModel did not match expected")
+ })
+ }
+}
+
+func TestNewAtlasReq(t *testing.T) {
+ ctx := t.Context()
+
+ roles, _ := types.SetValueFrom(ctx, types.StringType, testProjectRoles)
+ testCases := map[string]struct {
+ plan *teamprojectassignment.TFModel
+ expected *[]admin.TeamRole
+ }{
+ "Complete TF state": {
+ plan: &teamprojectassignment.TFModel{
+ ProjectId: types.StringValue(testProjectID),
+ TeamId: types.StringValue(testTeamID),
+ RoleNames: roles,
+ },
+ expected: &[]admin.TeamRole{
+ {
+ TeamId: admin.PtrString(testTeamID),
+ RoleNames: &testProjectRoles,
+ },
+ },
+ },
+ "No roles": {
+ plan: &teamprojectassignment.TFModel{
+ ProjectId: types.StringValue(testProjectID),
+ TeamId: types.StringValue(testTeamID),
+ RoleNames: types.SetNull(types.StringType),
+ },
+ expected: &[]admin.TeamRole{
+ {
+ TeamId: admin.PtrString(testTeamID),
+ RoleNames: &[]string{},
+ },
+ },
+ },
+ }
+
+ for testName, tc := range testCases {
+ t.Run(testName, func(t *testing.T) {
+ apiReqResult, diags := teamprojectassignment.NewAtlasReq(ctx, tc.plan)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+
+ assert.Len(t, *apiReqResult, len(*tc.expected), "slice lengths don't match")
+
+ for i := range *tc.expected {
+ expectedItem := (*tc.expected)[i]
+ actualItem := (*apiReqResult)[i]
+
+ assert.Equal(t, *expectedItem.TeamId, *actualItem.TeamId, "TeamId values don't match")
+
+ if expectedItem.RoleNames == nil {
+ assert.Nil(t, actualItem.RoleNames, "expected RoleNames to be nil")
+ } else {
+ assert.Equal(t, *expectedItem.RoleNames, *actualItem.RoleNames, "RoleNames values don't match")
+ }
+ }
+ })
+ }
+}
+
+func TestNewAtlasUpdateReq(t *testing.T) {
+ ctx := t.Context()
+
+ roles, _ := types.SetValueFrom(ctx, types.StringType, testProjectRoles)
+
+ testCases := map[string]struct {
+ plan *teamprojectassignment.TFModel
+ expected *admin.TeamRole
+ }{
+ "Complete TF state": {
+ plan: &teamprojectassignment.TFModel{
+ ProjectId: types.StringValue(testProjectID),
+ TeamId: types.StringValue(testTeamID),
+ RoleNames: roles,
+ },
+ expected: &admin.TeamRole{
+ TeamId: admin.PtrString(testTeamID),
+ RoleNames: &testProjectRoles,
+ },
+ },
+ "No roles": {
+ plan: &teamprojectassignment.TFModel{
+ ProjectId: types.StringValue(testProjectID),
+ TeamId: types.StringValue(testTeamID),
+ RoleNames: types.SetNull(types.StringType),
+ },
+ expected: &admin.TeamRole{
+ TeamId: admin.PtrString(testTeamID),
+ RoleNames: &[]string{},
+ },
+ },
+ }
+ for testName, tc := range testCases {
+ t.Run(testName, func(t *testing.T) {
+ apiReqResult, diags := teamprojectassignment.NewAtlasUpdateReq(ctx, tc.plan)
+ assert.False(t, diags.HasError(), "expected no diagnostics")
+
+ assert.Equal(t, *tc.expected.TeamId, *apiReqResult.TeamId, "TeamId values don't match")
+
+ if tc.expected.RoleNames == nil {
+ assert.Nil(t, apiReqResult.RoleNames, "expected RoleNames to be nil")
+ } else {
+ assert.Equal(t, *tc.expected.RoleNames, *apiReqResult.RoleNames, "RoleNames values don't match")
+ }
+ })
+ }
+}
diff --git a/internal/service/teamprojectassignment/resource.go b/internal/service/teamprojectassignment/resource.go
new file mode 100644
index 0000000000..3271ed171f
--- /dev/null
+++ b/internal/service/teamprojectassignment/resource.go
@@ -0,0 +1,186 @@
+package teamprojectassignment
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/hashicorp/terraform-plugin-framework/path"
+ "github.com/hashicorp/terraform-plugin-framework/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/validate"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
+)
+
+const (
+ resourceName = "team_project_assignment"
+ errorFetchingResource = "error fetching resource"
+ invalidImportID = "invalid import ID format"
+ errorAssignment = "error assigning Team to ProjectID (%s):"
+ errorUpdate = "error updating TeamID(%s) in ProjectID(%s):"
+ errorDelete = "error deleting TeamID(%s) from ProjectID(%s):"
+)
+
+var _ resource.ResourceWithConfigure = &rs{}
+var _ resource.ResourceWithImportState = &rs{}
+
+func Resource() resource.Resource {
+ return &rs{
+ RSCommon: config.RSCommon{
+ ResourceName: resourceName,
+ },
+ }
+}
+
+type rs struct {
+ config.RSCommon
+}
+
+func (r *rs) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
+ resp.Schema = resourceSchema()
+ conversion.UpdateSchemaDescription(&resp.Schema)
+}
+
+func (r *rs) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
+ var plan TFModel
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := plan.ProjectId.ValueString()
+ teamID := plan.TeamId.ValueString()
+
+ teamProjectReq, diags := NewAtlasReq(ctx, &plan)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+
+ /*
+ NOTE: API returns all teams in the project instead of just the assigned team,
+ requiring a separate GET call. Issue has been reported for future fix (CLOUDP-335018).
+ */
+ _, _, err := connV2.TeamsApi.AddGroupTeams(ctx, projectID, teamProjectReq).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf(errorAssignment, projectID), err.Error())
+ return
+ }
+
+ apiResp, _, err := connV2.TeamsApi.GetGroupTeam(ctx, projectID, teamID).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf(errorAssignment, projectID), err.Error())
+ return
+ }
+
+ newTeamProjectAssignmentModel, diags := NewTFModel(ctx, apiResp, projectID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newTeamProjectAssignmentModel)...)
+}
+
+func (r *rs) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
+ var state TFModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := state.ProjectId.ValueString()
+ teamID := state.TeamId.ValueString()
+
+ apiResp, httpResp, err := connV2.TeamsApi.GetGroupTeam(ctx, projectID, teamID).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+ resp.Diagnostics.AddError(errorFetchingResource, err.Error())
+ return
+ }
+
+ newTeamProjectAssignmentModel, diags := NewTFModel(ctx, apiResp, projectID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newTeamProjectAssignmentModel)...)
+}
+
+func (r *rs) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
+ var plan TFModel
+ resp.Diagnostics.Append(req.Plan.Get(ctx, &plan)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := plan.ProjectId.ValueString()
+ teamID := plan.TeamId.ValueString()
+
+ updateReq, diags := NewAtlasUpdateReq(ctx, &plan)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ /*
+ NOTE: API returns all teams in the project instead of just the updated team,
+ requiring a separate GET call. Issue has been reported for future fix (CLOUDP-335018).
+ */
+ _, _, err := connV2.TeamsApi.UpdateGroupTeam(ctx, projectID, teamID, updateReq).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf(errorUpdate, teamID, projectID), err.Error())
+ return
+ }
+
+ apiResp, _, err := connV2.TeamsApi.GetGroupTeam(ctx, projectID, teamID).Execute()
+ if err != nil {
+ resp.Diagnostics.AddError(fmt.Sprintf(errorUpdate, teamID, projectID), err.Error())
+ return
+ }
+
+ newTeamProjectAssignmentModel, diags := NewTFModel(ctx, apiResp, projectID)
+ if diags.HasError() {
+ resp.Diagnostics.Append(diags...)
+ return
+ }
+ resp.Diagnostics.Append(resp.State.Set(ctx, newTeamProjectAssignmentModel)...)
+}
+
+func (r *rs) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
+ var state *TFModel
+ resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
+ if resp.Diagnostics.HasError() {
+ return
+ }
+
+ connV2 := r.Client.AtlasV2
+ projectID := state.ProjectId.ValueString()
+ teamID := state.TeamId.ValueString()
+
+ httpResp, err := connV2.TeamsApi.RemoveGroupTeam(ctx, projectID, teamID).Execute()
+ if err != nil {
+ if validate.StatusNotFound(httpResp) {
+ resp.State.RemoveResource(ctx)
+ return
+ }
+ resp.Diagnostics.AddError(fmt.Sprintf(errorDelete, teamID, projectID), err.Error())
+ return
+ }
+}
+
+func (r *rs) ImportState(ctx context.Context, req resource.ImportStateRequest, resp *resource.ImportStateResponse) {
+ importID := req.ID
+ ok, parts := conversion.ImportSplit(req.ID, 2)
+ if !ok {
+ resp.Diagnostics.AddError(invalidImportID, "expected 'project_id/team_id', got: "+importID)
+ return
+ }
+ projectID, teamID := parts[0], parts[1]
+
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("project_id"), projectID)...)
+ resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("team_id"), teamID)...)
+}
diff --git a/internal/service/teamprojectassignment/resource_migration_test.go b/internal/service/teamprojectassignment/resource_migration_test.go
new file mode 100644
index 0000000000..8df6bcf39e
--- /dev/null
+++ b/internal/service/teamprojectassignment/resource_migration_test.go
@@ -0,0 +1,153 @@
+package teamprojectassignment_test
+
+import (
+ "fmt"
+ "os"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
+)
+
+const (
+ resourceProjectName = "mongodbatlas_project.migration_path_project1"
+ resourceAssignmentName1 = "mongodbatlas_team_project_assignment.team1"
+ resourceAssignmentName2 = "mongodbatlas_team_project_assignment.team2"
+)
+
+func TestMigTeamProjectAssignment_basic(t *testing.T) {
+ mig.SkipIfVersionBelow(t, "2.0.0") // version when this resource was first released
+ mig.CreateAndRunTest(t, basicTestCase(t))
+}
+
+func TestMigTeamProjectAssignment_migrationJourney(t *testing.T) {
+ var (
+ orgID = os.Getenv("MONGODB_ATLAS_ORG_ID")
+ projectName = acc.RandomProjectName()
+ teamName1 = acc.RandomName()
+ teamName2 = acc.RandomName()
+ )
+
+ resource.Test(t, resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ // Step 1: Create project with `teams`
+ ExternalProviders: mig.ExternalProviders(),
+ Config: originalConfigFirst(projectName, orgID, teamName1, teamName2),
+ Check: resource.ComposeTestCheckFunc(
+ resource.TestCheckResourceAttr(resourceProjectName, "name", projectName),
+ resource.TestCheckResourceAttr(resourceProjectName, "teams.#", "2"),
+ ),
+ },
+ {
+ // Step 2: Ignore `teams` attribute & import new resources
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ Config: ignoreTeamsImportConfigSecond(projectName, orgID, teamName1, teamName2), // expects 2 imports in the plan
+ Check: secondChecks(),
+ },
+ mig.TestStepCheckEmptyPlan(ignoreTeamsImportConfigSecond(projectName, orgID, teamName1, teamName2)),
+ },
+ })
+}
+
+func originalConfigFirst(projectName, orgID, teamName1, teamName2 string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "team1" {
+ name = %[1]q
+ org_id = %[3]q
+ usernames = []
+ }
+
+ resource "mongodbatlas_team" "team2" {
+ name = %[2]q
+ org_id = %[3]q
+ usernames = []
+ }
+
+ locals {
+ team_map = {
+ (mongodbatlas_team.team1.team_id) = ["GROUP_OWNER"]
+ (mongodbatlas_team.team2.team_id) = ["GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_WRITE"]
+ }
+ }
+
+ resource "mongodbatlas_project" "migration_path_project1" {
+ name = %[4]q
+ org_id = %[3]q
+
+ dynamic "teams" {
+ for_each = local.team_map
+ content {
+ team_id = teams.key
+ role_names = teams.value
+ }
+
+ }
+ }`, teamName1, teamName2, orgID, projectName)
+}
+
+func ignoreTeamsImportConfigSecond(projectName, orgID, teamName1, teamName2 string) string {
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "team1" {
+ name = %[1]q
+ org_id = %[3]q
+ }
+
+ resource "mongodbatlas_team" "team2" {
+ name = %[2]q
+ org_id = %[3]q
+ }
+
+ locals {
+ team_map = {
+ (mongodbatlas_team.team1.team_id) = ["GROUP_OWNER"]
+ (mongodbatlas_team.team2.team_id) = ["GROUP_READ_ONLY", "GROUP_DATA_ACCESS_READ_WRITE"]
+ }
+ }
+
+ resource "mongodbatlas_project" "migration_path_project1" {
+ name = %[4]q
+ org_id = %[3]q
+
+ lifecycle {
+ ignore_changes = [teams]
+ }
+ }
+
+ resource "mongodbatlas_team_project_assignment" "team1" {
+ project_id = mongodbatlas_project.migration_path_project1.id
+ team_id = mongodbatlas_team.team1.team_id
+ role_names = local.team_map[mongodbatlas_team.team1.team_id]
+ }
+
+ import {
+ to = mongodbatlas_team_project_assignment.team1
+ id = "${mongodbatlas_project.migration_path_project1.id}/${mongodbatlas_team.team1.team_id}"
+ }
+
+ resource "mongodbatlas_team_project_assignment" "team2" {
+ project_id = mongodbatlas_project.migration_path_project1.id
+ team_id = mongodbatlas_team.team2.team_id
+ role_names = local.team_map[mongodbatlas_team.team2.team_id]
+ }
+
+ import {
+ to = mongodbatlas_team_project_assignment.team2
+ id = "${mongodbatlas_project.migration_path_project1.id}/${mongodbatlas_team.team2.team_id}"
+ }
+ `, teamName1, teamName2, orgID, projectName)
+}
+
+func secondChecks() resource.TestCheckFunc {
+ return resource.ComposeAggregateTestCheckFunc(
+ resource.TestCheckResourceAttrSet(resourceAssignmentName1, "project_id"),
+ resource.TestCheckResourceAttrSet(resourceAssignmentName2, "project_id"),
+ resource.TestCheckResourceAttrSet(resourceAssignmentName1, "team_id"),
+ resource.TestCheckResourceAttrSet(resourceAssignmentName2, "team_id"),
+ resource.TestCheckResourceAttr(resourceAssignmentName1, "role_names.#", "1"),
+ resource.TestCheckResourceAttr(resourceAssignmentName2, "role_names.#", "2"),
+ )
+}
diff --git a/internal/service/teamprojectassignment/resource_test.go b/internal/service/teamprojectassignment/resource_test.go
new file mode 100644
index 0000000000..2ccd1e2e8f
--- /dev/null
+++ b/internal/service/teamprojectassignment/resource_test.go
@@ -0,0 +1,125 @@
+package teamprojectassignment_test
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "strings"
+ "testing"
+
+ "github.com/hashicorp/terraform-plugin-testing/helper/resource"
+ "github.com/hashicorp/terraform-plugin-testing/terraform"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
+)
+
+var resourceName = "mongodbatlas_team_project_assignment.test"
+var dataSourceName = "data.mongodbatlas_team_project_assignment.test"
+
+func TestAccTeamProjectAssignment_basic(t *testing.T) {
+ resource.ParallelTest(t, *basicTestCase(t))
+}
+
+func basicTestCase(t *testing.T) *resource.TestCase {
+ t.Helper()
+
+ orgID := os.Getenv("MONGODB_ATLAS_ORG_ID")
+ projectID := acc.ProjectIDExecution(t)
+ teamName := acc.RandomName()
+ roles := []string{"GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN"}
+ updatedRoles := []string{"GROUP_OWNER", "GROUP_DATA_ACCESS_ADMIN", "GROUP_DATA_ACCESS_READ_ONLY"}
+
+ return &resource.TestCase{
+ PreCheck: func() { acc.PreCheckBasic(t) },
+ ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
+ CheckDestroy: checkDestroy,
+ Steps: []resource.TestStep{
+ {
+ Config: configBasic(orgID, teamName, projectID, roles),
+ Check: checks(projectID, roles),
+ },
+ {
+ Config: configBasic(orgID, teamName, projectID, updatedRoles),
+ Check: checks(projectID, updatedRoles),
+ },
+ {
+ ResourceName: resourceName,
+ ImportState: true,
+ ImportStateVerify: true,
+ ImportStateVerifyIdentifierAttribute: "team_id",
+ ImportStateIdFunc: importStateIDFunc(resourceName),
+ },
+ },
+ }
+}
+
+func configBasic(orgID, teamName, projectID string, roles []string) string {
+ rolesStr := `"` + strings.Join(roles, `", "`) + `"`
+ return fmt.Sprintf(`
+ resource "mongodbatlas_team" "test" {
+ org_id = %[1]q
+ name = %[2]q
+ }
+
+ resource "mongodbatlas_team_project_assignment" "test" {
+ project_id = %[3]q
+ team_id = mongodbatlas_team.test.team_id
+ role_names = [%[4]s]
+ }
+
+ data "mongodbatlas_team_project_assignment" "test" {
+ project_id = %[3]q
+ team_id = mongodbatlas_team_project_assignment.test.team_id
+ }
+
+ `, orgID, teamName, projectID, rolesStr)
+}
+
+func checks(projectID string, roles []string) resource.TestCheckFunc {
+ attrsSet := []string{"team_id"}
+ attrsMap := map[string]string{
+ "project_id": projectID,
+ "role_names.#": fmt.Sprint(len(roles)),
+ }
+ extraChecks := []resource.TestCheckFunc{
+ resource.TestCheckResourceAttrPair(dataSourceName, "team_id", resourceName, "team_id"),
+ }
+ for _, role := range roles {
+ extraChecks = append(extraChecks, resource.TestCheckTypeSetElemAttr(resourceName, "role_names.*", role))
+ }
+
+ return acc.CheckRSAndDS(resourceName, &dataSourceName, nil, attrsSet, attrsMap, extraChecks...)
+}
+
+func importStateIDFunc(resourceName string) func(s *terraform.State) (string, error) {
+ return func(s *terraform.State) (string, error) {
+ attrs := s.RootModule().Resources[resourceName].Primary.Attributes
+ teamID := attrs["team_id"]
+ projectID := attrs["project_id"]
+ return projectID + "/" + teamID, nil
+ }
+}
+
+func checkDestroy(s *terraform.State) error {
+ for _, rs := range s.RootModule().Resources {
+ if rs.Type != "mongodbatlas_team_project_assignment" {
+ continue
+ }
+ teamID := rs.Primary.Attributes["team_id"]
+ projectID := rs.Primary.Attributes["project_id"]
+ conn := acc.ConnV2()
+ apiListResp, _, err := conn.TeamsApi.ListGroupTeams(context.Background(), projectID).Execute()
+ if err != nil {
+ continue
+ }
+
+ if apiListResp != nil && apiListResp.Results != nil {
+ results := *apiListResp.Results
+ for i := range results {
+ if results[i].GetTeamId() == teamID {
+ return fmt.Errorf("team %s is still assigned to project %s", teamID, projectID)
+ }
+ }
+ }
+ }
+ return nil
+}
diff --git a/internal/service/teamprojectassignment/schema.go b/internal/service/teamprojectassignment/schema.go
new file mode 100644
index 0000000000..0d686b8705
--- /dev/null
+++ b/internal/service/teamprojectassignment/schema.go
@@ -0,0 +1,41 @@
+package teamprojectassignment
+
+import (
+ dsschema "github.com/hashicorp/terraform-plugin-framework/datasource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/resource/schema"
+ "github.com/hashicorp/terraform-plugin-framework/types"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+)
+
+func resourceSchema() schema.Schema {
+ return schema.Schema{
+ Attributes: map[string]schema.Attribute{
+ "project_id": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Unique 24-hexadecimal digit string that identifies your project. Use the [/groups](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listprojects) endpoint to retrieve all projects to which the authenticated user has access.\n\n**NOTE**: Groups and projects are synonymous terms. Your group id is the same as your project id. For existing groups, your group/project id remains the same. The resource and corresponding endpoints use the term groups.",
+ },
+ "role_names": schema.SetAttribute{
+ ElementType: types.StringType,
+ Required: true,
+ MarkdownDescription: "One or more project-level roles assigned to the team.",
+ },
+ "team_id": schema.StringAttribute{
+ Required: true,
+ MarkdownDescription: "Unique 24-hexadecimal character string that identifies the team.",
+ },
+ },
+ }
+}
+
+func dataSourceSchema() dsschema.Schema {
+ return conversion.DataSourceSchemaFromResource(resourceSchema(), &conversion.DataSourceSchemaRequest{
+ RequiredFields: []string{"project_id", "team_id"},
+ })
+}
+
+type TFModel struct {
+ ProjectId types.String `tfsdk:"project_id"`
+ RoleNames types.Set `tfsdk:"role_names"`
+ TeamId types.String `tfsdk:"team_id"`
+}
diff --git a/internal/service/teamprojectassignment/tfplugingen/generator_config.yml b/internal/service/teamprojectassignment/tfplugingen/generator_config.yml
new file mode 100644
index 0000000000..590e7ae159
--- /dev/null
+++ b/internal/service/teamprojectassignment/tfplugingen/generator_config.yml
@@ -0,0 +1,23 @@
+provider:
+ name: mongodbatlas
+
+resources:
+ team_project_assignment:
+ read:
+ path: /api/atlas/v2/groups/{groupId}/teams/{teamId}
+ method: GET
+ create:
+ path: /api/atlas/v2/groups/{groupId}/teams
+ method: POST
+ update:
+ path: /api/atlas/v2/groups/{groupId}/teams/{teamId}
+ method: PATCH
+ delete:
+ path: /api/atlas/v2/groups/{groupId}/teams/{teamId}
+ method: DELETE
+
+data_sources:
+ team_project_assignment:
+ read:
+ path: /api/atlas/v2/groups/{groupId}/teams/{teamId}
+ method: GET
diff --git a/internal/serviceapi/searchdeploymentapi/resource_test.go b/internal/serviceapi/searchdeploymentapi/resource_test.go
index 67fe68ec18..ac30197f68 100644
--- a/internal/serviceapi/searchdeploymentapi/resource_test.go
+++ b/internal/serviceapi/searchdeploymentapi/resource_test.go
@@ -8,6 +8,7 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/searchdeployment"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -91,17 +92,17 @@ func advancedClusterConfig(orgID, projectName, clusterName string) string {
cluster_type = "REPLICASET"
retain_backups_enabled = "true"
- replication_specs {
- region_configs {
- electable_specs {
+ replication_specs = [{
+ region_configs = [{
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
- }
- }
+ }]
+ }]
}
`, orgID, projectName, clusterName)
}
diff --git a/internal/testutil/acc/advanced_cluster.go b/internal/testutil/acc/advanced_cluster.go
index e3af1dcf00..1847e61705 100644
--- a/internal/testutil/acc/advanced_cluster.go
+++ b/internal/testutil/acc/advanced_cluster.go
@@ -6,13 +6,14 @@ import (
"strings"
"time"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/retry"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/retrystrategy"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/advancedclustertpf"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
var (
@@ -59,12 +60,6 @@ func TestStepImportCluster(resourceName string, ignorePrefixFields ...string) re
"delete_on_create_timeout", // This field is TF specific and not returned by Atlas, so Import can't fill it in.
)
- // auto_scaling & specs (electable_specs, read_only_specs, etc.) are only set in state in SDKv2 if present in the definition.
- // However, as import doesn't have a previous state to compare with, import will always fill them.
- // This will make these fields differ in the state, although the plan change won't be shown to the user as they're computed values.
- if !config.PreviewProviderV2AdvancedCluster() {
- ignorePrefixFields = append(ignorePrefixFields, "replication_specs", "id") // TenantUpgrade changes the ID and can make the test flaky
- }
return resource.TestStep{
ResourceName: resourceName,
ImportStateIdFunc: ImportStateIDFuncProjectIDClusterName(resourceName, "project_id", "name"),
@@ -111,24 +106,24 @@ func CheckExistsCluster(resourceName string) resource.TestCheckFunc {
}
}
-func CheckFCVPinningConfig(usePreviewProvider bool, resourceName, dataSourceName, pluralDataSourceName string, mongoDBMajorVersion int, pinningExpirationDate *string, fcvVersion *int) resource.TestCheckFunc {
+func CheckFCVPinningConfig(resourceName, dataSourceName, pluralDataSourceName string, mongoDBMajorVersion int, pinningExpirationDate *string, fcvVersion *int) resource.TestCheckFunc {
mapChecks := map[string]string{
"mongo_db_major_version": fmt.Sprintf("%d.0", mongoDBMajorVersion),
}
if pinningExpirationDate != nil {
- mapChecks["pinned_fcv.0.expiration_date"] = *pinningExpirationDate
+ mapChecks["pinned_fcv.expiration_date"] = *pinningExpirationDate
} else {
- mapChecks["pinned_fcv.#"] = "0"
+ mapChecks["pinned_fcv.%"] = "0"
}
if fcvVersion != nil {
- mapChecks["pinned_fcv.0.version"] = fmt.Sprintf("%d.0", *fcvVersion)
+ mapChecks["pinned_fcv.version"] = fmt.Sprintf("%d.0", *fcvVersion)
}
additionalCheck := resource.TestCheckResourceAttrWith(resourceName, "mongo_db_version", MatchesExpression(fmt.Sprintf("%d..*", mongoDBMajorVersion)))
- return CheckRSAndDSPreviewProviderV2(usePreviewProvider, resourceName, admin.PtrString(dataSourceName), admin.PtrString(pluralDataSourceName), []string{}, mapChecks, additionalCheck)
+ return CheckRSAndDS(resourceName, admin.PtrString(dataSourceName), admin.PtrString(pluralDataSourceName), []string{}, mapChecks, additionalCheck)
}
func CheckIndependentShardScalingMode(resourceName, clusterName, expectedMode string) resource.TestCheckFunc {
@@ -194,28 +189,26 @@ func ConfigBasicDedicated(projectID, name, zoneName string) string {
name = %[2]q
cluster_type = "REPLICASET"
- replication_specs {
- region_configs {
+ replication_specs = [{
+ region_configs = [{
priority = 7
provider_name = "AWS"
region_name = "US_EAST_1"
- electable_specs {
+ electable_specs = {
node_count = 3
instance_size = "M10"
}
- }
+ }]
%[3]s
- }
+ }]
}
data "mongodbatlas_advanced_cluster" "test" {
project_id = mongodbatlas_advanced_cluster.test.project_id
name = mongodbatlas_advanced_cluster.test.name
- use_replication_spec_per_shard = true
depends_on = [mongodbatlas_advanced_cluster.test]
}
data "mongodbatlas_advanced_clusters" "test" {
- use_replication_spec_per_shard = true
project_id = mongodbatlas_advanced_cluster.test.project_id
depends_on = [mongodbatlas_advanced_cluster.test]
}
diff --git a/internal/testutil/acc/advanced_cluster_preview_provider_v2.go b/internal/testutil/acc/advanced_cluster_mig_TPF.go
similarity index 55%
rename from internal/testutil/acc/advanced_cluster_preview_provider_v2.go
rename to internal/testutil/acc/advanced_cluster_mig_TPF.go
index 160fd83a71..b06fb74b52 100644
--- a/internal/testutil/acc/advanced_cluster_preview_provider_v2.go
+++ b/internal/testutil/acc/advanced_cluster_mig_TPF.go
@@ -9,11 +9,10 @@ import (
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/hcl/v2/hclwrite"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/hcl"
+ "github.com/stretchr/testify/assert"
"github.com/zclconf/go-cty/cty"
- "github.com/stretchr/testify/assert"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/hcl"
)
// IsTestSDKv2ToTPF returns if we want to run migration tests from SDKv2 to TPF.
@@ -22,58 +21,54 @@ func IsTestSDKv2ToTPF() bool {
return env
}
-func CheckRSAndDSPreviewProviderV2(usePreviewProvider bool, resourceName string, dataSourceName, pluralDataSourceName *string, attrsSet []string, attrsMap map[string]string, extra ...resource.TestCheckFunc) resource.TestCheckFunc {
- modifiedSet := ConvertToPreviewProviderV2AttrsSet(usePreviewProvider, attrsSet)
- modifiedMap := ConvertToPreviewProviderV2AttrsMap(usePreviewProvider, attrsMap)
+func CheckRSAndDSMigTPF(isTPF bool, resourceName string, dataSourceName, pluralDataSourceName *string, attrsSet []string, attrsMap map[string]string, extra ...resource.TestCheckFunc) resource.TestCheckFunc {
+ modifiedSet := ConvertToMigTPFAttrsSet(isTPF, attrsSet)
+ modifiedMap := ConvertToMigTPFAttrsMap(isTPF, attrsMap)
return CheckRSAndDS(resourceName, dataSourceName, pluralDataSourceName, modifiedSet, modifiedMap, extra...)
}
-func TestCheckResourceAttrPreviewProviderV2(usePreviewProvider bool, name, key, value string) resource.TestCheckFunc {
- return resource.TestCheckResourceAttr(name, AttrNameToPreviewProviderV2(usePreviewProvider, key), value)
-}
-
-func TestCheckResourceAttrSetPreviewProviderV2(usePreviewProvider bool, name, key string) resource.TestCheckFunc {
- return resource.TestCheckResourceAttrSet(name, AttrNameToPreviewProviderV2(usePreviewProvider, key))
+func TestCheckResourceAttrMigTPF(isTPF bool, name, key, value string) resource.TestCheckFunc {
+ return resource.TestCheckResourceAttr(name, AttrNameToMigTPF(isTPF, key), value)
}
-func TestCheckResourceAttrWithPreviewProviderV2(usePreviewProvider bool, name, key string, checkValueFunc resource.CheckResourceAttrWithFunc) resource.TestCheckFunc {
- return resource.TestCheckResourceAttrWith(name, AttrNameToPreviewProviderV2(usePreviewProvider, key), checkValueFunc)
+func TestCheckResourceAttrSetMigTPF(isTPF bool, name, key string) resource.TestCheckFunc {
+ return resource.TestCheckResourceAttrSet(name, AttrNameToMigTPF(isTPF, key))
}
-func TestCheckTypeSetElemNestedAttrsPreviewProviderV2(usePreviewProvider bool, name, key string, values map[string]string) resource.TestCheckFunc {
- return resource.TestCheckTypeSetElemNestedAttrs(name, AttrNameToPreviewProviderV2(usePreviewProvider, key), values)
+func TestCheckResourceAttrWithMigTPF(isTPF bool, name, key string, checkValueFunc resource.CheckResourceAttrWithFunc) resource.TestCheckFunc {
+ return resource.TestCheckResourceAttrWith(name, AttrNameToMigTPF(isTPF, key), checkValueFunc)
}
-func AddAttrChecksPreviewProviderV2(usePreviewProvider bool, name string, checks []resource.TestCheckFunc, mapChecks map[string]string) []resource.TestCheckFunc {
- return AddAttrChecks(name, checks, ConvertToPreviewProviderV2AttrsMap(usePreviewProvider, mapChecks))
+func AddAttrChecksMigTPF(isTPF bool, name string, checks []resource.TestCheckFunc, mapChecks map[string]string) []resource.TestCheckFunc {
+ return AddAttrChecks(name, checks, ConvertToMigTPFAttrsMap(isTPF, mapChecks))
}
-func AddAttrSetChecksPreviewProviderV2(usePreviewProvider bool, name string, checks []resource.TestCheckFunc, attrNames ...string) []resource.TestCheckFunc {
- return AddAttrSetChecks(name, checks, ConvertToPreviewProviderV2AttrsSet(usePreviewProvider, attrNames)...)
+func AddAttrSetChecksMigTPF(isTPF bool, name string, checks []resource.TestCheckFunc, attrNames ...string) []resource.TestCheckFunc {
+ return AddAttrSetChecks(name, checks, ConvertToMigTPFAttrsSet(isTPF, attrNames)...)
}
-func AddAttrChecksPrefixPreviewProviderV2(usePreviewProvider bool, name string, checks []resource.TestCheckFunc, mapChecks map[string]string, prefix string, skipNames ...string) []resource.TestCheckFunc {
- return AddAttrChecksPrefix(name, checks, ConvertToPreviewProviderV2AttrsMap(usePreviewProvider, mapChecks), prefix, skipNames...)
+func AddAttrChecksPrefixMigTPF(isTPF bool, name string, checks []resource.TestCheckFunc, mapChecks map[string]string, prefix string, skipNames ...string) []resource.TestCheckFunc {
+ return AddAttrChecksPrefix(name, checks, ConvertToMigTPFAttrsMap(isTPF, mapChecks), prefix, skipNames...)
}
-func ConvertToPreviewProviderV2AttrsMap(usePreviewProvider bool, attrsMap map[string]string) map[string]string {
- if skipPreviewProviderV2Work(usePreviewProvider) {
+func ConvertToMigTPFAttrsMap(isTPF bool, attrsMap map[string]string) map[string]string {
+ if skipMigTPFWork(isTPF) {
return attrsMap
}
ret := make(map[string]string, len(attrsMap))
for name, value := range attrsMap {
- ret[AttrNameToPreviewProviderV2(usePreviewProvider, name)] = value
+ ret[AttrNameToMigTPF(isTPF, name)] = value
}
return ret
}
-func ConvertToPreviewProviderV2AttrsSet(usePreviewProvider bool, attrsSet []string) []string {
- if skipPreviewProviderV2Work(usePreviewProvider) {
+func ConvertToMigTPFAttrsSet(isTPF bool, attrsSet []string) []string {
+ if skipMigTPFWork(isTPF) {
return attrsSet
}
ret := make([]string, 0, len(attrsSet))
for _, name := range attrsSet {
- ret = append(ret, AttrNameToPreviewProviderV2(usePreviewProvider, name))
+ ret = append(ret, AttrNameToMigTPF(isTPF, name))
}
return ret
}
@@ -91,8 +86,8 @@ var tpfSingleNestedAttrs = []string{
"tags",
}
-func AttrNameToPreviewProviderV2(usePreviewProvider bool, name string) string {
- if skipPreviewProviderV2Work(usePreviewProvider) {
+func AttrNameToMigTPF(isTPF bool, name string) string {
+ if skipMigTPFWork(isTPF) {
return name
}
for _, singleAttrName := range tpfSingleNestedAttrs {
@@ -101,9 +96,9 @@ func AttrNameToPreviewProviderV2(usePreviewProvider bool, name string) string {
return name
}
-func ConvertAdvancedClusterToPreviewProviderV2(t *testing.T, usePreviewProvider bool, def string) string {
+func ConvertAdvancedClusterToTPF(t *testing.T, isTPF bool, def string) string {
t.Helper()
- if skipPreviewProviderV2Work(usePreviewProvider) {
+ if skipMigTPFWork(isTPF) {
return def
}
parse := hcl.GetDefParser(t, def)
@@ -123,12 +118,12 @@ func ConvertAdvancedClusterToPreviewProviderV2(t *testing.T, usePreviewProvider
convertKeyValueAttrs(t, "tags", writeBody)
}
result := string(parse.Bytes())
- result = AttrNameToPreviewProviderV2(usePreviewProvider, result) // useful for lifecycle ingore definitions
+ result = AttrNameToMigTPF(isTPF, result) // useful for lifecycle ignore definitions
return result
}
-func skipPreviewProviderV2Work(usePreviewProvider bool) bool {
- return !config.PreviewProviderV2AdvancedCluster() || !usePreviewProvider
+func skipMigTPFWork(isTPF bool) bool {
+ return !isTPF
}
func AssertEqualHCL(t *testing.T, expected, actual string, msgAndArgs ...interface{}) {
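The rename above turns the preview-provider test helpers into SDKv2→TPF migration helpers; the core of `AttrNameToMigTPF` is rewriting SDKv2 block-style attribute paths (which carry a `.0` list index on single nested blocks) into TPF single-nested-attribute paths. A rough, self-contained sketch of that rewrite — the attribute list and helper name here are illustrative, not the provider's full set:

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative subset of single nested attributes that lose their ".0"
// index when the schema moves from SDKv2 blocks to TPF nested attributes.
var singleNestedAttrs = []string{"electable_specs", "auto_scaling", "advanced_configuration"}

// toTPFAttrName is a hypothetical stand-in for AttrNameToMigTPF's path rewrite.
func toTPFAttrName(name string) string {
	for _, attr := range singleNestedAttrs {
		// "electable_specs.0.instance_size" -> "electable_specs.instance_size"
		name = strings.ReplaceAll(name, attr+".0", attr)
	}
	return name
}

func main() {
	fmt.Println(toTPFAttrName("replication_specs.0.region_configs.0.electable_specs.0.node_count"))
}
```

With `isTPF` false the real helpers skip this rewrite entirely (see `skipMigTPFWork`), so the same checks can be reused against both providers.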
diff --git a/internal/testutil/acc/advanced_cluster_preview_provider_v2_test.go b/internal/testutil/acc/advanced_cluster_mig_TPF_test.go
similarity index 89%
rename from internal/testutil/acc/advanced_cluster_preview_provider_v2_test.go
rename to internal/testutil/acc/advanced_cluster_mig_TPF_test.go
index 737a966ec4..4faba33f66 100644
--- a/internal/testutil/acc/advanced_cluster_preview_provider_v2_test.go
+++ b/internal/testutil/acc/advanced_cluster_mig_TPF_test.go
@@ -4,15 +4,12 @@ import (
"sort"
"testing"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/stretchr/testify/assert"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
-func TestConvertToPreviewProviderV2AttrsMapAndAttrsSet(t *testing.T) {
- if !config.PreviewProviderV2AdvancedCluster() {
- t.Skip("Skipping test as not in PreviewProviderV2AdvancedCluster")
- }
+func TestConvertToMigTPFAttrsMapAndAttrsSet(t *testing.T) {
attrsMap := map[string]string{
"attr": "val1",
"electable_specs.0": "val2",
@@ -31,7 +28,7 @@ func TestConvertToPreviewProviderV2AttrsMapAndAttrsSet(t *testing.T) {
"connection_strings.standard": "val6",
"connection_strings.standard_srv": "val6",
}
- actualMap := acc.ConvertToPreviewProviderV2AttrsMap(true, attrsMap)
+ actualMap := acc.ConvertToMigTPFAttrsMap(true, attrsMap)
assert.Equal(t, expectedMap, actualMap)
attrsSet := make([]string, 0, len(attrsMap))
@@ -42,16 +39,13 @@ func TestConvertToPreviewProviderV2AttrsMapAndAttrsSet(t *testing.T) {
for name := range expectedMap {
expectedSet = append(expectedSet, name)
}
- actualSet := acc.ConvertToPreviewProviderV2AttrsSet(true, attrsSet)
+ actualSet := acc.ConvertToMigTPFAttrsSet(true, attrsSet)
sort.Strings(expectedSet)
sort.Strings(actualSet)
assert.Equal(t, expectedSet, actualSet)
}
-func TestConvertAdvancedClusterToPreviewProviderV2(t *testing.T) {
- if !config.PreviewProviderV2AdvancedCluster() {
- t.Skip("Skipping test as not in PreviewProviderV2AdvancedCluster")
- }
+func TestConvertAdvancedClusterToTPF(t *testing.T) {
var (
input = `
resource "mongodbatlas_advanced_cluster" "cluster2" {
@@ -161,7 +155,7 @@ func TestConvertAdvancedClusterToPreviewProviderV2(t *testing.T) {
}
}
`
- // expected has the attributes sorted alphabetically to match the output of ConvertAdvancedClusterToPreviewProviderV2
+ // expected has the attributes sorted alphabetically to match the output of ConvertAdvancedClusterToTPF
expected = `
resource "mongodbatlas_advanced_cluster" "cluster2" {
project_id = "MY-PROJECT-ID"
@@ -252,7 +246,7 @@ func TestConvertAdvancedClusterToPreviewProviderV2(t *testing.T) {
}
`
)
- actual := acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, input)
+ actual := acc.ConvertAdvancedClusterToTPF(t, true, input)
acc.AssertEqualHCL(t, expected, actual)
}
diff --git a/internal/testutil/acc/attribute_checks.go b/internal/testutil/acc/attribute_checks.go
index 8f1e1e1ae7..1014a6a3f3 100644
--- a/internal/testutil/acc/attribute_checks.go
+++ b/internal/testutil/acc/attribute_checks.go
@@ -14,6 +14,11 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
)
+var (
+ matchTimestamp = regexp.MustCompile(`^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:\d{2})$`)
+ matchUsername = regexp.MustCompile(`.*@mongodb\.com$`)
+)
+
func MatchesExpression(expr string) resource.CheckResourceAttrWithFunc {
return func(value string) error {
matched, err := regexp.MatchString(expr, value)
@@ -27,6 +32,33 @@ func MatchesExpression(expr string) resource.CheckResourceAttrWithFunc {
}
}
+// IsTimestamp checks if the value is a valid timestamp in RFC3339 format.
+func IsTimestamp() resource.CheckResourceAttrWithFunc {
+ return func(value string) error {
+ if !matchTimestamp.MatchString(value) {
+ return fmt.Errorf("expected an RFC3339 timestamp, got %s", value)
+ }
+ return nil
+ }
+}
+
+// IsUsername checks that the value looks like a MongoDB staff username (an @mongodb.com email).
+func IsUsername() resource.CheckResourceAttrWithFunc {
+ return func(value string) error {
+ if !matchUsername.MatchString(value) {
+ return fmt.Errorf("expected an @mongodb.com username, got %s", value)
+ }
+ return nil
+ }
+}
+
func CIDRBlockExpression() resource.CheckResourceAttrWithFunc {
return func(value string) error {
_, _, err := net.ParseCIDR(value)
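The new `matchTimestamp` pattern added above accepts RFC3339 timestamps with an optional fractional second and either `Z` or a numeric offset. A minimal sketch of how it behaves, shown next to `time.Parse(time.RFC3339)` as the stricter stdlib alternative (it additionally validates field ranges):

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Same pattern as the matchTimestamp variable introduced in attribute_checks.go.
var matchTimestamp = regexp.MustCompile(`^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:\d{2})$`)

func main() {
	for _, v := range []string{"2024-05-01T12:30:45Z", "2024-05-01T12:30:45.123+02:00", "not-a-timestamp"} {
		_, err := time.Parse(time.RFC3339, v)
		fmt.Printf("%-32s regexp=%-5v time.Parse ok=%v\n", v, matchTimestamp.MatchString(v), err == nil)
	}
}
```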
diff --git a/internal/testutil/acc/cluster.go b/internal/testutil/acc/cluster.go
index 95a48e43d8..ec2001e87b 100644
--- a/internal/testutil/acc/cluster.go
+++ b/internal/testutil/acc/cluster.go
@@ -5,9 +5,10 @@ import (
"os"
"testing"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/constant"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
// ClusterRequest contains configuration for a cluster where all fields are optional and AddDefaults is used for required fields.
diff --git a/internal/testutil/acc/common_config.go b/internal/testutil/acc/common_config.go
new file mode 100644
index 0000000000..4514ca0a93
--- /dev/null
+++ b/internal/testutil/acc/common_config.go
@@ -0,0 +1,51 @@
+package acc
+
+import "fmt"
+
+func TimeoutConfig(createTimeout, updateTimeout, deleteTimeout *string) string {
+ createTimeoutConfig := ""
+ updateTimeoutConfig := ""
+ deleteTimeoutConfig := ""
+
+ if createTimeout != nil {
+ createTimeoutConfig = fmt.Sprintf(`
+ create = %q
+ `, *createTimeout)
+ }
+ if updateTimeout != nil {
+ updateTimeoutConfig = fmt.Sprintf(`
+ update = %q
+ `, *updateTimeout)
+ }
+ if deleteTimeout != nil {
+ deleteTimeoutConfig = fmt.Sprintf(`
+ delete = %q
+ `, *deleteTimeout)
+ }
+ timeoutConfig := "timeouts ="
+
+ return fmt.Sprintf(`
+ %[1]s {
+ %[2]s
+ %[3]s
+ %[4]s
+ }
+ `, timeoutConfig, createTimeoutConfig, updateTimeoutConfig, deleteTimeoutConfig)
+}
+
+func ConfigRemove(resourceName string) string {
+ return fmt.Sprintf(`
+ removed {
+ from = %s
+ lifecycle {
+ destroy = false
+ }
+ }
+ `, resourceName)
+}
+
+func ConfigEmpty() string {
+ return `
+ # empty config to trigger delete
+ `
+}
diff --git a/internal/testutil/acc/config_cluster.go b/internal/testutil/acc/config_cluster.go
index b2725316d2..3cad80df50 100644
--- a/internal/testutil/acc/config_cluster.go
+++ b/internal/testutil/acc/config_cluster.go
@@ -5,9 +5,10 @@ import (
"fmt"
"strings"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+
"github.com/hashicorp/hcl/v2/hclwrite"
"github.com/zclconf/go-cty/cty"
- "go.mongodb.org/atlas-sdk/v20250312007/admin"
)
func ClusterDatasourceHcl(req *ClusterRequest) (configStr, clusterName, resourceName string, err error) {
@@ -34,10 +35,12 @@ func ClusterDatasourceHcl(req *ClusterRequest) (configStr, clusterName, resource
} else {
clusterRootAttributes["project_id"] = projectID
}
- addPrimitiveAttributes(cluster, clusterRootAttributes)
+ setAttributes(cluster, clusterRootAttributes)
return "\n" + string(f.Bytes()), clusterName, clusterResourceName, err
}
+// ClusterResourceHcl generates Terraform HCL configuration for a MongoDB Atlas advanced cluster.
+// It converts a ClusterRequest into a valid Terraform HCL string.
func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceName string, err error) {
if req == nil || req.ProjectID == "" {
return "", "", "", errors.New("must specify a ClusterRequest with at least ProjectID set")
@@ -73,39 +76,37 @@ func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceNa
} else {
clusterRootAttributes["project_id"] = projectID
}
- if req.DiskSizeGb != 0 {
- clusterRootAttributes["disk_size_gb"] = req.DiskSizeGb
- }
+
if req.RetainBackupsEnabled {
clusterRootAttributes["retain_backups_enabled"] = req.RetainBackupsEnabled
}
- addPrimitiveAttributes(cluster, clusterRootAttributes)
- cluster.AppendNewline()
+ setAttributes(cluster, clusterRootAttributes)
+
+ if err := validateAdvancedConfig(req.AdvancedConfiguration); err != nil {
+ return "", "", "", err
+ }
if len(req.AdvancedConfiguration) > 0 {
- for _, key := range sortStringMapKeysAny(req.AdvancedConfiguration) {
- if !knownAdvancedConfig[key] {
- return "", "", "", fmt.Errorf("unknown key in advanced configuration: %s", key)
- }
- }
- advancedClusterBlock := cluster.AppendNewBlock("advanced_configuration", nil).Body()
- addPrimitiveAttributes(advancedClusterBlock, req.AdvancedConfiguration)
cluster.AppendNewline()
+ setAttributes(cluster, map[string]any{
+ "advanced_configuration": req.AdvancedConfiguration,
+ })
}
- for i, spec := range specs {
- err = writeReplicationSpec(cluster, spec)
- if err != nil {
- return "", "", "", fmt.Errorf("error writing hcl for replication spec %d: %w", i, err)
- }
+
+ cluster.AppendNewline()
+ err = writeReplicationSpec(cluster, specs)
+ if err != nil {
+ return "", "", "", fmt.Errorf("error writing hcl for replication specs: %w", err)
}
+
if len(req.Tags) > 0 {
+ tagMap := make(map[string]cty.Value, len(req.Tags))
for _, key := range SortStringMapKeys(req.Tags) {
- value := req.Tags[key]
- tagBlock := cluster.AppendNewBlock("tags", nil).Body()
- tagBlock.SetAttributeValue("key", cty.StringVal(key))
- tagBlock.SetAttributeValue("value", cty.StringVal(value))
+ tagMap[key] = cty.StringVal(req.Tags[key])
}
+ cluster.SetAttributeValue("tags", cty.ObjectVal(tagMap))
}
cluster.AppendNewline()
+
if req.ResourceDependencyName != "" {
if !strings.Contains(req.ResourceDependencyName, ".") {
return "", "", "", fmt.Errorf("req.ResourceDependencyName must have a '.'")
@@ -119,42 +120,95 @@ func ClusterResourceHcl(req *ClusterRequest) (configStr, clusterName, resourceNa
return "\n" + string(f.Bytes()), clusterName, clusterResourceName, err
}
-func writeReplicationSpec(cluster *hclwrite.Body, spec admin.ReplicationSpec20240805) error {
- replicationBlock := cluster.AppendNewBlock("replication_specs", nil).Body()
- err := addPrimitiveAttributesViaJSON(replicationBlock, spec)
- if err != nil {
- return err
- }
- for _, rc := range spec.GetRegionConfigs() {
- if rc.Priority == nil {
- rc.SetPriority(7)
- }
- replicationBlock.AppendNewline()
- rcBlock := replicationBlock.AppendNewBlock("region_configs", nil).Body()
- err = addPrimitiveAttributesViaJSON(rcBlock, rc)
- if err != nil {
- return err
+func writeReplicationSpec(cluster *hclwrite.Body, specs []admin.ReplicationSpec20240805) error {
+ var allSpecs []cty.Value
+
+ for _, spec := range specs {
+ specMap := make(map[string]cty.Value)
+
+ if spec.ZoneName != nil {
+ specMap["zone_name"] = cty.StringVal(*spec.ZoneName)
}
- autoScalingBlock := rcBlock.AppendNewBlock("auto_scaling", nil).Body()
- if rc.AutoScaling == nil {
- autoScalingBlock.SetAttributeValue("disk_gb_enabled", cty.BoolVal(false))
- } else {
- autoScaling := rc.GetAutoScaling()
- asDisk := autoScaling.GetDiskGB()
- autoScalingBlock.SetAttributeValue("disk_gb_enabled", cty.BoolVal(asDisk.GetEnabled()))
- if autoScaling.Compute != nil {
- return fmt.Errorf("auto_scaling.compute is not supportd yet %v", autoScaling)
+
+ var rcList []cty.Value
+ for _, rc := range spec.GetRegionConfigs() {
+ if rc.Priority == nil {
+ rc.SetPriority(7)
+ }
+
+ rcMap := map[string]cty.Value{
+ "priority": cty.NumberIntVal(int64(*rc.Priority)),
+ "provider_name": cty.StringVal(*rc.ProviderName),
+ "region_name": cty.StringVal(*rc.RegionName),
+ }
+ if rc.BackingProviderName != nil {
+ rcMap["backing_provider_name"] = cty.StringVal(*rc.BackingProviderName)
+ }
+
+ if rc.AutoScaling == nil {
+ rcMap["auto_scaling"] = cty.ObjectVal(map[string]cty.Value{
+ "disk_gb_enabled": cty.BoolVal(false),
+ })
+ } else {
+ as := rc.GetAutoScaling()
+ asDisk := as.GetDiskGB()
+ if as.Compute != nil {
+ return fmt.Errorf("auto_scaling.compute is not supported yet %v", as)
+ }
+ rcMap["auto_scaling"] = cty.ObjectVal(map[string]cty.Value{
+ "disk_gb_enabled": cty.BoolVal(asDisk.GetEnabled()),
+ })
+ }
+
+ es := rc.GetElectableSpecs()
+ esMap := map[string]cty.Value{}
+ if es.InstanceSize != nil {
+ esMap["instance_size"] = cty.StringVal(*es.InstanceSize)
+ }
+ if es.NodeCount != nil {
+ esMap["node_count"] = cty.NumberIntVal(int64(*es.NodeCount))
+ }
+ if es.EbsVolumeType != nil && *es.EbsVolumeType != "" {
+ esMap["ebs_volume_type"] = cty.StringVal(*es.EbsVolumeType)
+ }
+ if es.DiskIOPS != nil {
+ esMap["disk_iops"] = cty.NumberIntVal(int64(*es.DiskIOPS))
}
+ if len(esMap) > 0 {
+ rcMap["electable_specs"] = cty.ObjectVal(esMap)
+ }
+
+ ros := rc.GetReadOnlySpecs()
+ roMap := map[string]cty.Value{}
+ if ros.InstanceSize != nil {
+ roMap["instance_size"] = cty.StringVal(*ros.InstanceSize)
+ }
+ if ros.NodeCount != nil && *ros.NodeCount != 0 {
+ roMap["node_count"] = cty.NumberIntVal(int64(*ros.NodeCount))
+ }
+ if ros.DiskIOPS != nil {
+ roMap["disk_iops"] = cty.NumberIntVal(int64(*ros.DiskIOPS))
+ }
+ if len(roMap) > 0 {
+ rcMap["read_only_specs"] = cty.ObjectVal(roMap)
+ }
+
+ rcList = append(rcList, cty.ObjectVal(rcMap))
}
- nodeSpec := rc.GetElectableSpecs()
- nodeSpecBlock := rcBlock.AppendNewBlock("electable_specs", nil).Body()
- err = addPrimitiveAttributesViaJSON(nodeSpecBlock, nodeSpec)
-
- readOnlySpecs := rc.GetReadOnlySpecs()
- if readOnlySpecs.GetNodeCount() != 0 {
- readOnlyBlock := rcBlock.AppendNewBlock("read_only_specs", nil).Body()
- err = addPrimitiveAttributesViaJSON(readOnlyBlock, readOnlySpecs)
+ // Use TupleVal instead of ListVal so region/spec objects can have different fields without type conflicts.
+ specMap["region_configs"] = cty.TupleVal(rcList)
+ allSpecs = append(allSpecs, cty.ObjectVal(specMap))
+ }
+
+ cluster.SetAttributeValue("replication_specs", cty.TupleVal(allSpecs))
+ return nil
+}
+
+func validateAdvancedConfig(cfg map[string]any) error {
+ for k := range cfg {
+ if !knownAdvancedConfig[k] {
+ return fmt.Errorf("unknown advanced configuration key: %s", k)
}
}
- return err
+ return nil
}
diff --git a/internal/testutil/acc/config_cluster_test.go b/internal/testutil/acc/config_cluster_test.go
index 59c6e45d88..c20b1f882f 100644
--- a/internal/testutil/acc/config_cluster_test.go
+++ b/internal/testutil/acc/config_cluster_test.go
@@ -18,22 +18,21 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "US_WEST_2"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 1"
+ }]
}
`
@@ -47,27 +46,26 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = true
retain_backups_enabled = true
- advanced_configuration {
+ advanced_configuration = {
oplog_min_retention_hours = 8
}
- replication_specs {
- zone_name = "Zone X"
-
- region_configs {
- priority = 7
- provider_name = "AZURE"
- region_name = "MY_REGION_1"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
ebs_volume_type = "STANDARD"
instance_size = "M30"
node_count = 30
}
- }
- }
+ priority = 7
+ provider_name = "AZURE"
+ region_name = "MY_REGION_1"
+ }]
+ zone_name = "Zone X"
+ }]
}
`
@@ -80,22 +78,21 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "US_WEST_2"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 1"
+ }]
depends_on = [mongodbatlas_project.project_execution]
}
@@ -108,22 +105,21 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "US_WEST_2"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 1"
+ }]
depends_on = [mongodbatlas_private_endpoint_regional_mode.atlasrm, mongodbatlas_privatelink_endpoint_service.atlasple]
}
@@ -136,38 +132,35 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "US_WEST_1"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
- replication_specs {
- zone_name = "Zone 2"
-
- region_configs {
priority = 7
provider_name = "AWS"
- region_name = "EU_WEST_2"
- auto_scaling {
+ region_name = "US_WEST_1"
+ }]
+ zone_name = "Zone 1"
+ }, {
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_2"
+ }]
+ zone_name = "Zone 2"
+ }]
}
`
@@ -179,35 +172,32 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "US_WEST_1"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
-
- region_configs {
priority = 7
provider_name = "AWS"
- region_name = "EU_WEST_1"
- auto_scaling {
+ region_name = "US_WEST_1"
+ }, {
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
+ priority = 7
+ provider_name = "AWS"
+ region_name = "EU_WEST_1"
+ }]
+ zone_name = "Zone 1"
+ }]
}
`
@@ -220,29 +210,24 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 7
- provider_name = "AWS"
- region_name = "US_WEST_2"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = true
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 3
}
- }
- }
- tags {
- key = "ArchiveTest"
- value = "true"
- }
- tags {
- key = "Owner"
- value = "test"
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_WEST_2"
+ }]
+ zone_name = "Zone 1"
+ }]
+ tags = {
+ ArchiveTest = "true"
+ Owner = "test"
}
}
@@ -255,26 +240,25 @@ resource "mongodbatlas_advanced_cluster" "cluster_info" {
pit_enabled = false
project_id = "project"
- replication_specs {
- zone_name = "Zone 1"
-
- region_configs {
- priority = 5
- provider_name = "AWS"
- region_name = "US_EAST_1"
- auto_scaling {
+ replication_specs = [{
+ region_configs = [{
+ auto_scaling = {
disk_gb_enabled = false
}
- electable_specs {
+ electable_specs = {
instance_size = "M10"
node_count = 5
}
- read_only_specs {
+ priority = 5
+ provider_name = "AWS"
+ read_only_specs = {
instance_size = "M10"
node_count = 1
}
- }
- }
+ region_name = "US_EAST_1"
+ }]
+ zone_name = "Zone 1"
+ }]
}
`
diff --git a/internal/testutil/acc/config_formatter.go b/internal/testutil/acc/config_formatter.go
index 50ef365fbe..47a31f19c1 100644
--- a/internal/testutil/acc/config_formatter.go
+++ b/internal/testutil/acc/config_formatter.go
@@ -1,7 +1,6 @@
package acc
import (
- "encoding/json"
"fmt"
"regexp"
"sort"
@@ -69,15 +68,6 @@ func FormatToHCLLifecycleIgnore(keys ...string) string {
return strings.Join(lines, "\n")
}
-func sortStringMapKeysAny(m map[string]any) []string {
- keys := make([]string, 0, len(m))
- for k := range m {
- keys = append(keys, k)
- }
- sort.Strings(keys)
- return keys
-}
-
var matchFirstCap = regexp.MustCompile("(.)([A-Z][a-z]+)")
var matchAllCap = regexp.MustCompile("([a-z0-9])([A-Z])")
@@ -94,44 +84,11 @@ var (
}
)
-// addPrimitiveAttributesViaJSON adds "primitive" bool/string/int/float attributes of a struct.
-func addPrimitiveAttributesViaJSON(b *hclwrite.Body, obj any) error {
- var objMap map[string]any
- inrec, err := json.Marshal(obj)
- if err != nil {
- return err
- }
- err = json.Unmarshal(inrec, &objMap)
- if err != nil {
- return err
- }
- addPrimitiveAttributes(b, objMap)
- return nil
-}
-
-func addPrimitiveAttributes(b *hclwrite.Body, values map[string]any) {
- for _, keyCamel := range sortStringMapKeysAny(values) {
- key := ToSnakeCase(keyCamel)
- value := values[keyCamel]
- switch value := value.(type) {
- case bool:
- b.SetAttributeValue(key, cty.BoolVal(value))
- case string:
- if value != "" {
- b.SetAttributeValue(key, cty.StringVal(value))
- }
- case int:
- b.SetAttributeValue(key, cty.NumberIntVal(int64(value)))
- // int gets parsed as float64 for json
- case float64:
- b.SetAttributeValue(key, cty.NumberIntVal(int64(value)))
- default:
- continue
- }
- }
-}
-
-// Sometimes it is easier to set a value using hcl/tf syntax instead of creating complex values like list hcl.Traversal.
+// setAttributeHcl inserts a raw HCL assignment into the body by parsing a snippet such as
+//
+//	project_id = mongodbatlas_project.test.id or depends_on = [mongodbatlas_project.test.id]
+//
+// and copying its tokens directly. Use it for expressions or references that can't be
+// expressed as simple cty literals.
func setAttributeHcl(body *hclwrite.Body, tfExpression string) error {
src := []byte(tfExpression)
@@ -173,3 +130,58 @@ func setAttributeHcl(body *hclwrite.Body, tfExpression string) error {
body.SetAttributeRaw(attributeName, valueTokens)
return nil
}
+
+// setAttributes iterates over attrs, snake-cases each key, converts the value
+// with toCtyValue, and calls body.SetAttributeValue.
+func setAttributes(body *hclwrite.Body, attrs map[string]any) {
+ keys := make([]string, 0, len(attrs))
+ for k := range attrs {
+ keys = append(keys, k)
+ }
+ sort.Strings(keys)
+
+ for _, camel := range keys {
+ key := ToSnakeCase(camel)
+ if cv, ok := toCtyValue(attrs[camel]); ok {
+ body.SetAttributeValue(key, cv)
+ }
+ }
+}
+
+// toCtyValue handles:
+// - bool, string, int, float64
+// - map[string]any (recursively)
+func toCtyValue(v any) (cty.Value, bool) {
+ switch v := v.(type) {
+ case bool:
+ return cty.BoolVal(v), true
+ case string:
+ if v == "" {
+ return cty.NullVal(cty.String), false
+ }
+ return cty.StringVal(v), true
+ case int:
+ return cty.NumberIntVal(int64(v)), true
+ case float64:
+ return cty.NumberIntVal(int64(v)), true
+ case map[string]any:
+ if len(v) == 0 {
+ return cty.NullVal(cty.EmptyObject), false
+ }
+ obj := make(map[string]cty.Value, len(v))
+ // sort keys for deterministic output
+ keys := make([]string, 0, len(v))
+ for k := range v {
+ keys = append(keys, k)
+ }
+ sort.Strings(keys)
+ for _, k := range keys {
+ if cv, ok := toCtyValue(v[k]); ok {
+ obj[ToSnakeCase(k)] = cv
+ }
+ }
+ return cty.ObjectVal(obj), true
+ default:
+ return cty.NilVal, false
+ }
+}
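`setAttributes` snake-cases each key before writing it, using the `matchFirstCap`/`matchAllCap` regexps kept in this file. A self-contained sketch of that camelCase→snake_case conversion with the same two patterns (the function name here is illustrative; the helper in the file is `ToSnakeCase`):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same two regexps declared in config_formatter.go.
var (
	matchFirstCap = regexp.MustCompile("(.)([A-Z][a-z]+)")
	matchAllCap   = regexp.MustCompile("([a-z0-9])([A-Z])")
)

// toSnakeCase converts camelCase/PascalCase identifiers to snake_case,
// keeping acronym runs like "IOPS" together as a single segment.
func toSnakeCase(s string) string {
	s = matchFirstCap.ReplaceAllString(s, "${1}_${2}")
	s = matchAllCap.ReplaceAllString(s, "${1}_${2}")
	return strings.ToLower(s)
}

func main() {
	fmt.Println(toSnakeCase("BackingProviderName")) // backing_provider_name
	fmt.Println(toSnakeCase("DiskIOPS"))            // disk_iops
}
```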
diff --git a/internal/testutil/acc/encryption_at_rest.go b/internal/testutil/acc/encryption_at_rest.go
index 25417c2b12..fdace6820d 100644
--- a/internal/testutil/acc/encryption_at_rest.go
+++ b/internal/testutil/acc/encryption_at_rest.go
@@ -3,12 +3,16 @@ package acc
import (
"context"
"fmt"
+ "os"
"strconv"
+ "testing"
"go.mongodb.org/atlas-sdk/v20250312007/admin"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
+ "github.com/stretchr/testify/require"
)
func ConfigEARAzureKeyVault(projectID string, azure *admin.AzureKeyVault, useRequirePrivateNetworking, useDatasource bool) string {
@@ -153,3 +157,65 @@ func EARImportStateIDFunc(resourceName string) resource.ImportStateIdFunc {
return rs.Primary.ID, nil
}
}
+
+// EncryptionAtRestExecution creates an encryption at rest configuration for test execution.
+func EncryptionAtRestExecution(tb testing.TB) string {
+ tb.Helper()
+ SkipInUnitTest(tb)
+ require.True(tb, sharedInfo.init, "SetupSharedResources must be called from the test package's TestMain")
+
+ projectID := os.Getenv("MONGODB_ATLAS_PROJECT_EAR_PE_AWS_ID")
+
+ sharedInfo.mu.Lock()
+ defer sharedInfo.mu.Unlock()
+
+ // lazy creation so it's only done if really needed
+ if !sharedInfo.encryptionAtRestEnabled {
+ tb.Logf("Creating execution encryption at rest configuration for project: %s\n", projectID)
+
+ // Create encryption at rest configuration using environment variables
+ awsKms := &admin.AWSKMSConfiguration{
+ Enabled: conversion.Pointer(true),
+ CustomerMasterKeyID: conversion.StringPtr(os.Getenv("AWS_CUSTOMER_MASTER_KEY_ID")),
+ Region: conversion.StringPtr(conversion.AWSRegionToMongoDBRegion(os.Getenv("AWS_REGION"))),
+ RoleId: conversion.StringPtr(os.Getenv("AWS_EAR_ROLE_ID")),
+ RequirePrivateNetworking: conversion.Pointer(true),
+ }
+
+ createEncryptionAtRest(tb, projectID, awsKms)
+ sharedInfo.encryptionAtRestEnabled = true
+ }
+
+ return projectID
+}
+
+func createEncryptionAtRest(tb testing.TB, projectID string, aws *admin.AWSKMSConfiguration) {
+ tb.Helper()
+
+ encryptionAtRestReq := &admin.EncryptionAtRest{
+ AwsKms: aws,
+ }
+
+ _, _, err := ConnV2().EncryptionAtRestUsingCustomerKeyManagementApi.UpdateEncryptionAtRest(tb.Context(), projectID, encryptionAtRestReq).Execute()
+ require.NoError(tb, err, "Failed to create encryption at rest configuration for project: %s", projectID)
+}
+
+func deleteEncryptionAtRest(projectID string) {
+ // Disable encryption at rest by setting all providers to disabled
+ encryptionAtRestReq := &admin.EncryptionAtRest{
+ AwsKms: &admin.AWSKMSConfiguration{
+ Enabled: conversion.Pointer(false),
+ },
+ AzureKeyVault: &admin.AzureKeyVault{
+ Enabled: conversion.Pointer(false),
+ },
+ GoogleCloudKms: &admin.GoogleCloudKMS{
+ Enabled: conversion.Pointer(false),
+ },
+ }
+
+ _, _, err := ConnV2().EncryptionAtRestUsingCustomerKeyManagementApi.UpdateEncryptionAtRest(context.Background(), projectID, encryptionAtRestReq).Execute()
+ if err != nil {
+ fmt.Printf("Failed to delete encryption at rest for project %s: %s\n", projectID, err)
+ }
+}
diff --git a/internal/testutil/acc/flex_cluster.go b/internal/testutil/acc/flex_cluster.go
index 0b0029dc33..7b1d6ad3bf 100644
--- a/internal/testutil/acc/flex_cluster.go
+++ b/internal/testutil/acc/flex_cluster.go
@@ -15,9 +15,13 @@ var (
data "mongodbatlas_flex_cluster" "test" {
project_id = mongodbatlas_flex_cluster.test.project_id
name = mongodbatlas_flex_cluster.test.name
+
+ depends_on = [mongodbatlas_flex_cluster.test]
}
data "mongodbatlas_flex_clusters" "test" {
project_id = mongodbatlas_flex_cluster.test.project_id
+
+ depends_on = [mongodbatlas_flex_cluster.test]
}`
)
diff --git a/internal/testutil/acc/pre_check.go b/internal/testutil/acc/pre_check.go
index 98402972fa..d6acde1226 100644
--- a/internal/testutil/acc/pre_check.go
+++ b/internal/testutil/acc/pre_check.go
@@ -7,7 +7,6 @@ import (
"time"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)
func PreCheckBasic(tb testing.TB) {
@@ -19,13 +18,6 @@ func PreCheckBasic(tb testing.TB) {
}
}
-func SkipIfAdvancedClusterV2Schema(tb testing.TB) {
- tb.Helper()
- if config.PreviewProviderV2AdvancedCluster() {
- tb.Skip("Skipping test in PreviewProviderV2AdvancedCluster as implementation is pending or test is not applicable")
- }
-}
-
// PreCheckBasicSleep is a helper function to call SerialSleep, see its help for more info.
// Some examples of use are when the test is calling ProjectIDExecution or GetClusterInfo to create clusters.
func PreCheckBasicSleep(tb testing.TB, clusterInfo *ClusterInfo, projectID, clusterName string) func() {
diff --git a/internal/testutil/acc/privatelink_endpoint.go b/internal/testutil/acc/privatelink_endpoint.go
new file mode 100644
index 0000000000..d5809ea528
--- /dev/null
+++ b/internal/testutil/acc/privatelink_endpoint.go
@@ -0,0 +1,43 @@
+package acc
+
+import (
+ "context"
+ "fmt"
+ "testing"
+ "time"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/service/privatelinkendpoint"
+ "github.com/stretchr/testify/require"
+ "go.mongodb.org/atlas-sdk/v20250312007/admin"
+)
+
+func createPrivateLinkEndpoint(tb testing.TB, projectID, providerName, region string) string {
+ tb.Helper()
+
+ request := &admin.CloudProviderEndpointServiceRequest{
+ ProviderName: providerName,
+ Region: region,
+ }
+
+ privateEndpoint, _, err := ConnV2().PrivateEndpointServicesApi.CreatePrivateEndpointService(tb.Context(), projectID, request).Execute()
+ require.NoError(tb, err)
+
+ stateConf := privatelinkendpoint.CreateStateChangeConfig(tb.Context(), ConnV2(), projectID, providerName, privateEndpoint.GetId(), 1*time.Hour)
+ _, err = stateConf.WaitForStateContext(tb.Context())
+ require.NoError(tb, err, "Private link endpoint creation failed for endpoint: %s", privateEndpoint.GetId())
+
+ return privateEndpoint.GetId()
+}
+
+func deletePrivateLinkEndpoint(projectID, providerName, privateLinkEndpointID string) {
+ _, err := ConnV2().PrivateEndpointServicesApi.DeletePrivateEndpointService(context.Background(), projectID, providerName, privateLinkEndpointID).Execute()
+ if err != nil {
+ fmt.Printf("Failed to delete private link endpoint %s: %s\n", privateLinkEndpointID, err)
+ return
+ }
+ stateConf := privatelinkendpoint.DeleteStateChangeConfig(context.Background(), ConnV2(), projectID, providerName, privateLinkEndpointID, 1*time.Hour)
+ _, err = stateConf.WaitForStateContext(context.Background())
+ if err != nil {
+ fmt.Printf("Failed to delete private link endpoint %s: %s\n", privateLinkEndpointID, err)
+ }
+}
diff --git a/internal/testutil/acc/shared_resource.go b/internal/testutil/acc/shared_resource.go
index 960823166d..219201f84c 100644
--- a/internal/testutil/acc/shared_resource.go
+++ b/internal/testutil/acc/shared_resource.go
@@ -3,12 +3,10 @@ package acc
import (
"context"
"fmt"
- "os"
"sync"
"testing"
"time"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/clean"
"github.com/stretchr/testify/require"
)
@@ -22,7 +20,6 @@ const (
// It returns the cleanup function that must be called at the end of TestMain.
func SetupSharedResources() func() {
sharedInfo.init = true
- setupTestsSDKv2ToTPF()
return cleanupSharedResources
}
@@ -45,6 +42,22 @@ func cleanupSharedResources() {
fmt.Printf("Failed to delete stream instances: for execution project %s, error: %s\n", projectID, err)
}
}
+ if sharedInfo.privateLinkEndpointID != "" {
+ projectID := sharedInfo.projectID
+ if projectID == "" {
+ projectID = projectIDLocal()
+ }
+ fmt.Printf("Deleting execution private link endpoint: %s, project id: %s, provider: %s\n", sharedInfo.privateLinkEndpointID, projectID, sharedInfo.privateLinkProviderName)
+ deletePrivateLinkEndpoint(projectID, sharedInfo.privateLinkProviderName, sharedInfo.privateLinkEndpointID)
+ }
+ if sharedInfo.encryptionAtRestEnabled {
+ projectID := sharedInfo.projectID
+ if projectID == "" {
+ projectID = projectIDLocal()
+ }
+ fmt.Printf("Deleting execution encryption at rest: project id: %s\n", projectID)
+ deleteEncryptionAtRest(projectID)
+ }
if sharedInfo.projectID != "" {
fmt.Printf("Deleting execution project: %s, id: %s\n", sharedInfo.projectName, sharedInfo.projectID)
deleteProject(sharedInfo.projectID)
@@ -177,6 +190,29 @@ func SerialSleep(tb testing.TB) {
time.Sleep(5 * time.Second)
}
+// PrivateLinkEndpointIDExecution returns a private link endpoint id created for the execution of the tests.
+// The endpoint is created with the given provider name and region.
+// When `MONGODB_ATLAS_PROJECT_ID` is defined, it is used instead of creating a project.
+func PrivateLinkEndpointIDExecution(tb testing.TB, providerName, region string) (projectID, privateLinkEndpointID string) {
+ tb.Helper()
+ SkipInUnitTest(tb)
+ require.True(tb, sharedInfo.init, "SetupSharedResources must be called from TestMain in the test package")
+
+ projectID = ProjectIDExecution(tb) // ensure the execution project is created before endpoint creation
+
+ sharedInfo.mu.Lock()
+ defer sharedInfo.mu.Unlock()
+
+ // lazy creation so it's only done if really needed
+ if sharedInfo.privateLinkEndpointID == "" {
+ tb.Logf("Creating execution private link endpoint for provider: %s, region: %s\n", providerName, region)
+ sharedInfo.privateLinkEndpointID = createPrivateLinkEndpoint(tb, projectID, providerName, region)
+ sharedInfo.privateLinkProviderName = providerName
+ }
+
+ return projectID, sharedInfo.privateLinkEndpointID
+}
+
type projectInfo struct {
id string
name string
@@ -185,14 +221,17 @@ type projectInfo struct {
}
var sharedInfo = struct {
- projectID string
- projectName string
- clusterName string
- streamInstanceName string
- projects []projectInfo
- mu sync.Mutex
- muSleep sync.Mutex
- init bool
+ projectName string
+ clusterName string
+ streamInstanceName string
+ privateLinkEndpointID string
+ privateLinkProviderName string
+ projectID string
+ projects []projectInfo
+ mu sync.Mutex
+ muSleep sync.Mutex
+ encryptionAtRestEnabled bool
+ init bool
}{
projects: []projectInfo{},
}
@@ -218,11 +257,3 @@ func NextProjectIDClusterName(totalNodeCount, freeTierClusterCount int, projectC
}
return project.id, RandomClusterName()
}
-
-// setupTestsSDKv2ToTPF sets the Preview environment variable to false so the previous version in migration tests uses SDKv2.
-// However the current version will use TPF as the variable is only read once during import when it was true.
-func setupTestsSDKv2ToTPF() {
- if IsTestSDKv2ToTPF() && config.PreviewProviderV2AdvancedCluster() {
- os.Setenv(config.PreviewProviderV2AdvancedClusterEnvVar, "false")
- }
-}
diff --git a/internal/testutil/mig/pre_check.go b/internal/testutil/mig/pre_check.go
index acfc144728..b75370c985 100644
--- a/internal/testutil/mig/pre_check.go
+++ b/internal/testutil/mig/pre_check.go
@@ -1,6 +1,8 @@
package mig
import (
+ "os"
+ "strconv"
"testing"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
@@ -22,12 +24,35 @@ func PreCheckBasicSleep(tb testing.TB) func() {
}
}
+func PreCheckLast1XVersion(tb testing.TB) {
+ tb.Helper()
+ if os.Getenv("MONGODB_ATLAS_LAST_1X_VERSION") == "" {
+ tb.Fatal("`MONGODB_ATLAS_LAST_1X_VERSION` must be set for this migration test")
+ }
+}
+
func PreCheck(tb testing.TB) {
tb.Helper()
checkLastVersion(tb)
acc.PreCheck(tb)
}
+// This pre-check can be removed when migration testing against v1.x is no longer needed
+func PreCheckOldPreviewEnv(tb testing.TB) func() {
+ tb.Helper()
+ return func() {
+ if IsProviderVersionLowerThan("2.0.0") {
+ envValue := os.Getenv("MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER")
+ if envValue == "" {
+ tb.Fatal("`MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER` must be set for migration testing against lower provider versions")
+ }
+ if _, err := strconv.ParseBool(envValue); err != nil {
+ tb.Fatalf("`MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER` must be a valid boolean value, got: %s", envValue)
+ }
+ }
+ }
+}
+
func PreCheckBasicOwnerID(tb testing.TB) {
tb.Helper()
PreCheckBasic(tb)
diff --git a/internal/testutil/mig/provider.go b/internal/testutil/mig/provider.go
index 2ff5665d64..a3fcab0759 100644
--- a/internal/testutil/mig/provider.go
+++ b/internal/testutil/mig/provider.go
@@ -6,6 +6,7 @@ import (
"github.com/hashicorp/go-version"
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
@@ -23,6 +24,12 @@ func IsProviderVersionAtLeast(minVersion string) bool {
return errProvider == nil && errMin == nil && vProvider.GreaterThanOrEqual(vMin)
}
+func IsProviderVersionLowerThan(v string) bool {
+ vProvider, errProvider := version.NewVersion(versionConstraint())
+ vArg, err := version.NewVersion(v)
+ return errProvider == nil && err == nil && vProvider.LessThan(vArg)
+}
+
func ExternalProviders() map[string]resource.ExternalProvider {
return acc.ExternalProviders(versionConstraint())
}
diff --git a/internal/testutil/mig/test_case.go b/internal/testutil/mig/test_case.go
index b91cfedf8c..54355d25f3 100644
--- a/internal/testutil/mig/test_case.go
+++ b/internal/testutil/mig/test_case.go
@@ -9,10 +9,12 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)
-func CreateAndRunTest(t *testing.T, test *resource.TestCase) {
+// shouldUseClusterTpfForEmptyPlanStep is only used for advanced cluster migration tests (SDKv2 -> TPF).
+// This can be removed once these tests are no longer used.
+func CreateAndRunTest(t *testing.T, test *resource.TestCase, shouldUseClusterTpfForEmptyPlanStep ...bool) {
t.Helper()
acc.SkipInUnitTest(t) // Migration tests create external resources and use MONGODB_ATLAS_LAST_VERSION env-var.
- resource.ParallelTest(t, CreateTest(t, test))
+ resource.ParallelTest(t, CreateTest(t, test, shouldUseClusterTpfForEmptyPlanStep...))
}
// avoids running migration test in parallel
@@ -36,13 +38,21 @@ func CreateTestAndRunUseExternalProviderNonParallel(t *testing.T, test *resource
// CreateTest returns a new TestCase that reuses step 1 and adds a TestStepCheckEmptyPlan.
// Requires: `MONGODB_ATLAS_LAST_VERSION` to be present.
-func CreateTest(t *testing.T, test *resource.TestCase) resource.TestCase {
+// shouldUseClusterTpfForEmptyPlanStep is only used for advanced cluster migration tests (SDKv2 -> TPF).
+// This can be removed once these tests are no longer used.
+func CreateTest(t *testing.T, test *resource.TestCase, shouldUseClusterTpfForEmptyPlanStep ...bool) resource.TestCase {
t.Helper()
validateReusableCase(t, test)
firstStep := test.Steps[0]
+
+ emptyPlanStep := TestStepCheckEmptyPlan(firstStep.Config)
+ if len(shouldUseClusterTpfForEmptyPlanStep) > 0 && shouldUseClusterTpfForEmptyPlanStep[0] {
+ emptyPlanStep = TestStepCheckEmptyPlan(acc.ConvertAdvancedClusterToTPF(t, true, firstStep.Config))
+ }
+
steps := []resource.TestStep{
useExternalProvider(&firstStep, ExternalProviders()),
- TestStepCheckEmptyPlan(acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, firstStep.Config)),
+ emptyPlanStep,
}
newTest := reuseCase(test, steps)
return newTest
@@ -57,7 +67,7 @@ func CreateTestUseExternalProvider(t *testing.T, test *resource.TestCase, extern
validateReusableCase(t, test)
firstStep := test.Steps[0]
require.NotContains(t, additionalProviders, "mongodbatlas", "Will use the local provider, cannot specify mongodbatlas provider")
- emptyPlanStep := TestStepCheckEmptyPlan(acc.ConvertAdvancedClusterToPreviewProviderV2(t, true, firstStep.Config))
+ emptyPlanStep := TestStepCheckEmptyPlan(firstStep.Config)
steps := []resource.TestStep{
useExternalProvider(&firstStep, externalProviders),
useExternalProvider(&emptyPlanStep, additionalProviders),
diff --git a/internal/testutil/mig/test_case_test.go b/internal/testutil/mig/test_case_test.go
index 67137a5bfe..2491e2e639 100644
--- a/internal/testutil/mig/test_case_test.go
+++ b/internal/testutil/mig/test_case_test.go
@@ -5,9 +5,10 @@ import (
"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/hashicorp/terraform-plugin-testing/terraform"
+ "github.com/stretchr/testify/assert"
+
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/mig"
- "github.com/stretchr/testify/assert"
)
func TestConvertToMigration(t *testing.T) {
diff --git a/internal/testutil/unit/http_mocker_data_test.go b/internal/testutil/unit/http_mocker_data_test.go
index bab58ea148..8531b81019 100644
--- a/internal/testutil/unit/http_mocker_data_test.go
+++ b/internal/testutil/unit/http_mocker_data_test.go
@@ -4,10 +4,12 @@ import (
"strings"
"testing"
- "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/unit"
+ "gopkg.in/yaml.v3"
+
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
- "gopkg.in/yaml.v3"
+
+ "github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/unit"
)
func TestMockHTTPData_UpdateVariables(t *testing.T) {
@@ -67,17 +69,15 @@ steps:
data "mongodbatlas_advanced_cluster" "test" {
project_id = mongodbatlas_advanced_cluster.test.project_id
name = mongodbatlas_advanced_cluster.test.name
- use_replication_spec_per_shard = true
}
data "mongodbatlas_advanced_clusters" "test" {
project_id = mongodbatlas_advanced_cluster.test.project_id
- use_replication_spec_per_shard = true
}
diff_requests: []
request_responses: []
`
-var tfDsString = "\ndata \"mongodbatlas_advanced_cluster\" \"test\" {\n project_id = mongodbatlas_advanced_cluster.test.project_id\n name = mongodbatlas_advanced_cluster.test.name\n use_replication_spec_per_shard = true\n}\ndata \"mongodbatlas_advanced_clusters\" \"test\" {\n project_id = mongodbatlas_advanced_cluster.test.project_id\n use_replication_spec_per_shard = true\n}\n \n"
+var tfDsString = "\ndata \"mongodbatlas_advanced_cluster\" \"test\" {\n project_id = mongodbatlas_advanced_cluster.test.project_id\n name = mongodbatlas_advanced_cluster.test.name\n}\ndata \"mongodbatlas_advanced_clusters\" \"test\" {\n project_id = mongodbatlas_advanced_cluster.test.project_id\n}\n \n"
func TestDumpingConfigUsesLiteralStyle(t *testing.T) {
mockData := unit.NewMockHTTPData(t, 2, []string{"", tfDsString})
diff --git a/scripts/check-upgrade-guide-exists.sh b/scripts/check-upgrade-guide-exists.sh
old mode 100755
new mode 100644
index d1cdf7f136..721a14de0b
--- a/scripts/check-upgrade-guide-exists.sh
+++ b/scripts/check-upgrade-guide-exists.sh
@@ -8,8 +8,8 @@ RELEASE_NUMBER=$(echo "${RELEASE_TAG}" | tr -d v)
IFS='.' read -r MAJOR MINOR PATCH <<< "$RELEASE_NUMBER"
-# Check if it's a major release (patch version is 0)
-if [ "$PATCH" -eq 0 ]; then
+# Check if it's a major release (minor and patch versions are 0)
+if [ "$PATCH" -eq 0 ] && [ "$MINOR" -eq 0 ]; then
UPGRADE_GUIDE_PATH="docs/guides/$MAJOR.$MINOR.$PATCH-upgrade-guide.md"
echo "Checking for the presence of $UPGRADE_GUIDE_PATH"
if [ ! -f "$UPGRADE_GUIDE_PATH" ]; then
diff --git a/scripts/tf-validate.sh b/scripts/tf-validate.sh
index 1ade9b4bcd..69f59cedca 100755
--- a/scripts/tf-validate.sh
+++ b/scripts/tf-validate.sh
@@ -34,41 +34,13 @@ provider_installation {
}
EOF
-# Function to check if directory is a V2 schema directory
-is_v2_dir() {
- local parent_dir
- local grand_parent_dir
- parent_dir=$(basename "$1")
- grand_parent_dir=$(basename "$(dirname "$1")")
- local v2_parent_dirs=("cluster_with_schedule")
- local v2_grand_parent_dirs=("module_maintainer" "module_user" "migrate_cluster_to_advanced_cluster" "mongodbatlas_backup_compliance_policy") # module_maintainer and module_user uses {PARENT_DIR}/vX/main.tf
-
- for dir in "${v2_parent_dirs[@]}"; do
- if [[ $parent_dir =~ $dir ]]; then
- return 0 # True
- fi
- done
- for dir in "${v2_grand_parent_dirs[@]}"; do
- if [[ $grand_parent_dir =~ $dir ]]; then
- return 0 # True
- fi
- done
- return 1 # False
-}
-
for DIR in $(find ./examples -type f -name '*.tf' -exec dirname {} \; | sort -u); do
[ ! -d "$DIR" ] && continue
pushd "$DIR"
echo; echo -e "\e[1;35m===> Example: $DIR <===\e[0m"; echo
terraform init > /dev/null # suppress output as it's very verbose
terraform fmt -check -recursive
+ terraform validate
- if is_v2_dir "$DIR"; then
- echo "v2 schema detected for $DIR"
- MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER=true terraform validate
- else
- echo "v1 schema detected for $DIR"
- terraform validate
- fi
popd
done
diff --git a/scripts/update-examples-reference-in-docs.sh b/scripts/update-examples-reference-in-docs.sh
index 90ff16ba54..58bf2dcf79 100755
--- a/scripts/update-examples-reference-in-docs.sh
+++ b/scripts/update-examples-reference-in-docs.sh
@@ -4,21 +4,44 @@ set -euo pipefail
: "${1?"Tag of new release must be provided"}"
-FILE_PATH="./docs/index.md"
RELEASE_TAG=$1
# Define the old URL pattern and new URL
-OLD_URL_PATTERN="\[example configurations\](https:\/\/github.com\/mongodb\/terraform-provider-mongodbatlas\/tree\/[a-zA-Z0-9._-]*\/examples)"
-NEW_URL="\[example configurations\](https:\/\/github.com\/mongodb\/terraform-provider-mongodbatlas\/tree\/$RELEASE_TAG\/examples)"
-
-
-TMP_FILE_NAME="docs.tmp"
-rm -f $TMP_FILE_NAME
-
-# Use sed to update the URL and write to temporary file
-sed "s|$OLD_URL_PATTERN|$NEW_URL|g" "$FILE_PATH" > "$TMP_FILE_NAME"
-
-# Move temporary file to original file
-mv "$TMP_FILE_NAME" "$FILE_PATH"
-
-echo "Link updated successfully in $FILE_PATH"
+OLD_URL_PATTERN="https:\/\/github.com\/mongodb\/terraform-provider-mongodbatlas\/tree\/[a-zA-Z0-9._-]*\/examples"
+NEW_URL="https:\/\/github.com\/mongodb\/terraform-provider-mongodbatlas\/tree\/$RELEASE_TAG\/examples"
+
+FILES=()
+
+# 1) docs/index.md
+FILES+=("./docs/index.md")
+
+# 2) collect all *.md and *.md.tmpl under docs/resources, templates/resources,
+# docs/data-sources, and templates/data-sources
+TARGET_DIRS=(
+ "./docs/resources"
+ "./templates/resources"
+ "./docs/data-sources"
+ "./templates/data-sources"
+)
+
+for DIR in "${TARGET_DIRS[@]}"; do
+ if [ -d "$DIR" ]; then
+ while IFS= read -r -d '' f; do
+ FILES+=("$f")
+ done < <(find "$DIR" -type f \( -name "*.md" -o -name "*.md.tmpl" \) -print0)
+ fi
+done
+
+# Update links in each target file
+for FILE_PATH in "${FILES[@]}"; do
+ TMP_FILE_NAME="${FILE_PATH}.tmp"
+ rm -f "$TMP_FILE_NAME"
+
+ # Use sed to update the URL and write to temporary file
+ sed "s|$OLD_URL_PATTERN|$NEW_URL|g" "$FILE_PATH" > "$TMP_FILE_NAME"
+
+ # Move temporary file to original file
+ mv "$TMP_FILE_NAME" "$FILE_PATH"
+
+ echo "Link updated successfully in $FILE_PATH"
+done
diff --git a/templates/data-sources/api_key_project_assignment.md.tmpl b/templates/data-sources/api_key_project_assignment.md.tmpl
index 3d583aa602..051782368f 100644
--- a/templates/data-sources/api_key_project_assignment.md.tmpl
+++ b/templates/data-sources/api_key_project_assignment.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes an API Key Project Assignment.
diff --git a/templates/data-sources/api_key_project_assignments.md.tmpl b/templates/data-sources/api_key_project_assignments.md.tmpl
index bc4e1e069b..58e703f797 100644
--- a/templates/data-sources/api_key_project_assignments.md.tmpl
+++ b/templates/data-sources/api_key_project_assignments.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides an API Key Project Assignments data source. The data source lets you list all API key project assignments for an organization.
diff --git a/templates/data-sources/cloud_user_org_assignment.md.tmpl b/templates/data-sources/cloud_user_org_assignment.md.tmpl
new file mode 100644
index 0000000000..1d5e441a45
--- /dev/null
+++ b/templates/data-sources/cloud_user_org_assignment.md.tmpl
@@ -0,0 +1,19 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Cloud User Organization Assignment data source. The data source lets you retrieve a user assigned to an organization.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint are not returned by this data source. See the [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser) for details.
+To manage such users, refer to our [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+{{ .SchemaMarkdown | trimspace }}
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser) Documentation.
diff --git a/templates/data-sources/cloud_user_project_assignment.md.tmpl b/templates/data-sources/cloud_user_project_assignment.md.tmpl
new file mode 100644
index 0000000000..1cb9ff7dc6
--- /dev/null
+++ b/templates/data-sources/cloud_user_project_assignment.md.tmpl
@@ -0,0 +1,19 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Cloud User Project Assignment data source. The data source lets you retrieve a user assigned to a project.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint are not returned by this data source. See the [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectteam) for details.
+To manage such users, refer to our [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+{{ .SchemaMarkdown | trimspace }}
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectuser) Documentation.
diff --git a/templates/data-sources/cloud_user_team_assignment.md.tmpl b/templates/data-sources/cloud_user_team_assignment.md.tmpl
new file mode 100644
index 0000000000..d1c43dd438
--- /dev/null
+++ b/templates/data-sources/cloud_user_team_assignment.md.tmpl
@@ -0,0 +1,19 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Cloud User Team Assignment data source. The data source lets you retrieve a user assigned to a team.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint are not returned by this data source. See the [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listteamusers) for details.
+To manage such users, refer to our [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](../guides/atlas-user-management).
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+{{ .SchemaMarkdown | trimspace }}
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listteamusers) Documentation.
diff --git a/templates/data-sources/control_plane_ip_addresses.md.tmpl b/templates/data-sources/control_plane_ip_addresses.md.tmpl
index 2da732f681..afef21b8da 100644
--- a/templates/data-sources/control_plane_ip_addresses.md.tmpl
+++ b/templates/data-sources/control_plane_ip_addresses.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Root"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns all control plane IP addresses.
diff --git a/templates/data-sources/encryption_at_rest.md.tmpl b/templates/data-sources/encryption_at_rest.md.tmpl
index 1b2100bad5..d05cb7087f 100644
--- a/templates/data-sources/encryption_at_rest.md.tmpl
+++ b/templates/data-sources/encryption_at_rest.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes encryption at rest configuration for an Atlas project with one of the following providers:
diff --git a/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl b/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl
index 3fabfc0e27..ea20cf39dc 100644
--- a/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl
+++ b/templates/data-sources/encryption_at_rest_private_endpoint.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a private endpoint used for encryption at rest using customer-managed keys.
diff --git a/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl b/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl
index 8c2f815fa7..f706de898e 100644
--- a/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl
+++ b/templates/data-sources/encryption_at_rest_private_endpoints.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes private endpoints of a particular cloud provider used for encryption at rest using customer-managed keys.
diff --git a/templates/data-sources/flex_cluster.md.tmpl b/templates/data-sources/flex_cluster.md.tmpl
index bc17e39ba1..883f931e93 100644
--- a/templates/data-sources/flex_cluster.md.tmpl
+++ b/templates/data-sources/flex_cluster.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Clusters"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a flex cluster.
diff --git a/templates/data-sources/flex_clusters.md.tmpl b/templates/data-sources/flex_clusters.md.tmpl
index 95a58f646c..339c081b5b 100644
--- a/templates/data-sources/flex_clusters.md.tmpl
+++ b/templates/data-sources/flex_clusters.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Clusters"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns all flex clusters in a project.
diff --git a/templates/data-sources/flex_restore_job.md.tmpl b/templates/data-sources/flex_restore_job.md.tmpl
index 7f57850f46..266ed5249b 100644
--- a/templates/data-sources/flex_restore_job.md.tmpl
+++ b/templates/data-sources/flex_restore_job.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Restore Jobs"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a flex restore job.
diff --git a/templates/data-sources/flex_restore_jobs.md.tmpl b/templates/data-sources/flex_restore_jobs.md.tmpl
index 1593aeba14..d31df0615f 100644
--- a/templates/data-sources/flex_restore_jobs.md.tmpl
+++ b/templates/data-sources/flex_restore_jobs.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Restore Jobs"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns all flex restore job of a flex cluster.
diff --git a/templates/data-sources/flex_snapshot.md.tmpl b/templates/data-sources/flex_snapshot.md.tmpl
index f5b6f8896a..6aef6a0778 100644
--- a/templates/data-sources/flex_snapshot.md.tmpl
+++ b/templates/data-sources/flex_snapshot.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Snapshots"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a flex snapshot.
diff --git a/templates/data-sources/flex_snapshots.md.tmpl b/templates/data-sources/flex_snapshots.md.tmpl
index 0a0780212d..0c3aed0acf 100644
--- a/templates/data-sources/flex_snapshots.md.tmpl
+++ b/templates/data-sources/flex_snapshots.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Snapshots"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns all snapshots of a flex cluster.
diff --git a/templates/data-sources/mongodb_employee_access_grant.md.tmpl b/templates/data-sources/mongodb_employee_access_grant.md.tmpl
index 2ae2d82e6d..843d383ae0 100644
--- a/templates/data-sources/mongodb_employee_access_grant.md.tmpl
+++ b/templates/data-sources/mongodb_employee_access_grant.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Clusters"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a MongoDB employee access grant.
diff --git a/templates/data-sources/project_ip_addresses.md.tmpl b/templates/data-sources/project_ip_addresses.md.tmpl
index 6b71f0efb0..74606e0a69 100644
--- a/templates/data-sources/project_ip_addresses.md.tmpl
+++ b/templates/data-sources/project_ip_addresses.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Projects"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns the IP addresses in a project categorized by services.
diff --git a/templates/data-sources/push_based_log_export.md.tmpl b/templates/data-sources/push_based_log_export.md.tmpl
index 0a0ea3fe3c..023c1d2fe1 100644
--- a/templates/data-sources/push_based_log_export.md.tmpl
+++ b/templates/data-sources/push_based_log_export.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Push-Based Log Export"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes the configured project level settings for the push-based log export feature.
diff --git a/templates/data-sources/resource_policies.md.tmpl b/templates/data-sources/resource_policies.md.tmpl
index f11ece93be..76988acb51 100644
--- a/templates/data-sources/resource_policies.md.tmpl
+++ b/templates/data-sources/resource_policies.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Resource Policies"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns all resource policies in an organization.
diff --git a/templates/data-sources/resource_policy.md.tmpl b/templates/data-sources/resource_policy.md.tmpl
index c9cca70438..959c3193be 100644
--- a/templates/data-sources/resource_policy.md.tmpl
+++ b/templates/data-sources/resource_policy.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Resource Policies"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a resource policy in an organization.
diff --git a/templates/data-sources/search_deployment.md.tmpl b/templates/data-sources/search_deployment.md.tmpl
index b746ea483e..3188b28e29 100644
--- a/templates/data-sources/search_deployment.md.tmpl
+++ b/templates/data-sources/search_deployment.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a search node deployment.
diff --git a/templates/data-sources/stream_account_details.md.tmpl b/templates/data-sources/stream_account_details.md.tmpl
index a2f282d20e..3db30ba5e6 100644
--- a/templates/data-sources/stream_account_details.md.tmpl
+++ b/templates/data-sources/stream_account_details.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns the AWS Account ID/Azure Subscription ID, and the AWS VPC ID/Azure Virtual Network Name for the group, cloud provider, and region that you specify.
diff --git a/templates/data-sources/stream_privatelink_endpoint.md.tmpl b/templates/data-sources/stream_privatelink_endpoint.md.tmpl
index 32aeff155c..a2808f93fb 100644
--- a/templates/data-sources/stream_privatelink_endpoint.md.tmpl
+++ b/templates/data-sources/stream_privatelink_endpoint.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a Privatelink Endpoint for Streams.
diff --git a/templates/data-sources/stream_privatelink_endpoints.md.tmpl b/templates/data-sources/stream_privatelink_endpoints.md.tmpl
index daaaf7d295..9c8bd8cebb 100644
--- a/templates/data-sources/stream_privatelink_endpoints.md.tmpl
+++ b/templates/data-sources/stream_privatelink_endpoints.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a Privatelink Endpoint for Streams.
diff --git a/templates/data-sources/stream_processor.md.tmpl b/templates/data-sources/stream_processor.md.tmpl
index f2a0b02309..12d23c63c4 100644
--- a/templates/data-sources/stream_processor.md.tmpl
+++ b/templates/data-sources/stream_processor.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a stream processor.
diff --git a/templates/data-sources/stream_processors.md.tmpl b/templates/data-sources/stream_processors.md.tmpl
index 126e4164f2..f7ec887b58 100644
--- a/templates/data-sources/stream_processors.md.tmpl
+++ b/templates/data-sources/stream_processors.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` returns all stream processors in a stream instance.
diff --git a/templates/data-sources/team_project_assignment.md.tmpl b/templates/data-sources/team_project_assignment.md.tmpl
new file mode 100644
index 0000000000..b6ed2ab221
--- /dev/null
+++ b/templates/data-sources/team_project_assignment.md.tmpl
@@ -0,0 +1,15 @@
+---
+subcategory: "Teams"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Team Project Assignment data source. The data source lets you retrieve a team assigned to a project.
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+{{ .SchemaMarkdown | trimspace }}
+
+For more information, see: [MongoDB Atlas API - Teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectteam) Documentation.
diff --git a/templates/resources/api_key_project_assignment.md.tmpl b/templates/resources/api_key_project_assignment.md.tmpl
index effc013f6f..a3a9392284 100644
--- a/templates/resources/api_key_project_assignment.md.tmpl
+++ b/templates/resources/api_key_project_assignment.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Programmatic API Keys"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides an API Key Project Assignment resource. The resource lets you create, edit, and delete Organization API keys assignments to projects.
@@ -6,6 +10,9 @@
{{ tffile "examples/mongodbatlas_api_key/main.tf"}}
+### Further Examples
+- [Assign API Key to Project](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_api_key_assignment)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/cloud_user_org_assignment.md.tmpl b/templates/resources/cloud_user_org_assignment.md.tmpl
new file mode 100644
index 0000000000..9f2a89ccf3
--- /dev/null
+++ b/templates/resources/cloud_user_org_assignment.md.tmpl
@@ -0,0 +1,32 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Cloud User Organization Assignment resource. The resource lets you assign a user to an organization and import, update, or remove that assignment.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_org_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint cannot be managed with this resource. See the [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser) for details.
+To manage such users with this resource, refer to our [Org Invitation to Cloud User Org Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+### Further Examples
+- [Cloud User Organization Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_org_assignment)
+
+{{ .SchemaMarkdown | trimspace }}
+
+## Import
+
+The Cloud User Org Assignment resource can be imported using the Org ID and Username, or the Org ID and User ID, in the format `ORG_ID/USERNAME` or `ORG_ID/USER_ID`.
+
+```
+$ terraform import mongodbatlas_cloud_user_org_assignment.test 63cfbf302333a3011d98592e/test-user@example.com
+OR
+$ terraform import mongodbatlas_cloud_user_org_assignment.test 63cfbf302333a3011d98592e/5f18367ccb7a503a2b481b7a
+```
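+
+With Terraform 1.5 or later, the same import can also be expressed declaratively with an `import` block; a minimal sketch using the placeholder IDs above:
+
+```terraform
+import {
+  to = mongodbatlas_cloud_user_org_assignment.test
+  # ORG_ID/USERNAME (placeholder values; replace with your own)
+  id = "63cfbf302333a3011d98592e/test-user@example.com"
+}
+```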
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createorganizationuser) Documentation.
diff --git a/templates/resources/cloud_user_project_assignment.md.tmpl b/templates/resources/cloud_user_project_assignment.md.tmpl
new file mode 100644
index 0000000000..cc4c18b53d
--- /dev/null
+++ b/templates/resources/cloud_user_project_assignment.md.tmpl
@@ -0,0 +1,37 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Cloud User Project Assignment resource. It lets you manage the association between a cloud user and a project, enabling you to import, assign, remove, or update the user's membership.
+
+Depending on the user's current membership status in the project's organization, MongoDB Cloud handles invitations and access in different ways:
+- If the user has a pending invitation to join the project's organization, MongoDB Cloud modifies it and grants project access.
+- If the user doesn't have an invitation to join the organization, MongoDB Cloud sends a new invitation that grants the user organization and project access.
+- If the user is already active in the project's organization, MongoDB Cloud grants access to the project.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint cannot be managed with this resource. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getprojectteam) for details.
+To manage such users with this resource, refer to our [Project Invitation to Cloud User Project Assignment Migration Guide](../guides/atlas-user-management).
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+### Further Examples
+- [Cloud User Project Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_project_assignment)
+
+{{ .SchemaMarkdown | trimspace }}
+
+## Import
+
+The Cloud User Project Assignment resource can be imported using the Project ID and Username, or the Project ID and User ID, in the format `PROJECT_ID/USERNAME` or `PROJECT_ID/USER_ID`.
+
+```
+$ terraform import mongodbatlas_cloud_user_project_assignment.test 9f3a7c2e54b8d1a0e6f4b3c2/test-user@example.com
+OR
+$ terraform import mongodbatlas_cloud_user_project_assignment.test 9f3a7c2e54b8d1a0e6f4b3c2/5f18367ccb7a503a2b481b7a
+```
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addprojectuser) Documentation.
diff --git a/templates/resources/cloud_user_team_assignment.md.tmpl b/templates/resources/cloud_user_team_assignment.md.tmpl
new file mode 100644
index 0000000000..ff6e824813
--- /dev/null
+++ b/templates/resources/cloud_user_team_assignment.md.tmpl
@@ -0,0 +1,32 @@
+---
+subcategory: "MongoDB Cloud Users"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Cloud User Team Assignment resource. It lets you manage the association between a cloud user and a team, enabling you to import, assign, remove, or update the user's membership.
+
+-> **NOTE:** Users with pending invitations created using the deprecated `mongodbatlas_project_invitation` resource or via the deprecated [Invite One MongoDB Cloud User to One Project](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-getorganizationuser#tag/Projects/operation/createProjectInvitation)
+endpoint cannot be managed with this resource. See [MongoDB Atlas API](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-listteamusers) for details.
+To manage such users with this resource, refer to our [Migration Guide: Team Usernames Attribute to Cloud User Team Assignment](../guides/atlas-user-management).
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+### Further Examples
+- [Cloud User Team Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_cloud_user_team_assignment)
+
+{{ .SchemaMarkdown | trimspace }}
+
+## Import
+
+The Cloud User Team Assignment resource can be imported using the Org ID, Team ID, and User ID, or the Org ID, Team ID, and Username, in the format `ORG_ID/TEAM_ID/USER_ID` or `ORG_ID/TEAM_ID/USERNAME`.
+
+```
+$ terraform import mongodbatlas_cloud_user_team_assignment.test 63cfbf302333a3011d98592e/9f3c1e7a4d8b2f6051acde47/5f18367ccb7a503a2b481b7a
+OR
+$ terraform import mongodbatlas_cloud_user_team_assignment.test 63cfbf302333a3011d98592e/9f3c1e7a4d8b2f6051acde47/test-user@example.com
+```
+
+For more information, see: [MongoDB Atlas API - Cloud Users](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addusertoteam) Documentation.
diff --git a/templates/resources/encryption_at_rest.md.tmpl b/templates/resources/encryption_at_rest.md.tmpl
index 41add50a7f..6035274453 100644
--- a/templates/resources/encryption_at_rest.md.tmpl
+++ b/templates/resources/encryption_at_rest.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` allows management of Encryption at Rest for an Atlas project using Customer Key Management configuration. The following providers are supported:
@@ -59,7 +63,11 @@ This approach uses role-based authentication through Cloud Provider Access for a
{{ tffile (printf "examples/%s/gcp/main.tf" .Name )}}
-For a complete example that includes GCP KMS resource creation and IAM binding setup, see the [GCP encryption at rest example](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/gcp/).
+### Further Examples
+- [AWS KMS Encryption at Rest](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/aws)
+- [Azure Key Vault Encryption at Rest](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/azure)
+- [GCP KMS Encryption at Rest](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest/gcp/)
+
{{ .SchemaMarkdown | trimspace }}
diff --git a/templates/resources/encryption_at_rest_private_endpoint.md.tmpl b/templates/resources/encryption_at_rest_private_endpoint.md.tmpl
index ad5e925831..54ac20bf5f 100644
--- a/templates/resources/encryption_at_rest_private_endpoint.md.tmpl
+++ b/templates/resources/encryption_at_rest_private_endpoint.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Encryption at Rest using Customer Key Management"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a resource for managing a private endpoint used for encryption at rest with customer-managed keys. This ensures all traffic between Atlas and customer key management systems take place over private network interfaces.
@@ -23,6 +27,10 @@ Make sure to reference the [complete example section](https://github.com/mongodb
{{ tffile (printf "examples/%s/aws/main.tf" .Name )}}
+### Further Examples
+- [AWS KMS Encryption at Rest Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/aws)
+- [Azure Key Vault Encryption at Rest Private Endpoint](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_encryption_at_rest_private_endpoint/azure)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/flex_cluster.md.tmpl b/templates/resources/flex_cluster.md.tmpl
index 576f07a1e1..3f6c3d5c88 100644
--- a/templates/resources/flex_cluster.md.tmpl
+++ b/templates/resources/flex_cluster.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Flex Clusters"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a Flex Cluster resource. The resource lets you create, update, delete and import a flex cluster.
@@ -8,6 +12,9 @@
{{ tffile (printf "examples/%s/main.tf" .Name )}}
+### Further Examples
+- [Flex Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_flex_cluster)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/mongodb_employee_access_grant.md.tmpl b/templates/resources/mongodb_employee_access_grant.md.tmpl
index 03102aec95..1216f61079 100644
--- a/templates/resources/mongodb_employee_access_grant.md.tmpl
+++ b/templates/resources/mongodb_employee_access_grant.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Clusters"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a MongoDB Employee Access Grant resource. The resource lets you create, delete, update and import a MongoDB employee access grant.
@@ -6,6 +10,9 @@
{{ tffile (printf "examples/%s/main.tf" .Name )}}
+### Further Examples
+- [Grant log access to MongoDB employees](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_mongodb_employee_access_grant)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/push_based_log_export.md.tmpl b/templates/resources/push_based_log_export.md.tmpl
index 44a45cf6e7..6372ee264b 100644
--- a/templates/resources/push_based_log_export.md.tmpl
+++ b/templates/resources/push_based_log_export.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Push-Based Log Export"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a resource for push-based log export feature. The resource lets you configure, enable & disable the project level settings for the push-based log export feature. Using this resource you
@@ -10,6 +14,9 @@ The [push based log export Terraform module](https://registry.terraform.io/modul
{{ tffile (printf "examples/%s/main.tf" .Name )}}
+### Further Examples
+- [Push-Based Log Export](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_push_based_log_export)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/resource_policy.md.tmpl b/templates/resources/resource_policy.md.tmpl
index 87c274b895..40b81b221f 100644
--- a/templates/resources/resource_policy.md.tmpl
+++ b/templates/resources/resource_policy.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Resource Policies"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a Resource Policy resource. The resource lets you create, edit and delete resource policies to prevent misconfigurations and reduce the need for corrective interventions in your organization.
@@ -7,6 +11,9 @@
{{ tffile (printf "examples/%s/main.tf" .Name )}}
+### Further Examples
+- [Atlas Resource Policy](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_resource_policy)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/search_deployment.md.tmpl b/templates/resources/search_deployment.md.tmpl
index c03d61efbf..f546b3d5b1 100644
--- a/templates/resources/search_deployment.md.tmpl
+++ b/templates/resources/search_deployment.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Atlas Search"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a Search Deployment resource. The resource lets you create, edit and delete dedicated search nodes in a cluster.
@@ -10,6 +14,9 @@
{{ tffile (printf "examples/%s/main.tf" .Name )}}
+### Further Examples
+- [Atlas Cluster with dedicated Search Nodes Deployment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_search_deployment)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/stream_privatelink_endpoint.md.tmpl b/templates/resources/stream_privatelink_endpoint.md.tmpl
index 32aeff155c..8681e9a17e 100644
--- a/templates/resources/stream_privatelink_endpoint.md.tmpl
+++ b/templates/resources/stream_privatelink_endpoint.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` describes a Privatelink Endpoint for Streams.
@@ -13,6 +17,13 @@
### AWS S3 Privatelink
{{ tffile (printf "examples/%s/s3/main.tf" .Name )}}
+### Further Examples
+- [AWS Confluent PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/confluent_serverless)
+- [Confluent Dedicated Cluster](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/confluent_dedicated_cluster)
+- [AWS MSK PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/aws_msk_cluster)
+- [AWS S3 PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/s3)
+- [Azure PrivateLink](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_privatelink_endpoint/azure)
+
{{ .SchemaMarkdown | trimspace }}
For more information see: [MongoDB Atlas API - Streams Privatelink](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-createprivatelinkconnection) Documentation.
diff --git a/templates/resources/stream_processor.md.tmpl b/templates/resources/stream_processor.md.tmpl
index 2013115144..b4d84801f7 100644
--- a/templates/resources/stream_processor.md.tmpl
+++ b/templates/resources/stream_processor.md.tmpl
@@ -1,3 +1,7 @@
+---
+subcategory: "Streams"
+---
+
# {{.Type}}: {{.Name}}
`{{.Name}}` provides a Stream Processor resource. The resource lets you create, delete, import, start and stop a stream processor in a stream instance.
@@ -11,6 +15,9 @@
{{ tffile (printf "examples/%s/main.tf" .Name )}}
+### Further Examples
+- [Atlas Stream Processor](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_stream_processor)
+
{{ .SchemaMarkdown | trimspace }}
## Import
diff --git a/templates/resources/team_project_assignment.md.tmpl b/templates/resources/team_project_assignment.md.tmpl
new file mode 100644
index 0000000000..bd300bb101
--- /dev/null
+++ b/templates/resources/team_project_assignment.md.tmpl
@@ -0,0 +1,25 @@
+---
+subcategory: "Teams"
+---
+
+# {{.Type}}: {{.Name}}
+
+`{{.Name}}` provides a Team Project Assignment resource. It lets you manage the association between a team and a project, enabling you to import, assign, remove, or update the team's assignment.
+
+## Example Usages
+
+{{ tffile (printf "examples/%s/main.tf" .Name )}}
+
+### Further Examples
+- [Team Project Assignment](https://github.com/mongodb/terraform-provider-mongodbatlas/tree/master/examples/mongodbatlas_team_project_assignment)
+
+{{ .SchemaMarkdown | trimspace }}
+
+## Import
+
+The Team Project Assignment resource can be imported using the Project ID and Team ID, in the format `PROJECT_ID/TEAM_ID`.
+
+```
+$ terraform import mongodbatlas_team_project_assignment.test 9f3a7c2e54b8d1a0e6f4b3c2/a4d9f7b18e52c0fa36b7e9cd
+```
+
+For more information, see: [MongoDB Atlas API - Teams](https://www.mongodb.com/docs/api/doc/atlas-admin-api-v2/operation/operation-addallteamstoproject) Documentation.