
Commit 0270301

Doing Scott's work for him
1 parent 3b329aa commit 0270301

File tree

1 file changed: +33, -17 lines

README.rst

Lines changed: 33 additions & 17 deletions
@@ -148,6 +148,12 @@ If `deploy_wazuh` is set to true, an infrastructure VM will be created that
hosts the Wazuh manager. The Wazuh deployment playbooks will also be triggered
automatically to deploy Wazuh agents to the overcloud hosts.

+If `deploy_pulp` is set to true, a local pulp container will be deployed on the
+seed node. This is mandatory for any multinode not running on SMS. Pulp can
+sync a lot of data, so it is recommended that you ensure `seed_disk_size` is
+greater than 150 (GB) when using this option. Local pulp deployments require
+additional configuration, which is detailed below.
+
Generate a plan:

.. code-block:: console
@@ -220,18 +226,43 @@ These playbooks are tagged so that they can be invoked or skipped as required. F
   ansible-playbook -i ansible/inventory.yml ansible/configure-hosts.yml --skip-tags fqdn

+The Ansible control host should now be accessible with the following command:
+
+.. code-block:: console
+
+   ssh $(terraform output -raw ssh_user)@$(terraform output -raw ansible_control_access_ip_v4)
+
+Deploy Pulp
+-----------
+
+To set up a local pulp service on the seed, first obtain/generate a set of Ark credentials using `this workflow <https://github.com/stackhpc/stackhpc-release-train-clients/actions/workflows/create-client-credentials.yml>`_, then add the following configuration to ``~/src/kayobe-config/etc/kayobe/environments/ci-multinode/stackhpc-ci.yml`` on the Ansible control host.
+
+.. code-block:: yaml
+
+   stackhpc_release_pulp_username: <ark-credentials-username>
+   stackhpc_release_pulp_password: !vault |
+     <vault-encrypted-ark-password>
+
+   pulp_username: admin
+   pulp_password: <randomly-generated-password-to-set-for-local-pulp-admin-user>
+
+Run the command below to automatically comment out the overrides in ``stackhpc-ci.yml`` that point at the test Pulp service:
+
+.. code-block:: console
+
+   sed -i -e 's/^resolv_/#resolv_/g' -e 's/^stackhpc_repo_/#stackhpc_repo_/g' -e 's/^stackhpc_include/#stackhpc_include/g' -e 's/^stackhpc_docker_registry:/#stackhpc_docker_registry:/g' ~/src/kayobe-config/etc/kayobe/environments/ci-multinode/stackhpc-ci.yml
+
Deploy OpenStack
----------------

Once the Ansible control host has been configured with a Kayobe/OpenStack configuration you can then begin the process of deploying OpenStack.
-This can be achieved by either manually running the various commands to configures the hosts and deploy the services or automated by using `deploy-openstack.sh`,
+This can be achieved either by manually running the various commands to configure the hosts and deploy the services, or by using `deploy-openstack.sh`,
which should be available in the home directory on your Ansible control host, provided you ran `deploy-openstack-config.yml` earlier.

If you opt for the automated method, you must first SSH into your Ansible control host and then run the `deploy-openstack.sh` script:

.. code-block:: console

-   ssh $(terraform output -raw ssh_user)@$(terraform output -raw ansible_control_access_ip_v4)
   ~/deploy-openstack.sh

This script will go through the process of performing the following tasks:
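
As an aside, the ``<vault-encrypted-ark-password>`` value added above can be produced with ``ansible-vault encrypt_string``; a minimal sketch, assuming a vault password file at ``~/vault.pass`` (that path is illustrative, not part of this commit):

.. code-block:: console

   # Emits a "stackhpc_release_pulp_password: !vault |" block that can be pasted
   # into stackhpc-ci.yml; ~/vault.pass is an assumed vault password file.
   ansible-vault encrypt_string --vault-password-file ~/vault.pass \
       '<ark-password>' --name 'stackhpc_release_pulp_password'
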
@@ -243,21 +274,6 @@ This script will go through the process of performing the following tasks
* openstack configuration
* tempest testing

-**Note**: When setting up a multinode on a cloud which doesn't have access to test pulp (i.e. everywhere except SMS lab) a separate local pulp must be deployed. Before doing so, it is a good idea to make sure your seed VM has sufficient disk space by setting ``seed_disk_size`` in your ``terraform.tfvars`` to an appropriate value (100-200 GB should suffice). In order to set up the local pulp service on the seed, first obtain/generate a set of Ark credentials using `this workflow <https://github.com/stackhpc/stackhpc-release-train-clients/actions/workflows/create-client-credentials.yml>`_, then add the following configuration to ``etc/kayobe/environments/ci-multinode/stackhpc-ci.yml``
-
-.. code-block:: yaml
-
-   stackhpc_release_pulp_username: <ark-credentials-username>
-   stackhpc_release_pulp_password: !vault |
-     <vault-encrypted-ark-password>
-
-   pulp_username: admin
-   pulp_password: <randomly-generated-password-to-set-for-local-pulp-admin-user>
-
-You may also need to comment out many of the other config overrides in ``stackhpc-ci.yml`` such as ``stackhpc_repo_mirror_url`` plus all of the ``stackhpc_repo_*`` and ``stackhpc_docker_registry*`` variables which only apply to local pulp.
-
-To create the local Pulp as part of the automated deployment, set ``deploy_pulp`` to ``true`` in your ``terraform.tfvars`` file.
-
Accessing OpenStack
-------------------
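
For reference, a minimal sketch of the ``terraform.tfvars`` settings referenced above; the variables ``deploy_pulp`` and ``seed_disk_size`` come from the text, while the 200 GB figure is only an illustrative value within the 100-200 GB range mentioned in the removed note:

.. code-block:: console

   # Append the Pulp-related settings to terraform.tfvars (values are illustrative).
   cat >> terraform.tfvars <<EOF
   deploy_pulp    = true
   seed_disk_size = 200
   EOF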
