
Infrastructure-Deployment #43


Draft: wants to merge 111 commits into base: main
Changes from 103 commits (111 commits total)
645e3a2
Additions to enable Azimuth appliance support.
MaxBed4d Jan 3, 2024
5b8bd80
Small typo amendment.
MaxBed4d Jan 3, 2024
18e5d28
Updated usage template for multinode appliance.
MaxBed4d Jan 4, 2024
7bb2cd9
Added 'roles' path to ansible.cfg.
MaxBed4d Jan 4, 2024
e97a5bd
Moved ansible.cfg to repo root directory.
MaxBed4d Jan 4, 2024
d4f9786
Added a requirements yaml file to repo root.
MaxBed4d Jan 4, 2024
0fec8b3
Roles directory typo fix in config.
MaxBed4d Jan 4, 2024
644d46f
Changed ansible collection version.
MaxBed4d Jan 4, 2024
93a2a2f
Added symlink of requirements.
MaxBed4d Jan 4, 2024
b15ebf1
Deleted link to create one after.
MaxBed4d Jan 4, 2024
0921854
Symlink added.
MaxBed4d Jan 4, 2024
fd1e533
Added Group Vars to maybe fix the inability to find roles.
MaxBed4d Jan 5, 2024
baa2521
Change roles directory name.
MaxBed4d Jan 5, 2024
2f36b84
Added templates so that gateway_ips can be provided.
MaxBed4d Jan 5, 2024
c29c28c
Changed few tf files and 'template' to submit correct data.
MaxBed4d Jan 5, 2024
63c8f54
Trying to copy all tf files to the appropriate location.
MaxBed4d Jan 5, 2024
4351735
Added TF Variables to use to build.
MaxBed4d Jan 5, 2024
6a0f55e
Check current playbook directory.
MaxBed4d Jan 5, 2024
bacaa2f
remove versions.tf
MaxBed4d Jan 5, 2024
f72a422
Edited backend type and vars.
MaxBed4d Jan 5, 2024
7403414
Locate playbook directory.
MaxBed4d Jan 5, 2024
f666fed
Change directory for project.
MaxBed4d Jan 5, 2024
1e1bdbe
Remove backend type.
MaxBed4d Jan 5, 2024
e6969d2
Typo fix.
MaxBed4d Jan 5, 2024
73e4f9e
Change tfvars to j2 temp.
MaxBed4d Jan 5, 2024
d834bb6
SSH gen alternative method.
MaxBed4d Jan 5, 2024
6a3cea8
SSH alt method 2.
MaxBed4d Jan 5, 2024
608881f
Update cluster_gateway_ip output variable.
MaxBed4d Jan 8, 2024
b4096ed
Include cluster_nodes variable in output.
MaxBed4d Jan 8, 2024
4fa6cbb
Remove backend from cluster_nodes variable.
MaxBed4d Jan 8, 2024
d934190
Amend variable call.
MaxBed4d Jan 8, 2024
3d3324e
Changed variables being provided to cluster-nodes.
MaxBed4d Jan 8, 2024
6f921ca
Test change to cluster_nodes variable name.
MaxBed4d Jan 8, 2024
09be066
Remove cluster_nodes concat var.
MaxBed4d Jan 8, 2024
ac40ab6
Formatting amendment.
MaxBed4d Jan 8, 2024
ff3f864
Concat the list of cluster_nodes.
MaxBed4d Jan 8, 2024
a75a201
Alter cluster_nodes variables.
MaxBed4d Jan 8, 2024
f1c84f9
create join list to save a loop.
MaxBed4d Jan 8, 2024
5514ce2
amend typo
MaxBed4d Jan 8, 2024
ebd2ae4
Change 'join' formatting.
MaxBed4d Jan 8, 2024
a3bb3e4
Created for loop for cluster_nodes definition.
MaxBed4d Jan 8, 2024
a1585d5
removed fact for autherisation.
MaxBed4d Jan 8, 2024
fbb5968
Remove index notation for IP.
MaxBed4d Jan 8, 2024
ca48124
Changed backend type to a variable.
MaxBed4d Jan 8, 2024
acd6d1c
Added azimuth ssh key.
MaxBed4d Jan 8, 2024
0fa648b
Commented out ssh key gen.
MaxBed4d Jan 8, 2024
a6b6306
Change from deploy to user key.
MaxBed4d Jan 8, 2024
76672b8
Set ssh deploy key to be equal to the user ssh key.
MaxBed4d Jan 8, 2024
c087cb7
Pass multiple ssh keys.
MaxBed4d Jan 8, 2024
1de8778
Amend comment to be able to delete instance.
MaxBed4d Jan 8, 2024
8248634
Converted userdata into a template for ssh keys.
MaxBed4d Jan 8, 2024
5aea5ab
Amend directory typo.
MaxBed4d Jan 8, 2024
5bdbcdb
Comment out ssh key copy.
MaxBed4d Jan 8, 2024
0094547
Create and add ansible ssh key so it can run in runner.
MaxBed4d Jan 9, 2024
021d3eb
Correct variable output.
MaxBed4d Jan 9, 2024
4c7f736
Configure the inventory and install ansible galaxy.
MaxBed4d Jan 9, 2024
06fdbd7
Run command through localhost.
MaxBed4d Jan 9, 2024
3657300
Merge requirements.
MaxBed4d Jan 9, 2024
6a1c393
Move ssh var key definition to main playbook.
MaxBed4d Jan 9, 2024
68159d2
Edit and remove nested template expressions.
MaxBed4d Jan 9, 2024
dd8874c
Make ssh variables for all hosts.
MaxBed4d Jan 9, 2024
e0c9720
SSH Key setup for Multinode Ansible.
MaxBed4d Jan 9, 2024
b15fccb
Variable removal amendment.
MaxBed4d Jan 9, 2024
4f89a78
Changed MN flavour and ssh user username.
MaxBed4d Jan 9, 2024
c0f733a
Link some variables back to the previous directory.
MaxBed4d Jan 9, 2024
ea11995
Fix symlink
MaxBed4d Jan 9, 2024
4fab3af
Remove symlinks.
MaxBed4d Jan 9, 2024
a3e7b2d
add ansible_user to vars.
MaxBed4d Jan 9, 2024
57756dd
Variable set with quote marks.
MaxBed4d Jan 9, 2024
bde16cd
Giving a host to playbook.
MaxBed4d Jan 9, 2024
ab7e903
Create block for tasks.
MaxBed4d Jan 9, 2024
c8ac249
Comment out task test.
MaxBed4d Jan 9, 2024
b0f2932
Debug Groups variable.
MaxBed4d Jan 9, 2024
98d6872
Test new group structure.
MaxBed4d Jan 9, 2024
17c0ca2
Tupple list amend.
MaxBed4d Jan 9, 2024
69846b4
Add command line playbook deployment.
MaxBed4d Jan 9, 2024
1dce1b5
Amend indentations
MaxBed4d Jan 9, 2024
879e67d
Provide Terraform Vars for playbook.
MaxBed4d Jan 9, 2024
d9d60bd
Changed output and converted resources into cluster_nodes output.
MaxBed4d Jan 10, 2024
8dc1d58
Amend playbook vars.
MaxBed4d Jan 10, 2024
fa6fffc
Change to import playbook.
MaxBed4d Jan 10, 2024
c8ba8d0
Install ansible.posix
MaxBed4d Jan 10, 2024
c0dff50
Amended playbook for installing ansible galaxy requirements.
MaxBed4d Jan 10, 2024
ceabfe2
Remove Ansible-galaxy install as it should be done by the requirements.
MaxBed4d Jan 10, 2024
a40845e
This is a combination of 5 commits.
MaxBed4d Jan 10, 2024
5a7c94b
No Wazuh deploy.
MaxBed4d Jan 17, 2024
4519eab
Create infrastructure only option.
MaxBed4d Jan 18, 2024
232ae60
Checkout the main ansible folder so that these changes are solely foc…
MaxBed4d Jan 18, 2024
6fa2086
Create a second App UI to deploy just the infrastructure as a test.
MaxBed4d Jan 19, 2024
00ab6fb
Update meta UI for Infrastructure deployment.
MaxBed4d Jan 22, 2024
6bedaf8
Update UI to allow Openstack version select.
MaxBed4d Jan 23, 2024
775b7f1
Try to allow custom input.
MaxBed4d Jan 24, 2024
305fc34
UI Changes
MaxBed4d Jan 24, 2024
cee87a5
Given user choice over image.
MaxBed4d Jan 24, 2024
1cddc5f
Change the way the ssh command is provided.
MaxBed4d Jan 24, 2024
acf6248
set ssh user after automatically.
MaxBed4d Jan 24, 2024
f987bd8
Change name of app.yaml
MaxBed4d Jan 24, 2024
257fdce
Fix ssh user declaration.
MaxBed4d Jan 24, 2024
a8bec13
Improved UI with ssh username input.
MaxBed4d Jan 25, 2024
6b8eb6f
Change description pipe symbol to have multiline outputs.
MaxBed4d Jan 26, 2024
b257275
Tidy up of the code for the infrastructure deployment.
MaxBed4d Jan 26, 2024
e9b5c85
Changes to remain consistent with OpenStack deployment future PR.
MaxBed4d Jan 30, 2024
8b27ac1
Discard changes to ansible/vars/defaults.yml
MaxBed4d Jan 30, 2024
b12c713
Remove unused variable.
MaxBed4d Jan 31, 2024
07368ad
Merge branch 'Infrastructure-Deployment' of https://github.com/stackh…
MaxBed4d Jan 31, 2024
21a0f7d
Updated the vault password to be empty if no password is provided rat…
MaxBed4d Jan 31, 2024
eecef28
Remove duplicate requirements file.
MaxBed4d Jan 31, 2024
93ddb5b
Update ui-meta/multinode-infra-appliance.yml
MaxBed4d Jan 31, 2024
4d5d2f1
Update ui-meta/multinode-infra-appliance.yml
MaxBed4d Jan 31, 2024
d91b092
Correct the default user input.
MaxBed4d Jan 31, 2024
0f92e61
Merge branch 'Infrastructure-Deployment' of https://github.com/stackh…
MaxBed4d Jan 31, 2024
11 changes: 11 additions & 0 deletions ansible.cfg
@@ -0,0 +1,11 @@
[defaults]

I'd be interested to know what in here was actually necessary.

stdout_callback = yaml
callbacks_enabled = timer, profile_tasks, profile_roles
host_key_checking = False
pipelining = True
forks = 30
deprecation_warnings=False
roles_path = roles

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
2 changes: 1 addition & 1 deletion authentication.tf
@@ -1,4 +1,4 @@
resource "openstack_compute_keypair_v2" "keypair" {
name = var.multinode_keypair
public_key = file(var.ssh_public_key)
public_key = var.ssh_public_key

This will impact other multinode users. Could we instead write out the key to a file before getting here?

}
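
A minimal sketch of the reviewer's suggestion, in the style of the playbook added later in this PR (task name and destination path are hypothetical): write the public key out to disk before Terraform runs, so file(var.ssh_public_key) could keep working for manual deployments.

- hosts: localhost
  tasks:
    # Hypothetical pre-step: persist the key material so Terraform can keep
    # reading it from a file, as the manual deployment does today.
    - name: Write the user SSH public key to a file for Terraform
      ansible.builtin.copy:
        content: "{{ cluster_user_ssh_public_key }}"
        dest: "{{ playbook_dir }}/deploy-key.pub"
        mode: "0644"

Terraform could then be pointed at that path (for example ssh_public_key = "deploy-key.pub" in terraform.tfvars.j2) instead of changing the resource for all multinode users.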
26 changes: 26 additions & 0 deletions group_vars/openstack.yml
@@ -0,0 +1,26 @@
# The default Terraform state key for backends that support it
terraform_state_key: "cluster/{{ cluster_id }}/tfstate"

# Set up the terraform backend
# This setup allows us to use the Consul backend when enabled without any changes
#terraform_backend_type: 'local'
terraform_backend_type: "{{ 'consul' if 'CONSUL_HTTP_ADDR' in ansible_env else 'local' }}"
terraform_backend_config_defaults:
consul:
path: "{{ terraform_state_key }}"
gzip: "true"
local: {}
terraform_backend_config: "{{ terraform_backend_config_defaults[terraform_backend_type] }}"

# These variables control the location of the Terraform binary
terraform_binary_directory: "{{ playbook_dir }}/bin"
terraform_binary_path: "{{ terraform_binary_directory }}/terraform"

# This controls the location where the Terraform files are rendered
terraform_project_path: "{{ playbook_dir }}"

# Indicates whether the Terraform operation is reconciling or removing resources
# Valid values are 'present' and 'absent'
terraform_state: "{{ cluster_state | default('present') }}"

cluster_ssh_user: "{{ ssh_user }}"

I don't see this used anywhere

Author:

terraform_state should be used in /roles/cluster_infra/tasks/main.yml

cluster_ssh_user doesn't seem to be used anywhere; it will be removed.
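
For reference, terraform_state is consumed by the state query in roles/cluster_infra/tasks/main.yml, added later in this PR; the relevant excerpt (abridged) is:

- block:
    - name: Get previous Terraform state
      stackhpc.terraform.terraform_output:
        binary_path: "{{ terraform_binary_path }}"
        project_path: "{{ terraform_project_path }}"
        backend_config: "{{ terraform_backend_config }}"
      register: cluster_infra_terraform_output
  when:
    - terraform_state == "present"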

63 changes: 63 additions & 0 deletions multinode-infra-app.yml
@@ -0,0 +1,63 @@
---

- hosts: localhost
tasks:
- name: Show Playbook Directory
debug:
msg: "{{ playbook_dir }}"

- name: Template Terraform files into project directory
template:
src: terraform.tfvars.j2
dest: "{{ playbook_dir }}/terraform.tfvars"

- name: Template Terraform userdata.cfg.tpl files into project template directory
template:
src: "{{ playbook_dir }}/templates/userdata.cfg.tpl.j2"
dest: "{{ playbook_dir }}/templates/userdata.cfg.tpl"

# Provision the infrastructure. The CaaS puts hosts for accessing the OpenStack
# API into the 'openstack' group.
- hosts: openstack
roles:
- cluster_infra

- hosts: localhost
tasks:
# Check whether an ans_vlt_pwd variable is defined and if so, save it into a
# file called '~/vault.password'. If it doesn't exist, create the
# '~/vault.password' file with ans_vlt_pwd = "password_not_set" as the
# password.
- name: Create vault password file
vars:
ans_dflt: 'default_password'

I don't think it makes sense to have a default here.

Author:

I considered putting this in the UI meta interface; however, the UI treats a parameter's default as the value displayed in the form, which for passwords should be blank. Keeping a fallback here also means that a user can submit an empty password and, at the point of creating the secret file (which fails if no content is provided), a placeholder string can still be written so the file is created.

This is not the place to supply an actual password/secret, given the obvious lack of security; it only covers the case where the user enters nothing. Maybe I can change it to:

Suggested change
ans_dflt: 'default_password'
ans_dflt: 'no_password_provided'

Author:

It has been removed altogether.

ansible.builtin.copy:
content: "{{ ans_vlt_pwd | default( ans_dflt , true ) }}"
dest: "~/vault.password"
mode: 0600
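
Following the thread above, where the default has since been removed, one possible shape for the task without a hard-coded fallback (a sketch, not the final diff; assumes an empty vault password is acceptable when none is supplied):

    - name: Create vault password file
      ansible.builtin.copy:
        # Fall back to an empty string rather than a placeholder password;
        # the second argument to default() also catches an empty ans_vlt_pwd.
        content: "{{ ans_vlt_pwd | default('', true) }}"
        dest: "~/vault.password"
        mode: "0600"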

# If openstack_deploy is true then continue; if not, end the playbook.

# Import the playbook to start configuring the multi-node hosts.
- name: Configure hosts and deploy ansible
import_playbook: ansible/configure-hosts.yml
when: openstack_deploy == true

We should do this unconditionally

Author:

Does this matter even when we are only deploying the VMs?
I am fairly sure that enabling this for the infrastructure-only case would create issues for deploying OpenStack on the system later, as an Azimuth patch, because of the changes made by some of the playbooks called within configure-hosts.yml. This step is also partly responsible for the idempotency issue.



- hosts: ansible_control
vars:
ansible_pipelining: true
ansible_ssh_pipelining: true
Comment on lines +44 to +46

You have pipelining in ansible.cfg

Author:

I'm not entirely sure those settings would carry over when switching to an external host that then runs another instance of Ansible itself.

I'm also not confident which ansible.cfg is used, or what happens when another instance of Ansible is invoked in an environment with a different ansible.cfg (this refers to the ansible/ directory).

tasks:
- name: Deploy OpenStack.
ansible.builtin.command:
cmd: "bash ~/deploy-openstack.sh"
when: openstack_deploy == true
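
One way to resolve the uncertainty in the pipelining thread above is to have the play report which configuration the nested Ansible run actually picks up. A sketch (assumes ansible-config is available on the control host's PATH):

- hosts: ansible_control
  tasks:
    - name: Show the Ansible settings in effect on the control host
      ansible.builtin.command:
        cmd: ansible-config dump --only-changed
      register: remote_ansible_config
      changed_when: false

    - name: Print the settings that differ from the defaults
      ansible.builtin.debug:
        var: remote_ansible_config.stdout_lines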

# This is to get the ip of the ansible-controller host.
- hosts: localhost
tasks:
- debug: var=outputs
vars:
outputs:
cluster_access_ip: "{{ hostvars[groups['openstack'][0]].cluster_gateway_ip }}"

I don't know if we can assume that the first host in the openstack group is the ansible control host. Is there an ansible_control group?

Author:

We define the cluster_nodes output in outputs.tf, which always lists the ansible_control host first; however, I agree that a check should definitely be put in place.

The reason for not hard-coding it is that I would eventually like the usage_template to provide a list of all the available instances' IPs.
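
A minimal sketch of the check discussed above, verifying that the gateway IP really is present on the first host of the 'openstack' group before it is reported (task name hypothetical):

- hosts: localhost
  tasks:
    - name: Ensure the gateway IP was exported by the infrastructure host
      ansible.builtin.assert:
        that:
          - hostvars[groups['openstack'][0]].cluster_gateway_ip is defined
        fail_msg: "cluster_gateway_ip is not set on the first host in the 'openstack' group"

    - ansible.builtin.debug:
        var: outputs
      vars:
        outputs:
          cluster_access_ip: "{{ hostvars[groups['openstack'][0]].cluster_gateway_ip }}"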

118 changes: 89 additions & 29 deletions outputs.tf
@@ -2,6 +2,10 @@ output "ansible_control_access_ip_v4" {
value = openstack_compute_instance_v2.ansible_control.access_ip_v4
}

output "cluster_gateway_ip" {
value = openstack_compute_instance_v2.ansible_control.access_ip_v4
}

output "seed_access_ip_v4" {
value = openstack_compute_instance_v2.seed.access_ip_v4
}
@@ -75,38 +79,94 @@ resource "local_file" "deploy_openstack" {
file_permission = "0755"
}

resource "ansible_host" "control_host" {

Removing these will break the manual deployment.

name = openstack_compute_instance_v2.ansible_control.access_ip_v4
groups = ["ansible_control"]
output "cluster_nodes" {
description = "A list of the cluster nodes and their IP addresses which will be used by the Ansible inventory"
value = concat(
[
{
name = openstack_compute_instance_v2.ansible_control.name
ip = openstack_compute_instance_v2.ansible_control.access_ip_v4
groups = ["ansible_control"]
variables = {
ansible_user = var.ssh_user
}
}
],
flatten([
for node in openstack_compute_instance_v2.compute: {
name = node.name
ip = node.access_ip_v4
groups = ["compute"]
variables = {
ansible_user = var.ssh_user
}
}
]),
flatten([
for node in openstack_compute_instance_v2.controller: {
name = node.name
ip = node.access_ip_v4
groups = ["controllers"]
variables = {
ansible_user = var.ssh_user
}
}
]),
[{
name = openstack_compute_instance_v2.seed.name
ip = openstack_compute_instance_v2.seed.access_ip_v4
groups = ["seed"]
variables = {
ansible_user = var.ssh_user
}
}],
flatten([
for node in openstack_compute_instance_v2.storage: {
name = node.name
ip = node.access_ip_v4
groups = ["storage"]
variables = {
ansible_user = var.ssh_user
}
}
])
)
}

resource "ansible_host" "compute_host" {
for_each = { for host in openstack_compute_instance_v2.compute : host.name => host.access_ip_v4 }
name = each.value
groups = ["compute"]
}
# Template of all the hosts' configuration, which can be used to generate Ansible variables.

resource "ansible_host" "controllers_hosts" {
for_each = { for host in openstack_compute_instance_v2.controller : host.name => host.access_ip_v4 }
name = each.value
groups = ["controllers"]
}
# resource "ansible_host" "control_host" {
# name = openstack_compute_instance_v2.ansible_control.access_ip_v4
# groups = ["ansible_control"]
# }

resource "ansible_host" "seed_host" {
name = openstack_compute_instance_v2.seed.access_ip_v4
groups = ["seed"]
}
# resource "ansible_host" "compute_host" {
# for_each = { for host in openstack_compute_instance_v2.compute : host.name => host.access_ip_v4 }
# name = each.value
# groups = ["compute"]
# }

resource "ansible_host" "storage" {
for_each = { for host in openstack_compute_instance_v2.storage : host.name => host.access_ip_v4 }
name = each.value
groups = ["storage"]
}
# resource "ansible_host" "controllers_hosts" {
# for_each = { for host in openstack_compute_instance_v2.controller : host.name => host.access_ip_v4 }
# name = each.value
# groups = ["controllers"]
# }

resource "ansible_group" "cluster_group" {
name = "cluster"
children = ["compute", "ansible_control", "controllers", "seed", "storage"]
variables = {
ansible_user = var.ssh_user
}
}
# resource "ansible_host" "seed_host" {
# name = openstack_compute_instance_v2.seed.access_ip_v4
# groups = ["seed"]
# }

# resource "ansible_host" "storage" {
# for_each = { for host in openstack_compute_instance_v2.storage : host.name => host.access_ip_v4 }
# name = each.value
# groups = ["storage"]
# }

# resource "ansible_group" "cluster_group" {
# name = "cluster"
# children = ["compute", "ansible_control", "controllers", "seed", "storage"]
# variables = {
# ansible_user = var.ssh_user
# }
# }
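
To illustrate how the new cluster_nodes output could stand in for the commented-out ansible_host resources on the Azimuth side, a sketch (not part of this PR) that reads the output and builds an in-memory inventory; the module parameters mirror the usage in roles/cluster_infra/tasks/main.yml:

- hosts: openstack
  tasks:
    - name: Read the Terraform outputs
      stackhpc.terraform.terraform_output:
        binary_path: "{{ terraform_binary_path }}"
        project_path: "{{ terraform_project_path }}"
        backend_config: "{{ terraform_backend_config }}"
      register: infra_outputs

    - name: Add each cluster node to the in-memory inventory
      ansible.builtin.add_host:
        name: "{{ item.name }}"
        ansible_host: "{{ item.ip }}"
        groups: "{{ item.groups }}"
        ansible_user: "{{ item.variables.ansible_user }}"
      loop: "{{ infra_outputs.outputs.cluster_nodes.value }}"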
9 changes: 9 additions & 0 deletions requirements.yml

There is already a requirements.yml in ansible/requirements.yml. Can you remove it and update the README to use this one?

Author:

Removed.

@@ -0,0 +1,9 @@
---
collections:
- name: https://github.com/stackhpc/ansible-collection-terraform
type: git
version: 8c7acce4538aab8c0e928972155a2ccb5cb1b2a1
- name: cloud.terraform
- name: ansible.posix
roles:
- src: mrlesmithjr.manage_lvm
42 changes: 42 additions & 0 deletions roles/cluster_infra/tasks/main.yml
@@ -0,0 +1,42 @@
---

- name: Install Terraform binary
include_role:
name: stackhpc.terraform.install

- name: Make Terraform project directory
file:
path: "{{ terraform_project_path }}"
state: directory

- name: Write backend configuration
copy:
content: |
terraform {
backend "{{ terraform_backend_type }}" { }
}
dest: "{{ terraform_project_path }}/backend.tf"

# Patching in this appliance is implemented as a switch to a new base image
# So unless explicitly patching, we want to use the same image as last time
# To do this, we query the previous Terraform state before updating
- block:
- name: Get previous Terraform state
stackhpc.terraform.terraform_output:
binary_path: "{{ terraform_binary_path }}"
project_path: "{{ terraform_project_path }}"
backend_config: "{{ terraform_backend_config }}"
register: cluster_infra_terraform_output

- name: Extract image from Terraform state
set_fact:
cluster_previous_image: "{{ cluster_infra_terraform_output.outputs.cluster_image.value }}"
when: '"cluster_image" in cluster_infra_terraform_output.outputs'
when:
- terraform_state == "present"
- cluster_upgrade_system_packages is not defined or not cluster_upgrade_system_packages


- name: Provision infrastructure
include_role:
name: stackhpc.terraform.infra
1 change: 1 addition & 0 deletions roles/requirements.yml
2 changes: 1 addition & 1 deletion templates/deploy-openstack.tpl
@@ -131,7 +131,7 @@ fi
if [[ "$(sudo docker image ls)" == *"kayobe"* ]]; then
echo "Image already exists skipping docker build"
else
sudo DOCKER_BUILDKIT=1 docker build --network host --build-arg BASE_IMAGE=$$BASE_IMAGE --file $${config_directories[kayobe]}/.automation/docker/kayobe/Dockerfile --tag kayobe:latest $${config_directories[kayobe]}
sudo DOCKER_BUILDKIT=1 docker build --network host --build-arg BASE_IMAGE=$BASE_IMAGE --file $${config_directories[kayobe]}/.automation/docker/kayobe/Dockerfile --tag kayobe:latest $${config_directories[kayobe]}
fi

set +x
@@ -7,3 +7,6 @@ packages:
- git
- vim
- tmux
ssh_authorized_keys:
- "{{ cluster_deploy_ssh_public_key }}"
- "{{ cluster_user_ssh_public_key }}"
Comment on lines +10 to +12

These won't be defined in manual deployments.

It would be a bit cleaner to create a Terraform input for these, rather than templating twice.

Author:

These are the SSH keys injected into the VM instance and shouldn't be needed for the manual deployment, so they should be set to empty values and ignored in that case. This, again, is something that requires testing.

29 changes: 29 additions & 0 deletions terraform.tfvars.j2

It would be nice to clearly link this file to the azimuth app. You could either rename it or put it in a role with other azimuth specific things.

Author:

This is referenced in multinode-infra-app.yml, which is where it is templated, but I agree that it should be placed somewhere with the other templates.

This did give me the idea that all files which are heavily altered for the Azimuth deployment could be kept in a folder as templates and only moved out during the Azimuth deployment.

@@ -0,0 +1,29 @@
prefix = "{{ cluster_name }}"

ansible_control_vm_flavor = "general.v1.small"
ansible_control_vm_name = "ansible-control"
ansible_control_disk_size = 25

seed_vm_flavor = "general.v1.small"
seed_disk_size = 25

multinode_flavor = "general.v1.medium"
multinode_image = "{{ multinode_image }}"
multinode_keypair = "MaxMNKP"

This won't work for other people. Does Azimuth not provide a keypair name to use?

Author:

This variable should be provided by the user, as it should be a keypair belonging to the user the multinode is being created for. A keypair provided by Azimuth would create two issues:

  • The keypair name will likely belong to an already existing keypair, and this deployment hasn't been configured to deal with existing keypairs, so Terraform/OpenStack would raise an error.
  • The SSH key associated with Azimuth's keypair is one created by Azimuth prior to the multinode deployment. It doesn't necessarily belong to the user and probably shouldn't be the primary key for accessing the instance. Azimuth only needs temporary and circumstantial access to the VM (potentially none at all when deploying just the infrastructure), whereas the user's access is an ongoing requirement.

multinode_vm_network = "stackhpc-ipv4-geneve"
multinode_vm_subnet = "stackhpc-ipv4-geneve-subnet"
compute_count = "2"
controller_count = "3"
compute_disk_size = 25
controller_disk_size = 25
Comment on lines +17 to +18

This is unlikely to be large enough for a real deployment.

Author:

Yes, I agree. This branch only deploys the VMs without OpenStack, so it seemed unnecessary to request such large volumes just to test whether the app could deploy VMs. The storage configuration is restored in the branch concerned with deploying multinode with OpenStack; these volume sizes will also soon become a user-defined input option.


ssh_public_key = "{{ cluster_user_ssh_public_key }}"
ssh_user = "{{ ssh_user }}"

storage_count = "3"
storage_flavor = "general.v1.small"
storage_disk_size = 25

deploy_wazuh = false
infra_vm_flavor = "general.v1.small"
infra_vm_disk_size = 25
183 changes: 183 additions & 0 deletions ui-meta/multinode-infra-appliance.yml

Large diffs are not rendered by default.

6 changes: 0 additions & 6 deletions versions.tf
@@ -1,15 +1,9 @@
terraform {
required_version = ">= 0.14"
backend "local" {
}
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "1.49.0"
}
ansible = {
source = "ansible/ansible"
version = "1.1.0"
}
}
}