PoC: Deploy GitLab Helm on HA Kubeadm Cluster using QEMU + KVM with Packer, Terraform, Vault, and Ansible
This project is a personal learning initiative for IaC, using QEMU-KVM to set up an HA Kubernetes cluster in an on-premise environment. I created it mainly because I wanted to learn IaC and Kubernetes but had exhausted my GCP free credits. Since it is developed independently, the project has only been tested on RHEL 10 and Ubuntu 24. Everyone is welcome to fork it for experimentation.
The machine specifications used for development are as follows, for reference:
- Chipset: Intel® HM770
- CPU: Intel® Core™ i7 processor 14700HX
- RAM: Micron Crucial Pro 64GB Kit (32GBx2) DDR5-5600 UDIMM
- SSD: WD PC SN560 SDDPNQE-1T00-1032
You can clone the project using the following command:
git clone https://github.com/csning1998/on-premise-gitlab-deployment.git
- This project currently works only on Linux hosts with CPU virtualization support, and has not yet been tested on other distros such as Fedora 41, Arch, or CentOS, nor on WSL2.
- These features have so far only been tested on my personal computer across several system reinstallations, so some functionality issues are inevitable; I have tried my best to prevent and address them.
Before you begin, ensure you have the following:
- A Linux host (RHEL 10 or Ubuntu 24 recommended).
- A CPU with virtualization support enabled (VT-x or AMD-V).
- `sudo` access.
- `podman` and `podman compose` installed (for containerized mode).
- `whois` package installed (for the `mkpasswd` command).
This project requires a CPU with virtualization support. If your CPU does not support virtualization, you can refer to the `legacy-workstation-on-ubuntu` branch, which has been tested on Ubuntu 24 and achieves the same basic functionality. Use the following shell command to check whether your device supports virtualization (an alternative check is sketched after the list below):
lscpu | grep Virtualization
The output may show:
- Virtualization: VT-x (Intel)
- Virtualization: AMD-V (AMD)
- If there is no output, virtualization might not be supported.
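An equivalent quick check, as a minimal sketch using standard Linux tools, is to count the CPU virtualization flags directly and, once KVM is set up, confirm that the `/dev/kvm` device node exists:

```bash
# A non-zero count means the CPU exposes vmx (Intel) or svm (AMD) extensions
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Once the KVM modules are loaded, this device node should be present
ls -l /dev/kvm
```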
The content in Section 1 and Section 2 serves as prerequisite setup before formal execution. Project lifecycle management and configuration are handled through the `entry.sh` script in the root directory. After executing `./entry.sh`, you will see the following content:
➜ on-premise-gitlab-deployment git:(main) ✗ ./entry.sh
... (Some preflight check)
======= IaC-Driven Virtualization Management =======
Environment: NATIVE
Vault Server Status: Running (Unsealed)
1) [ONCE-ONLY] Set up CA Certs for TLS 9) Purge All Libvirt Resources
2) [ONCE-ONLY] Initialize Vault 10) Purge All Packer and Terraform Resources
3) [ONCE-ONLY] Generate SSH Key 11) Build Packer Base Image
4) [ONCE-ONLY] Setup KVM / QEMU for Native 12) Provision Terraform Layer 10
5) [ONCE-ONLY] Setup Core IaC Tools for Native 13) [DEV] Rebuild Layer 10 via Ansible Command
6) [ONCE-ONLY] Verify IaC Environment for Native 14) Verify SSH
7) Unseal Vault 15) Quit
8) Switch Environment Strategy
>>> Please select an action:
Options `11`, `12`, and `13` have submenus. These menus are dynamically created based on the directories under `packer/output` and `terraform/layers`. In the current complete setup, they are:
- If you select `11) Build Packer Base Image`:

    >>> Please select an action: 11
    # Entering Packer build selection menu...
    #### Checking status of libvirt service...
    --> libvirt service is already running.
    1) 02-base-kubeadm
    2) 03-base-microk8s
    3) 04-base-postgres
    4) Back to Main Menu
    >>> Please select an action:
- If you select `12) Provision Terraform Layer 10`:

    >>> Please select an action: 12
    # Entering Terraform layer management menu...
    #### Checking status of libvirt service...
    --> libvirt service is already running.
    1) 10-provision-harbor
    2) 10-provision-kubeadm
    3) 10-provision-postgres
    4) 50-provision-kubeadm-addons
    5) Back to Main Menu
    >>> Please select an action:
- If you select `13) [DEV] Rebuild Layer 10 via Ansible Command`:

    >>> Please select an action: 13
    # Executing [DEV] Rebuild via direct Ansible command...
    #### Checking status of libvirt service...
    --> libvirt service is already running.
    1) 10-provision-cluster.yaml
    2) 10-provision-harbor.yaml
    3) 10-provision-postgres.yaml
    4) Back to Main Menu
    >>> Please select an action:
A description of how to use this script follows below.
Your CPU must support virtualization technology to enable QEMU-KVM functionality. You can either install it through option `4) [ONCE-ONLY] Setup KVM / QEMU for Native` in the script (tested only on Ubuntu 24 and RHEL 10), or refer to relevant resources and set up the KVM and QEMU environment on your own.
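Once KVM and QEMU are installed, whether via option `4` or manually, libvirt ships a validation tool that flags common host misconfigurations. A supplementary sketch, assuming the libvirt client tools are already installed:

```bash
# Checks hardware virtualization, /dev/kvm permissions, cgroups, IOMMU, etc.
sudo virt-host-validate qemu
```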
- Install HashiCorp Toolkit - Terraform and Packer

  Next, you can install Terraform, Packer, and Ansible by running `entry.sh` in the project root directory and selecting option `5) Setup Core IaC Tools for Native`. Alternatively, follow the official installation guides:

  - Reference: Terraform Installation
  - Reference: Packer Installation
  - Reference: Ansible Installation

  Expected output should reflect the latest versions. For instance (in zsh):
    ...
    >>> Please select an action: 4
    >>> STEP: Verifying full native IaC environment...
    >>> Verifying Libvirt/KVM environment...
    #### QEMU/KVM: Installed
    #### Libvirt Client (virsh): Installed
    >>> Verifying Core IaC Tools (HashiCorp/Ansible)...
    #### HashiCorp Packer: Installed
    #### HashiCorp Terraform: Installed
    #### HashiCorp Vault: Installed
    #### Red Hat Ansible: Installed
- I am still looking for a method that does not require `sudo`.
- Please ensure Podman is installed correctly. You can refer to the following URL and choose the installation method corresponding to your platform.
- After completing the Podman installation, switch to the project root directory.
- If using it for the first time, execute the following command:

    sudo podman compose up --build

- After the container image has been built, you only need to start the container to perform operations:

    sudo podman compose up -d
- Currently, the default setting is `DEBIAN_FRONTEND=noninteractive`. If you need to make any modifications and check inside the container, you can execute:

    sudo podman exec -it iac-controller bash

  where `iac-controller` is the container name for the project.
Attention: When switching between a Podman container and a native environment, you might encounter inconsistencies in Terraform's state within Virsh. If this happens, you can delete the `terraform/terraform.tfstate` file and run the following commands to fix it:

    sudo virsh pool-destroy iac-kubeadm
    sudo virsh pool-undefine iac-kubeadm
- Suggested Plugins for VSCode: Enhance productivity with syntax support:

  - Ansible language support extension. Marketplace Link of Ansible

        code --install-extension redhat.ansible

  - HCL language support extension for Terraform. Marketplace Link of HashiCorp HCL

        code --install-extension HashiCorp.HCL

  - Packer tool extension. Marketplace Link of Packer Powertools

        code --install-extension szTheory.vscode-packer-powertools
To ensure the project runs smoothly, please follow the procedures below to complete the initialization setup.

- Environment variable file: The `entry.sh` script automatically creates a `.env` environment variable file that is used by the other scripts; you can ignore it.
- Generate SSH Key: Terraform and Ansible use SSH keys for node access authentication and configuration management. You can generate them by running `./entry.sh` and entering `3` for the "Generate SSH Key" option. Enter your desired key name or simply accept the default value `id_ed25519_on-premise-gitlab-deployment`. The generated public and private key pair will be stored in the `~/.ssh` directory.
- Switch Environment: You can switch between the "Container" and "Native" environments by running `./entry.sh` and entering `8`. This project currently uses Podman primarily, and I personally recommend decoupling the Podman and Docker runtime environments to prevent execution issues caused by SELinux-related permissions. For example, SELinux policies do not allow a `container_t` process (common in Docker) to connect to a `virt_var_run_t` socket, which may cause the Terraform provider or `virsh` to receive "Permission Denied" errors when running in containers, even though everything appears normal from a filesystem permissions perspective. (A quick SELinux check is sketched right after this list.)
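A minimal sketch of that SELinux check, assuming an SELinux-enabled host with the audit tools installed (label names can differ between distros):

```bash
# Inspect the SELinux label on the libvirt socket (virt_var_run_t on RHEL-like systems)
ls -Z /var/run/libvirt/libvirt-sock

# Look for recent AVC denials involving libvirt or container domains
sudo ausearch -m AVC -ts recent | grep -E 'virt_var_run_t|container_t' \
  || echo "no matching denials found"
```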
Libvirt's settings directly impact Terraform's execution permissions, so some permission adjustments are required.
- Ensure your user account is a member of the `libvirt` group:

    sudo usermod -aG libvirt $(whoami)

  Note: After completing this step, you must fully log out and log back in, or restart your computer, for the new group membership to take effect in your shell session.
- Modify the `libvirtd` configuration file to explicitly state that the `libvirt` group should manage the socket:

    # If you use vim
    sudo vim /etc/libvirt/libvirtd.conf
    # If you use nano
    sudo nano /etc/libvirt/libvirtd.conf

  Locate and uncomment (remove the `#` from the beginning of the line) the following two lines:

    unix_sock_group = "libvirt"
    # ...
    unix_sock_rw_perms = "0770"
- Override the `systemd` socket unit settings, as `systemd`'s socket configuration takes precedence over `libvirtd.conf`.

  - Executing the following command will open a `nano` editor:

        sudo systemctl edit libvirtd.socket

  - In the opened editor, paste the following content. Make sure to paste it above the line that says `### Edits below this comment will be discarded` to prevent the configuration file from becoming invalid:

        [Socket]
        SocketGroup=libvirt
        SocketMode=0770

    Once done, use `Ctrl+O` to save and `Ctrl+X` to exit the editor.
- Now, the services need to be restarted in the correct order for all the settings to take effect.

  - Reload `systemd` configurations:

        sudo systemctl daemon-reload

  - Stop all related services to ensure a clean state:

        sudo systemctl stop libvirtd.service libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket

  - Disable `libvirtd.service` to fully hand over management to Socket Activation:

        sudo systemctl disable libvirtd.service

  - Restart `libvirtd.socket`:

        sudo systemctl restart libvirtd.socket
- Verification

  - Check the socket permissions. The output should show the group as `libvirt` and the permissions as `srwxrwx---`:

        ls -la /var/run/libvirt/libvirt-sock

  - Execute `virsh` commands as a non-root user:

        virsh list --all

    If the command executes successfully and lists the virtual machines (even if the list is empty), you have all the necessary permissions. A supplementary check of the systemd override is sketched right after this list.
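That supplementary check, using standard `systemctl` subcommands (a sketch; the drop-in is the one created by `systemctl edit` above):

```bash
# The drop-in should contain SocketGroup=libvirt and SocketMode=0770
systemctl cat libvirtd.socket

# Socket activation should be enabled while the service itself stays disabled
systemctl is-enabled libvirtd.socket libvirtd.service
```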
All secrets will be integrated into HashiCorp Vault. This project defaults to using Vault with HTTPS configuration. Please follow the steps below to ensure correct setup.

- First, run the `entry.sh` script and select option `1` to generate the files required for the TLS handshake. When creating the self-signed CA certificate, you can leave the fields blank for now. If you need to regenerate the TLS files, simply run option `1` again.
- Switch to the project's root directory and run the following command to start the Vault server.

  - If you prefer to run it on the host:

        vault server -config=vault/vault.hcl

  - If you prefer to run it in a container (side-car mode):

        sudo podman compose up -d iac-vault-server

  After starting the server, Vault will create `vault.db` and Raft-related files in the `vault/data/` directory. If you need to reinitialize Vault, you must manually clear all files within the `vault/data/` directory.

  Note: Please open a new terminal window or tab for subsequent operations.
- Once the previous steps are complete, you can run `entry.sh` and select option `2` to initialize Vault. This process also automatically performs the required unseal action for Vault.
- Next, you only need to manually modify the following variables used in the project.

  - For Common Variables in this Project:

        vault kv put \
          -address="https://127.0.0.1:8200" \
          -ca-cert="${PWD}/vault/tls/ca.pem" \
          secret/on-premise-gitlab-deployment/variables \
          ssh_username="some-user-name-for-ssh" \
          ssh_password="some-user-password-for-ssh" \
          ssh_password_hash=$(echo -n "$ssh_password" | mkpasswd -m sha-512 -P 0) \
          vm_username="some-user-name-for-vm" \
          vm_password="some-user-password-for-vm" \
          ssh_public_key_path="~/.ssh/some-ssh-key-name.pub" \
          ssh_private_key_path="~/.ssh/some-ssh-key-name"

  - For Databases:

        vault kv put \
          -address="https://127.0.0.1:8200" \
          -ca-cert="${PWD}/vault/tls/ca.pem" \
          secret/on-premise-gitlab-deployment/databases \
          pg_superuser_password="a-more-secure-pwd-for-superuser" \
          pg_replication_password="a-more-secure-pwd-for-replication"
  - Note 1:

    Here, `ssh_username` and `ssh_password` are the account and password used to log into the virtual machine, while `ssh_password_hash` is the hashed password required for automated virtual machine installation (cloud-init). This hash must be generated from the `ssh_password` string. For instance, if the password is `HelloWorld@k8s`, the corresponding password hash should be generated using the following command:

        mkpasswd -m sha-512 HelloWorld@k8s

    If this reports that the `mkpasswd` command is not found, the `whois` package is probably missing.

    Also, `ssh_public_key_path` needs to be changed to the public key name generated earlier; the public key will be in `*.pub` format.

  - Note 2:

    The current SSH identity variables are primarily used by Packer in a single-use scenario, while the VM identity variables are used by Terraform when cloning VMs. In principle, they can be set to the same value. However, if you need to set different names for different VMs, you can directly modify the objects and relevant code in HCL. Typically, you would modify the `node_config` variable and the related variable passing, then use a `for_each` loop for iteration. This increases complexity, so unless you have other requirements, it is recommended to keep the SSH and VM identity variables the same.
- When starting the project, you only need to launch the Vault server in a single terminal window (perhaps inside your IDE) using `vault server -config=vault/vault.hcl`. Afterward, you can use option `7` in `entry.sh` to unseal the Vault database. Alternatively, you may use the container as described in B.1-2. A quick status check is sketched below.
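A minimal sketch of that status check. The address, CA path, and secret path follow the commands above; reading secrets back additionally requires a valid token (e.g. from `vault login`):

```bash
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_CACERT="${PWD}/vault/tls/ca.pem"

# "Sealed: false" confirms the unseal step succeeded
vault status

# Read back the common variables written earlier (values are printed in plaintext)
vault kv get secret/on-premise-gitlab-deployment/variables
```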
- You can create the `terraform/layers/10-cluster-provision/terraform.tfvars` file using the following command:

cat << EOF > terraform/layers/10-cluster-provision/terraform.tfvars
# Defines the hardware and IP addresses for each virtual machine in the cluster.
kubeadm_cluster_config = {
  nodes = {
    masters = [
      { ip = "172.16.134.200", vcpu = 4, ram = 4096 },
      { ip = "172.16.134.201", vcpu = 4, ram = 4096 },
      { ip = "172.16.134.202", vcpu = 4, ram = 4096 }
    ]
    workers = [
      { ip = "172.16.134.210", vcpu = 4, ram = 4096 },
      { ip = "172.16.134.211", vcpu = 4, ram = 4096 },
      { ip = "172.16.134.212", vcpu = 4, ram = 4096 }
    ]
  }
  ha_virtual_ip = "172.16.134.250"
}

cluster_infrastructure = {
  network = {
    nat = {
      name = "iac-kubeadm-nat-net"
      cidr = "172.16.86.0/24"
      bridge_name = "kubeadm-nat-br" # IFNAMSIZ is 15 user-visible characters with the null terminator included.
    }
    hostonly = {
      name = "iac-kubeadm-hostonly-net"
      cidr = "172.16.134.0/24"
      bridge_name = "kubeadm-host-br"
    }
  }
}
EOF
  For users setting up an HA cluster, the number of elements in `kubeadm_cluster_config.nodes.masters` and `kubeadm_cluster_config.nodes.workers` determines the number of nodes generated. Ensure the number of nodes in `kubeadm_cluster_config.nodes.masters` is odd to prevent the etcd split-brain risk in Kubernetes. Meanwhile, `kubeadm_cluster_config.nodes.workers` can be configured based on the number of IPs. The IPs provided by the user must belong to the host-only network segment.
- The variable file for the (HA) Harbor in `terraform/layers/10-provision-harbor/terraform.tfvars` can be created using the following command:

cat << EOF > terraform/layers/10-provision-harbor/terraform.tfvars
# Defines the hardware and IP addresses for each virtual machine in the cluster.
harbor_cluster_config = {
  cluster_name = "10-harbor-cluster"
  nodes = {
    harbor = [
      { ip = "172.16.135.200", vcpu = 2, ram = 4096 },
      { ip = "172.16.135.201", vcpu = 2, ram = 4096 },
      { ip = "172.16.135.202", vcpu = 2, ram = 4096 },
    ]
  }
}

cluster_infrastructure = {
  network = {
    nat = {
      name = "iac-registry-nat-net"
      cidr = "172.16.87.0/24"
      bridge_name = "reg-nat-br" # IFNAMSIZ is 15 user-visible characters with the null terminator included.
    }
    hostonly = {
      name = "iac-registry-hostonly-net"
      cidr = "172.16.135.0/24"
      bridge_name = "reg-host-br"
    }
  }
}
EOF
  This architecture was designed primarily to conform to the structural specifications of the variables in the `terraform/modules/11-provisioner-kvm/variables.tf` module.
- The variable file for the (HA) Postgres / etcd in `terraform/layers/10-provision-postgres/terraform.tfvars` can be created using the following command:

cat << EOF > terraform/layers/10-provision-postgres/terraform.tfvars
# Defines the hardware and IP addresses for each virtual machine in the cluster.
postgres_cluster_config = {
  cluster_name = "10-postgres-cluster"
  nodes = {
    postgres = [
      { ip = "172.16.136.200", vcpu = 4, ram = 4096 },
      # { ip = "172.16.136.201", vcpu = 4, ram = 4096 }, # for HA Postgres
      # { ip = "172.16.136.202", vcpu = 4, ram = 4096 },
    ],
    etcd = [
      { ip = "172.16.136.210", vcpu = 2, ram = 2048 },
      # { ip = "172.16.136.211", vcpu = 2, ram = 2048 }, # for HA etcd
      # { ip = "172.16.136.212", vcpu = 2, ram = 2048 }
    ],
    haproxy = [
      { ip = "172.16.136.220", vcpu = 2, ram = 2048 }
    ]
  }
}

postgres_infrastructure = {
  network = {
    nat = {
      name = "iac-postgres-nat-net"
      cidr = "172.16.88.0/24"
      bridge_name = "pos-nat-br" # IFNAMSIZ is 15 user-visible characters with the null terminator included.
    }
    hostonly = {
      name = "iac-postgres-hostonly-net"
      cidr = "172.16.136.0/24"
      bridge_name = "pos-host-br"
    }
    postgres_allowed_subnet = "172.16.136.0/24"
  }
}
EOF
  This architecture was designed primarily to conform to the structural specifications of the variables in the `terraform/modules/11-provisioner-kvm/variables.tf` module.

  Because Postgres resources will be callable by other Terraform layers (e.g., GitLab and Harbor in the future), the `bridge_name` and virtual machine naming logic may still be modified.
Note: The `bridge_name` in `terraform.tfvars` must not exceed 15 characters due to the `IFNAMSIZ(15)` limitation.
- The project currently uses Ubuntu 24.04.3 for VM deployment. If you wish to use another distro for the virtual machines, it is recommended that you first verify the Ubuntu Server version and checksum.
- The latest version is available at https://cdimage.ubuntu.com/ubuntu/releases/24.04/release/ ,
- The test version of this project is also available at https://old-releases.ubuntu.com/releases/noble/ .
- After selecting your version, please verify the checksum (see the sketch after this list).
- For latest Noble version: https://releases.ubuntu.com/noble/SHA256SUMS
- For "Noble-old-release" version: https://old-releases.ubuntu.com/releases/noble/SHA256SUMS
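A minimal sketch of that verification, assuming the ISO and the matching `SHA256SUMS` file have been downloaded into the same directory:

```bash
# Compare the downloaded ISO against the official checksum list;
# --ignore-missing skips entries for files that were not downloaded
sha256sum -c --ignore-missing SHA256SUMS
```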
Support for deploying other Linux distros may be added when I have time; I'm still a full-time university student.
- To deploy a complete HA Kubernetes cluster from scratch:

  - First Step: From the main menu, enter `11) Build Packer Base Image`, then select `02-base-kubeadm` to build the base image required by Kubeadm.
  - Second Step: After the previous step is complete, return to the main menu and enter `12) Provision Terraform Layer 10`, then select `10-provision-kubeadm` to deploy the entire Kubernetes cluster.

  Based on testing, this complete process (from building the Packer image to completing the Terraform deployment) takes approximately 7 minutes. A quick post-deployment check is sketched at the end of this list.
- Isolated Testing and Development:

  The `11`, `12`, and `13` menus can be used for separate testing.

  - To test Packer image building independently, use `11) Build Packer Base Image`.
  - To test or rebuild a specific Terraform module layer independently (such as Harbor or Postgres), use `12) Provision Terraform Layer 10`.
  - To repeatedly test Ansible playbooks on existing machines without recreating virtual machines, use `13) [DEV] Rebuild Layer 10 via Ansible Command`.
- Resource Cleanup:

  - `9) Purge All Libvirt Resources`: This option executes `purge_libvirt_resources "all"`, which deletes all virtual machines, virtual networks, and storage pools created by this project, but preserves Packer's output images and Terraform's local state files. This is suitable for scenarios where you only want to clean up virtualization resources without clearing the project state.
  - `10) Purge All Packer and Terraform Resources`: This option deletes all Packer output images and all Terraform layer local state, restoring Packer and Terraform in this project to an almost brand-new state.
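That post-deployment check, as a sketch only: the IP, key name, and username below are the example values used earlier in this README, and the kubeconfig location depends on what the playbook configures for that user:

```bash
# SSH into the first control-plane node (example IP from the sample tfvars above)
ssh -i ~/.ssh/id_ed25519_on-premise-gitlab-deployment some-user-name-for-vm@172.16.134.200

# On the node: all masters and workers should eventually report STATUS=Ready
kubectl get nodes -o wide \
  || sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide
```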
This project employs three tools - Packer, Terraform, and Ansible - using an Infrastructure as Code (IaC) approach to achieve a fully automated workflow from virtual machine image creation to Kubernetes cluster deployment. The overall architecture follows the principle of Immutable Infrastructure, ensuring that each deployment environment is consistent and predictable.
The entire automated deployment process is triggered by option `12` "Rebuild Kubeadm Cluster (Packer + TF)" in the `./entry.sh` script, with detailed steps shown in the diagram below:
sequenceDiagram
actor User
participant Entrypoint as entry.sh
participant Packer
participant Terraform
participant Ansible
participant Libvirt as Libvirt/QEMU
User->>+Entrypoint: Execute 'Rebuild All'
Entrypoint->>+Packer: 1. Execute 'build_packer'
Packer->>+Libvirt: 1a. Build VM from ISO
note right of Packer: Provisioner 'ansible' is triggered
Packer->>+Ansible: 1b. Execute Playbook<br>(00-provision-base-image.yaml)
Ansible-->>-Packer: (Bake k8s components into image)
Libvirt-->>-Packer: 1c. Output Golden Image (.qcow2)
Packer-->>-Entrypoint: Image creation complete
Entrypoint->>+Terraform: 2. Execute 'apply_terraform_10-cluster-provision'
note right of Terraform: Reads .tf definitions
Terraform->>+Libvirt: 2a. Create Network, Pool, Volumes (from .qcow2), Cloud-init ISOs
Terraform->>+Libvirt: 2b. Create and Start VMs (Domains)
note right of Terraform: Provisioner 'local-exec' is triggered
Terraform->>+Ansible: 2c. Execute Playbook<br>(10-provision-cluster.yaml)
Ansible->>Libvirt: (via SSH) 2d. Configure HA (Keepalived/HAProxy)
Ansible->>Libvirt: (via SSH) 2e. Init/Join Kubernetes Cluster
Ansible-->>-Terraform: Playbook execution complete
Terraform-->>-Entrypoint: 'apply' complete
Entrypoint-->>-User: Display 'Rebuild All workflow completed'
The diagram above is somewhat outdated, but since the architecture may still undergo significant adjustments, it will not be updated for now.
The setup process is based on the commands provided by Bibin Wilson (2025), which I implemented as an Ansible playbook. Thanks to the author for his contribution in the article below.

Work Cited: Wilson, B. (2025). How To Setup Kubernetes Cluster Using Kubeadm. DevOpsCube. https://devopscube.com/setup-kubernetes-cluster-kubeadm/#vagrantfile-kubeadm-scripts-manifests
- Packer + Ansible: Provisioning the base Kubernetes Golden Image

  Packer plays the role of an "image factory" in this project; its core task is to automate the creation of a standardized virtual machine template (Golden Image) pre-configured with all Kubernetes dependencies. The project uses `packer/source.pkr.hcl` as its definition file, driven by `packer/02-base-kubeadm.pkrvars.hcl`. The workflow: automatically download the Ubuntu Server 24.04 ISO and complete an unattended installation using cloud-init; after installation, open an SSH connection and invoke the Ansible provisioner; execute `ansible/playbooks/00-provision-base-image.yaml` to install necessary components such as `kubelet`, `kubeadm`, `kubectl`, and `CRI-O` (and configure its cgroup driver); finally, shut down the virtual machine and produce a `*.qcow2` template for Terraform to use. The goal of this phase is to "bake" all infrequently changing software and configuration into the image to reduce the time required for subsequent deployments. The sketch below shows roughly what this step looks like when run by hand.
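This is a sketch only; it assumes the template and variable files live directly under `packer/` as named above, and the actual `entry.sh` invocation may pass additional variables (for example, values fetched from Vault):

```bash
cd packer

# Install the plugins declared in the template (e.g. the QEMU builder)
packer init .

# Validate, then build the kubeadm base image with its variable file
packer validate -var-file=02-base-kubeadm.pkrvars.hcl .
packer build -var-file=02-base-kubeadm.pkrvars.hcl .
```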
- Terraform: The Infrastructure Orchestrator

  Terraform is responsible for managing the infrastructure lifecycle and serves as the core orchestration component of the architecture. Terraform reads the image template produced by Packer and deploys the actual virtual machine cluster on Libvirt/QEMU. The definition files are the `.tf` files in the `terraform/` directory, with the following workflow (a sketch of the equivalent manual invocation follows this list):

  - Node Deployment (Layer 10):
    - Based on `kubeadm_cluster_config` defined in `terraform/terraform.tfvars`, Terraform calculates the number of nodes that need to be created.
    - Next, Terraform's libvirt provider quickly clones virtual machines from the `.qcow2` file. With the hardware listed in Section 0, cloning 6 virtual machines completes in approximately 15 seconds.
  - Cluster Configuration (Layer 50):
    - Once all nodes are ready, Terraform dynamically generates the `ansible/inventory.yaml` inventory file.
    - Then, Terraform invokes Ansible to execute the `ansible/playbooks/10-provision-cluster.yaml` playbook to complete the initialization of the Kubernetes cluster.
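That manual invocation, sketched under the assumption that the layer directory names match the option `12` submenu and that the Vault-related environment is already configured (the wrapper script may add further flags):

```bash
# Initialize, plan, and apply the kubeadm provisioning layer
terraform -chdir=terraform/layers/10-provision-kubeadm init
terraform -chdir=terraform/layers/10-provision-kubeadm plan -out=tfplan
terraform -chdir=terraform/layers/10-provision-kubeadm apply tfplan
```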
- Ansible: The Configuration Manager

  Ansible is invoked twice, serving as the configuration manager at different stages. The project's playbooks are stored in the `ansible/playbooks/` directory. In terms of role assignment, Ansible is primarily responsible for cluster initialization (invoked by Terraform), executing the following tasks through the `10-provision-cluster.yaml` playbook (a sketch of running it directly follows this list):

  - Set up the HA load balancer on all master nodes, unless the cluster has only a single master node.
  - Initialize the primary master node.
  - Generate and fetch join commands from the primary master.
  - Execute `kubeadm join` on:
    - the other master nodes, if it is an HA cluster;
    - the worker nodes, to join them to the cluster.
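That direct run (what menu option `13` automates) looks roughly like the following sketch, assuming Terraform has already generated the inventory file named above and the SSH key from the setup steps is used:

```bash
# Re-apply the cluster configuration to existing VMs without recreating them
ansible-playbook \
  -i ansible/inventory.yaml \
  --private-key ~/.ssh/id_ed25519_on-premise-gitlab-deployment \
  ansible/playbooks/10-provision-cluster.yaml
```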
- HashiCorp Vault (with TLS)

  HashiCorp Vault is integrated into this project as a centralized and secure backend for managing all sensitive data, such as SSH credentials and user passwords. This approach removes the need to store plaintext secrets in version-controlled files like `*.tfvars` or `*.pkrvars.hcl`, aligning with Infrastructure as Code best practices.

  Both the `native` (host-based) and `container` (Podman-based) IaC execution strategies can connect to the same containerized Vault instance through the host network (`network_mode: "host"`). If you use the container, the Vault server is configured to start automatically on system boot, ensuring the secrets backend is always available for the IaC workflow without manual intervention. (A sketch of the environment variables these integrations typically rely on follows this list.)

  - Packer: During the image-building phase, Packer authenticates with the Vault server and uses the `vault()` function within its HCL files to dynamically fetch the secrets (e.g., `ssh_username`, `ssh_password_hash`) required for the unattended OS installation.
  - Terraform: Similarly, Terraform uses the Vault provider to read infrastructure-related secrets, such as `vm_username` and `ssh_private_key_path`, directly from Vault at runtime. This allows Terraform to provision VMs and configure SSH access without exposing any sensitive credentials in its configuration files.
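Both Packer's `vault()` function and Terraform's Vault provider typically pick up the standard Vault environment variables. A minimal sketch of exporting them before a manual run; the paths follow the TLS setup above, and the token source is an assumption (for example, a token obtained via `vault login`):

```bash
# Point the Packer/Terraform Vault integrations at the local TLS-enabled server
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_CACERT="${PWD}/vault/tls/ca.pem"
# A token with read access to secret/on-premise-gitlab-deployment/*
export VAULT_TOKEN="<your-token>"
```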
(To be continued...)