Merged
27 changes: 4 additions & 23 deletions README.md
@@ -22,14 +22,13 @@ Automated talos cluster with system extensions

Docker is mandatory on the `Client` as this project builds a custom Talos image with system extensions using the [imager](https://github.com/siderolabs/talos/pkgs/container/installer) docker image on the `Client` itself.
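The image build is a local `docker run` of the imager. As a rough sketch (the version tags and extension images below are illustrative, not the exact ones this project pins):

```
# Build a custom Talos installer image with system extensions (illustrative tags)
docker run --rm -t -v $PWD/_out:/out ghcr.io/siderolabs/imager:v1.7.0 installer \
  --system-extension-image ghcr.io/siderolabs/qemu-guest-agent:9.1.0 \
  --system-extension-image ghcr.io/siderolabs/amd-ucode:20240710
```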

## Create an HA Proxy Server
## Options for creating an HA Proxy Server

You can use the [no-lb](https://github.com/Naman1997/simple-talos-cluster/tree/no-lb) branch in case you do not want to use an external load-balancer. This branch uses the 1st master node that gets created as the cluster endpoint.

I've installed `haproxy` on my Raspberry Pi. You can choose to do the same in a LXC container or a VM.
The `main` branch will automatically create a load balancer VM with 2 CPUs and 2 GiB of memory on your Proxmox node.

You need to have passwordless SSH access to a user (from the Client node) in this node which has the permissions to modify the file `/etc/haproxy/haproxy.cfg` and permissions to run `sudo systemctl restart haproxy`. An example is covered in this [doc](docs/HA_Proxy.md).
You can use the [no-lb](https://github.com/Naman1997/simple-talos-cluster/tree/no-lb) branch in case you do not want to use an external load-balancer. This branch uses the 1st master node that gets created as the cluster endpoint.

Another option is the [manual-lb](https://github.com/Naman1997/simple-talos-cluster/tree/manual-lb) branch, in case you wish to create an external load balancer manually.
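Whichever option you pick, the load balancer's job is the same: TCP-forward the Kubernetes API (and ingress ports) to the nodes. A minimal `haproxy.cfg` fragment along these lines (node addresses are hypothetical; the real config is rendered from `templates/haproxy.tmpl`):

```
frontend k8s_api
    bind *:6443
    mode tcp
    default_backend k8s_masters

backend k8s_masters
    mode tcp
    balance roundrobin
    server master-0 192.168.0.100:6443 check
    server master-1 192.168.0.101:6443 check
```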

## Create the terraform.tfvars file

@@ -50,24 +50,6 @@ terraform plan
terraform apply --auto-approve
```
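For reference, a `terraform.tfvars` might look roughly like this. The values are placeholders for your own Proxmox environment, and only variables that appear in this diff are shown:

```
TARGET_NODE       = "pve"
DEFAULT_BRIDGE    = "vmbr0"
SSH_KEY           = "/home/user/.ssh/id_rsa"
INTERFACE_TO_SCAN = "eth0"
```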

## Using HAProxy as a Load Balancer for an Ingress

Since HAProxy is already load-balancing ports 80 and 443 of the worker nodes, we can deploy the nginx ingress controller so that it advertises the HAProxy address as its external load balancer IP.

```
# Create the namespace and allow privileged pods before installing the controller
kubectl create namespace ingress-nginx
kubectl label ns ingress-nginx pod-security.kubernetes.io/enforce=privileged
# Update the IP address in the controller yaml
vim ./nginx-example/nginx-controller.yaml
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --values ./nginx-example/nginx-controller.yaml
kubectl create deployment nginx --image=nginx --replicas=5
kubectl expose deploy nginx --port 80
# Edit this config to point to your domain
vim ./nginx-example/ingress.yaml.example
mv ./nginx-example/ingress.yaml.example ./nginx-example/ingress.yaml
kubectl create -f ./nginx-example/ingress.yaml
curl -k https://192.168.0.101
```
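The referenced `nginx-example/nginx-controller.yaml` essentially pins the controller service to the HAProxy address. Against the upstream ingress-nginx Helm chart, the relevant values would look roughly like this (the address is hypothetical, and the project's actual file may differ):

```
controller:
  service:
    externalIPs:
      - 192.168.0.101
```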

## Expose your cluster to the internet (Optional)

It is possible to expose your cluster to the internet through a small VPS even if both the VPS IP and your home public IP are dynamic. This works by setting up dynamic DNS for both your internal network and the VPS using a service like DuckDNS.
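With DuckDNS, keeping a dynamic IP up to date is a single periodic HTTP call. A typical crontab entry looks like this (domain and token are placeholders for your own DuckDNS account):

```
*/5 * * * * curl -s "https://www.duckdns.org/update?domains=YOURDOMAIN&token=YOURTOKEN&ip=" >/dev/null 2>&1
```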
256 changes: 0 additions & 256 deletions docs/Wireguard_Setup.md

This file was deleted.

32 changes: 22 additions & 10 deletions main.tf
@@ -7,7 +7,7 @@ terraform {
}
proxmox = {
source = "bpg/proxmox"
version = "0.57.1"
version = "0.65.0"
}
}
}
@@ -24,6 +24,7 @@ data "external" "versions" {
}

locals {
ha_proxy_user = "ubuntu"
qemu_ga_version = data.external.versions.result["qemu_ga_version"]
amd_ucode_version = data.external.versions.result["amd_ucode_version"]
intel_ucode_version = data.external.versions.result["intel_ucode_version"]
@@ -159,10 +160,19 @@ module "worker_domain" {
scan_interface = var.INTERFACE_TO_SCAN
}

module "proxy" {
source = "./modules/proxy"
ha_proxy_user = local.ha_proxy_user
DEFAULT_BRIDGE = var.DEFAULT_BRIDGE
TARGET_NODE = var.TARGET_NODE
ssh_key = join("", [var.SSH_KEY, ".pub"])
}

resource "local_file" "haproxy_config" {
depends_on = [
module.master_domain.node,
module.worker_domain.node
module.worker_domain.node,
module.proxy.node
]
content = templatefile("${path.root}/templates/haproxy.tmpl",
{
@@ -181,17 +191,17 @@ resource "local_file" "haproxy_config" {
destination = "/etc/haproxy/haproxy.cfg"
connection {
type = "ssh"
host = var.ha_proxy_server
user = var.ha_proxy_user
private_key = file(var.ha_proxy_key)
host = module.proxy.proxy_ipv4_address
user = local.ha_proxy_user
private_key = file(var.SSH_KEY)
}
}

provisioner "remote-exec" {
connection {
host = var.ha_proxy_server
user = var.ha_proxy_user
private_key = file(var.ha_proxy_key)
host = module.proxy.proxy_ipv4_address
user = local.ha_proxy_user
private_key = file(var.SSH_KEY)
}
script = "${path.root}/scripts/haproxy.sh"
}
@@ -200,11 +210,13 @@ resource "local_file" "haproxy_config" {
resource "local_file" "talosctl_config" {
depends_on = [
module.master_domain.node,
module.worker_domain.node
module.worker_domain.node,
module.proxy.node,
resource.local_file.haproxy_config
]
content = templatefile("${path.root}/templates/talosctl.tmpl",
{
load_balancer = var.ha_proxy_server,
load_balancer = module.proxy.proxy_ipv4_address,
node_map_masters = tolist(module.master_domain.*.address),
node_map_workers = tolist(module.worker_domain.*.address)
primary_controller = module.master_domain[0].address
2 changes: 1 addition & 1 deletion modules/domain/main.tf
@@ -2,7 +2,7 @@ terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "0.57.1"
version = "0.65.0"
}
}
}