🏠 Homelab v2 - Kubernetes Edition


A modern homelab running on Kubernetes with Talos Linux, migrated from Proxmox/Docker

Architecture • Services • Infrastructure • Deployment • Roadmap


📖 Quick Overview

What: Production-grade Kubernetes homelab for self-hosted services
Why: GitOps automation, better scalability, and learning cloud-native tech
How: Talos Linux bare-metal cluster with declarative configuration
Docs: Detailed documentation on Obsidian
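
"Declarative configuration" here means the whole cluster is defined by generated machine configs. A minimal sketch, assuming the control-plane IP below as the API endpoint and an illustrative cluster name (this produces the controlplane.yaml and worker.yaml applied in the Quick Start further down):

# Generate machine configs plus a talosconfig for client access
talosctl gen config homelab https://192.168.10.147:6443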


🏗️ Architecture

graph TB
    subgraph "External Access"
        Internet((Internet))
        CF[Cloudflare DNS]
        DD[DuckDNS]
    end

    subgraph "Homelab Network"
        Router[Router<br/>192.168.10.1]

        subgraph "Kubernetes Cluster"
            subgraph "Control Plane"
                CP[beelink-1<br/>192.168.10.147<br/>Control Plane]
            end

            subgraph "Worker Nodes"
                W1[proxmox<br/>192.168.10.165<br/>Worker Node]
            end

            subgraph "Network Layer"
                MLB[MetalLB<br/>Load Balancer]
                TRF[Traefik<br/>Ingress Controller]
            end
        end

        subgraph "Storage"
            NAS[Synology DS423+<br/>36TB Raw / 24TB Usable<br/>3x 12TB SHR - 1 Drive Redundancy<br/>NFS + iSCSI]
        end
    end

    Internet --> CF
    Internet --> DD
    CF --> Router
    DD --> Router
    Router --> MLB
    MLB --> TRF
    TRF --> CP
    TRF --> W1
    CP -.-> W1
    W1 --> NAS
    CP --> NAS

    classDef control fill:#326CE5,stroke:#fff,stroke-width:2px,color:#fff
    classDef worker fill:#00ADD8,stroke:#fff,stroke-width:2px,color:#fff
    classDef network fill:#FF7300,stroke:#fff,stroke-width:2px,color:#fff
    classDef storage fill:#40C463,stroke:#fff,stroke-width:2px,color:#fff

    class CP control
    class W1 worker
    class MLB,TRF network
    class NAS storage

✅ What's Running

🎬 Media Stack (arr-stack namespace)

| Service | Purpose | Access |
|---|---|---|
| 🔒 VPN Group (Gluetun Sidecar) | | |
| └─ qBittorrent | Torrent downloads | Port 8080 |
| └─ NZBGet | Usenet downloads | Port 6789 |
| └─ Prowlarr | Indexer management | Port 9696 |
| 📺 Media Management | | |
| Sonarr / Sonarr2 | TV show automation | Ports 8989 / 8990 |
| Radarr / Radarr2 | Movie automation | Ports 7878 / 7879 |
| Bazarr / Bazarr2 | Subtitle management | Ports 6767 / 6768 |
| Notifiarr | Discord notifications | Port 5454 |
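
The VPN group above relies on the fact that containers in a pod share one network namespace: Gluetun runs as a sidecar that owns the tunnel, and the download clients ride along. A trimmed sketch (image tags and env values are illustrative; a real spec needs provider credentials, typically from a Secret):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
  namespace: arr-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        - name: gluetun                    # VPN sidecar; all pod traffic egresses the tunnel
          image: qmcgaw/gluetun
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]           # needed to create the tun device
          env:
            - name: VPN_SERVICE_PROVIDER
              value: your-provider         # illustrative placeholder
        - name: qbittorrent
          image: lscr.io/linuxserver/qbittorrent
          ports:
            - containerPort: 8080          # WebUI, on the shared pod network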

🎭 Media Frontend (jelly namespace)

  • Jellyfin - Media streaming server with Intel GPU transcoding
  • Jellyseerr - Media request management
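
Jellyfin's hardware transcoding works because the Intel device plugin advertises the iGPU as a schedulable resource the pod can request. A trimmed sketch (pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: jellyfin-gpu-demo      # illustrative
  namespace: jelly
spec:
  containers:
    - name: jellyfin
      image: jellyfin/jellyfin
      resources:
        limits:
          gpu.intel.com/i915: "1"   # device advertised by the Intel GPU device plugin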

🛠️ Infrastructure Services

| Service | Namespace | Purpose |
|---|---|---|
| Traefik | traefik | Ingress controller & reverse proxy |
| Cert-Manager | traefik | Automatic SSL certificates via DuckDNS |
| MetalLB | metallb | Bare-metal load balancer |
| K8s-Cleaner | k8s-cleaner | Cleanup of completed pods/jobs |
| Descheduler | kube-system | Workload distribution optimization |
| NFS Provisioner | synology-csi | Dynamic volume provisioning |

🤖 Other Services

  • LibreChat (ai-stuff namespace) - Self-hosted AI chat interface with MongoDB backend

🔧 Infrastructure Details

Cluster Configuration

Cluster:
  OS: Talos Linux v1.6
  Kubernetes: v1.29
  CNI: Flannel

Nodes:
  - Name: beelink-1
    Role: Control Plane
    IP: 192.168.10.147
    Specs: Intel N100, 16GB RAM

  - Name: proxmox
    Role: Worker
    IP: 192.168.10.165
    Specs: Intel i5-7400, 16GB RAM, NVIDIA GT-730

Storage Architecture

Synology DS423+ (3x 12TB SHR: 36TB raw / ~24TB usable, 1-drive fault tolerance)
├── /volume1/
│   ├── NAS/
│   │   ├── Movies
│   │   ├── Shows
│   │   ├── Music
│   │   ├── Youtube
│   │   └── Downloads/
│   │       ├── Qbittorrent/
│   │       │   ├── Torrents
│   │       │   ├── Completed
│   │       │   └── Incomplete
│   │       └── Nzbget/
│   │           ├── Queue
│   │           ├── Nzb
│   │           ├── Intermediate
│   │           ├── Tmp
│   │           └── Completed
│   │
│   ├── kube/                    # NFS-based PVCs
│   │   ├── jelly/
│   │   │   └── jellyseerr-pvc
│   │   ├── ai-stuff/
│   │   │   └── mongodb-backup-pvc
│   │   ├── default/
│   │   │   └── test-pvc-worker
│   │   └── test-nfs/
│   │       └── test-nfs-pvc
│   │
│   ├── TimeMachine/             # Macbook Backups
│   │
│   └── Docker/                  # Legacy
│       └── Pihole
│
└── iSCSI LUNs (19 total)        # High-performance PVCs
    ├── jellyfin-config          # Jellyfin configs (5Gi)
    ├── jellyfin-data            # Jellyfin metadata
    ├── jellyfin-cache           # Transcoding cache
    ├── jellyfin-log             # Jellyfin logs
    ├── arr-stack configs        # All *arr app configs
    ├── librechat volumes        # AI app storage
    └── ... (other service volumes)

Storage Classes:

  • nfs-client - Dynamic NFS provisioning for general workloads
  • synology-iscsi - iSCSI LUNs for high-performance/database workloads
  • syno-storage - Synology CSI driver (alternative option)
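
For example, a claim against the NFS class looks like this (name, namespace, and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-config         # illustrative
  namespace: arr-stack
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client # dynamic NFS provisioning per the list above
  resources:
    requests:
      storage: 5Gi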

Network Configuration

  • Load Balancer: MetalLB with IP pool 192.168.10.200-192.168.10.250
  • Ingress: Traefik v3 with automatic SSL
  • Domains:
    • Local: *.arkhaya.duckdns.org (internal services)
    • Public: *.arkhaya.xyz (external access)
  • Security: Cloudflare proxy for public services
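
A sketch of how that pool could be declared with MetalLB's CRDs (resource names are illustrative, and Layer 2 mode is an assumption):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool           # illustrative name
  namespace: metallb
spec:
  addresses:
    - 192.168.10.200-192.168.10.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2             # illustrative name
  namespace: metallb
spec:
  ipAddressPools:
    - homelab-pool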

📋 Roadmap

Live Roadmap

Synced from Obsidian on every push

📌 Current Status

📋 To Do

  • n8n
  • Homarr
  • Jellyfin Stats
  • Authentik

🚧 In Progress

✅ Recently Completed

  • Ghost Blog ✅ 2025-08-16
  • Huntarr + cleanuparr ✅ 2025-08-13
  • LGM Stack with alloy ✅ 2025-07-27
  • HA PostgreSQL ✅ 2025-08-05
  • Argo CD ✅ 2025-08-03

🚀 Future Projects

  • *arr Stack Migration (SQLite → PostgreSQL)
  • MCP Server - Discord Media Bot
  • Karakeep - bookmarking system
  • Tdarr on beelink-1, since its iGPU is available and quite capable for transcoding

📦 Archive

  • librechat-migration ✅ 2025-07-02
  • jellyfin-migration ✅ 2025-07-06
  • talos-infrastructure ✅ 2025-07-06
  • tailscale-migration ✅ 2025-07-05
  • traefik-setup ✅ 2025-07-02
  • Figure out how to host updatable markdown so the Kanban boards can be displayed (fall back to Vercel if nothing else works) ✅ 2025-06-14
  • arr-stack-migration ✅ 2025-07-02
  • synology-integration ✅ 2025-05-29
  • obsidian-setup ✅ 2025-05-29

🔧 Troubleshooting

Cert-Manager DuckDNS Issues

When using cert-manager with DuckDNS webhook for wildcard certificates, you may encounter issues:

Common Problems:

  1. "no api token secret provided" - The ClusterIssuer is looking for a secret in the wrong namespace
  2. DNS propagation timeouts - DuckDNS can take 5-10 minutes to propagate DNS changes
  3. Wrong ClusterIssuer references - Ensure you're using the Helm-deployed issuer

Solution:

If you installed the webhook via Helm:

helm install cert-manager-webhook-duckdns cert-manager-webhook-duckdns/cert-manager-webhook-duckdns \
  --namespace cert-manager \
  --set duckdns.token=$DUCKDNS_TOKEN \
  --set clusterIssuer.production.create=true \
  --set clusterIssuer.staging.create=true \
  --set clusterIssuer.email=gauranshmathur1999@gmail.com

Then use the Helm-created ClusterIssuer in your Certificate resources:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: duckdns-wildcard-cert
  namespace: traefik
spec:
  secretName: duckdns-wildcard-tls
  issuerRef:
    name: cert-manager-webhook-duckdns-production # Helm-created issuer
    kind: ClusterIssuer
  dnsNames:
    - "arkhaya.duckdns.org"
    - "*.arkhaya.duckdns.org"

🛠️ Deployment Guide

Prerequisites

  1. Hardware: 2+ machines with 8GB+ RAM
  2. Network: Static IPs, router access for port forwarding
  3. Storage: NAS with NFS enabled
  4. Tools: kubectl, helm, talosctl

Quick Start

# 1. Apply Talos configuration
talosctl apply-config --nodes 192.168.10.147 --file controlplane.yaml
talosctl apply-config --nodes 192.168.10.165 --file worker.yaml

# 2. Bootstrap cluster
talosctl bootstrap --nodes 192.168.10.147

# 3. Get kubeconfig
talosctl kubeconfig --nodes 192.168.10.147

# 4. Install core services
kubectl apply -f kubernetes/namespaces/
helm install metallb metallb/metallb -n metallb -f helm/metallb/values.yaml
helm install traefik traefik/traefik -n traefik -f helm/traefik/values.yaml

# 5. Deploy applications
kubectl apply -k kubernetes/
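
As a sanity check after step 5, these standard kubectl commands confirm both nodes and the workloads are up:

kubectl get nodes -o wide    # both nodes should report Ready
kubectl get pods -A          # pods coming up across all namespaces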

Directory Structure

Homelab/
├── kubernetes/         # Raw Kubernetes manifests
│   ├── arr-stack/     # Media automation stack
│   ├── jellyfin/      # Media server configs
│   └── ...
├── helm/              # Helm charts and values
│   ├── traefik/       # Ingress controller
│   ├── cert-manager/  # SSL certificates
│   └── ...
├── ansible/           # Migration playbooks
└── docs/             # Additional documentation

🔄 Migration from v1

What Changed?

| Component | v1 (Proxmox/Docker) | v2 (Kubernetes) |
|---|---|---|
| Platform | Proxmox VE + LXC | Talos Linux bare-metal |
| Containers | Docker Compose | Kubernetes deployments |
| Networking | Manual port mapping | Service mesh + ingress |
| Storage | Local volumes | Dynamic PVCs |
| Updates | Manual per-service | Rolling updates |
| Backups | Scripts | Persistent volumes |

Key Improvements

  • Declarative Configuration - Everything as code
  • Self-Healing - Automatic pod restarts
  • Easy Scaling - Just update the replica count (see the one-liner below)
  • Better Isolation - Namespace separation
  • Unified Ingress - Single entry point
  • Automated SSL - Cert-manager handles certificates
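
Scaling a deployment, for instance, is a single command (deployment name illustrative; apps with ReadWriteOnce volumes generally stay at one replica):

kubectl -n arr-stack scale deployment/sonarr --replicas=2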

Challenges Solved

  1. VPN Networking → Gluetun sidecar pattern
  2. GPU Transcoding → Intel device plugin
  3. Data Migration → Ansible playbooks
  4. Service Discovery → CoreDNS + Traefik

📚 Resources


🤝 Contributing

This is a personal project, but suggestions and improvements are welcome! Feel free to open an issue.

📄 License

MIT License - Feel free to use this as inspiration for your own homelab!


Built with ❤️ and lots of ☕
