Manage Google Kubernetes Engine environments, including VPC, subnets, and security using GitHub Actions.

k8s-environment-terraform

This repository contains Terraform code to create a VPC and several types of Kubernetes clusters in an environment. The work was done as a take-home assignment during an interview process. The modules were tested in a GCP project created for the assignment, with a budget of $100. The main branch represents the production environment and is currently deployed. The repository is designed as a template and can be cloned.

  • Refer to the SETUP documentation for instructions on setting up your own project.
  • For information on CICD, refer to the CICD documentation.
  • For information on how to contribute to your own project, refer to the SDLC documentation.

Assumptions

I have assumed the following:


Networking

The network setup is unknown. In a real scenario, IP address ranges for nodes, services, pods, and load balancers would be planned carefully, with each in its own subnet or range. The module creates a private cluster, so the cluster's master node is only accessible from within the VPC. A default-deny firewall policy for the VPC is an area for improvement.
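The subnet layout this implies can be sketched as a primary range for nodes plus secondary ranges for pods and services. This is illustrative, not the vpc module's actual resource definitions; the names and CIDRs are assumptions (the secondary range names match the defaults in the Inputs table below).

```hcl
# Sketch of a GKE-ready subnet: primary range for node IPs, secondary
# ranges for pod and service IPs. CIDRs and resource names are illustrative.
resource "google_compute_subnetwork" "gke" {
  name          = "gke-subnet"
  network       = google_compute_network.vpc.id
  region        = "us-east1"
  ip_cidr_range = "10.10.0.0/20" # primary range: node IPs

  secondary_ip_range {
    range_name    = "gke-pods"
    ip_cidr_range = "10.20.0.0/14" # pod IPs
  }

  secondary_ip_range {
    range_name    = "gke-services"
    ip_cidr_range = "10.30.0.0/16" # service (ClusterIP) IPs
  }
}
```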


Security

  • The GKE node pool service account is used by the nodes in the cluster to authenticate to and interact with GCP services. Workloads can impersonate a different service account using Workload Identity, overriding the use of the node pool service account.
  • The private-k8s-cluster module creates a jump host to connect to the cluster's master node as the master node can only be accessed from within the VPC.
  • A user authenticates with the jump host using Identity-Aware Proxy.
  • By default, GKE deploys the ip-masq-agent with a configuration that selectively masquerades traffic, rewriting pod source IPs for destinations that fall outside specified CIDRs.
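The Workload Identity point above boils down to one IAM binding: a Kubernetes service account is allowed to impersonate a dedicated GSA, so workloads stop relying on the node pool service account. A minimal sketch, with the project, namespace, and service account names all hypothetical:

```hcl
# Hypothetical Workload Identity binding: the Kubernetes service account
# "default" in namespace "app" may impersonate the GSA google_service_account.app.
resource "google_service_account_iam_member" "workload_identity" {
  service_account_id = google_service_account.app.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:my-project.svc.id.goog[app/default]"
}
```

The Kubernetes side then annotates the service account with `iam.gke.io/gcp-service-account: <gsa-email>` so pods using it receive the GSA's credentials.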

Autoscaling

The following features are enabled by the gke module:

  • The cluster autoscaler in GKE automatically adjusts the number of nodes in a node pool based on workload resource demands (for example, pods that cannot be scheduled with the current CPU and memory capacity).
  • Node Autoprovisioning is a feature of GKE's cluster autoscaler but works at a higher level than individual node pools. It allows the GKE cluster to dynamically create new node pools when the existing node pools cannot meet the resource needs of the workloads.
  • Vertical Pod Autoscaling (VPA) automatically adjusts the resource requests (CPU and memory) of individual Pods based on their actual usage over time. This helps ensure that each Pod has enough resources to operate efficiently without over- or under-provisioning.
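On the provider side, these three features likely map onto a handful of blocks in `google_container_cluster`. A sketch of that wiring, using the module's autoscaling inputs (the exact structure inside the gke module may differ):

```hcl
# Sketch: node auto-provisioning limits and VPA on a GKE cluster.
# Variable names match the Inputs table; everything else is illustrative.
resource "google_container_cluster" "this" {
  name     = var.cluster_name
  location = var.region

  cluster_autoscaling {
    enabled = true # turns on node auto-provisioning

    resource_limits {
      resource_type = "cpu"
      maximum       = var.cluster_autoscaling_max_cpu # default "8"
    }
    resource_limits {
      resource_type = "memory"
      maximum       = var.cluster_autoscaling_max_memory_gb # default "32"
    }
  }

  vertical_pod_autoscaling {
    enabled = true
  }
}
```

The per-node-pool autoscaler is configured separately on each `google_container_node_pool` via its `autoscaling` block (here, the `total_min_node_count`/`total_max_node_count` fields of the `node_pools` input).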

CICD

CICD runs in GitHub Actions. A dedicated service account is used by the pipeline to authenticate with GCP. Authentication currently uses a service account key file; Workload Identity Federation, which avoids long-lived credentials, would be preferred and is a potential area for improvement.
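Moving to Workload Identity Federation would mean creating a pool and an OIDC provider that trusts GitHub's token issuer, then letting the repository's workflows impersonate the CICD service account. A sketch under those assumptions (pool and provider IDs are illustrative):

```hcl
# Hypothetical Workload Identity Federation setup for GitHub Actions.
resource "google_iam_workload_identity_pool" "github" {
  workload_identity_pool_id = "github-pool"
}

resource "google_iam_workload_identity_pool_provider" "github" {
  workload_identity_pool_id          = google_iam_workload_identity_pool.github.workload_identity_pool_id
  workload_identity_pool_provider_id = "github-provider"

  oidc {
    issuer_uri = "https://token.actions.githubusercontent.com"
  }

  attribute_mapping = {
    "google.subject"       = "assertion.sub"
    "attribute.repository" = "assertion.repository"
  }

  # Only tokens minted for this repository are accepted.
  attribute_condition = "assertion.repository == \"florenciacomuzzi/k8s-environment-terraform\""
}
```

The workflow would then authenticate with the `google-github-actions/auth` action using the provider's resource name instead of a key file, after granting the CICD service account `roles/iam.workloadIdentityUser` for the pool's principal set.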

The Terraform state is stored in a Cloud Storage bucket. Terraform authenticates as the CICD service account; best practice is for Terraform to impersonate a separate, dedicated service account with just enough permissions to manage the GCP resources it owns.
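The impersonation pattern can be expressed directly in the backend configuration, so the CICD identity only needs `roles/iam.serviceAccountTokenCreator` on the narrowly scoped Terraform service account. A sketch with illustrative bucket and account names:

```hcl
# Sketch: GCS state backend with service account impersonation.
# Bucket, prefix, and service account email are illustrative.
terraform {
  backend "gcs" {
    bucket                      = "my-terraform-state-bucket"
    prefix                      = "env/prod"
    impersonate_service_account = "terraform@my-project.iam.gserviceaccount.com"
  }
}
```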

For more information on setting up CICD, refer to the CICD documentation.


Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| google | 6.27.0 |
| random | 3.7.1 |

Providers

No providers.

Modules

| Name | Source | Version |
|------|--------|---------|
| gke | ./modules/private-k8s-cluster | n/a |
| vpc | ./modules/vpc | n/a |

Resources

No resources.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| cluster_autoscaling_max_cpu | The maximum CPU usage across all node pools to trigger the cluster autoscaler to provision more node pools | `string` | `"8"` | no |
| cluster_autoscaling_max_memory_gb | The maximum memory usage across all node pools to trigger the cluster autoscaler to provision more node pools | `string` | `"32"` | no |
| cluster_name | The name of the GKE cluster | `string` | n/a | yes |
| cluster_secondary_range_cidr | The secondary range to use for pods | `string` | n/a | yes |
| cluster_secondary_range_name | The name of the secondary range to use for pods | `string` | `"gke-pods"` | no |
| jump_host_ip_address_name | Name of the IP address resource | `string` | `"jump-host-ip"` | no |
| jump_host_name | The name of the jump host VM | `string` | `"jump-host"` | no |
| master_authorized_cidr_blocks | The CIDR blocks allowed to connect to the master node | `list(object({ cidr_block = string, display_name = string }))` | `[{ cidr_block = "10.0.0.7/32", display_name = "Network 1" }, { cidr_block = "192.168.1.0/24", display_name = "Network 2" }]` | no |
| master_ipv4_cidr_block | The CIDR block for the master node | `string` | n/a | yes |
| network_name | The name of the VPC network | `string` | n/a | yes |
| node_pools | The node pools to create | `list(object({ name = string, node_disk_size_gb = string, node_machine_type = string, total_min_node_count = number, total_max_node_count = number }))` | n/a | yes |
| project_id | GCP project id | `string` | `"florenciacomuzzi"` | no |
| region | GCP region | `string` | `"us-east1"` | no |
| services_secondary_range_cidr | The secondary range to use for services | `string` | `"10.30.0.0/16"` | no |
| services_secondary_range_name | The name of the secondary range to use for services | `string` | `"gke-services"` | no |
| subnet_cidr | The CIDR block for the subnet | `string` | n/a | yes |
| subnet_name | The name of the subnet | `string` | n/a | yes |

Outputs

No outputs.
