K3s cluster deployment in AWS with Cluster.dev

In this article, we’ll walk through the steps to establish a production-ready K3s cluster, suitable for deployment in either a single-node or multi-node auto-scaling configuration.

Stack template for K3s in AWS

The K3s cluster will be configured with the following addons:

  • aws-asg-roller — to manage Auto Scaling group updates.
  • aws-ebs-csi-driver — to work with external persistent volumes.
  • local-path-provisioner — to create volumes on the EC2 instances.
  • ingress-nginx — to serve Ingress objects instead of the default Traefik.

If you have a DNS zone (domain) configured in Route53, the following addons are also installed:

  • external-dns — to manage DNS records in the zone based on Ingress objects.
  • cert-manager — to issue Let's Encrypt certificates.

Additionally, Cluster.dev will create all the required AWS resources, such as a VPC (you can also use your own), a Load Balancer, IAM roles and policies, Security Groups, and others.

What is K3s?

K3s stands out as a lightweight, certified Kubernetes distribution crafted for resource-constrained environments and edge computing. Originating from Rancher Labs, it is tailored to consume minimal memory and disk space, making it well-suited for applications spanning IoT devices, small clusters, CI/CD systems, and similar scenarios where a full-scale Kubernetes installation may be excessive.

Notably, K3s consolidates all necessary components into a single binary, eliminating unnecessary modules to minimize its footprint, and boasts a straightforward installation process.
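
For a sense of how lightweight it is: on a single Linux host, K3s can be installed with the quick-start one-liner from the official K3s documentation (shown below for comparison only; in this article the node bootstrap is handled by the stack template):

# Quick-start install on a single host, from the K3s docs (for comparison only):
curl -sfL https://get.k3s.io | sh -
# K3s bundles kubectl; verify the node is up:
sudo k3s kubectl get nodes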

What is Cluster.dev?

Cluster.dev is a cloud-native platform purpose-built to simplify the deployment and management of infrastructure-as-code (IaC) across diverse cloud providers. By harnessing contemporary IaC tools such as Terraform, Helm, and Kubernetes, it abstracts away the complexities of cloud configurations, ensuring uniform, replicable, and scalable deployments.

Cluster.dev boasts certain features that distinguish it in the cloud-native landscape:

  • Multi-cloud Support: Effortlessly deploy resources across AWS, Azure, GCP, and other providers, abstracting away provider-specific intricacies.
  • Kubernetes Integration: Swiftly initialize managed Kubernetes services like AWS-EKS and nimble distributions such as K3s.
  • Module-based Architecture: Leverage pre-built modules for common functions, including deploying static websites or establishing monitoring with Prometheus.
  • Automation & CI/CD Ready: Integrate seamlessly with existing CI/CD and GitOps pipelines, enhancing automation and minimizing manual interventions.

Prerequisites

For the detailed configuration of prerequisites and cloud account setup, refer to the documentation on setting up the AWS-K3s stack; a quick command-line check of the tooling is sketched after the list below.

  • Terraform version 1.4+
  • AWS account / AWS CLI installed and access configured.
  • AWS S3 bucket for storing states.
  • kubectl installed.
  • Cluster.dev client installed.
  • (Optional) DNS zone in AWS account.
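
A quick way to sanity-check these prerequisites from the command line (the bucket name and region below are placeholders; use your own):

$ terraform version                                    # expect 1.4 or newer
$ aws sts get-caller-identity                          # confirms AWS CLI credentials are configured
$ aws s3 mb s3://my-cdev-states --region eu-central-1  # create the state bucket (placeholder name)
$ kubectl version --client
$ cdev --help                                          # confirms the Cluster.dev client is installed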

Project configuration

Create a project directory locally, cd into it, and execute the command:

[~/tmpk3s]$ cdev project create https://github.com/shalb/cdev-aws-k3s
09:33:22 [INFO] Creating: kuard.yaml
09:33:22 [INFO] Creating: kuard.yaml
09:33:22 [INFO] Creating: backend.yaml
09:33:22 [INFO] Creating: demo-app.yaml
09:33:22 [INFO] Creating: demo-infra.yaml  # Stack describing infrastructure
09:33:22 [INFO] Creating: project.yaml

This will create a new project.

Edit the variables in the example files:

project.yaml — the main project config. Defines common global variables for the current project, such as organization, region, state bucket name, etc.

backend.yaml — configures the backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml.

demo-infra.yaml — describes the K3s stack configuration.

Additionally, there are sample-application-template and demo-app.yaml files that describe an example deployment of a sample Kubernetes application.
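
For orientation, a minimal project.yaml and backend.yaml might look roughly like the following sketch; the generated files in your project are the source of truth, and every value here is a placeholder to replace with your own:

# project.yaml — global variables shared by all stacks in the project
name: demo-project
kind: Project
variables:
  organization: my-organization     # placeholder
  region: eu-central-1              # placeholder
  state_bucket_name: cdev-states    # placeholder; an existing S3 bucket
  domain: example.com               # optional Route53 zone

# backend.yaml — where Cluster.dev (and Terraform) states are stored
name: aws-backend
kind: Backend
provider: s3
spec:
  bucket: {{ .project.variables.state_bucket_name }}
  region: {{ .project.variables.region }}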

Let’s look at the demo-infra.yaml sample file:

name: k3s-demo
template: https://github.com/shalb/cdev-aws-k3s?ref=main
kind: Stack
backend: aws-backend
variables:
  cluster_name: cdev-k3s-demo
  bucket: {{ .project.variables.state_bucket_name }}
  region: {{ .project.variables.region }}
  organization: {{ .project.variables.organization }}
  domain: {{ .project.variables.domain }}
  instance_type: "t3.medium"
  k3s_version: "v1.28.2+k3s1"
  env: "demo"
  public_key: "ssh-rsa 3mnUUoUrclNkr demo" # Change this.
  public_key_name: demo
  master_node_count: 1
  worker_node_groups:
    - name: "node_pool"
      min_size: 2
      max_size: 3
      instance_type: "t3.medium"

As shown, we specify the StackTemplate's location on GitHub with a tag or branch pinned. You can fork the StackTemplate or store it locally for further customization, such as adding any addons you need to your cluster (for example ArgoCD, Flux, etc.).

If you have a DNS zone (domain) in your Route53 account, you can set it in the Stack file.

Set the desired K3s release version.

Make sure to change the public key that will be deployed to the nodes, and configure the desired master and worker node group sizes. A dedicated EC2 Auto Scaling group will be created for each node pool.
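
If you don't already have an SSH key pair to use, a minimal way to generate one and grab the public part (the file path and comment are placeholders):

$ ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/cdev-k3s-demo -C "demo"
$ cat ~/.ssh/cdev-k3s-demo.pub   # paste this value into public_key in demo-infra.yaml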

Deployment of K3s project

Once all the necessary settings are configured, you can run `cdev plan` to review the changes and `cdev apply` to deploy the entire stack in one go.

[~/tmpk3s]$ cdev plan

Plan results:
+----------------------------------+
|         WILL BE DEPLOYED         |
+----------------------------------+
| k3s-demo.aws_key_pair            |
| k3s-demo.route53                 |
| k3s-demo.vpc                     |
| k3s-demo.iam-policy-external-dns |
| k3s-demo.k3s                     |
| k3s-demo.kubeconfig              |
| k3s-demo.ingress-nginx           |
| k3s-demo.external-dns            |
| k3s-demo.cert-manager            |
| k3s-demo.outputs                 |
| k3s-demo.cert-manager-issuer     |
| k3s-demo-app.kuard               |
+----------------------------------+

Here is the sample output for `cdev apply`:

[~/tmpk3s]$ cdev apply

10:03:52 [INFO] Applying unit 'k3s-demo.route53':
10:03:52 [INFO] Applying unit 'k3s-demo.iam-policy-external-dns':
10:03:52 [INFO] Applying unit 'k3s-demo.aws_key_pair':
10:03:52 [INFO] [k3s-demo][route53][init] In progress...
10:03:52 [INFO] [k3s-demo][route53][init] executing in progress... 0s
10:03:57 [INFO] [k3s-demo][route53][init] Success
10:03:57 [INFO] [k3s-demo][route53][apply] In progress...
10:04:22 [INFO] [k3s-demo][route53][apply] executing in progress... 25s
10:04:23 [INFO] [k3s-demo][iam-policy-external-dns][retrieving outputs] Success
10:04:23 [INFO] Applying unit 'k3s-demo.vpc':
10:04:23 [INFO] [k3s-demo][vpc][init] In progress...
10:04:23 [INFO] [k3s-demo][vpc][init] executing in progress... 0s
10:04:25 [INFO] [k3s-demo][aws_key_pair][apply] Success
10:04:30 [INFO] [k3s-demo][aws_key_pair][retrieving outputs] Success
10:04:30 [INFO] [k3s-demo][vpc][init] Success
10:04:30 [INFO] [k3s-demo][vpc][apply] In progress...
10:04:30 [INFO] [k3s-demo][vpc][apply] executing in progress... 0s
10:04:32 [INFO] [k3s-demo][route53][apply] executing in progress... 35s

Cluster.dev intelligently identifies dependencies between units, constructing an efficient execution graph. This graph ensures the sequential application of Terraform, Helm, and Kubernetes units in the necessary order, allowing for the seamless transfer of values between units during execution.
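
Inside a stack template this wiring is explicit: a unit consumes another unit's outputs through Cluster.dev's remoteState templating function. The fragment below is an illustrative sketch only — the unit names, module sources, and outputs are assumptions, not the actual contents of the AWS-K3s template:

units:
  - name: vpc
    type: tfmodule
    source: terraform-aws-modules/vpc/aws   # hypothetical module source
    inputs:
      name: {{ .variables.cluster_name }}
  - name: k3s
    type: tfmodule
    source: ./terraform/k3s                 # hypothetical local module
    inputs:
      vpc_id: {{ remoteState "this.vpc.vpc_id" }}  # value produced by the vpc unit above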

The outputs you will get after execution:

10:25:57 [INFO] Printer: 'k3s-demo.outputs', Output:
k3s_version = v1.28.2+k3s1
kubeconfig = /tmp/kubeconfig_cdev-k3s-demo
region = eu-central-1
cluster_name = cdev-k3s-demo

You can inspect your cluster with the kubeconfig file:

[~/tmpk3s]$ export KUBECONFIG=/tmp/kubeconfig_cdev-k3s-demo
[~/tmpk3s]$ kubectl get nodes
NAME                                           STATUS   ROLES                       AGE   VERSION
ip-10-8-0-48.eu-central-1.compute.internal     Ready    <none>                      26m   v1.28.2+k3s1
ip-10-8-11-142.eu-central-1.compute.internal   Ready    control-plane,etcd,master   27m   v1.28.2+k3s1
ip-10-8-4-183.eu-central-1.compute.internal    Ready    <none>                      26m   v1.28.2+k3s1

[~/tmpk3s]$ kubectl get all -A
NAMESPACE       NAME                                            READY   STATUS    RESTARTS   AGE
cert-manager    pod/cert-manager-56b95dfb77-xn9xx               1/1     Running   0          59s
cert-manager    pod/cert-manager-cainjector-7f75fcc746-brk45    1/1     Running   0          59s
cert-manager    pod/cert-manager-webhook-7f9dc7f889-ccbbt       1/1     Running   0          59s
default         pod/kuard-deployment-5bb67b58b7-trprg           1/1     Running   0          30s
external-dns    pod/external-dns-5cdc5f687-5zbxw                1/1     Running   0          48s
ingress-nginx   pod/ingress-nginx-controller-86dbc68954-ld6rs   1/1     Running   0          63s
kube-system     pod/aws-asg-roller-6c95885f4-j54ws              1/1     Running   0          12m
kube-system     pod/aws-cloud-controller-manager-ztqm2          1/1     Running   0          12m
kube-system     pod/coredns-6799fbcd5-fztnn                     1/1     Running   0          13m
kube-system     pod/ebs-csi-controller-846b59c468-dxzn5         5/5     Running   0          12m
kube-system     pod/ebs-csi-node-5h7nw                          3/3     Running   0          12m
kube-system     pod/local-path-provisioner-84db5d44d9-bxb5k     1/1     Running   0          13m
kube-system     pod/metrics-server-67c658944b-qlpt7             1/1     Running   0          13m
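
To confirm that the ingress stack is wired up end to end, you can also list the Ingress objects and the controller's LoadBalancer service:

[~/tmpk3s]$ kubectl get ingress -A
[~/tmpk3s]$ kubectl -n ingress-nginx get svc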

To destroy the K3s cluster and the created AWS resources, use this command:

[~/tmpk3s]$ cdev destroy

Plan results:
+----------------------------------+
|        WILL BE DESTROYED         |
+----------------------------------+
| k3s-demo-app.kuard               |
| k3s-demo.external-dns            |
| k3s-demo.ingress-nginx           |
| k3s-demo.cert-manager-issuer     |
| k3s-demo.outputs                 |
| k3s-demo.cert-manager            |
| k3s-demo.kubeconfig              |
| k3s-demo.k3s                     |
| k3s-demo.iam-policy-external-dns |
| k3s-demo.vpc                     |
| k3s-demo.aws_key_pair            |
| k3s-demo.route53                 |
+----------------------------------+

Continue?(yes/no) [no]: yes

10:51:40 [INFO] Destroying...
10:51:40 [INFO] Destroying unit 'k3s-demo-app.kuard'
10:51:43 [INFO] [k3s-demo-app][kuard][init] In progress...
10:51:43 [INFO] [k3s-demo-app][kuard][init] executing in progress... 0s
10:51:45 [INFO] [k3s-demo-app][kuard][init] Success
10:51:45 [INFO] [k3s-demo][kubeconfig][init] In progress...

Summary

In this article, we walked through a straightforward method for deploying production-ready K3s infrastructure on AWS using Cluster.dev. The entire process takes less than 20 minutes and results in a fully codified infrastructure aligned with IaC best practices.

From here, you can customize your deployment by editing StackTemplates, launching single-node clusters for testing, or extending the stack with your own set of addons or cloud resources. For further details, refer to the documentation at docs.cluster.dev, where you can learn how to create your own stacks on any cloud or even build installers for your own software.
