Kubernetes installation & configuration with k3s


Kubernetes is a system for automating the deployment, scaling and management of containerized applications.


K3s is a lightweight yet highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances, making it perfectly suited for on-prem scenarios.


This article will guide you through setting up a 3-master-node k3s cluster on 3 Linux machines, a simple yet robust HA setup.

The number of master nodes must always be odd: an even number adds no fault tolerance and increases the risk of quorum loss. With 3 masters, quorum is 2, so the cluster tolerates the loss of 1 node.


Requirements (per Linux machine)


Hardware:

  • CPU: >= 2 cores
  • RAM: >= 8GB
  • Disk: SSD >= 128GB


Software:



Pre-Installation (on your Windows machine)


1. Chocolatey is a software management automation tool for Windows. It packages installers, executables, zips, and scripts into compiled packages for easier management.


To install Chocolatey, run the following in a PowerShell window with administrator privileges:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
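
If the installation succeeded, a new PowerShell window should report the Chocolatey version:

choco -v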


2. kubectl is a Kubernetes command-line tool that lets you interact with Kubernetes clusters via their HTTP APIs.


To install kubectl, run the following in a cmd or PowerShell window with administrator privileges:

choco install kubernetes-cli
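
You can verify the client installation with:

kubectl version --client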


3. Helm simplifies the management, sharing and upgrading of Kubernetes applications.


To install Helm, run the following in a cmd or PowerShell window with administrator privileges:

choco install kubernetes-helm
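
And verify with:

helm version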



Installation (in each Linux machine)


1. Become root, and disable ufw if applicable

sudo -s

ufw disable
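
You can check the firewall state with ufw status. If your distribution ships firewalld instead (common on RHEL-based systems), disabling it as well is recommended; a sketch:

# stop firewalld now and keep it off after reboots
systemctl disable --now firewalld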


2. Set environment variables, replacing

  • somename with a simple, unique, identifiable name, e.g. onprem1 or myk3s1
  • sometoken with a secret string shared between the k3s cluster nodes; we recommend a 32-character alphanumeric string


export INSTALL_K3S_CHANNEL="stable"

export INSTALL_K3S_EXEC="--disable=local-storage --disable=servicelb --disable=traefik"

export INSTALL_K3S_NAME="somename"

export K3S_TOKEN="sometoken"

export INSTALL_K3S_VERSION="v1.30.0+k3s1"
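
If you need to generate sometoken, one option is a quick sketch using standard Linux tools (any sufficiently random 32-character alphanumeric string works):

# print a random 32-character alphanumeric string
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo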


You will use Longhorn, MetalLB and NGINX Ingress in place of the disabled built-in components (local-storage, servicelb and traefik).

Links to details on these required workloads are at the end of this article, as you need the cluster up and running first.


3. Install k3s and run cluster node


If it is the first one:

curl -sfL https://get.k3s.io | sh -s - server --cluster-init


On the others, replace masterip with the first master IP address:

curl -sfL https://get.k3s.io | sh -s - server --server https://masterip:6443


You can then check the status with

systemctl status k3s-somename.service
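
If the service is not active, the logs usually explain why. Note that the unit name includes the INSTALL_K3S_NAME you chose:

# follow the k3s service logs live
journalctl -u k3s-somename.service -f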


4. Add the following persistent environment variables


nano ~/.bashrc

# append these 3 lines

export EDITOR=nano

export KUBE_EDITOR=nano

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# then CTRL+X, Y and ENTER to save changes and exit

# and now apply

source ~/.bashrc


5. kubectl is already included with k3s, and with KUBECONFIG set you can run commands such as the following to check resources:


kubectl get node -o wide

kubectl top node

kubectl get pod -A -o wide

kubectl get svc -A


Maintenance

A quick restart of a node is handled gracefully by the cluster. When a node needs to be removed for longer, however, the right procedure is:


1. List nodes

kubectl get node -o wide


2. Stop receiving new workloads

kubectl cordon nodename


3. Move current workloads to other nodes (add --force if necessary)

kubectl drain nodename --ignore-daemonsets --delete-emptydir-data


4. Delete the node to prevent cluster quorum loss

kubectl delete node nodename


And then on the node itself:


1. Stop k3s service

sudo systemctl stop k3s-somename.service


2. Disable k3s service on boot

sudo systemctl disable k3s-somename.service


To re-join, re-run the initial installation command on the node, or simply re-enable and restart the service above.
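
For example, to bring such a node back (names as in the earlier steps):

# on the node itself
sudo systemctl enable k3s-somename.service

sudo systemctl start k3s-somename.service

# from any machine with cluster access, if the node still shows SchedulingDisabled
kubectl uncordon nodename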



To communicate with the k3s cluster from your local machine:

  1. Copy the contents of /etc/rancher/k3s/k3s.yaml from a master node; it will look something like:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0t...
        server: https://127.0.0.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        user: default
      name: default
    current-context: default
    kind: Config
    preferences: {}
    users:
    - name: default
      user:
        client-certificate-data: LS0t...
        client-key-data: LS0t...


    Paste the content into a text editor (e.g. Notepad++ or Visual Studio Code) on your machine


    For example, you can just output it on the terminal and copy directly, or download it with something like WinSCP:

    cat /etc/rancher/k3s/k3s.yaml
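
    Alternatively, if the OpenSSH client is available on your Windows machine (bundled with recent Windows 10/11), a sketch of downloading the file directly, assuming SSH access as root:

    scp root@masterip:/etc/rancher/k3s/k3s.yaml .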
  2. Replace (CTRL+H) the name default with something like k3s-cluster1

  3. Replace 127.0.0.1 with a node's IP address

    • For a more robust setup, edit the kubernetes service in the default namespace to type LoadBalancer after configuring MetalLB, and use that IP instead; otherwise your kubectl commands will only reach this one node (a sketch follows this list)
    • Note that k3s reverts this service type if a master node restarts
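
    A minimal sketch of that service edit, assuming MetalLB is already installed and configured (run it from PowerShell rather than cmd, as cmd treats the single quotes differently):

    kubectl -n default patch svc kubernetes -p '{"spec":{"type":"LoadBalancer"}}'

    # confirm the assigned external IP
    kubectl -n default get svc kubernetes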


  4. Check or create your Kubernetes local configuration file

    • Default Windows location: C:\Users\yourusername\.kube\config
    • If the file does not exist, create it. Ensure the file is named config only, with no extension
    • If the file already exists and you don't want to overwrite it, merge this into the existing file: add each item (cluster, context, user) to it, ensuring the names you chose (which replaced default) do not conflict with existing names
    • Example configuration:

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0t...
          server: https://someip:6443
        name: k3s-cluster1
      - cluster:
          certificate-authority-data: LS0t...
          server: https://someotherip:6443
        name: k3s-cluster2
      contexts:
      - context:
          cluster: k3s-cluster1
          user: k3s-cluster1
        name: k3s-cluster1
      - context:
          cluster: k3s-cluster2
          user: k3s-cluster2
        name: k3s-cluster2
      current-context: k3s-cluster1
      kind: Config
      preferences: {}
      users:
      - name: k3s-cluster1
        user:
          client-certificate-data: LS0t...
          client-key-data: LS0t...
      - name: k3s-cluster2
        user:
          client-certificate-data: LS0t...
          client-key-data: LS0t...

  5. Test your local configuration
    Use the following commands to test the kubeconfig and set the context for future commands:

           kubectl config get-contexts

           kubectl config use-context k3s-cluster1
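
    With the context set, a quick test; this should list the 3 nodes of your cluster:

           kubectl get node -o wide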


Optional but recommended: install kubens and kubectx

To simplify listing and changing cluster and namespace contexts:

  • Windows Chocolatey install:

    choco install kubens kubectx -y
  • Linux (Debian/Ubuntu) install via apt (the kubectx package provides both tools):

    sudo apt install kubectx -y


Usage Examples:

  • Set a new cluster or namespace context for future commands:

    kubectx k3s-cluster1

    kubens some-namespace


  • Or just list available contexts or namespaces:

    kubectx

    kubens

When a namespace context is set, there is no need for -n some-namespace or --namespace some-namespace in your commands


Installation of required workloads


1. Install and configure Longhorn

2. Install and configure MetalLB

3. Install and configure NGINX Ingress

