Install Kubernetes
Build Overview
Proxmox Virtual Environment - https://proxmox.com/en/proxmox-virtual-environment/overview
Debian 12 (Cloud Image) - https://cloud.debian.org/images/cloud/
Kubernetes 1.30.2 - https://kubernetes.io/releases/
Containerd 1.7.14 - https://containerd.io/releases/
RunC 1.1.13 - https://github.com/opencontainers/runc/releases
CNI Plugins 1.5.1 - https://github.com/containernetworking/plugins/releases
Calico CNI 3.28.0 - https://docs.tigera.io/calico/3.28/getting-started/kubernetes/quickstart
System Summary
3 Nodes, each with 2 vCPU, 4GB RAM, 32GB Disk
k8s-ctrlr : 192.168.8.80
k8s-node-1 : 192.168.8.81
k8s-node-2 : 192.168.8.82
Installation Process
The following installation steps must be carried out on all hosts. Steps specific to the Control Node are marked accordingly.
If you don’t have DNS, then you need to update /etc/hosts with the host names and IP addresses of all systems.
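For example, with the hosts in this build, the entries could be appended like this on each system (adjust if your IPs or hostnames differ):
# Add the cluster hosts to /etc/hosts
cat >> /etc/hosts <<EOF
192.168.8.80 k8s-ctrlr
192.168.8.81 k8s-node-1
192.168.8.82 k8s-node-2
EOF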
# Switch to the root user
sudo -i
# Add required kernel modules to the configuration for loading at boot
printf "overlay\nbr_netfilter\n" >> /etc/modules-load.d/containerd.conf
# Load the kernel modules immediately
modprobe overlay
modprobe br_netfilter
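To confirm that both modules are loaded, you can check with lsmod:
# Verify that the kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'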
# Configure sysctl parameters required for Kubernetes networking
printf "net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n" >> /etc/sysctl.d/99-kubernetes-cri.conf
# Apply the sysctl parameters without rebooting
sysctl --system
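You can read the values back to confirm they were applied:
# Verify the applied networking parameters
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables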
# Download the specified version of containerd
wget https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-amd64.tar.gz -P /tmp
# Extract the downloaded containerd archive to /usr/local
tar Cxzvf /usr/local /tmp/containerd-1.7.14-linux-amd64.tar.gz
# Download the systemd service file for containerd
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -P /etc/systemd/system/
# Reload systemd configuration and enable containerd to start on boot, then start it immediately
systemctl daemon-reload
systemctl enable --now containerd
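To confirm the service came up cleanly, check its status:
# Verify that containerd is active and running
systemctl status containerd --no-pager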
# Download the runc binary
wget https://github.com/opencontainers/runc/releases/download/v1.1.13/runc.amd64 -P /tmp/
# Install the runc binary to /usr/local/sbin with appropriate permissions
install -m 755 /tmp/runc.amd64 /usr/local/sbin/runc
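A quick version check confirms the binary is installed and on the path:
# Confirm the runc installation
runc --version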
# Download the CNI plugins archive
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz -P /tmp/
# Create the directory for CNI plugins and extract the downloaded archive there
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin /tmp/cni-plugins-linux-amd64-v1.5.1.tgz
# Create the containerd configuration directory
mkdir -p /etc/containerd
# Generate the default containerd configuration and save it to the config file
containerd config default | tee /etc/containerd/config.toml
We need to edit the file /etc/containerd/config.toml in your favorite editor. Search for "SystemdCgroup" and change the value to true, then restart the service with systemctl restart containerd.
Ensure that you are changing the correct value, which is under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options].
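If you prefer a one-liner, the following sed command makes the same change; this assumes the default configuration generated above, where SystemdCgroup appears only once:
# Enable the systemd cgroup driver for runc, then restart containerd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd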
# Ensure that swap is turned off
swapoff -a
# Edit /etc/fstab and comment out any line that refers to swap
# This can be done manually using a text editor like nano or vim
# For example: nano /etc/fstab
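Alternatively, a sed one-liner can comment out the swap entry; this assumes a standard fstab where the swap line contains the word swap, so review the file afterwards to be sure:
# Comment out any active swap entries in /etc/fstab
sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab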
# Update package lists and install required packages
apt update && apt install -y apt-transport-https ca-certificates curl gpg
# Create the directory for storing apt keyrings with appropriate permissions
mkdir -p -m 755 /etc/apt/keyrings
# Download and store the Kubernetes APT repository key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes APT repository to the sources list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
# Update package lists again to include the Kubernetes repository
apt update
# Reboot the system to apply changes
reboot
Once the system has rebooted, we shall continue with the installation.
# Switch to the root user
sudo -i
# Install specific versions of kubelet, kubeadm, and kubectl
apt install -y kubelet=1.30.2-1.1 kubeadm=1.30.2-1.1 kubectl=1.30.2-1.1
# Prevent the installed versions of kubelet, kubeadm, and kubectl from being automatically updated
apt-mark hold kubelet kubeadm kubectl
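As a quick sanity check, confirm that the expected versions were installed:
# Confirm the installed versions
kubeadm version
kubectl version --client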
Control Node Only
The following tasks are to be run on the control node only
Ensure that the Kubernetes version matches what you installed and that the --node-name value is set to the hostname of the Control Node.
kubeadm init --pod-network-cidr 10.10.0.0/16 --kubernetes-version 1.30.2 --node-name k8s-ctrlr
# As root, point kubectl at the admin kubeconfig
export KUBECONFIG=/etc/kubernetes/admin.conf
# For a non-root user, run the following commands instead
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
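At this point kubectl can talk to the cluster; note that the control node will report NotReady until the Calico CNI is installed in the next step:
# Verify API access (the node shows NotReady until the CNI is deployed)
kubectl get nodes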
# Setup Calico CNI 3.28.0
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
Modify the custom-resources.yaml file to update the CIDR to match the pod CIDR set in the previous step, using your favorite editor. The following is the modified YAML file:
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.10.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
Once the file has been modified, we apply the settings by running kubectl apply -f custom-resources.yaml, then validate that the system is operational by running kubectl get pods -A and confirming that all pods reach the Ready state.
Worker Nodes
The next step is to join the Worker Nodes to the cluster. For this, we need to run the following command on the Control Node:
kubeadm token create --print-join-command
This will print out a command that we need to run on each Worker Node. Once they have been added, we can verify with kubectl get nodes.
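The output will look something like the following; the token and hash shown here are placeholders, so use the values printed on your Control Node:
kubeadm join 192.168.8.80:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>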
Deploy Test Application
On the Controller we will create a file called pod.yml with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: linuxserver/nginx
    ports:
    - containerPort: 80
      name: "nginx-http"
We then apply the file to deploy the pod
david@k8s-ctrlr:~$ kubectl apply -f pod.yml
pod/nginx-example created
# Check to see that the Pod has been created
david@k8s-ctrlr:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-example 1/1 Running 0 85s
# View more details
david@k8s-ctrlr:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-example 1/1 Running 0 9m57s 10.10.109.66 k8s-node-1 <none> <none>
At present the pod is only accessible from inside the cluster network, and we can test that it deployed correctly by querying it with curl from within the cluster:
david@k8s-ctrlr:~$ curl 10.10.109.66
<html>
<head>
<title>Welcome to our server</title>
<style>
body{
font-family: Helvetica, Arial, sans-serif;
}
.message{
width:330px;
padding:20px 40px;
margin:0 auto;
background-color:#f9f9f9;
border:1px solid #ddd;
}
center{
margin:40px 0;
}
h1{
font-size: 18px;
line-height: 26px;
}
p{
font-size: 12px;
}
</style>
</head>
<body>
<div class="message">
<h1>Welcome to our server</h1>
<p>The website is currently being setup under this address.</p>
<p>For help and support, please contact: <a href="me@example.com">me@example.com</a></p>
</div>
</body>
</html>
We need to create a Service to make this accessible from outside the internal network, so we will create a file called service-nodeport.yml with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: nginx-example
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30080
    targetPort: nginx-http
  selector:
    app: nginx
We then apply the file and verify that the service is running
david@k8s-ctrlr:~$ kubectl apply -f service-nodeport.yml
service/nginx-example created
# Check that the service is online
david@k8s-ctrlr:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27h
nginx-example NodePort 10.101.177.98 <none> 80:30080/TCP 11s
We can now browse to any of the IP addresses of our controller or nodes on port 30080, the NodePort we specified, and see the website page:
❯ curl http://192.168.8.81:30080/
<html>
<head>
<title>Welcome to our server</title>
<style>
body{
font-family: Helvetica, Arial, sans-serif;
}
.message{
width:330px;
padding:20px 40px;
margin:0 auto;
background-color:#f9f9f9;
border:1px solid #ddd;
}
center{
margin:40px 0;
}
h1{
font-size: 18px;
line-height: 26px;
}
p{
font-size: 12px;
}
</style>
</head>
<body>
<div class="message">
<h1>Welcome to our server</h1>
<p>The website is currently being setup under this address.</p>
<p>For help and support, please contact: <a href="me@example.com">me@example.com</a></p>
</div>
</body>
</html>
Delete Test Application
To delete the test application, we need to run the following commands
# Delete Pod
david@k8s-ctrlr:~$ kubectl delete pod nginx-example
pod "nginx-example" deleted
# Delete Service
david@k8s-ctrlr:~$ kubectl delete service nginx-example
service "nginx-example" deleted
We have now successfully installed Kubernetes.