Building a basic Kubernetes cluster with kubeadm


This is just a basic walkthrough for installing a three-node Kubernetes cluster. The nodes can be hosted pretty much anywhere (AWS, GCP, VMware, etc.) as long as they are running Ubuntu 20.04.

One node needs to be designated as the master and the other two as workers.

To start with, the prerequisites need to be installed on all three nodes. This can be done via a script or by entering the commands below individually.

Download and run the script:

curl https://raw.githubusercontent.com/narmitag/kubernetes/main/basic_setup/setup_machine.sh -o setup_machine.sh
chmod +x setup_machine.sh
sudo ./setup_machine.sh

Or, to run the steps individually:

#Install containerd
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
sudo apt-get update && sudo apt-get install -y containerd

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

#Disable swap
sudo swapoff -a
sudo sed -e '/swap/ s/^#*/#/' -i /etc/fstab

#Install Kubernetes binaries
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update
sudo apt-get install -y kubelet=1.24.0-00 kubeadm=1.24.0-00 kubectl=1.24.0-00
sudo apt-mark hold kubelet kubeadm kubectl
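Either way, before moving on it is worth a quick check on each node that containerd is running and the Kubernetes tools landed at the expected version (these verification commands are my addition, not part of the script):

# Optional sanity check: containerd should be active, tools at 1.24.0
sudo systemctl status containerd --no-pager
kubeadm version
kubelet --version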

The cluster now needs to be bootstrapped from the master:

$ sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.24.0

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get no
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   50s   v1.24.0
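The node shows NotReady at this point because no pod network has been installed yet. If you want to see why, the system pods give it away (an optional check, not in the original steps); CoreDNS stays Pending until a CNI plugin is running:

$ kubectl get pods -n kube-system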

A network plugin needs to be installed to get the node ready (it may take a few minutes after installation for the node to become Ready). This will install Calico:

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

$ kubectl get no
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   87s   v1.24.0
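If the node sits in NotReady for a few minutes, it is usually just the Calico pods still starting up. An optional way to watch them (this assumes the standard Calico manifest, which labels its pods k8s-app=calico-node) is:

$ kubectl get pods -n kube-system -l k8s-app=calico-node -w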

The workers can now be connected, so get kubeadm to generate the required join command on the master:

$ kubeadm token create --print-join-command
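The output is a single ready-to-run join command. It typically has the following shape; the address, token and hash here are placeholders and will be specific to your cluster:

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>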

Then run the command generated above on each worker:

$ sudo kubeadm join ............

The master should now show three nodes:

$ kubectl get no
NAME       STATUS   ROLES           AGE    VERSION
master     Ready    control-plane   120s   v1.24.0
worker-1   Ready    <none>          77s    v1.24.0
worker-2   Ready    <none>          97s    v1.24.0
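As a final sanity check (my addition, not part of the original setup), you can deploy a throwaway workload, confirm the pods land on the workers, and then clean it up:

$ kubectl create deployment hello --image=nginx --replicas=2
$ kubectl get pods -o wide
$ kubectl delete deployment hello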