
Amila Senadheera

Tech enthusiast

Let's build low budget AWS at home


Published on April 17, 2022

No, I'm not kidding. We're gonna make it. If you have ever wanted to host a web-based project, you have probably used the solutions provided by AWS, Google Cloud, Azure, DigitalOcean, or the like (there are plenty of them now). What if we could build the hardware infrastructure on our own (on a very small scale compared to cloud service providers, of course) and host our projects on it? That is definitely one of the coolest projects a computer programmer or student can try. In this post, I will explain how I built my Raspberry Pi cluster. There are many ways people have done it, and most of the information behind the decisions I made was scattered here and there: an article, a book, a YouTube video, or a question on the Raspberry Pi Stack Exchange site. This blog post is the outcome of all of that effort.

Why did I build a Raspberry Pi cluster?

The most important thing when starting something is convincing ourselves that "I should do it". My main reasons were as follows:

  1. Azure free credits expired - I was learning Kubernetes, so I had to spin up a Kubernetes cluster with a cloud provider. I used Azure, but I was actively using it only on the weekends. Unfortunately, my free credits expired after four weeks, and thereafter I had to pay even to continue learning.
  2. "Hands-on experience" - I was so excited after watching some Raspberry Pi videos on YouTube. There are a hell of a lot of things you can do with a Raspberry Pi cluster.
  3. Self-hosting web apps - I can host my projects there. You can even host your small home business website there. Not bad, right?

Watch the following CNCF video if you are still not motivated enough. The Past, Present, and Future of Kubernetes on Raspberry Pi - Alex Ellis, OpenFaaS Ltd.

If you are reading this sentence I think we are good to go further in this post. Let's collect the parts we need.

Parts List

The last three items are only used while setting up each Node in the cluster; the other parts stay dedicated to this project.

Now, it's shopping time!! And/or waiting time until delivery : )

Four Raspberry Pi boards are needed. The number of boards is a decision that is totally up to you, depending on what you're gonna do with the cluster, but at least three of them are needed for it to really be called a cluster. When purchasing memory cards, the size can be your choice, but pick high-speed read/write cards. The SanDisk ones I have listed are rated for HD video recording in cameras, which makes them a perfect fit for our project.

When testing the cooling fan, make sure you keep the fan on a stable, flat surface, and please attach the fan grill. My first fan fell over while I was testing it in the vertical position; it lost four blades and I had to buy a new one.

Assembling the cluster

First things first. Install the heat sinks on the different ICs on each of the Raspberry Pi boards. You can watch this YouTube video as an aid for that task.

Next, you should assemble the Raspberry Pi boards into the stack one by one. Please make sure the cut in each acrylic board is on the side with the memory card slot, to ease inserting the memory cards. If you only notice this after the whole stack is built up, you will have to redo it. I'm just trying to save you time on even the very small things.

Then connect each Raspberry Pi board to the network switch using Ethernet cables, and connect your home router to the switch with another Ethernet cable. Then connect the USB Type-C cables to the Anker charging ports. Power up the switch and the cluster, and check the indicator lights on the network switch to confirm connectivity.

If everything seems OK, we can mount the cooling fan. It should go on the side of the stack where no output ports are exposed. Since the cluster has four boards, a 120mm cooling fan fits perfectly. You can tie it, together with the fan grill, to the stack (use the cable ties we purchased). Then power up the cooling fan.

Even though I stated it above in a few sentences, this will be a few hours of work. Now we have our hardware ready. Let's jump into the next steps. I will wait here until you finish it.

Flashing SD cards & booting up

The micro SD cards you purchased are going to be the hard disk counterpart for these small computers. Flashing makes an SD card bootable so a Raspberry Pi board can start up from it. We need an operating system, and there is a cool tool for this called Raspberry Pi Imager. Download and install it on your personal computer.

We are going to run everything in headless mode, so we don't need a GUI for the OS. Therefore, we can use a Linux server image with an LTS version, which is very lightweight for the Raspberry Pi boards. Download the Ubuntu Server 20.04.4 LTS image and unzip it to a folder.

  1. Insert the SD card into your personal computer
  2. Launch Raspberry Pi Imager
  3. Click on "Select OS" -> Select "Use Custom" -> Select the image downloaded above
  4. Click on "Choose Storage" -> Select the SD Card
  5. Click on "WRITE"

Now the magic will start to happen. It will take a few minutes. If you purchased a low-end SD card (which I do not recommend), it can take 20 to 30 minutes. We have four Raspberry Pi boards in the cluster, so you need to repeat the steps in this section four times, once for each board. I will wait here until you finish it. You can complete it and come back.

Now install a flashed SD card in one of the Raspberry Pi boards. Connect the keyboard and the monitor to the same board, then power up the board and the monitor. You will see the boot-up screen; the first boot takes a few minutes. The default username and password are both "ubuntu". It will ask you to change the password. Pick a strong one: everyone knows the default credentials, so a weak password makes it easy for someone to hack into your cluster.
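If you want a quick way to come up with a strong random password, here is a small convenience sketch (using openssl is my suggestion here, not part of the original setup):

```shell
# Print a random 16-character password (12 random bytes, base64-encoded)
openssl rand -base64 12
```

Run it once per Node and store the results in your password manager.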

Hostname and static IP configuration

We need to label our Nodes. From here onwards, I will call a single Pi computer a "Node". Since our final goal is to set up a Kubernetes cluster, there should be a master Node that acts as the control plane of the cluster. So let's label the first Node "master". Run the following command to set the hostname for the master Node. Name the other Nodes (the workers) "node-1", "node-2", and "node-3".

sudo hostnamectl set-hostname master

Run the following to check that the hostname configuration was successful:

hostnamectl

Output:

   Static hostname: master
         Icon name: computer
        Machine ID: 7b23f4d03f3c44038854a16e59a91060
           Boot ID: 275a266f15e74693a71f618998dccbae
  Operating System: Ubuntu 20.04.4 LTS
            Kernel: Linux 5.4.0-1058-raspi
      Architecture: arm64

Other than the static hostname it will output more information as well. Note that Architecture is arm64. Because we will have to build docker images for arm64 architecture.

Next, we need a static private IP for each Node. Normally, on every reboot, the home router assigns a dynamic IP address using DHCP. The Ubuntu version we installed manages network configuration with netplan. First of all, you need to know the IP address of your home router. It's 192.168.1.1 for me; you might have 10.0.0.1. Based on that, I picked 192.168.1.104 as the IP address of the master Node. To configure static private IPs, you need to edit the netplan configuration file.

Run the following to open that config file:

sudo nano /etc/netplan/50-cloud-init.yaml

Edit it similar to the below. Here eth0 is your hardware network interface. We disable dynamic IP configuration by setting dhcp4: no. The gateway4 value should be the IP address of your router. I have added Google's nameserver addresses under nameservers; if you have preferred ones, use them instead. Press Ctrl + o to save and Ctrl + x to exit the nano editor.

network:
    ethernets:
        eth0:
            dhcp4: no
            gateway4: 192.168.1.1
            addresses: [192.168.1.104/24]
            nameservers:
                addresses:
                - 8.8.4.4
                - 8.8.8.8
                search:
                - domain.local
    version: 2

Now run the following two commands to apply the changes, then reboot the Node:

sudo netplan generate
sudo netplan apply

You need to do the above for each Node.
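For example, on node-1 the file would be identical except for the address line (the per-node addresses here follow the host table used later in this post; adjust them to your own network):

```yaml
network:
    ethernets:
        eth0:
            dhcp4: no
            gateway4: 192.168.1.1
            addresses: [192.168.1.100/24]
            nameservers:
                addresses:
                - 8.8.4.4
                - 8.8.8.8
                search:
                - domain.local
    version: 2
```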

Next, we can add the hostnames and IPs to the /etc/hosts file. Run the following:

sudo nano /etc/hosts

Add the hostname entries as below. The same needs to be done on every Node:

192.168.1.100 node-1
192.168.1.101 node-2
192.168.1.102 node-3
192.168.1.104 master

Now the Nodes can ping each other by hostname:

ping node-1

Each Node should also have internet access. It can be verified by running:

ping www.google.com

Phew!! Now it's time to put the keyboard and the monitor aside. We can ssh into our Nodes from the personal computer and do the rest of the configuration there. Connect your personal computer to the home router using Ethernet or Wi-Fi, then run ssh <username>@<node-ip-address> to start a secure shell connection to any Node:

ssh ubuntu@192.168.1.104
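If you'd rather not type the password for every connection, you can set up key-based login. This is a convenience sketch of my own (the key file name and per-node copying are assumptions, not part of the original setup):

```shell
# Generate a key pair for the cluster (no passphrase, for convenience)
ssh-keygen -t ed25519 -f ./cluster_key -N "" -q

# Then copy the public key to each Node, e.g.:
#   ssh-copy-id -i ./cluster_key.pub ubuntu@192.168.1.104
# and connect without a password prompt:
#   ssh -i ./cluster_key ubuntu@192.168.1.104

ls cluster_key cluster_key.pub
```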

Installing Docker

  1. Update and Upgrade

Start by updating and upgrading the system:

sudo apt-get update && sudo apt-get upgrade

This ensures you install the latest version of the software.

  2. Install

We can install Docker using the convenience script, as described in the Docker documentation.

Download the installation script:

curl -fsSL https://get.docker.com -o get-docker.sh

Execute the script:

sudo sh get-docker.sh

This installs the required packages for your Linux distribution (Ubuntu, in our case).

  3. Check Docker Version

Check the version of Docker on the Raspberry Pi Node by executing:

sudo docker version

Installing Kubernetes

At this point, you should have all Nodes up, with static IP addresses, container runtime installed (Docker), and capable of accessing the internet. Now it's time to install Kubernetes on all of the nodes.

Using ssh, run the following commands on all Nodes to install the kubelet and kubeadm tools. You will need to be root to execute these commands; use sudo su to become the root user.

  1. Add the encryption key for the packages:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  2. Add the repository to your list of repositories:
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
  3. Update and Upgrade:
apt-get update && apt-get upgrade
  4. Install:
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Setting up the Cluster

ssh into the master Node and run:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.168.1.104  --apiserver-cert-extra-sans kubernetes.cluster.home

Note that --pod-network-cidr 10.244.0.0/16 defines the address space for the Pods, and --apiserver-advertise-address 192.168.1.104 advertises the master Node's IP address.

Eventually, this will print out a command for joining the other Nodes to the cluster. It will look something like:

kubeadm join 192.168.1.104:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Now, ssh into each of the worker Nodes in your cluster and run that command as root.

Configure kubectl

Now everything should be running, but how can we check? We need to be able to interact with the cluster, so set up kubectl (the Kubernetes command-line tool) on your personal computer.

Follow the installation steps for your OS.

Now you need to copy the details of the cluster's admin.conf file into your kubectl config file, which lives in the .kube directory. ssh to the "master" Node and run the following command to print the file content to the terminal.

sudo cat /etc/kubernetes/admin.conf

You will get an output similar to the below:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <cert-authority-data>
    server: https://192.168.1.104:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <cert-data>
    client-key-data: <key-data>

If you have been working with cloud Kubernetes providers, your kubectl config file already contains other cluster information. If you haven't used kubectl with any cloud provider before, you can simply replace the config file with the output content above.

If not, you have to update it as follows:

  1. Add a new cluster entry in your clusters section:
clusters:
...
...
- cluster:
    certificate-authority-data: <cert-authority-data>
    server: https://192.168.1.104:6443
  name: kubernetes
  2. Add the user to the users section:
users:
...
...
- name: kubernetes-admin
  user:
    client-certificate-data: <cert-data>
    client-key-data: <key-data>
  3. Add the context under the contexts section:
contexts:
...
...
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
  4. Set the current context to the context created in the previous step, which points to the Raspberry Pi cluster:
current-context: kubernetes-admin@kubernetes

If any existing cluster or user entry in your config already uses these names, pick new ones. For example, if you rename the user, make sure the context entry you add refers to the user by that new name.

Now you can run the following command to see the Node details:

kubectl get nodes

Output:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   24d   v1.23.4
node-1   Ready    <none>                 24d   v1.23.4
node-2   Ready    <none>                 24d   v1.23.4
node-3   Ready    <none>                 24d   v1.23.4

If everything went well, you should see all the status as Ready. Otherwise, inspect what the issue is:

kubectl describe node <node-name>

Pod to Pod networking

You have your node-level networking set up, but you need to set up the Pod-to-Pod networking. Since all of the nodes in your cluster are running on the same physical Ethernet network, you can simply set up the correct routing rules in the host kernels.

The easiest way to manage this is to use the Flannel tool created by CoreOS. Flannel supports a number of different routing modes; we will use the host-gw mode. You can download an example configuration from the Flannel project page.

The default configuration that CoreOS supplies uses vxlan mode instead, and also uses the AMD64 architecture instead of ARM.

Replace vxlan with host-gw and replace all instances of amd64 with arm. You can do this with the sed tool in place:

curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | sed "s/vxlan/host-gw/g" > kube-flannel.yaml
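If you want to convince yourself what those two sed filters do before running the full pipeline, here is a tiny self-contained demonstration (the sample lines are made up for illustration; they only mimic the shape of lines in kube-flannel.yml):

```shell
# Two sample lines shaped like the ones in the Flannel manifest
printf 'image: quay.io/coreos/flannel:v0.14.0-amd64\n"Type": "vxlan"\n' > sample.yml

# Apply the same substitutions as the pipeline above
sed -i "s/amd64/arm/g; s/vxlan/host-gw/g" sample.yml

cat sample.yml
# image: quay.io/coreos/flannel:v0.14.0-arm
# "Type": "host-gw"
```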

Once you have your updated kube-flannel.yaml file, you can create the Flannel networking setup with:

kubectl apply -f kube-flannel.yaml

This will create two objects, a ConfigMap used to configure Flannel and a DaemonSet that runs the actual Flannel daemon. You can inspect these with:

kubectl describe --namespace=kube-system configmaps/kube-flannel-cfg 
kubectl describe --namespace=kube-system daemonsets/kube-flannel-ds

Setting up Kubernetes Dashboard

Kubernetes has a rich web-based GUI, the Kubernetes Dashboard. You can install it by running:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

To log in to the Dashboard, you need an auth token. Follow Create An Authentication Token (RBAC) to create the admin-user service account first, then fetch its token:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
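For reference, the manifest that guide has you apply looks like this (reproduced from the Dashboard's sample-user documentation; save it as, say, dashboard-admin-user.yaml and run kubectl apply -f dashboard-admin-user.yaml):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Note that this binds cluster-admin rights to the account, which is fine for a home lab but too broad for anything shared.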

To access Dashboard from your local personal computer, you must create a secure channel to your Kubernetes cluster. Run the following command:

kubectl proxy

Access the Dashboard at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. Log in with the token we retrieved above.

That's all for this blog post. Keep in touch; I will be publishing more. Happy Coding!!

If you like it, share it!


© 2024 Developer Diary.

Made with ♥ using Gatsby, served to your browser from a home grown Raspberry Pi cluster.
contact-me@developerdiary.me