Tag Archives: Kubernetes

Deploying K8s on Raspberry Pi4 with Hypriot and Cloud-Init

I was reading the most excellent “Kubernetes Up & Running” book by Brendan Burns, Joe Beda and Kelsey Hightower earlier this month and decided to build a small K8s cluster on Raspberry Pi 4 (2GB) boards to learn. The appendix has a short chapter on how to do this, but a fair amount of detail is left for the reader to divine. What follows are my notes from deploying a four-node system. I hope you find them informative and useful!

The authors recommend using Hypriot since it comes with Docker built in. You can download the bits from their GitHub releases page: https://github.com/hypriot/image-builder-rpi/releases. Since I was going to be building four boxes, I wanted to use cloud-init to customize the images. The process goes something like this:

  1. Download the latest Hypriot image file (hypriotos-rpi-v1.11.5.img.zip as of this writing)
  2. Unzip it and burn the image to a Micro-SD card (see the sketch after this list)
  3. Create a specific cloud-init configuration for each system
  4. Replace the ‘user-data’ file in the root of the SD card with your specially crafted YAML file
  5. Boot the Raspberry Pi with the modified SD card and, if everything works out, you have a working, pre-configured system in about five minutes!
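
For steps 1 and 2, something along these lines works from a Linux shell. The download URL follows GitHub's usual release pattern for v1.11.5 (double-check it against the releases page), and /dev/sdX is a placeholder for your SD card device, so treat this as a sketch; you can also skip the dd step entirely and let Hypriot's flash tool do the burning, as shown later.

# Download and unpack the HypriotOS image (v1.11.5 as of this writing)
curl -LO https://github.com/hypriot/image-builder-rpi/releases/download/v1.11.5/hypriotos-rpi-v1.11.5.img.zip
unzip hypriotos-rpi-v1.11.5.img.zip

# Write it to the SD card -- /dev/sdX is a placeholder, double-check the device first!
sudo dd if=hypriotos-rpi-v1.11.5.img of=/dev/sdX bs=4M status=progress conv=fsync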

Step #3 is really the only one that requires a significant amount of work 😉

Building a cloud-init Script

So let's break down what I ended up with, section by section. The full example file is at the end of this article.

#cloud-config Each cloud-init file must begin with the #cloud-config line. Some YAML linters will replace this top line with “---”, in which case cloud-init will warn you that “File user-data needs to begin with ‘#cloud-config’” when you try to run it.
hostname: What you want the entry in /etc/hostname to reflect
manage_etc_hosts: {true, false} By default this is set to true in the Hypriot examples. I wanted to manage my hosts file manually, so I set this to false. You can read up on how the “true” setting works in the cloud-init documentation.
package_update: {true, false} I have this set to false since I specifically invoke an apt update while installing the K8s tools.
package_upgrade: {true, false} I have this set to false so I can manually control what gets loaded on the machines.
runcmd: This specifies commands that cloud-init should invoke during system customization. The “systemctl restart avahi-daemon” and “ifup wlan0” entries are there to get the wireless card up and running and the system name advertised on the network. The “systemctl daemon-reload” was necessary to pick up the DNS settings I added to /etc/systemd/resolved.conf. The last four commands download and install the administrative tools for K8s so they are ready to rock.

#cloud-config
hostname: kubernetes
manage_etc_hosts: false
package_update: false
package_upgrade: false
runcmd: 
  - "systemctl restart avahi-daemon"
  - "ifup wlan0"
  - "ifup eth0"
  - "systemctl daemon-reload"
  - "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -"
  - "echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list"
  - "apt-get update"
  - "apt-get install -y kubelet kubeadm kubectl"
 

The users section allows you to create users. You can read more about that here. I specified public SSH keys in this section so I can SSH into and between the K8s nodes without using a password. No, those aren’t my actual SSH keys 😉

users: 
  - 
    chpasswd: 
      expire: false
    gecos: "Hypriot Pirate"
    groups: "users,docker,video"
    lock_passwd: false
    name: aaron
    plain_text_passwd: password
    shell: /bin/bash
    ssh-authorized-keys: 
      - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpVazw0Hsh4p9Uuq/pM3HP0A3tGJuTTO4sRxFluu7byVVDMevhMLFZ80yEIS809jfiDM5YLc/o96GJhMSbrJ4eGa3sn1k9jGvXEXGAvPKsZk92DQAhubueWwOns0Pd/NccFa8vlgcHzfrxKNuI6ZtXsESM+2aIBV8LfWYx0s/StNSH09LwUnGkVkWVPivxJSjGWGtA/YAt4URfUgbpYnm40iJWuJZbxh1g8qAEGt2uNPEi5OBQOBWfpX5Ud/VI3YvYKjn2/1LpaxNSsNts9UJ5163Y8kXTkUt/iZZT1atA+IV6FsoaMLYqBfsAH7ChTr0h9MgpcJLRmWP/uAGLrfvL [email protected]"
      - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7fvwjRVd3y1AKFdBq9i1ja3MvxXxalLC7D1Ml6CH1cPBpgxMFnJPZTUza5fdY1i+NhBs4EqM73K4j6iSSNry7qVd+sL0rgmY7lvuIqcAG87R73bPxq84lU/RsqIDbAnFsXRcZYEX6xa/GsP6bFVyU3w9wWtMV7eiLjzFwIIjjFNheVt1Badn+ZnYf7X/s1uriXcTkArA28aD8uv5HB3VRVgUiLGMg1bRcDNkL+/lTVTR28a3sz9qFGeiBkOKnw8ymKYp6jGzIlobqGciZlEImlcwDPGXT3CuD4yZ9IYlk1/Jd9UwxxgJgk1vD+0QRJrHP02jBQHdVFAfy8rix5erl azuread\[email protected]"
    ssh_pwauth: true
    sudo: "ALL=(ALL) NOPASSWD:ALL"

The write_files: section allows you to write out files to the system during customization. Each stanza starts with the “-” character, followed by “content:” and whatever you want written to the file, and concludes with the “path:” statement indicating where in the file system the data should be written.

Since I’m using wireless for the “public” interfaces of my K8s nodes I needed to configure some details in three files to get wireless working. Those files are:

  • /etc/network/interfaces.d/wlan0
  • /etc/wpa_supplicant/wpa_supplicant.conf
  • /etc/network/interfaces

The first file I’m writing out is /etc/network/interfaces.d/wlan0. This just allows the interface to be hot-plugged, and the wpa-conf line points it at the wpa_supplicant configuration file. You can read all about that here.

write_files: 
  - 
    content: |
        allow-hotplug wlan0
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    path: /etc/network/interfaces.d/wlan0

The /etc/wpa_supplicant/wpa_supplicant.conf file contains all the details needed to connect to the wireless network. Details can be found here. You can use this Raspberry Pi WiFi Config Generator to get the correct values for the network={} block. Just for the sake of completeness:

proto could be either RSN (WPA2) or WPA (WPA1).
key_mgmt could be either WPA-PSK (most probably) or WPA-EAP (enterprise networks)
pairwise could be either CCMP (WPA2) or TKIP (WPA1)
auth_alg is most probably OPEN, other options are LEAP and SHARED

  - 
    content: |
        country=US
        ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
        update_config=1
        network={
        ssid="CasaDePatten"
        psk="wireless"
        proto=RSN
        key_mgmt=WPA-PSK
        pairwise=CCMP
        auth_alg=OPEN
        }
    path: /etc/wpa_supplicant/wpa_supplicant.conf
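
If you'd rather not use the web generator, the wpa_passphrase utility that ships with wpa_supplicant will also emit a network={} block you can paste in, with the passphrase hashed into a 256-bit psk. The SSID and passphrase here are just the example values from above.

# Prints a ready-to-paste network={} block with a hashed psk
wpa_passphrase "CasaDePatten" "wireless"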

The /etc/network/interfaces file contains the static IP addresses I’m assigning for my wireless and wired networks.

  - 
    content: |
        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet static
        address 192.168.3.50
        netmask 255.255.255.0
        gateway 192.168.3.1
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
        
        auto eth0
        iface eth0 inet static
        address 10.0.0.50
        netmask 255.255.255.0
    path: /etc/network/interfaces

The /etc/systemd/resolved.conf file holds my DNS settings. Since systemd is managing DNS, you can’t just edit /etc/resolv.conf directly or it will get overwritten. You can read more about systemd and DNS configuration here.

I didn’t try this particular example, but this may be another way to do the same thing. https://cloudinit.readthedocs.io/en/latest/topics/examples.html#configure-an-instances-resolv-conf

  - 
    content: |
        [Resolve]
        DNS=192.168.3.1
        FallbackDNS=1.1.1.1
    path: /etc/systemd/resolved.conf
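
Once a node is up, it's worth confirming that systemd-resolved actually picked up those settings. This is just a quick check I would run, assuming systemd-resolved is the active resolver on the image and the systemd-resolve client is installed:

# Restart the resolver and confirm which DNS servers it is actually using
sudo systemctl restart systemd-resolved
systemd-resolve --status | grep -A2 'DNS Servers'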

The last file I’m writing out is /etc/hosts. Remember that if you have manage_etc_hosts: true set in your user-data file, this will get overwritten.

  - 
    content: |
        127.0.0.1 localhost        
        10.0.0.50 kubernetes.cluster.home kubernetes
        10.0.0.51 node-1.cluster.home node-1
        10.0.0.52 node-2.cluster.home node-2
        10.0.0.53 node-3.cluster.home node-3
    path: /etc/hosts

Full cloud-init Example

So here is an example of the full cloud-init script I ended up building for my master node. Don’t worry, passwords and ssh keys have been changed to protect the innocent.

#cloud-config
hostname: kubernetes
manage_etc_hosts: false
package_update: false
package_upgrade: false
runcmd: 
  - "systemctl restart avahi-daemon"
  - "ifup wlan0"
  - "ifup eth0"
  - "systemctl daemon-reload"
  - "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -"
  - "echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list"
  - "apt-get update"
  - "apt-get install -y kubelet kubeadm kubectl kubernetes-cni"

users: 
  - 
    chpasswd: 
      expire: false
    gecos: "Hypriot Pirate"
    groups: "users,docker,video"
    lock_passwd: false
    name: aaron
    plain_text_passwd: password
    shell: /bin/bash
    ssh-authorized-keys: 
      - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCpVazw0Hsh4p9Uuq/pM3HP0A3tGJuTTO4sRxFluu7byVVDMevhMLFZ80yEIS809jfiDM5YLc/o96GJhMSbrJ4eGa3sn1k9jGvXEXGAvPKsZk92DQAhubueWwOns0Pd/NccFa8vlgcHzfrxKNuI6ZtXsESM+2aIBV8LfWYx0s/StNSH09LwUnGkVkWVPivxJSjGWGtA/YAt4URfUgbpYnm40iJWuJZbxh1g8qAEGt2uNPEi5OBQOBWfpX5Ud/VI3YvYKjn2/1LpaxNSsNts9UJ5163Y8kXTkUt/iZZT1atA+IV6FsoaMLYqBfsAH7ChTr0h9MgpcJLRmWP/uAGLZRbL [email protected]"
      - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7fvwjRVd3y1AKFdBq9i1ja3MvxXxalLC7D1Ml6CH1cPBpgxMFnJPZTUza5fdY1i+NhBs4EqM73K4j6iSSNry7qVd+sL0rgmY7lvuIqcAG87R73bPxq84lU/RsqIDbAnFsXRcZYEX6xa/GsP6bFVyU3w9wWtMV7eiLjzFwIIjjFNheVt1Badn+ZnYf7X/s1uriXcTkArA28aD8uv5HB3VRVgUiLGMg1bRcDNkL+/lTVTR28a3sz9qFGeiBkOKnw8ymKYp6jGzIlobqGciZlEImlcwDPGXT3CuD4yZ9IYlk1/Jd9UwxxgJgk1vD+0QRJrHP02jBQHdVFAfy8rix5erl azuread\[email protected]"
      - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDR1sDqGyQFb35XO8NQQ+7VAzsLpV9v62uo1dSBFs4SHZ5Djwfl5mri/mxyqvbpg1PO8TiYd+ieNTdDFnxpCOz3uMTfHegbu9AFC5o78Qo16PHywiJSvhnGqdoitFkMek+qxCmOn3puCEAseDHJ+0q9eFNkM+7w8EOqEJ+2y94AOERj+dAhXRig4CDi1IO/gpPKl1w5SkQcu/+8Y6fAV8If1brkRAN0OW+jv41kD0cNPRbSxbZA+wADi8p9JlEYSY/vZyYCBQpE3pWwpZGC60O6RtTjJ8gKM+4BCQ3cjtTGaEB0zvNaRA3glS3w/Gv4M7kuedbOgCFq+bIw0UUFaKlD [email protected]"
    ssh_pwauth: true
    sudo: "ALL=(ALL) NOPASSWD:ALL"
write_files: 
  - 
    content: |
        allow-hotplug wlan0
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    path: /etc/network/interfaces.d/wlan0
  - 
    content: |
        country=US
        ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
        update_config=1
        network={
        ssid="CasaDePatten"
        psk="wireless"
        proto=RSN
        key_mgmt=WPA-PSK
        pairwise=CCMP
        auth_alg=OPEN
        }
    path: /etc/wpa_supplicant/wpa_supplicant.conf
  - 
    content: |
        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet static
        address 192.168.3.50
        netmask 255.255.255.0
        gateway 192.168.3.1
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
        
        auto eth0
        iface eth0 inet static
        address 10.0.0.50
        netmask 255.255.255.0
    path: /etc/network/interfaces
  - 
    content: |
        [Resolve]
        DNS=192.168.3.1
        FallbackDNS=1.1.1.1
    path: /etc/systemd/resolved.conf
  - 
    content: |
        127.0.0.1 localhost        
        10.0.0.50 kubernetes.cluster.home kubernetes
        10.0.0.51 node-1.cluster.home node-1
        10.0.0.52 node-2.cluster.home node-2
        10.0.0.53 node-3.cluster.home node-3
    path: /etc/hosts

The images can be written to your SD cards using your favorite imaging tool. On Windows I use Win32DiskImager. On the Mac, I use the ‘flash’ utility provided by Hypriot. https://github.com/hypriot/flash

If you are using flash, you just grab the latest release and then run it like so:

% flash --userdata ./configs/node-3.yml --bootconf ./sample/no-uart-config.txt ~/Downloads/hypriotos-rpi-v1.11.5.img.zip
Using cached image /tmp/hypriotos-rpi-v1.11.5.img

Is /dev/disk2 correct? y
Unmounting /dev/disk2 ...
Unmount of all volumes on disk2 was successful
Unmount of all volumes on disk2 was successful
Flashing /tmp/hypriotos-rpi-v1.11.5.img to /dev/rdisk2 ...
1.27GiB 0:00:27 [46.9MiB/s] [====================================================================================================>] 100%            
0+20800 records in
0+20800 records out
1363148800 bytes transferred in 27.655770 secs (49289852 bytes/sec)
Mounting Disk
Mounting /dev/disk2 to customize...
Copying ./sample/no-uart-config.txt to /Volumes/HypriotOS/config.txt ...
Copying cloud-init ./configs/node-3.yml to /Volumes/HypriotOS/user-data ...
Unmounting /dev/disk2 ...
"disk2" ejected.
Finished.

Troubleshooting

You will want to run your YAML file through a linter. Super handy for making sure your indentation is correct and that all the right markup is present. http://www.yamllint.com/ is a fine one, but be advised, it replaces #cloud-config with “---” at the top of the file. You will need to manually change that back if you copy the YAML from this site.
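
If you would rather lint locally (and sidestep the “---” substitution entirely), the Python yamllint package works too. A minimal sketch, assuming pip3 is available on your workstation; note that yamllint only reports problems, it never rewrites the file:

pip3 install --user yamllint
yamllint ./configs/node-3.yml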

You can also check the YAML on the CLI using the built-in capabilities of cloud-init itself. You can ignore the “FutureWarning”. It’s a known bug in cloud-init 18.3 that has already been fixed in the 19.0 release.

$ cloud-init devel schema --config-file /boot/user-data
 /usr/lib/python3/dist-packages/cloudinit/config/cc_rsyslog.py:205: FutureWarning: Possible nested set at position 23
   r'^(?P[@]{0,2})'
 Valid cloud-config file /boot/user-data

Once the config file checks out, just burn a SD card for each node and replace the user-data file on each of them with a node-specific version you crafted. Power up the nodes and roughly five to ten minutes later you should have a functional set of Debian nodes ready to have Kubernetes installed.

Now, if something goes off the rails and you have to fix something in your user-data file, you can actually make the change and then re-run cloud-init instead of reflashing the SD card 🙂

$ sudo cloud-init clean
$ sudo cloud-init init

Remember that cloud-init only runs at first boot, so after re-initializing you will need to reboot in order for the changes to be applied.
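
Recent cloud-init releases can also fold the clean and the reboot into a single step. I haven't tested this shortcut myself, but it should be equivalent:

$ sudo cloud-init clean --reboot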

Once you have everything ironed out, get some tmux/iTerm2 love going and work on all your boxes at the same time ;-). I have a simple little tmux script I run from a WSL prompt on my Windows 10 box for this.

tmux new-session \; \
 select-pane -t 0 \; \
 split-window -v \; \
 split-window -h \; \
 select-pane -t 0 \; \
 split-window -h \; \
 select-pane -t 0 \; \
 send-keys 'ssh aaron@kubernetes' \; \
 select-pane -t 1 \; \
 send-keys 'ssh aaron@node-1' \; \
 select-pane -t 2 \; \
 send-keys 'ssh aaron@node-2' \; \
 select-pane -t 3 \; \
 send-keys 'ssh aaron@node-3' \; \
 select-pane -t 0 \; \
 bind C-p setw synchronize-panes
tmux panes for my four nodes
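
With that last binding in place, pressing the tmux prefix followed by C-p toggles synchronize-panes, so a command typed in one pane is sent to all four nodes at once.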

Installing Kubernetes

From here out, I’m just parroting the last page of instructions from the book.

#use kubeadm to bootstrap the K8s cluster
sudo kubeadm init --pod-network-cidr 10.0.0.0/24 \
--apiserver-advertise-address 10.0.0.50 \
--apiserver-cert-extra-sans kubernetes.cluster.home

<snip>
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.50:6443 --token ti443b.z9opmdg3dokqvcjw \
    --discovery-token-ca-cert-hash sha256:08273e133da472e7611a5ec537fd83c94619c015ca9a63b327e8393162ff6d15

$ sudo kubeadm join 10.0.0.50:6443 --token ti443b.z9opmdg3dokqvcjw --discovery-token-ca-cert-hash sha256:08273e133da472e7611a5ec537fd83c94619c015ca9a63b327e8393162ff6d15
<snip>
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

#I then installed Flannel per the book's instructions:
curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml > kube-flannel.yaml

#Replace amd64 and vxlan with RPI friendly values
sed -i 's/amd64/arm/g' kube-flannel.yaml
sed -i 's/vxlan/host-gw/g' kube-flannel.yaml

#Apply the config map
kubectl apply -f kube-flannel.yaml
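
Before checking the nodes, it's worth making sure the Flannel (and CoreDNS) pods actually reach Running. This isn't from the book, just a quick sanity check:

kubectl get pods -n kube-system -o wide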

$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
kubernetes   Ready    master   13m     v1.17.0
node-1       Ready    <none>   5m38s   v1.17.0
node-2       Ready    <none>   5m17s   v1.17.0
node-3       Ready    <none>   5m18s   v1.17.0

And there we have it.