Recently I was looking for a use for a bunch of ARM-based machines that I had from multiple cloud providers and wondered if I could use them to host a bunch of side projects. This multi-part series outlines the steps I took to set up this infrastructure and the challenges I encountered along the way.
In this first post, we'll set up the machines, establish the cluster, and configure the VPN needed to connect the different cloud providers.
Getting Started
To begin with, we need to map out what we are actually trying to achieve. To do this I created a diagram in Excalidraw of what the end setup should look like.
You can see that the cluster consists of three nodes, one master and two workers, each hosted with a different cloud provider. A VPN will be needed so the machines can reach each other without having to expose ports publicly.
As the machines are all ARM-based and fairly low spec, we will use K3s due to it being better suited to the task. For the VPN we will use WireGuard, as it's already included in newer versions of the Linux kernel and doesn't require any outside services. I ended up using AlmaLinux on the master and Oracle Linux on the others, both of which are RHEL-based.
Some considerations to make before attempting this include keeping the nodes geographically close to one another to keep latency down, as well as the security implications of linking machines across providers.
DISCLAIMER: This infrastructure is not production ready; please think carefully about any issues before using it in a production environment.
The VPN
Let's start by setting up the WireGuard VPN so the nodes can talk to each other as if they were on the same local network. To do this we will need to enable IP forwarding on each of the nodes, which can be done with the following commands, executed on every node.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf echo "net.ipv4.conf.all.proxy_arp = 1" >> /etc/sysctl.conf sysctl -p /etc/sysctl.conf
Once that has been done we will need to set the hostname for each of the nodes. This can be done with the following command (replace HOSTNAME with each node's hostname; I used k3s-master, k3s-node-1, and k3s-node-2).
hostnamectl set-hostname HOSTNAME
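With the hostnames mentioned above, that works out to running one of the following on each machine:

hostnamectl set-hostname k3s-master   # on the master
hostnamectl set-hostname k3s-node-1   # on the first worker
hostnamectl set-hostname k3s-node-2   # on the second worker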
Next we need to add some rules to iptables. These accept established connections, allow traffic to be forwarded between peers on the WireGuard interface, and masquerade traffic from the VPN subnet leaving via the public interface (replace eth0 below if your public interface has a different name).
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i wg0 -o wg0 -m conntrack --ctstate NEW -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
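Keep in mind that rules added with iptables directly won't survive a reboot on their own. One way to persist them on RHEL-based systems (an assumption about your setup; you may already have firewalld or another persistence mechanism in place) is the iptables-services package:

yum install iptables-services
systemctl enable iptables
service iptables save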
After that we will need to install wireguard-tools, which we will use to generate the private and public keys.
yum install wireguard-tools
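Depending on the distribution release, wireguard-tools may not be in the default repositories. If the install fails, it may be available via EPEL (this varies by distribution, so treat the below as a hint rather than a guaranteed fix):

yum install epel-release
yum install wireguard-tools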
We can now generate the public and private keys we will use in each of the WireGuard config files.
wg genkey | tee privatekey | wg pubkey > publickey
Once the keys are generated we can read them with cat publickey and cat privatekey; we will need these from each machine for the next step.
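Since the private key sits in a plain file, it's worth making sure it isn't readable by other users (the wg man page recommends a restrictive umask when generating keys):

chmod 600 privatekey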
Next we set up the WireGuard config, which is different for each node in the cluster. To do this, use an editor like nano or vim to create a file at /etc/wireguard/wg0.conf with the contents below that match the node you are on.
Master node:

[Interface]
PrivateKey = MASTER_PRIVATE_KEY
Address = 192.168.1.1
ListenPort = 5418

[Peer]
PublicKey = NODE_1_PUBLIC_KEY
Endpoint = NODE_1_IP:5418
AllowedIPs = 192.168.1.2/32

[Peer]
PublicKey = NODE_2_PUBLIC_KEY
Endpoint = NODE_2_IP:5418
AllowedIPs = 192.168.1.3/32
Node 1:

[Interface]
PrivateKey = NODE_1_PRIVATE_KEY
Address = 192.168.1.2
ListenPort = 5418

[Peer]
PublicKey = MASTER_PUBLIC_KEY
Endpoint = MASTER_IP:5418
AllowedIPs = 192.168.1.1/32

[Peer]
PublicKey = NODE_2_PUBLIC_KEY
Endpoint = NODE_2_IP:5418
AllowedIPs = 192.168.1.3/32
Node 2:

[Interface]
PrivateKey = NODE_2_PRIVATE_KEY
Address = 192.168.1.3
ListenPort = 5418

[Peer]
PublicKey = MASTER_PUBLIC_KEY
Endpoint = MASTER_IP:5418
AllowedIPs = 192.168.1.1/32

[Peer]
PublicKey = NODE_1_PUBLIC_KEY
Endpoint = NODE_1_IP:5418
AllowedIPs = 192.168.1.2/32
Once the configuration has been added to each of the servers, the following command needs to be run on every machine, which will start WireGuard with the wg0 configuration.
wg-quick up wg0
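You can verify the interface came up and see the configured peers with wg show; once traffic starts flowing it will also report a latest handshake time for each peer.

wg show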
Now we can try pinging the other machines in the mesh VPN network with the following commands (these are the addresses to ping from the master; adjust them for whichever node you are on). Make sure you do this on every machine, and if any machine is unreachable, check that UDP port 5418 is allowed in both the OS firewall and your cloud provider's portal.
ping 192.168.1.2
ping 192.168.1.3
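If a node is unreachable and your images use firewalld (an assumption; adjust accordingly if you manage iptables directly), the UDP port can be opened like this:

firewall-cmd --permanent --add-port=5418/udp
firewall-cmd --reload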
Now that we can see our VPN is working, we need to set it to run on boot. We can also apply any updates to the configuration file without tearing the interface down using wg syncconf. Both are done with these commands.
systemctl enable wg-quick@wg0
wg syncconf wg0 <(wg-quick strip wg0)
Setting Up K3s
Master Node Install:
curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true sh -s - --node-external-ip {SERVER_EXTERNAL_IP} --advertise-address {SERVER_EXTERNAL_IP} --node-ip 192.168.1.1 --flannel-iface wg0
Flags used:
--node-external-ip
This is set to the machine's external IP. Specifying it ensures that things still work correctly when the IP is not bound directly to the network card, as is common with cloud providers.
--advertise-address
This flag is also set to the external IP address of the machine and is the address the server advertises to other members of the cluster and to kubectl.
--node-ip
This is set to the WireGuard IP address we assigned to the machine in the VPN setup above.
--flannel-iface
This specifies the network interface used between nodes; we use wg0 as that is the interface our WireGuard VPN runs on.
Once you have run the command and k3s has installed, start the service with systemctl start k3s (we passed INSTALL_K3S_SKIP_START=true, so it doesn't start automatically), then run systemctl status k3s to check that k3s is running. If it is, run the following command and check that the pods are all in the Running state.
kubectl get pods -A
On the master node you will also need to get the k3s token, which we will use when installing k3s on the other machines. This can be done with the following command.
cat /var/lib/rancher/k3s/server/node-token
Node 1 Install:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.1:6443 K3S_TOKEN={K3S_TOKEN} sh -s - --node-external-ip {SERVER_EXTERNAL_IP} --node-ip 192.168.1.2 --flannel-iface wg0
Flags used:
K3S_URL
This is the address of the master node over the WireGuard network; the K3s API server listens on port 6443 by default.
K3S_TOKEN
This is the token we extracted from the master node in an earlier step.
--node-external-ip
This is set to the machine's external IP. Specifying it ensures that things still work correctly when the IP is not bound directly to the network card, as is common with cloud providers.
--node-ip
This is set to the WireGuard IP address we assigned to the machine in the VPN setup above.
--flannel-iface
This specifies the network interface used between nodes; we use wg0 as that is the interface our WireGuard VPN runs on.
Node 2 Install:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.1:6443 K3S_TOKEN={K3S_TOKEN} sh -s - --node-external-ip {SERVER_EXTERNAL_IP} --node-ip 192.168.1.3 --flannel-iface wg0
This is almost identical to the node 1 install; just make sure that the external IP and node IP match your WireGuard setup.
Once each of the nodes has been installed you should be able to go back to the master node and run the following command to check the status of the cluster.
kubectl get nodes -o wide
If you see all the machines as Ready with the correct IPs, congratulations, you now have a working cluster. In the next post we will go over setting up ArgoCD on the cluster so we can follow a GitOps flow.
Extras:
Litestream can also be used to back up the K3s SQLite database. In the example below I use Backblaze B2 instead of S3, but check out the Litestream site for how to set up other storage providers (https://litestream.io/guides/).
# Configure Litestream to replicate the k3s database to Backblaze B2 (S3-compatible)
cat > /etc/litestream.yml << END
access-key-id: {Backblaze Key ID}
secret-access-key: {Backblaze Application Key}
dbs:
  - path: /var/lib/rancher/k3s/server/db/state.db
    replicas:
      - type: s3
        bucket: {Backblaze Bucket Name}
        path: db
        endpoint: {Backblaze Endpoint}
        force-path-style: true
END

# Install Litestream (pick the release asset matching your architecture;
# as these machines are ARM we grab the arm64 static build)
curl -o /tmp/litestream.tar.gz -fsSL https://github.com/benbjohnson/litestream/releases/download/v0.3.4/litestream-v0.3.4-linux-arm64-static.tar.gz
tar -xzf /tmp/litestream.tar.gz -C /tmp
mv /tmp/litestream /usr/local/bin

# Create a systemd service for Litestream
cat > /etc/systemd/system/litestream.service << END
[Unit]
Description=Litestream

[Service]
Restart=always
ExecStart=/usr/local/bin/litestream replicate

[Install]
WantedBy=multi-user.target
END

# Restore the database if a replica already exists
litestream restore -if-replica-exists /var/lib/rancher/k3s/server/db/state.db

# Start k3s now that the database is in place
systemctl start k3s

# Enable and start Litestream
systemctl enable litestream
systemctl start litestream
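Once Litestream is running you can sanity-check that replication is working (assuming the database path used in the config above):

litestream databases
litestream snapshots /var/lib/rancher/k3s/server/db/state.db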