Elemental the command line way
Follow this guide to get an auto-deployed cluster via RKE2/K3s, managed by Rancher, with nothing more than an Elemental Teal ISO.
Prerequisites
- A Rancher server (v2.7.0 or later) configured (server-url set)
  - To configure the Rancher server-url please check the Rancher docs
- A machine (bare metal or virtualized) with TPM 2.0
  - Hint 1: Libvirt allows setting virtual TPMs for virtual machines (example here)
  - Hint 2: You can enable TPM emulation on bare metal machines missing the TPM 2.0 module (example here)
  - Hint 3: Make sure you're using UEFI (not BIOS) on x86-64, or the ISO won't boot
  - Hint 4: A minimum volume size of 25 GB is recommended. See the Elemental Teal partition table for more details
  - Hint 5: CPU and RAM requirements depend on the Kubernetes version installed, for example K3s or RKE2
- Helm Package Manager (https://helm.sh/)
- For ARM (aarch64): one SD card (32 GB or more, must be fast; 40 MB/s write speed is acceptable) and a USB stick for installation
Install Elemental Operator
The elemental-operator is the management endpoint, running on the management cluster and taking care of creating inventories, registrations for machines and much more.
We will use the Helm package manager to install the elemental-operator chart into our cluster.
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator-crds oci://registry.suse.com/rancher/elemental-operator-crds-chart
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator oci://registry.suse.com/rancher/elemental-operator-chart
After a few seconds you should see the operator pod appear in the cattle-elemental-system namespace:
kubectl get pods -n cattle-elemental-system
NAME READY STATUS RESTARTS AGE
elemental-operator-64f88fc695-b8qhn 1/1 Running 0 16s
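If you prefer a single command that blocks until the operator is up, you can also wait for the deployment rollout to complete; the deployment name below is assumed from the pod name shown above:
kubectl rollout status deployment/elemental-operator -n cattle-elemental-system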
The Elemental Operator chart is distributed via an OCI registry: Helm supports OCI-based registries starting from the v3.8.0 release.
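Since the charts above are pulled from an OCI registry, you may want to double check your local Helm version first:
helm version --short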
When upgrading from an elemental-operator release embedding the Elemental CRDs (version < 1.2.4), the elemental-operator-crds chart installation will fail. You will need to upgrade the elemental-operator chart first, and only then install the elemental-operator-crds chart.
Non-stable installations
Besides the Helm charts listed above, there are two other non-stable versions available.
- Staging: refers to the latest tagged release from GitHub. This is documented in the Next pages.
- Development: refers to the 'tip of HEAD' from GitHub. This is the ongoing development version and changes constantly.
- Staging version (x86-64, ARM64 (Raspberry Pi 4))
- Development version (x86-64, ARM64 (Raspberry Pi 4))
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator-crds oci://registry.opensuse.org/isv/rancher/elemental/staging/charts/rancher/elemental-operator-crds-chart
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator oci://registry.opensuse.org/isv/rancher/elemental/staging/charts/rancher/elemental-operator-chart
The development version is not recommended for production environments. We welcome feedback via Slack or GitHub issues, but it may be unstable and contain experimental features that can be dropped without notice.
helm upgrade --create-namespace -n cattle-elemental-system --install elemental-operator-crds oci://registry.opensuse.org/isv/rancher/elemental/dev/charts/rancher/elemental-operator-crds-chart
helm upgrade --create-namespace -n cattle-elemental-system --install --set image.imagePullPolicy=Always elemental-operator oci://registry.opensuse.org/isv/rancher/elemental/dev/charts/rancher/elemental-operator-chart
Installation options
There are a few options that can be set when installing the chart, but they are out of scope for this document. You can see all of them in the chart's values.yaml.
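If you want to review those options before installing, one quick way is to dump the chart defaults locally; this assumes the same stable chart location used above:
helm show values oci://registry.suse.com/rancher/elemental-operator-chart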
Prepare your Kubernetes resources
Node deployment starts with a MachineRegistration, identifying a set of machines sharing the same configuration (disk drives, network, etc.). The MachineRegistration is needed to perform the deployment of the Elemental OS on the target hosts. When booting up, each host registers with the Elemental Operator, which tracks the new host with a MachineInventory resource.
Then it continues with a Cluster resource that uses a MachineInventorySelectorTemplate to know which machines belong to that cluster.
This selector is a simple matcher based on labels set in the MachineInventory: if your selector matches on the label cluster-id with a value cluster-id-val, and your MachineInventory has that same cluster-id:cluster-id-val label, it will match and be bootstrapped as part of the cluster.
In this quickstart we are going to deploy the resources to provision a cluster named volcano that will match MachineInventories with the label element:fire.
- Manually creating the resource yamls
- Using quickstart files from Elemental docs repo directly
You will need to create the following files:
apiVersion: elemental.cattle.io/v1beta1
kind: MachineInventorySelectorTemplate
metadata:
  name: fire-machine-selector
  namespace: fleet-default
spec:
  template:
    spec:
      selector:
        matchExpressions:
          - key: element
            operator: In
            values: [ 'fire' ]
As you can see, this is a very simple selector that looks for MachineInventories having a label with the key element and the value fire.
kind: Cluster
apiVersion: provisioning.cattle.io/v1
metadata:
  name: volcano
  namespace: fleet-default
spec:
  rkeConfig:
    machineGlobalConfig:
      etcd-expose-metrics: false
      profile: null
    machinePools:
      - controlPlaneRole: true
        etcdRole: true
        machineConfigRef:
          apiVersion: elemental.cattle.io/v1beta1
          kind: MachineInventorySelectorTemplate
          name: fire-machine-selector
        name: fire-pool
        quantity: 1
        unhealthyNodeTimeout: 0s
        workerRole: true
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: false
    registries: {}
  kubernetesVersion: v1.24.8+k3s1
As you can see, the machineConfigRef is of kind MachineInventorySelectorTemplate with the name fire-machine-selector: it matches the selector we created.
You can get more information about cluster options like machineGlobalConfig or machineSelectorConfig directly in the Rancher Manager documentation.
- Registration
- Registration for Raspberry Pi
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  config:
    cloud-config:
      users:
        - name: root
          passwd: root
    elemental:
      install:
        reboot: true
        device: /dev/sda
        debug: true
  machineInventoryLabels:
    element: fire
    manufacturer: "${System Information/Manufacturer}"
    productName: "${System Information/Product Name}"
    serialNumber: "${System Information/Serial Number}"
    machineUUID: "${System Information/UUID}"
apiVersion: elemental.cattle.io/v1beta1
kind: MachineRegistration
metadata:
  name: fire-nodes
  namespace: fleet-default
spec:
  config:
    cloud-config:
      users:
        - name: root
          passwd: root
    elemental:
      install:
        reboot: true
        device: /dev/mmcblk0
        debug: true
        disable-boot-entry: true
      registration:
        emulate-tpm: true
  machineInventoryLabels:
    element: fire
    manufacturer: "${System Information/Manufacturer}"
    productName: "${System Information/Product Name}"
    serialNumber: "${System Information/Serial Number}"
    machineUUID: "${System Information/UUID}"
For deployment on a Raspberry Pi, you need to enable emulated TPM (unless you have a hardware TPM for the Raspberry Pi). You also need to disable writing to the EFI store (since the Raspberry Pi doesn't have one) via disable-boot-entry: true.
The MachineRegistration defines the registration and installation configuration. Once created, the Elemental Operator exposes a unique URL to be used with the elemental-register binary to reach out to the management cluster and register the machine during installation: if the registration is successful, the operator creates a MachineInventory tracking the machine, which can be used to provision the machine as a node of our cluster.
We define the label matching our selector here, although it can also be added later to the created MachineInventories.
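As a quick sanity check, once the MachineRegistration has been applied (see the kubectl apply steps below) you can verify that the operator exposed the registration URL; the resource name and namespace match the examples in this guide:
kubectl get machineregistration -n fleet-default fire-nodes -o jsonpath="{.status.registrationURL}"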
Make sure to modify the registration.yaml above so that the install device points to a valid device for your node configuration (e.g. /dev/sda, /dev/vda, /dev/nvme0, etc.). The SD card on a Raspberry Pi is usually /dev/mmcblk0.
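If you are unsure which device name to use, listing the block devices on the target host can help; this assumes lsblk is available, as it usually is on Linux live systems:
lsblk -o NAME,SIZE,TYPE,MODEL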
- Seed Image (x86_64)
- Seed Image for Raspberry Pi
apiVersion: elemental.cattle.io/v1beta1
kind: SeedImage
metadata:
  name: fire-img
  namespace: fleet-default
spec:
  baseImage: registry.suse.com/suse/sle-micro-iso/5.5:2.0.2
  registrationRef:
    apiVersion: elemental.cattle.io/v1beta1
    kind: MachineRegistration
    name: fire-nodes
    namespace: fleet-default
The SeedImage resource is required to generate the seed image (e.g. a bootable ISO) that will boot and start the Elemental provisioning on the target machines.
Now that we have defined all the configuration files, let's apply them to create the proper resources in Kubernetes:
kubectl apply -f selector.yaml
kubectl apply -f cluster.yaml
kubectl apply -f registration.yaml
kubectl apply -f seedimage.yaml
The SeedImage resource, which automates the creation of an Elemental bootable image (the seed image), does not support Raspberry Pi yet.
We will generate a seed image manually in the next section.
Now that we have defined all the configuration files, let's apply them to create the proper resources in Kubernetes:
kubectl apply -f selector.yaml
kubectl apply -f cluster.yaml
kubectl apply -f registration.yaml
You can directly apply the quickstart example resource files from the Elemental docs repository.
The quickstart example resource files assume the default storage of the target host is mapped to /dev/sda. If your host storage device file is different, you have to change the registration.yaml file before applying it, updating config.elemental.install.device accordingly.
kubectl apply -f https://raw.githubusercontent.com/rancher/elemental-docs/main/examples/quickstart/selector.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/elemental-docs/main/examples/quickstart/cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/elemental-docs/main/examples/quickstart/registration.yaml
kubectl apply -f https://raw.githubusercontent.com/rancher/elemental-docs/main/examples/quickstart/seedimage.yaml  # not for aarch64 yet
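Whichever way you applied them, you can check that the resources were created, for example:
kubectl get machineregistrations -n fleet-default
kubectl get seedimages -n fleet-default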
Preparing the installation (seed) image
This is the last step: you need an Elemental Teal seed image that includes the initial registration config, so it can be auto-registered, installed and fully deployed as part of your cluster. The initial registration config file is generated when you create a MachineRegistration.
You can download it with:
wget --no-check-certificate `kubectl get machineregistration -n fleet-default fire-nodes -o jsonpath="{.status.registrationURL}"` -O initial-registration.yaml
The contents of the registration config file are nothing more than the registration URL the node needs to register, the proper server certificate, and a few options for the registration process.
Once generated, a seed image can be used to provision any number of machines.
- Downloading the quickstart ISO
- Preparing the seed image (x86_64) manually
- Preparing the seed image (aarch64) manually
The seed image created by the SeedImage resource above can be downloaded as an ISO via the following script:
kubectl wait --for=condition=ready pod -n fleet-default fire-img
wget --no-check-certificate `kubectl get seedimage -n fleet-default fire-img -o jsonpath="{.status.downloadURL}"` -O elemental-teal.x86_64.iso
The first command waits for the ISO to be built and ready, the second one downloads it into the current directory with the name elemental-teal.x86_64.iso.
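If your target is a bare-metal host, a common way to boot the ISO is to write it to a USB stick, for example with dd. This is only a sketch: the command is destructive for the target device, and /dev/sdX must be replaced with your actual USB device:
# WARNING: overwrites /dev/sdX entirely
sudo dd if=elemental-teal.x86_64.iso of=/dev/sdX bs=4M status=progress conv=fsync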
If you created a customized ISO, you can use the elemental-iso-add-registration script to add the registration config file:
elemental-iso-add-registration initial-registration.yaml my-customized.iso
Elemental's support for Raspberry Pi is primarily for demonstration purposes at this point. Therefore the installation process is modelled similarly to x86-64: you boot from a seed image (a USB stick in this case) and install to a storage medium (the SD card for the Raspberry Pi).
Retrieving the prebuilt seed image
wget -q https://download.opensuse.org/repositories/isv:/Rancher:/Elemental:/Stable/containers/rpi.raw
Verifying the download
In order to verify the integrity of the downloaded artifacts, you should do a checksum verification:
wget -q https://download.opensuse.org/repositories/isv:/Rancher:/Elemental:/Stable/containers/rpi.raw.sha256
sha256sum -c rpi.raw.sha256
This should print rpi.raw: OK as output.
Injecting the registration information
Adding the initial-registration.yaml isn't scripted yet. This is still a manual process:
The written USB stick will have two partitions. RPI_BOOT contains the boot loader files and COS_LIVE the Elemental files. Mount the COS_LIVE partition and write initial-registration.yaml as livecd-cloud-config.yaml to this partition.
If you've mounted the USB stick with a file manager, this command should work to copy the registration information:
sudo cp initial-registration.yaml /run/media/$USER/COS_LIVE/livecd-cloud-config.yaml
If you prefer using some CLI tools:
IMAGE=rpi.raw
DEST=$(mktemp -d)
SECTORSIZE=$(sfdisk -J ${IMAGE} | jq '.partitiontable.sectorsize')
DATAPARTITIONSTART=$(sfdisk -J ${IMAGE} | jq '.partitiontable.partitions[1].start')
sudo mount -o rw,loop,offset=$((${SECTORSIZE}*${DATAPARTITIONSTART})) ${IMAGE} ${DEST}
sudo cp initial-registration.yaml ${DEST}/livecd-cloud-config.yaml
sudo umount ${DEST}
rmdir ${DEST}
Writing the seed image to a USB stick
The .raw image needs to be written to a USB stick to boot from. This can be done with dd on the Linux command line if you're comfortable with this command.
openSUSE has nice instructions on how to write an image to a storage medium for Linux, Windows, and OS X.
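For reference, a typical Linux invocation looks like the following; it is destructive for the target device, and /dev/sdX must be replaced with the device node of your USB stick:
# WARNING: overwrites /dev/sdX entirely
sudo dd if=rpi.raw of=/dev/sdX bs=4M status=progress conv=fsync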
Booting the Raspberry Pi
Now unmount the USB stick and plug it into your Raspberry Pi.
Plug a large (32 GB or more) and fast (!) micro SD card into the respective slot.
Connect the system to Ethernet.
A power cycle will reboot the Pi. Everything else is identical to x86-64.
Make sure the micro SD-card is unpartitioned. Otherwise the Pi bootloader will try to boot from it and fail.
You can now boot your nodes with this image and they will:
- Register via the given registrationURL and create a per-machine MachineInventory
- Install Elemental Teal to the given device
- Reboot
Selecting the right machines to join a cluster
The MachineInventorySelectorTemplate selects the machines needed to provision the cluster from the MachineInventories having the element:fire label.
We have added the element:fire label in the MachineRegistration machineInventoryLabels map, so all the MachineInventories originating from it already have the label.
One could anyway omit the label from the MachineRegistration and add it later:
kubectl -n fleet-default label machineinventory $(kubectl get machineinventory -n fleet-default --no-headers -o custom-columns=":metadata.name") element=fire
As soon as MachineInventories with the element:fire label are present, the corresponding machines auto-deploy the cluster via the chosen provider (K3s/RKE2).
After a few minutes your new cluster will be fully provisioned!
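You can follow the provisioning from the management cluster as well, for example by watching the inventories and the cluster resource created above (names and namespace match this guide):
kubectl get machineinventories -n fleet-default
kubectl get clusters.provisioning.cattle.io -n fleet-default volcano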
How can I choose the Kubernetes version and deployer for the cluster?
In your cluster.yaml file there is a key in the Spec called kubernetesVersion. It sets the version and deployer that will be used for the cluster; for example, Kubernetes v1.24.8 for RKE2 would be v1.24.8+rke2r1 and for K3s it would be v1.24.8+k3s1.
To see all compatible versions, check the Rancher Support Matrix PDF for RKE/RKE2/K3s versions and their components. You can also check our Version doc to learn how to obtain those versions.
Check our Cluster Spec page for more info about the Cluster resource.
How can I follow what is going on behind the scenes?
You should be able to follow along with what the machine is doing via:
- During ISO boot:
  - ssh into the machine (user/pass: root/ros):
    - running journalctl -f -t elemental shows you the progress of the registration (elemental-register) and the installation of Elemental (elemental install).
- Once the system is installed:
  - The Rancher UI -> Cluster Management page allows you to see your new cluster and the Provisioning Log in the cluster details.
  - ssh into the machine (user/pass: whatever you configured in registration.yaml under Spec.config.cloud-config.users):
    - running journalctl -f -u elemental-system-agent shows the output of the initial Elemental config and the installation of the rancher-system-agent
    - running journalctl -f -u rancher-system-agent shows the output of the bootstrap of cluster components like k3s
    - running journalctl -f -u k3s shows the logs of the k3s deployment