<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Sidero Metal – Guides</title><link>/v0.6/guides/</link><description>Recent content in Guides on Sidero Metal</description><generator>Hugo -- gohugo.io</generator><atom:link href="/v0.6/guides/index.xml" rel="self" type="application/rss+xml"/><item><title>V0.6: Bootstrapping</title><link>/v0.6/guides/bootstrapping/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/bootstrapping/</guid><description>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>Imagine a scenario in which you have shown up to a datacenter with only a laptop and your task is to transition a rack of bare metal machines into an HA management plane and multiple Kubernetes clusters created by that management plane.
In this guide, we will go through how to create a bootstrap cluster using a Docker-based Talos cluster, provision the management plane, and pivot over to it.
Guides on post-pivot setup and subsequent cluster creation can also be found in the &amp;ldquo;Guides&amp;rdquo; section of the sidebar.&lt;/p>
&lt;p>Because of the design of Cluster API, there is inherently a &amp;ldquo;chicken and egg&amp;rdquo; problem with needing a Kubernetes cluster in order to provision the management plane.
Talos Systems and the Cluster API community have created tools to help make this transition easier.&lt;/p>
&lt;h2 id="prerequisites">Prerequisites&lt;/h2>
&lt;p>First, you need to install the latest &lt;code>talosctl&lt;/code> by running the following commands:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>curl -Lo /usr/local/bin/talosctl https://github.com/talos-systems/talos/releases/latest/download/talosctl-&lt;span style="color:#719e07">$(&lt;/span>uname -s | tr &lt;span style="color:#2aa198">&amp;#34;[:upper:]&amp;#34;&lt;/span> &lt;span style="color:#2aa198">&amp;#34;[:lower:]&amp;#34;&lt;/span>&lt;span style="color:#719e07">)&lt;/span>-amd64
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>chmod +x /usr/local/bin/talosctl
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>You can read more about Talos and &lt;code>talosctl&lt;/code> at &lt;a href="https://www.talos.dev/latest">talos.dev&lt;/a>.&lt;/p>
&lt;p>Next, there are two big prerequisites involved with bootstrapping Sidero: routing and DHCP setup.&lt;/p>
&lt;p>From the routing side, the laptop from which you are bootstrapping &lt;em>must&lt;/em> be accessible by the bare metal machines that we will be booting.
In the datacenter scenario described above, the easiest way to achieve this is probably to hook the laptop onto the server rack&amp;rsquo;s subnet by plugging it into the top-of-rack switch.
This is needed for TFTP, PXE booting, and for the ability to register machines with the bootstrap plane.&lt;/p>
&lt;p>DHCP configuration is needed to tell the metal servers what their &amp;ldquo;next server&amp;rdquo; is when PXE booting.
This configuration differs for each environment and each DHCP server, so no single guide can cover all cases.
However, here is an example configuration for a Ubiquiti EdgeRouter that uses vyatta-dhcpd as the DHCP service:&lt;/p>
&lt;p>This block shows the subnet setup, as well as the extra &amp;ldquo;subnet-parameters&amp;rdquo; that tell the DHCP server to include the &lt;code>ipxe-metal.conf&lt;/code> file.&lt;/p>
&lt;blockquote>
&lt;p>These commands are run under the &lt;code>configure&lt;/code> option in EdgeRouter&lt;/p>
&lt;/blockquote>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ show service dhcp-server shared-network-name MetalDHCP
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> authoritative &lt;span style="color:#b58900">enable&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> subnet 192.168.254.0/24 &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> default-router 192.168.254.1
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> dns-server 192.168.1.200
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> lease &lt;span style="color:#2aa198">86400&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> start 192.168.254.2 &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> stop 192.168.254.252
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> subnet-parameters &lt;span style="color:#2aa198">&amp;#34;include &amp;amp;quot;/config/ipxe-metal.conf&amp;amp;quot;;&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Here is the &lt;code>ipxe-metal.conf&lt;/code> file referenced above:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ cat /config/ipxe-metal.conf
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>allow bootp;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>allow booting;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>next-server 192.168.1.150;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>filename &lt;span style="color:#2aa198">&amp;#34;snp.efi&amp;#34;&lt;/span>; &lt;span style="color:#586e75"># use &amp;#34;undionly.kpxe&amp;#34; for BIOS netboot or &amp;#34;snp.efi&amp;#34; for UEFI netboot&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>host talos-mgmt-0 &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> fixed-address 192.168.254.2;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> hardware ethernet d0:50:99:d3:33:60;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>If you want to boot multiple architectures, you can use &lt;em>DHCP option 93&lt;/em> to specify the architecture.&lt;/p>
&lt;/blockquote>
&lt;p>First, we need to define &lt;em>option 93&lt;/em> in the DHCP server configuration:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">set&lt;/span> service dhcp-server global-parameters &lt;span style="color:#2aa198">&amp;#34;option system-arch code 93 = unsigned integer 16;&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Now we can add a condition based on &lt;em>option 93&lt;/em> to the &lt;code>ipxe-metal.conf&lt;/code> file:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ cat /config/ipxe-metal.conf
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>allow bootp;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>allow booting;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>next-server 192.168.1.150;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">if&lt;/span> option system-arch &lt;span style="color:#719e07">=&lt;/span> 00:0b &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;snp-arm64.efi&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span> &lt;span style="color:#719e07">else&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;snp.efi&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>host talos-mgmt-0 &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> fixed-address 192.168.254.2;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> hardware ethernet d0:50:99:d3:33:60;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Notice that the configuration sets a static address for the management node that we&amp;rsquo;ll be booting, in addition to providing the &amp;ldquo;next server&amp;rdquo; info.
This &amp;ldquo;next server&amp;rdquo; IP address will match references to &lt;code>PUBLIC_IP&lt;/code> found below in this guide.&lt;/p>
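&lt;p>If your environment uses dnsmasq rather than vyatta-dhcpd, a roughly equivalent configuration is sketched below. The IPs, filenames, and architecture tags are carried over from the example above as assumptions; adjust them for your network.&lt;/p>

```text
# dnsmasq.conf fragment (sketch) -- hand out leases on the metal subnet
dhcp-range=192.168.254.2,192.168.254.252,24h

# Tag clients by architecture using DHCP option 93 (client-arch);
# 11 corresponds to arm64 UEFI, matching the 00:0b check above.
dhcp-match=set:efi-arm64,option:client-arch,11

# "next server" address and boot filename, per architecture
dhcp-boot=tag:efi-arm64,snp-arm64.efi,,192.168.1.150
dhcp-boot=snp.efi,,192.168.1.150
```

&lt;p>For BIOS netboot, &lt;code>undionly.kpxe&lt;/code> would replace &lt;code>snp.efi&lt;/code> in the untagged line, as in the example above.&lt;/p>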
&lt;h2 id="create-a-local-cluster">Create a Local Cluster&lt;/h2>
&lt;p>The &lt;code>talosctl&lt;/code> CLI tool has built-in support for spinning up Talos in Docker containers.
Let&amp;rsquo;s use this to our advantage to create an easy Kubernetes cluster to start from.&lt;/p>
&lt;p>Set an environment variable called &lt;code>PUBLIC_IP&lt;/code> to the &amp;ldquo;public&amp;rdquo; IP of your machine.
Note that &amp;ldquo;public&amp;rdquo; is a bit of a misnomer.
We&amp;rsquo;re really looking for the IP of your machine, not the IP of the node on the Docker bridge (e.g. &lt;code>192.168.1.150&lt;/code>).&lt;/p>
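&lt;p>One way to discover this address on a Linux workstation is sketched below. The target address &lt;code>192.168.254.1&lt;/code> is the example router from the DHCP configuration above, and the availability of &lt;code>ip&lt;/code> and &lt;code>hostname -I&lt;/code> is an assumption about your machine.&lt;/p>

```shell
# Ask the kernel which source IP would be used to reach the metal subnet.
PUBLIC_IP="$(ip route get 192.168.254.1 2>/dev/null | sed -n 's/.* src \([0-9.]*\).*/\1/p')"

# Fall back to the first address reported by hostname -I, if needed.
PUBLIC_IP="${PUBLIC_IP:-$(hostname -I 2>/dev/null | awk '{print $1}')}"

echo "$PUBLIC_IP"
```

&lt;p>If you already know the address, simply export it directly as shown below.&lt;/p>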
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">PUBLIC_IP&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#34;192.168.1.150&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>We can now create our Docker cluster.
Issue the following to create a single-node cluster:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl cluster create &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --kubernetes-version 1.29.0 &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> -p 69:69/udp,8081:8081/tcp,51821:51821/udp &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --workers &lt;span style="color:#2aa198">0&lt;/span> &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --endpoint &lt;span style="color:#268bd2">$PUBLIC_IP&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Note that there are several ports mentioned in the command above.
These allow us to access the services that will get deployed on this node.&lt;/p>
&lt;p>Once the cluster create command is complete, issue &lt;code>talosctl kubeconfig /desired/path&lt;/code> to fetch the kubeconfig for this cluster.
You should then set your &lt;code>KUBECONFIG&lt;/code> environment variable to the path of this file.&lt;/p>
&lt;h2 id="untaint-control-plane">Untaint Control Plane&lt;/h2>
&lt;p>Because this is a single-node cluster, we need to remove the &amp;ldquo;NoSchedule&amp;rdquo; taint on the node to make sure non-control-plane components can be scheduled.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl taint node talos-default-controlplane-1 node-role.kubernetes.io/control-plane:NoSchedule-
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="install-sidero">Install Sidero&lt;/h2>
&lt;p>To install Sidero and the other Talos providers, simply issue:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_HOST_NETWORK&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#b58900">true&lt;/span> &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY&lt;/span>&lt;span style="color:#719e07">=&lt;/span>Recreate &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#268bd2">$PUBLIC_IP&lt;/span> &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> clusterctl init -b talos -c talos -i sidero
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>We now want to ensure that the Sidero services that were created are accessible across our subnet.
The variables above make that possible, allowing the metal machines to reach these services later.&lt;/p>
&lt;h2 id="register-the-servers">Register the Servers&lt;/h2>
&lt;p>At this point, any servers on the same network as Sidero should PXE boot using the Sidero PXE service.
To register a server with Sidero, simply turn it on and Sidero will do the rest.
Once the registration is complete, you should see the servers registered with &lt;code>kubectl get servers&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ kubectl get servers -o wide
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>NAME HOSTNAME ACCEPTED ALLOCATED CLEAN
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>00000000-0000-0000-0000-d05099d33360 192.168.254.2 &lt;span style="color:#b58900">false&lt;/span> &lt;span style="color:#b58900">false&lt;/span> &lt;span style="color:#b58900">false&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="setting-up-ipmi">Setting up IPMI&lt;/h2>
&lt;p>Sidero can use IPMI information to control server power state, reboot servers, and set the boot order.
By default, IPMI information is set up automatically, if possible, as part of the acceptance process.
See &lt;a href="../../resource-configuration/servers/#ipmi">IPMI&lt;/a> for more information.&lt;/p>
&lt;p>IPMI connection information can also be set manually in the Server spec after initial registration:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;json&amp;#39;&lt;/span> -p&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;[{&amp;#34;op&amp;#34;: &amp;#34;add&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/spec/bmc&amp;#34;, &amp;#34;value&amp;#34;: {&amp;#34;endpoint&amp;#34;: &amp;#34;192.168.88.9&amp;#34;, &amp;#34;user&amp;#34;: &amp;#34;ADMIN&amp;#34;, &amp;#34;pass&amp;#34;:&amp;#34;ADMIN&amp;#34;}}]&amp;#39;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>If IPMI info is not set, servers should be configured to boot first from network, then from disk.&lt;/p>
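&lt;p>For reference, the resulting &lt;code>Server&lt;/code> resource would carry a &lt;code>bmc&lt;/code> section along these lines. This is a sketch based on the patch above; the &lt;code>apiVersion&lt;/code> shown is an assumption, so verify it with &lt;code>kubectl get server -o yaml&lt;/code> against your installation.&lt;/p>

```yaml
apiVersion: metal.sidero.dev/v1alpha2   # assumption: verify against your install
kind: Server
metadata:
  name: 00000000-0000-0000-0000-d05099d33360
spec:
  accepted: false
  bmc:
    endpoint: 192.168.88.9
    user: ADMIN
    pass: ADMIN
```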
&lt;h2 id="configuring-the-installation-disk">Configuring the installation disk&lt;/h2>
&lt;p>Note that for a bare-metal setup, you need to specify an installation disk.
See &lt;a href="../../resource-configuration/servers/#installation-disk">Installation Disk&lt;/a> for details on how to do this.
You should configure this before accepting the server.&lt;/p>
&lt;h2 id="accept-the-servers">Accept the Servers&lt;/h2>
&lt;p>Note in the output above that the newly registered servers are not &lt;code>accepted&lt;/code>.
In order for a server to be eligible for consideration, it &lt;em>must&lt;/em> be marked as &lt;code>accepted&lt;/code>.
Before a &lt;code>Server&lt;/code> is accepted, no write action will be performed against it.
Servers can be accepted by issuing a patch command like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;json&amp;#39;&lt;/span> -p&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;[{&amp;#34;op&amp;#34;: &amp;#34;replace&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/spec/accepted&amp;#34;, &amp;#34;value&amp;#34;: true}]&amp;#39;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>For more information on server acceptance, see the &lt;a href="../../resource-configuration/servers">server docs&lt;/a>.&lt;/p>
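&lt;p>If you have many machines to bring in at once, a small loop can accept every registered server. This is a sketch, not part of the official tooling; remember that accepting a server permits Sidero to perform write actions against it, so only run something like this against hardware you intend to hand over.&lt;/p>

```shell
# JSON patch that marks a server as accepted (same patch as above)
accept_patch='[{"op": "replace", "path": "/spec/accepted", "value": true}]'

# WARNING: accepting a server allows Sidero to wipe its disks.
for server in $(kubectl get servers -o name); do
  kubectl patch "$server" --type='json' -p "$accept_patch"
done
```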
&lt;h2 id="create-management-plane">Create Management Plane&lt;/h2>
&lt;p>We are now ready to template out our management plane.
Using clusterctl, we can create a cluster manifest with:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>clusterctl generate cluster management-plane -i sidero &amp;gt; management-plane.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Note that there are several variables that should be set in order for the templating to work properly:&lt;/p>
&lt;ul>
&lt;li>&lt;code>CONTROL_PLANE_ENDPOINT&lt;/code> and &lt;code>CONTROL_PLANE_PORT&lt;/code>: The endpoint (IP address or hostname) and the port used for the Kubernetes API server
(e.g. for &lt;code>https://1.2.3.4:6443&lt;/code>: &lt;code>CONTROL_PLANE_ENDPOINT=1.2.3.4&lt;/code> and &lt;code>CONTROL_PLANE_PORT=6443&lt;/code>).
This is the equivalent of the &lt;code>endpoint&lt;/code> you would specify in &lt;code>talosctl gen config&lt;/code>.
There are a variety of ways to configure a control plane endpoint.
Some common ways for an HA setup are to use DNS, a load balancer, or BGP.
A simpler method is to use the IP of a single node.
This has the disadvantage of being a single point of failure, but it can be a simple way to get running.&lt;/li>
&lt;li>&lt;code>CONTROL_PLANE_SERVERCLASS&lt;/code>: The server class to use for control plane nodes.&lt;/li>
&lt;li>&lt;code>WORKER_SERVERCLASS&lt;/code>: The server class to use for worker nodes.&lt;/li>
&lt;li>&lt;code>KUBERNETES_VERSION&lt;/code>: The version of Kubernetes to deploy (e.g. &lt;code>v1.29.0&lt;/code>).&lt;/li>
&lt;li>&lt;code>TALOS_VERSION&lt;/code>: This should correspond to the minor version of Talos that you will be deploying (e.g. &lt;code>v1.6.1&lt;/code>).
This value is used in determining the fields present in the machine configuration that gets generated for Talos nodes.&lt;/li>
&lt;/ul>
&lt;p>For instance:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">CONTROL_PLANE_SERVERCLASS&lt;/span>&lt;span style="color:#719e07">=&lt;/span>any
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">WORKER_SERVERCLASS&lt;/span>&lt;span style="color:#719e07">=&lt;/span>any
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">TALOS_VERSION&lt;/span>&lt;span style="color:#719e07">=&lt;/span>v1.6.1
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">KUBERNETES_VERSION&lt;/span>&lt;span style="color:#719e07">=&lt;/span>v1.29.0
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">CONTROL_PLANE_PORT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">6443&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">CONTROL_PLANE_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>1.2.3.4
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>clusterctl generate cluster management-plane -i sidero &amp;gt; management-plane.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>In addition, you can specify the replica counts for control-plane &amp;amp; worker nodes in the &lt;code>management-plane.yaml&lt;/code> manifest via the TalosControlPlane and MachineDeployment objects.
They can also be scaled if needed (after applying the &lt;code>management-plane.yaml&lt;/code> manifest):&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl get taloscontrolplane
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl get machinedeployment
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl scale taloscontrolplane management-plane-cp --replicas&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">3&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Now that we have the manifest, we can simply apply it:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl apply -f management-plane.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>&lt;strong>NOTE: The templated manifest above is meant to act as a starting point.&lt;/strong>
&lt;strong>If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.&lt;/strong>&lt;/p>
&lt;p>Once the management plane is set up, you can fetch the talosconfig by using the cluster label.
Be sure to update the cluster name and issue the following command:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl get talosconfig &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> -l cluster.x-k8s.io/cluster-name&lt;span style="color:#719e07">=&lt;/span>&amp;lt;CLUSTER NAME&amp;gt; &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> -o yaml -o &lt;span style="color:#268bd2">jsonpath&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;{.items[0].status.talosConfig}&amp;#39;&lt;/span> &amp;gt; management-plane-talosconfig.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>With the talosconfig in hand, the management plane&amp;rsquo;s kubeconfig can be fetched with &lt;code>talosctl --talosconfig management-plane-talosconfig.yaml kubeconfig&lt;/code>.&lt;/p>
&lt;h2 id="pivoting">Pivoting&lt;/h2>
&lt;p>Once we have the kubeconfig for the management cluster, we now have the ability to pivot the cluster from our bootstrap.
Using clusterctl, issue:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>clusterctl init --kubeconfig&lt;span style="color:#719e07">=&lt;/span>/path/to/management-plane/kubeconfig -i sidero -b talos -c talos
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Followed by:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>clusterctl move --to-kubeconfig&lt;span style="color:#719e07">=&lt;/span>/path/to/management-plane/kubeconfig
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Upon completion of this command, we can now tear down our bootstrap cluster with &lt;code>talosctl cluster destroy&lt;/code> and begin using our management plane as our point of creation for all future clusters!&lt;/p></description></item><item><title>V0.6: Building A Management Plane with ISO Image</title><link>/v0.6/guides/iso/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/iso/</guid><description>
&lt;p>This guide will provide some very basic detail about how you can also build a Sidero management plane using the Talos ISO image instead of following the Docker-based process that we detail in our Getting Started tutorials.&lt;/p>
&lt;p>Using the ISO is a perfectly valid way to build a Talos cluster, but this approach is not recommended for Sidero as it avoids the &amp;ldquo;pivot&amp;rdquo; step detailed &lt;a href="../../getting-started/pivot">here&lt;/a>.
Skipping this step means that the management plane does not become &amp;ldquo;self-hosted&amp;rdquo;, in that it cannot be upgraded and scaled using the Sidero processes we follow for workload clusters.
For folks who are willing to take care of their management plane in other ways, however, this approach will work fine.&lt;/p>
&lt;p>The rough outline of this process is very short and sweet, as it relies on other documentation:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>For each management plane node, boot the ISO and install Talos using the &amp;ldquo;apply-config&amp;rdquo; process mentioned in our Talos &lt;a href="https://www.talos.dev/latest/introduction/getting-started/">Getting Started&lt;/a> docs.
These docs go into heavy detail on using the ISO, so they will not be recreated here.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>With a Kubernetes cluster now in hand (and with access to it via &lt;code>talosctl&lt;/code> and &lt;code>kubectl&lt;/code>), you can simply pick up the Getting Started tutorial at the &amp;ldquo;Install Sidero&amp;rdquo; section &lt;a href="../../getting-started/install-clusterapi">here&lt;/a>.
Keep in mind, however, that you will be unable to do the &amp;ldquo;pivoting&amp;rdquo; section of the tutorial, so just skip that step when you reach the end of the tutorial.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>Note: It may also be of interest to view the prerequisite guides on &lt;a href="../../getting-started/prereq-cli-tools">CLI&lt;/a> and &lt;a href="../../getting-started/prereq-dhcp">DHCP&lt;/a> setup, as they will still apply to this method.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>For long-term maintenance of a management plane created in this way, refer to the Talos documentation for upgrading &lt;a href="https://www.talos.dev/latest/kubernetes-guides/upgrading-kubernetes/">Kubernetes&lt;/a> and &lt;a href="https://www.talos.dev/latest/talos-guides/upgrading-talos/">Talos&lt;/a> itself.&lt;/li>
&lt;/ul></description></item><item><title>V0.6: Decommissioning Servers</title><link>/v0.6/guides/decommissioning/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/decommissioning/</guid><description>
&lt;p>This guide will detail the process for removing a server from Sidero.
The process is fairly simple with a few pieces of information.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>For the given server, take note of any serverclasses that are configured to match the server.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Take note of any clusters that make use of aforementioned serverclasses.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>For each matching cluster, edit the cluster resource with &lt;code>kubectl edit cluster&lt;/code> and set &lt;code>.spec.paused&lt;/code> to &lt;code>true&lt;/code>.
Doing this ensures that no new machines will get created for these servers during the decommissioning process.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>If you want to prevent a server from being allocated after it&amp;rsquo;s accepted into the cluster, set the &lt;code>.spec.cordoned&lt;/code> field to &lt;code>true&lt;/code>.
This will prevent the server from being allocated to any new clusters (still allowing it to be wiped).&lt;/p>
&lt;/li>
&lt;li>
&lt;p>If the server is already part of a cluster (&lt;code>kubectl get serverbindings -o wide&lt;/code> should provide this info), you can now delete the machine that corresponds with this server via &lt;code>kubectl delete machine &amp;lt;machine_name&amp;gt;&lt;/code>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>With the machine deleted, Sidero will reboot the machine and wipe its disks.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Once the disk wiping is complete and the server is turned off, you can finally delete the server from Sidero with &lt;code>kubectl delete server &amp;lt;server_name&amp;gt;&lt;/code> and repurpose the server for something else.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Finally, unpause any clusters that were paused earlier by setting &lt;code>.spec.paused&lt;/code> to &lt;code>false&lt;/code>.&lt;/p>
&lt;/li>
&lt;/ul></description></item><item><title>V0.6: Creating Your First Cluster</title><link>/v0.6/guides/first-cluster/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/first-cluster/</guid><description>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>This guide will detail the steps needed to provision your first bare metal Talos cluster after completing the bootstrap and pivot steps detailed in the previous guide.
There will be two main steps in this guide: reconfiguring the Sidero components now that they have been pivoted and the actual cluster creation.&lt;/p>
&lt;h2 id="reconfigure-sidero">Reconfigure Sidero&lt;/h2>
&lt;h3 id="patch-services">Patch Services&lt;/h3>
&lt;p>In this guide, we will convert the services to use host networking.
This is also necessary because some protocols like TFTP don&amp;rsquo;t allow for port configuration.
Along with some nodeSelectors and a scale-up of the metal controller manager deployment, creating the services this way allows you to create DNS names that point to all management plane nodes and provides an HA experience if desired.
It should also be noted, however, that there are many options for achieving this functionality.
Users can look into projects like MetalLB or KubeRouter with BGP and ECMP if they desire something else.&lt;/p>
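The nodeSelector and scale-up mentioned above can be sketched as follows; this is a hedged example, not the canonical setup — the label key and replica count are assumptions to adjust for your management plane:

```shell
# Pin the sidero-controller-manager to control plane nodes (assumed
# upstream label key; verify with `kubectl get nodes --show-labels`).
kubectl patch deploy -n sidero-system sidero-controller-manager --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/nodeSelector", "value": {"node-role.kubernetes.io/control-plane": ""}}]'

# Run one replica per management plane node (three assumed here).
kubectl scale deploy -n sidero-system sidero-controller-manager --replicas=3
```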
&lt;p>Metal Controller Manager:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#586e75">## Use host networking&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl patch deploy -n sidero-system sidero-controller-manager --type&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;json&amp;#39;&lt;/span> -p&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;[{&amp;#34;op&amp;#34;: &amp;#34;add&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/spec/template/spec/hostNetwork&amp;#34;, &amp;#34;value&amp;#34;: true}]&amp;#39;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h4 id="update-environment">Update Environment&lt;/h4>
&lt;!-- textlint-disable -->
&lt;p>Sidero by default appends a &lt;code>talos.config&lt;/code> kernel argument, based on the &lt;code>--api-endpoint&lt;/code> and &lt;code>--api-port&lt;/code> flags passed to the &lt;code>sidero-controller-manager&lt;/code>:
&lt;code>talos.config=http://$API_ENDPOINT:$API_PORT/configdata?uuid=&lt;/code>.&lt;/p>
&lt;!-- textlint-enable -->
&lt;p>If this default value doesn&amp;rsquo;t apply, edit the environment with &lt;code>kubectl edit environment default&lt;/code> and add the &lt;code>talos.config&lt;/code> kernel arg with the IP of one of the management plane nodes (or the DNS entry you created).&lt;/p>
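That edit can also be done non-interactively. A hedged sketch: the `/spec/kernel/args/-` path assumes the default `Environment` layout, so verify it with `kubectl get environment default -o yaml` before patching:

```shell
# Append a talos.config kernel arg pointing at the management plane.
# 192.168.254.2 matches the example DHCP config in this guide; substitute
# your own management plane IP or DNS name.
kubectl patch environment default --type='json' \
  -p='[{"op": "add", "path": "/spec/kernel/args/-", "value": "talos.config=http://192.168.254.2:8081/configdata?uuid="}]'
```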
&lt;h3 id="update-dhcp">Update DHCP&lt;/h3>
&lt;p>The DHCP options configured in the previous guide should now be updated to point to your new management plane IP or to the DNS name if it was created.&lt;/p>
&lt;p>A revised ipxe-metal.conf file looks like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>allow bootp;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>allow booting;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>next-server 192.168.254.2;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">if&lt;/span> exists user-class and option user-class &lt;span style="color:#719e07">=&lt;/span> &lt;span style="color:#2aa198">&amp;#34;iPXE&amp;#34;&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;http://192.168.254.2:8081/boot.ipxe&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span> &lt;span style="color:#719e07">else&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">if&lt;/span> substring &lt;span style="color:#719e07">(&lt;/span>option vendor-class-identifier, 15, 5&lt;span style="color:#719e07">)&lt;/span> &lt;span style="color:#719e07">=&lt;/span> &lt;span style="color:#2aa198">&amp;#34;00000&amp;#34;&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#586e75"># BIOS&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">if&lt;/span> substring &lt;span style="color:#719e07">(&lt;/span>option vendor-class-identifier, 0, 10&lt;span style="color:#719e07">)&lt;/span> &lt;span style="color:#719e07">=&lt;/span> &lt;span style="color:#2aa198">&amp;#34;HTTPClient&amp;#34;&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> option vendor-class-identifier &lt;span style="color:#2aa198">&amp;#34;HTTPClient&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;http://192.168.254.2:8081/tftp/undionly.kpxe&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span> &lt;span style="color:#719e07">else&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;undionly.kpxe&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span> &lt;span style="color:#719e07">else&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#586e75"># UEFI&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">if&lt;/span> substring &lt;span style="color:#719e07">(&lt;/span>option vendor-class-identifier, 0, 10&lt;span style="color:#719e07">)&lt;/span> &lt;span style="color:#719e07">=&lt;/span> &lt;span style="color:#2aa198">&amp;#34;HTTPClient&amp;#34;&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> option vendor-class-identifier &lt;span style="color:#2aa198">&amp;#34;HTTPClient&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;http://192.168.254.2:8081/tftp/snp.efi&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span> &lt;span style="color:#719e07">else&lt;/span> &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &lt;span style="color:#2aa198">&amp;#34;snp.efi&amp;#34;&lt;/span>;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>host talos-mgmt-0 &lt;span style="color:#719e07">{&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> fixed-address 192.168.254.2;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> hardware ethernet d0:50:99:d3:33:60;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#719e07">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>There are multiple ways to boot via iPXE:&lt;/p>
&lt;ul>
&lt;li>if the node has built-in iPXE, a direct URL to the iPXE script can be used: &lt;code>http://192.168.254.2:8081/boot.ipxe&lt;/code>.&lt;/li>
&lt;li>depending on the boot mode (BIOS or UEFI), either &lt;code>snp.efi&lt;/code> or &lt;code>undionly.kpxe&lt;/code> can be used (these images contain embedded iPXE scripts).&lt;/li>
&lt;li>iPXE binaries can be delivered either over TFTP or HTTP (HTTP support depends on node firmware).&lt;/li>
&lt;/ul>
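After updating DHCP, the HTTP side of the boot chain can be sanity-checked from a workstation. This sketch assumes the example IP above and that the Sidero HTTP endpoint is reachable from your machine:

```shell
# The iPXE script should come back as plain text starting with "#!ipxe".
curl -sf http://192.168.254.2:8081/boot.ipxe

# The UEFI iPXE binary should also be served over HTTP.
curl -sfI http://192.168.254.2:8081/tftp/snp.efi
```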
&lt;h2 id="register-the-servers">Register the Servers&lt;/h2>
&lt;p>At this point, any servers on the same network as Sidero should PXE boot using the Sidero PXE service.
To register a server with Sidero, simply turn it on and Sidero will do the rest.
Once the registration is complete, you should see the servers registered with &lt;code>kubectl get servers&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ kubectl get servers -o wide
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>NAME HOSTNAME ACCEPTED ALLOCATED CLEAN
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>00000000-0000-0000-0000-d05099d33360 192.168.254.2 &lt;span style="color:#b58900">false&lt;/span> &lt;span style="color:#b58900">false&lt;/span> &lt;span style="color:#b58900">false&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="accept-the-servers">Accept the Servers&lt;/h2>
&lt;p>Note in the output above that the newly registered servers are not &lt;code>accepted&lt;/code>.
In order for a server to be eligible for consideration, it &lt;em>must&lt;/em> be marked as &lt;code>accepted&lt;/code>.
Before a &lt;code>Server&lt;/code> is accepted, no write action will be performed against it.
Servers can be accepted by issuing a patch command like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;json&amp;#39;&lt;/span> -p&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;[{&amp;#34;op&amp;#34;: &amp;#34;replace&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/spec/accepted&amp;#34;, &amp;#34;value&amp;#34;: true}]&amp;#39;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>For more information on server acceptance, see the &lt;a href="../../resource-configuration/servers">server docs&lt;/a>.&lt;/p>
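If you trust every machine on the PXE network, the same acceptance patch can be applied to all registered servers in one pass. A hedged sketch; remember that acceptance permits Sidero to wipe disks, so do not run this on a network with machines you do not control:

```shell
# Accept every server currently registered with Sidero.
kubectl get servers -o name | while read -r server; do
  kubectl patch "$server" --type='json' \
    -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'
done
```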
&lt;h2 id="create-the-cluster">Create the Cluster&lt;/h2>
&lt;p>The cluster creation process should be identical to what was detailed in the previous guide.
Using clusterctl, we can create a cluster manifest with:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>clusterctl generate cluster workload-cluster -i sidero &amp;gt; workload-cluster.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Note that there are several variables that should be set in order for the templating to work properly:&lt;/p>
&lt;ul>
&lt;li>&lt;code>CONTROL_PLANE_ENDPOINT&lt;/code> and &lt;code>CONTROL_PLANE_PORT&lt;/code>: The endpoint (IP address or hostname) and the port used for the Kubernetes API server
(e.g. for &lt;code>https://1.2.3.4:6443&lt;/code>: &lt;code>CONTROL_PLANE_ENDPOINT=1.2.3.4&lt;/code> and &lt;code>CONTROL_PLANE_PORT=6443&lt;/code>).
This is the equivalent of the &lt;code>endpoint&lt;/code> you would specify in &lt;code>talosctl gen config&lt;/code>.
There are a variety of ways to configure a control plane endpoint.
Some common ways for an HA setup are to use DNS, a load balancer, or BGP.
A simpler method is to use the IP of a single node.
This has the disadvantage of being a single point of failure, but it can be a simple way to get running.&lt;/li>
&lt;li>&lt;code>CONTROL_PLANE_SERVERCLASS&lt;/code>: The server class to use for control plane nodes.&lt;/li>
&lt;li>&lt;code>WORKER_SERVERCLASS&lt;/code>: The server class to use for worker nodes.&lt;/li>
&lt;li>&lt;code>KUBERNETES_VERSION&lt;/code>: The version of Kubernetes to deploy (e.g. &lt;code>v1.19.4&lt;/code>).&lt;/li>
&lt;li>&lt;code>TALOS_VERSION&lt;/code>: This should correspond to the minor version of Talos that you will be deploying (e.g. &lt;code>v0.10&lt;/code>).
This value is used in determining the fields present in the machine configuration that gets generated for Talos nodes.
Note that the default is currently &lt;code>v0.13&lt;/code>.&lt;/li>
&lt;/ul>
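Putting the variables together, a templating invocation might look like the following; every value here is a placeholder for your environment, not a recommendation:

```shell
# Placeholder values -- substitute your own endpoint, server classes,
# and versions before running.
export CONTROL_PLANE_ENDPOINT=192.168.254.10
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_SERVERCLASS=any
export WORKER_SERVERCLASS=any
export KUBERNETES_VERSION=v1.19.4
export TALOS_VERSION=v0.13

clusterctl generate cluster workload-cluster -i sidero > workload-cluster.yaml
```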
&lt;p>Now that we have the manifest, we can simply apply it:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl apply -f workload-cluster.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>&lt;strong>NOTE: The templated manifest above is meant to act as a starting point.&lt;/strong>
&lt;strong>If customizations are needed to ensure proper setup of your Talos cluster, they should be added before applying.&lt;/strong>&lt;/p>
&lt;p>Once the workload cluster is setup, you can fetch the talosconfig with a command like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl get talosconfig -o yaml workload-cluster-cp-xxx -o &lt;span style="color:#268bd2">jsonpath&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;{.status.talosConfig}&amp;#39;&lt;/span> &amp;gt; workload-cluster-talosconfig.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then the workload cluster&amp;rsquo;s kubeconfig can be fetched with &lt;code>talosctl --talosconfig workload-cluster-talosconfig.yaml kubeconfig /desired/path&lt;/code>.&lt;/p></description></item><item><title>V0.6: Patching</title><link>/v0.6/guides/patching/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/patching/</guid><description>
&lt;p>Server resources can be updated by using the &lt;code>configPatches&lt;/code> section of the custom resource.
Any field of the &lt;a href="https://www.talos.dev/latest/reference/configuration/">Talos machine config&lt;/a>
can be overridden on a per-machine basis using this method.
The format of these patches is &lt;a href="http://jsonpatch.com/">JSON Patch (RFC 6902)&lt;/a>, which you may be familiar with from tools like kustomize.&lt;/p>
&lt;p>Any patches specified in the server resource are processed by the Sidero controller before it returns a Talos machine config for a given server at boot time.&lt;/p>
&lt;p>A set of patches may look like this:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">apiVersion&lt;/span>: metal.sidero.dev/v1alpha2
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">kind&lt;/span>: Server
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">metadata&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">name&lt;/span>: &lt;span style="color:#2aa198">00000000-0000-0000-0000&lt;/span>-d05099d33360
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">spec&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">configPatches&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">op&lt;/span>: replace
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">path&lt;/span>: /machine/install
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">value&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">disk&lt;/span>: /dev/sda
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">op&lt;/span>: replace
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">path&lt;/span>: /cluster/network/cni
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">value&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">name&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;custom&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">urls&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#2aa198">&amp;#34;http://192.168.1.199/assets/cilium.yaml&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="testing-configuration-patches">Testing Configuration Patches&lt;/h2>
&lt;p>While developing config patches, it is usually convenient to test the generated config with the patches applied
before an actual server is provisioned with it.&lt;/p>
&lt;p>This can be achieved by querying the metadata server endpoint directly:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>$ curl http://&lt;span style="color:#268bd2">$PUBLIC_IP&lt;/span>:8081/configdata?uuid&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#268bd2">$SERVER_UUID&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>version: v1alpha1
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>...
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Replace &lt;code>$PUBLIC_IP&lt;/code> with the Sidero IP address and &lt;code>$SERVER_UUID&lt;/code> with the name of the &lt;code>Server&lt;/code> to test
against.&lt;/p>
&lt;p>If the metadata endpoint returns an error when applying JSON patches, make sure the config subtree being patched exists in the config.
If it doesn&amp;rsquo;t exist, create it with an &lt;code>op: add&lt;/code> patch placed above the &lt;code>op: replace&lt;/code> patch.&lt;/p>
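For example, a hedged sketch of an `op: add` preceding an `op: replace`; the `/machine/network` path here is purely illustrative — use whichever subtree is actually missing from your generated config:

```yaml
configPatches:
  # Create the missing subtree first...
  - op: add
    path: /machine/network
    value: {}
  # ...then patch inside it.
  - op: replace
    path: /machine/network/hostname
    value: server-1
```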
&lt;h2 id="combining-patches-from-multiple-sources">Combining Patches from Multiple Sources&lt;/h2>
&lt;p>Config patches might be combined from multiple sources (&lt;code>Server&lt;/code>, &lt;code>ServerClass&lt;/code>, &lt;code>TalosControlPlane&lt;/code>, &lt;code>TalosConfigTemplate&lt;/code>), which is explained in detail
in the &lt;a href="../../resource-configuration/metadata/">Metadata&lt;/a> section.&lt;/p></description></item><item><title>V0.6: Provisioning Flow</title><link>/v0.6/guides/flow/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/flow/</guid><description>
&lt;pre tabindex="0">&lt;code class="language-mermaid" data-lang="mermaid">graph TD;
Start(Start);
End(End);
%% Decisions
IsOn{Is server powered on?};
IsRegistered{Is server registered?};
IsAccepted{Is server accepted?};
IsClean{Is server clean?};
IsAllocated{Is server allocated?};
%% Actions
DoPowerOn[Power server on];
DoPowerOff[Power server off];
DoBootAgentEnvironment[Boot agent];
DoBootEnvironment[Boot environment];
DoRegister[Register server];
DoWipe[Wipe server];
%% Chart
Start--&amp;gt;IsOn;
IsOn--Yes--&amp;gt;End;
IsOn--No--&amp;gt;DoPowerOn;
DoPowerOn---&amp;gt;IsRegistered;
IsRegistered--Yes---&amp;gt;IsAccepted;
IsRegistered--No---&amp;gt;DoBootAgentEnvironment--&amp;gt;DoRegister;
DoRegister--&amp;gt;IsRegistered;
IsAccepted--Yes---&amp;gt;IsAllocated;
IsAccepted--No---&amp;gt;End;
IsAllocated--Yes---&amp;gt;DoBootEnvironment;
IsAllocated--No---&amp;gt;IsClean;
IsClean--No---&amp;gt;DoWipe--&amp;gt;DoPowerOff;
IsClean--Yes---&amp;gt;DoPowerOff;
DoBootEnvironment--&amp;gt;End;
DoPowerOff--&amp;gt;End;
&lt;/code>&lt;/pre>&lt;h2 id="installation-flow">Installation Flow&lt;/h2>
&lt;pre tabindex="0">&lt;code class="language-mermaid" data-lang="mermaid">graph TD;
Start(Start);
End(End);
%% Decisions
IsInstalled{Is installed};
%% Actions
DoInstall[Install];
DoReboot[Reboot];
%% Chart
Start--&amp;gt;IsInstalled;
IsInstalled--Yes--&amp;gt;End;
IsInstalled--No--&amp;gt;DoInstall;
DoInstall--&amp;gt;DoReboot;
DoReboot--&amp;gt;IsInstalled;
&lt;/code>&lt;/pre></description></item><item><title>V0.6: Raspberry Pi4 as Servers</title><link>/v0.6/guides/rpi4-as-servers/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/rpi4-as-servers/</guid><description>
&lt;p>This guide explains how to use Sidero to manage Raspberry Pi 4s as
servers.
This guide goes hand in hand with the &lt;a href="../../guides/bootstrapping">bootstrapping
guide&lt;/a>.&lt;/p>
&lt;p>From the bootstrapping guide, reach &amp;ldquo;Install Sidero&amp;rdquo; and come back to this
guide.
Once you finish with this guide, you will need to go back to the
bootstrapping guide and continue with &amp;ldquo;Register the servers&amp;rdquo;.&lt;/p>
&lt;p>The rest of this guide assumes that you have a cluster set up with
Sidero that is ready to accept servers.
It explains the changes that need to be made to accept an RPI4 as a server.&lt;/p>
&lt;h2 id="rpi4-boot-process">RPI4 boot process&lt;/h2>
&lt;p>To boot Talos on the Pi4 over the network, we need a two-step boot process.
The Pi4 has an EEPROM which contains code to boot up the Pi.
This EEPROM expects a specific boot folder structure as explained on
&lt;a href="https://www.raspberrypi.org/documentation/configuration/boot_folder.md">this&lt;/a> page.
We will use the EEPROM to boot into UEFI, which we will then use to PXE and iPXE boot into Sidero &amp;amp; Talos.&lt;/p>
&lt;h2 id="prerequisites">Prerequisites&lt;/h2>
&lt;h3 id="update-eeprom">Update EEPROM&lt;/h3>
&lt;p>&lt;em>NOTE:&lt;/em> If you&amp;rsquo;ve updated the EEPROM with the image that was referenced on &lt;a href="https://www.talos.dev/latest/talos-guides/install/single-board-computers/rpi_4/#updating-the-eeprom">the talos docs&lt;/a>,
you can either flash it with the one mentioned below, or visit &lt;a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2711_bootloader_config.md">the EEPROM config docs&lt;/a>
and change the boot order of the EEPROM to &lt;code>0xf21&lt;/code>,
which means try booting from the SD card first, then try the network.&lt;/p>
&lt;p>To enable the EEPROM on the Pi to support network booting, we must update it to
the latest version.
Visit the &lt;a href="https://github.com/raspberrypi/rpi-eeprom/releases">release&lt;/a> page and grab the
latest &lt;code>rpi-boot-eeprom-recovery-*-network.zip&lt;/code> (at the time of writing,
v2021.04.29-138a1 was used).
Put this on an SD card and plug it into the Pi.
The Pi&amp;rsquo;s status light will flash rapidly after a few seconds; this indicates that
the EEPROM has been updated.&lt;/p>
&lt;p>This operation needs to be done once per Pi.&lt;/p>
&lt;h3 id="serial-number">Serial number&lt;/h3>
&lt;p>Power on the Pi without an SD card in it and hook it up to a monitor; you will
be greeted with the boot screen.
On this screen you will find some information about the Pi.
For this guide, we are only interested in the serial number.
The first line under the Pi logo will be something like the following:&lt;/p>
&lt;p>&lt;code>board: xxxxxx &amp;lt;serial&amp;gt; &amp;lt;MAC address&amp;gt;&lt;/code>&lt;/p>
&lt;p>Write down the 8 character serial.&lt;/p>
&lt;h3 id="talos-systemspkg">talos-systems/pkg&lt;/h3>
&lt;p>Clone the &lt;a href="https://github.com/talos-systems/pkgs">talos-systems/pkg&lt;/a> repo.
Create a new folder called &lt;code>raspberrypi4-uefi&lt;/code> and &lt;code>raspberrypi4-uefi/serials&lt;/code>.
Create a file &lt;code>raspberrypi4-uefi/pkg.yaml&lt;/code> containing the following:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">name&lt;/span>: raspberrypi4-uefi
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">variant&lt;/span>: alpine
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">install&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - unzip
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">steps&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#586e75"># {{ if eq .ARCH &amp;#34;aarch64&amp;#34; }} This in fact is YAML comment, but Go templating instruction is evaluated by bldr restricting build to arm64 only&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">sources&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">url&lt;/span>: https://github.com/pftf/RPi4/releases/download/v1.26/RPi4_UEFI_Firmware_v1.26.zip &lt;span style="color:#586e75"># &amp;lt;-- update version NR accordingly.&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">destination&lt;/span>: RPi4_UEFI_Firmware.zip
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">sha256&lt;/span>: d6db87484dd98dfbeb64eef203944623130cec8cb71e553eab21f8917e0285f7
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">sha512&lt;/span>: 96a71086cdd062b51ef94726ebcbf15482b70c56262555a915499bafc04aff959d122410af37214760eda8534b58232a64f6a8a0a8bb99aba6de0f94c739fe98
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">prepare&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - |&lt;span style="color:#2aa198">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> unzip RPi4_UEFI_Firmware.zip
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> rm RPi4_UEFI_Firmware.zip
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> mkdir /rpi4
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> mv ./* /rpi4&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">install&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - |&lt;span style="color:#2aa198">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> mkdir /tftp
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> ls /pkg/serials | while read serial; do mkdir /tftp/$serial &amp;amp;&amp;amp; cp -r /rpi4/* /tftp/$serial &amp;amp;&amp;amp; cp -r /pkg/serials/$serial/* /tftp/$serial/; done&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#586e75"># {{ else }}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">install&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - |&lt;span style="color:#2aa198">
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#2aa198"> &lt;/span> mkdir -p /tftp
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#586e75"># {{ end }}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">finalize&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">from&lt;/span>: /
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">to&lt;/span>: /
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="uefi--rpi4">UEFI / RPi4&lt;/h2>
&lt;p>Now that the EEPROM can network boot, we need to prepare the structure of our
boot folder.
Essentially what the bootloader will do is look for this folder
on the network rather than on the SD card.&lt;/p>
&lt;p>Visit the &lt;a href="https://github.com/pftf/RPi4/releases">release page of RPi4&lt;/a> and grab
the latest &lt;code>RPi4_UEFI_Firmware_v*.zip&lt;/code> (at the time of writing, v1.26 was used).
Extract the zip into a folder, the structure will look like the following:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>.
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── RPI_EFI.fd
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── RPi4_UEFI_Firmware_v1.26.zip
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── Readme.md
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── bcm2711-rpi-4-b.dtb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── bcm2711-rpi-400.dtb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── bcm2711-rpi-cm4.dtb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── config.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── firmware
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── LICENCE.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── Readme.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── brcmfmac43455-sdio.bin
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── brcmfmac43455-sdio.clm_blob
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   └── brcmfmac43455-sdio.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── fixup4.dat
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── overlays
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   └── miniuart-bt.dtbo
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>└── start4.elf
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>As a one-time operation, we need to configure UEFI to network boot by
default, remove the 3 GB memory limit if it&amp;rsquo;s set, and optionally set the CPU clock to
max.
Take these files, put them on the SD card, and boot the Pi.
You will see the Pi logo and the option to hit &lt;code>esc&lt;/code>.&lt;/p>
&lt;h3 id="remove-3gb-mem-limit">Remove 3GB mem limit&lt;/h3>
&lt;ol>
&lt;li>From the home page, visit &amp;ldquo;Device Manager&amp;rdquo;.&lt;/li>
&lt;li>Go down to &amp;ldquo;Raspberry Pi Configuration&amp;rdquo; and open that menu.&lt;/li>
&lt;li>Go to &amp;ldquo;Advanced Configuration&amp;rdquo;.&lt;/li>
&lt;li>Make sure the option &amp;ldquo;Limit RAM to 3 GB&amp;rdquo; is set to &lt;code>Disabled&lt;/code>.&lt;/li>
&lt;/ol>
&lt;h3 id="change-cpu-to-max-optionally">Change CPU to Max (optionally)&lt;/h3>
&lt;ol>
&lt;li>From the home page, visit &amp;ldquo;Device Manager&amp;rdquo;.&lt;/li>
&lt;li>Go down to &amp;ldquo;Raspberry Pi Configuration&amp;rdquo; and open that menu.&lt;/li>
&lt;li>Go to &amp;ldquo;CPU Configuration&amp;rdquo;.&lt;/li>
&lt;li>Change CPU clock to &lt;code>Max&lt;/code>.&lt;/li>
&lt;/ol>
&lt;h2 id="change-boot-order">Change boot order&lt;/h2>
&lt;ol>
&lt;li>From the home page, visit &amp;ldquo;Boot Maintenance Manager&amp;rdquo;.&lt;/li>
&lt;li>Go to &amp;ldquo;Boot Options&amp;rdquo;.&lt;/li>
&lt;li>Go to &amp;ldquo;Change Boot Order&amp;rdquo;.&lt;/li>
&lt;li>Make sure that &lt;code>UEFI PXEv4&lt;/code> is the first boot option.&lt;/li>
&lt;/ol>
&lt;h3 id="persisting-changes">Persisting changes&lt;/h3>
&lt;p>Now that we have made the changes above, we need to persist these changes.
Go back to the home screen and hit &lt;code>reset&lt;/code> to save the changes to disk.&lt;/p>
&lt;p>When you hit &lt;code>reset&lt;/code>, the settings will be saved to the &lt;code>RPI_EFI.fd&lt;/code> file on the
SD card.
Here we run into a limitation explained in
&lt;a href="https://github.com/pftf/RPi4/issues/59">pftf/RPi4#59&lt;/a>:
we need to create a separate &lt;code>RPI_EFI.fd&lt;/code> file for each Pi that we want to use as a server,
because the MAC address is also stored in the &lt;code>RPI_EFI.fd&lt;/code> file,
which makes it invalid in a different Pi.&lt;/p>
&lt;p>Plug the SD card back into your computer, extract the &lt;code>RPI_EFI.fd&lt;/code> file from
it, and place it into &lt;code>raspberrypi4-uefi/serials/&amp;lt;serial&amp;gt;/&lt;/code>.
The directory should look like this:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>raspberrypi4-uefi/
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── pkg.yaml
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>└── serials
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> └─── XXXXXXXX
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> └── RPI_EFI.fd
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="build-the-image-with-the-boot-folder-contents">Build the image with the boot folder contents&lt;/h2>
&lt;p>Now that we have the &lt;code>RPI_EFI.fd&lt;/code> of our Pi in the correct location, we can
build a Docker image containing the boot folder for the EEPROM.
To do this, run the following command in the pkgs repo:&lt;/p>
&lt;p>&lt;code>make PLATFORM=linux/arm64 USERNAME=$USERNAME PUSH=true TARGETS=raspberrypi4-uefi&lt;/code>&lt;/p>
&lt;p>This will build and push the following image:
&lt;code>ghcr.io/$USERNAME/raspberrypi4-uefi:&amp;lt;tag&amp;gt;&lt;/code>&lt;/p>
&lt;p>&lt;em>If you need to change other settings, such as the registry, have a look at the
Makefile to see the available variables you can override.&lt;/em>&lt;/p>
&lt;p>The content of the &lt;code>/tftp&lt;/code> folder in the image will be the following:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>XXXXXXXX
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── RPI_EFI.fd
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── Readme.md
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── bcm2711-rpi-4-b.dtb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── bcm2711-rpi-400.dtb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── bcm2711-rpi-cm4.dtb
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── config.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── firmware
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── LICENCE.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── Readme.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── brcmfmac43455-sdio.bin
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   ├── brcmfmac43455-sdio.clm_blob
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   └── brcmfmac43455-sdio.txt
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── fixup4.dat
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>├── overlays
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>│   └── miniuart-bt.dtbo
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>└── start4.elf
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="patch-metal-controller">Patch metal controller&lt;/h2>
&lt;p>To enable this two-stage boot process, we need to include the EEPROM boot folder in
Sidero&amp;rsquo;s tftp folder.
To achieve this, we will use an init container based on
the image we created above to copy its contents into the tftp folder.&lt;/p>
&lt;p>Create a file &lt;code>patch.yaml&lt;/code> with the following contents:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">spec&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">template&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">spec&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">volumes&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">name&lt;/span>: tftp-folder
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">emptyDir&lt;/span>: {}
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">initContainers&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">image&lt;/span>: ghcr.io/&amp;lt;USER&amp;gt;/raspberrypi4-uefi:v&amp;lt;TAG&amp;gt; &lt;span style="color:#586e75"># &amp;lt;-- change accordingly.&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">imagePullPolicy&lt;/span>: Always
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">name&lt;/span>: tftp-folder-setup
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">command&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - cp
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">args&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - -r
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - /tftp
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - /var/lib/sidero/
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">volumeMounts&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">mountPath&lt;/span>: /var/lib/sidero/tftp
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">name&lt;/span>: tftp-folder
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">containers&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">name&lt;/span>: manager
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">volumeMounts&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">mountPath&lt;/span>: /var/lib/sidero/tftp
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">name&lt;/span>: tftp-folder
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Followed by this command to apply the patch:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl -n sidero-system patch deployments.apps sidero-controller-manager --patch &lt;span style="color:#2aa198">&amp;#34;&lt;/span>&lt;span style="color:#719e07">$(&lt;/span>cat patch.yaml&lt;span style="color:#719e07">)&lt;/span>&lt;span style="color:#2aa198">&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="configure-bootfromdiskmethod">Configure BootFromDiskMethod&lt;/h2>
&lt;p>By default, Sidero will use iPXE&amp;rsquo;s &lt;code>exit&lt;/code> command to attempt to force boot from disk.
On Raspberry Pi, this will drop you into the bootloader interface, and you will need to connect a keyboard and manually select the disk to boot from.&lt;/p>
&lt;p>The BootFromDiskMethod can be configured on individual &lt;a href="../../resource-configuration/servers/#bootfromdiskmethod">Servers&lt;/a>, on &lt;a href="../../resource-configuration/serverclasses/#bootfromdiskmethod">ServerClasses&lt;/a>, or as a command-line argument to the Sidero metal controller itself (&lt;code>--boot-from-disk-method=&amp;lt;value&amp;gt;&lt;/code>).
In order to force the Pi to use the configured bootloader order, the BootFromDiskMethod needs to be set to &lt;code>ipxe-sanboot&lt;/code>.&lt;/p>
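For example, on an individual Server resource this might be set as follows (a sketch assuming the v1alpha2 Server API; the server name below is a placeholder UUID):

```yaml
apiVersion: metal.sidero.dev/v1alpha2
kind: Server
metadata:
  name: 00000000-0000-0000-0000-000000000000  # placeholder: your server's UUID
spec:
  # Use iPXE's sanboot instead of the default `exit`, so the Pi falls back
  # to the UEFI boot order configured earlier.
  bootFromDiskMethod: ipxe-sanboot
```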
&lt;h2 id="profit">Profit&lt;/h2>
&lt;p>With the patched metal controller, you should now be able to register the Pi 4
with Sidero simply by connecting it to the network.
From this point you can continue with the &lt;a href="../../guides/bootstrapping#register-the-servers">bootstrapping guide&lt;/a>.&lt;/p></description></item><item><title>V0.6: Sidero on Raspberry Pi 4</title><link>/v0.6/guides/sidero-on-rpi4/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.6/guides/sidero-on-rpi4/</guid><description>
&lt;p>Sidero doesn&amp;rsquo;t require a lot of computing resources, so SBCs are a perfect fit to run
the Sidero management cluster.
In this guide, we are going to install Talos on a Raspberry Pi 4 and deploy Sidero along with the other CAPI components.&lt;/p>
&lt;h2 id="prerequisites">Prerequisites&lt;/h2>
&lt;p>Please see Talos documentation for additional information on &lt;a href="https://www.talos.dev/latest/talos-guides/install/single-board-computers/rpi_generic/">installing Talos on Raspberry Pi4&lt;/a>.&lt;/p>
&lt;p>Download the &lt;code>clusterctl&lt;/code> CLI from &lt;a href="https://github.com/kubernetes-sigs/cluster-api/releases">CAPI releases&lt;/a>.
The minimum required version is 1.5.0.&lt;/p>
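For instance, a hypothetical install on a Linux amd64 workstation might look like the following (the release asset naming follows the Cluster API GitHub releases; adjust the version and OS/arch for your machine):

```shell
# Download clusterctl v1.5.0 (the minimum required version) for linux-amd64.
CLUSTERCTL_VERSION=v1.5.0
curl -Lo /usr/local/bin/clusterctl \
  "https://github.com/kubernetes-sigs/cluster-api/releases/download/${CLUSTERCTL_VERSION}/clusterctl-linux-amd64"
chmod +x /usr/local/bin/clusterctl
clusterctl version
```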
&lt;h2 id="installing-talos">Installing Talos&lt;/h2>
&lt;p>Prepare the SD card with the Talos RPi4 image, and boot the RPi4.
Talos should drop into maintenance mode, printing the acquired IP address.
Record the IP address as the environment variable &lt;code>SIDERO_ENDPOINT&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">SIDERO_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>192.168.x.x
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>Note: it makes sense to convert the DHCP lease for the RPi4 into a static reservation so that the RPi4 always has the same IP address.&lt;/p>
&lt;/blockquote>
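As an illustration, if your network's DHCP server happens to be dnsmasq (an assumption; the syntax differs for other DHCP servers), a static reservation is a single line:

```
# /etc/dnsmasq.conf -- always hand 192.168.0.31 to this MAC address
# (replace the MAC with your RPi4's; dc:a6:32 is a Raspberry Pi OUI)
dhcp-host=dc:a6:32:xx:xx:xx,192.168.0.31
```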
&lt;p>Generate Talos machine configuration for a single-node cluster:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl gen config --config-patch&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;[{&amp;#34;op&amp;#34;: &amp;#34;add&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/cluster/allowSchedulingOnControlPlanes&amp;#34;, &amp;#34;value&amp;#34;: true},{&amp;#34;op&amp;#34;: &amp;#34;replace&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/machine/install/disk&amp;#34;, &amp;#34;value&amp;#34;: &amp;#34;/dev/mmcblk0&amp;#34;}]&amp;#39;&lt;/span> rpi4-sidero https://&lt;span style="color:#2aa198">${&lt;/span>&lt;span style="color:#268bd2">SIDERO_ENDPOINT&lt;/span>&lt;span style="color:#2aa198">}&lt;/span>:6443/
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Submit the generated configuration to Talos:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl apply-config --insecure -n &lt;span style="color:#2aa198">${&lt;/span>&lt;span style="color:#268bd2">SIDERO_ENDPOINT&lt;/span>&lt;span style="color:#2aa198">}&lt;/span> -f controlplane.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Merge client configuration &lt;code>talosconfig&lt;/code> into default &lt;code>~/.talos/config&lt;/code> location:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl config merge talosconfig
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Update default endpoint and nodes:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl config endpoints &lt;span style="color:#2aa198">${&lt;/span>&lt;span style="color:#268bd2">SIDERO_ENDPOINT&lt;/span>&lt;span style="color:#2aa198">}&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>talosctl config nodes &lt;span style="color:#2aa198">${&lt;/span>&lt;span style="color:#268bd2">SIDERO_ENDPOINT&lt;/span>&lt;span style="color:#2aa198">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>You can verify that Talos has booted by running:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ talosctl version
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>talosctl version
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Client:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> Tag: v0.10.3
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> SHA: 21018f28
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> Built:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> Go version: go1.16.3
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> OS/Arch: linux/amd64
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Server:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> NODE: 192.168.0.31
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> Tag: v0.10.3
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> SHA: 8f90c6a8
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> Built:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> Go version: go1.16.3
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> OS/Arch: linux/arm64
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Bootstrap the etcd cluster:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl bootstrap
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>At this point, Kubernetes is bootstrapping, and it should be available once all the images are fetched.&lt;/p>
&lt;p>Fetch the &lt;code>kubeconfig&lt;/code> from the cluster with:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl kubeconfig
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>You can watch the bootstrap progress by running:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl dmesg -f
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Once Talos prints &lt;code>[talos] boot sequence: done&lt;/code>, Kubernetes should be up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl get nodes
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="installing-sidero">Installing Sidero&lt;/h2>
&lt;p>Install Sidero with host network mode, exposing the endpoints on the node&amp;rsquo;s address:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_HOST_NETWORK&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#b58900">true&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY&lt;/span>&lt;span style="color:#719e07">=&lt;/span>Recreate &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">${&lt;/span>&lt;span style="color:#268bd2">SIDERO_IP&lt;/span>&lt;span style="color:#2aa198">}&lt;/span> clusterctl init -i sidero -b talos -c talos
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Watch the progress of installation with:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>watch -n &lt;span style="color:#2aa198">2&lt;/span> kubectl get pods -A
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Once images are downloaded, all pods should be in running state:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ kubectl get pods -A
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>NAMESPACE NAME READY STATUS RESTARTS AGE
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>cabpt-system cabpt-controller-manager-6458494888-d7lnm 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 29m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>cacppt-system cacppt-controller-manager-f98854db8-qgkf9 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 29m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>capi-system capi-controller-manager-58f797cb65-8dwpz 2/2 Running &lt;span style="color:#2aa198">0&lt;/span> 30m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>capi-webhook-system cabpt-controller-manager-85fd964c9c-ldzb6 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 29m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>capi-webhook-system cacppt-controller-manager-75c479b7f-5hw89 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 29m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>capi-webhook-system capi-controller-manager-7d596cc4cb-kjrfk 2/2 Running &lt;span style="color:#2aa198">0&lt;/span> 30m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>capi-webhook-system caps-controller-manager-79664cf677-zqbvw 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 29m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>cert-manager cert-manager-86cb5dcfdd-v86wr 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 31m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>cert-manager cert-manager-cainjector-84cf775b89-swk25 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 31m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>cert-manager cert-manager-webhook-7f9f4f8dcb-29xm4 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 31m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system coredns-fcc4c97fb-wkxkg 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 35m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system coredns-fcc4c97fb-xzqzj 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 35m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system kube-apiserver-talos-192-168-0-31 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 33m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system kube-controller-manager-talos-192-168-0-31 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 33m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system kube-flannel-qmlw6 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 34m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system kube-proxy-j24hg 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 34m
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kube-system kube-scheduler-talos-192-168-0-31 1/1 Running &lt;span style="color:#2aa198">0&lt;/span> 33m
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Verify Sidero installation and network setup with:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ curl -I http://&lt;span style="color:#2aa198">${&lt;/span>&lt;span style="color:#268bd2">SIDERO_ENDPOINT&lt;/span>&lt;span style="color:#2aa198">}&lt;/span>:8081/tftp/ipxe.efi
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>HTTP/1.1 &lt;span style="color:#2aa198">200&lt;/span> OK
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Accept-Ranges: bytes
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Content-Length: &lt;span style="color:#2aa198">1020416&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Content-Type: application/octet-stream
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Last-Modified: Thu, &lt;span style="color:#2aa198">03&lt;/span> Jun &lt;span style="color:#2aa198">2021&lt;/span> 15:40:58 GMT
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Date: Thu, &lt;span style="color:#2aa198">03&lt;/span> Jun &lt;span style="color:#2aa198">2021&lt;/span> 15:41:51 GMT
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Sidero is now installed and ready to use.
Configure your DHCP server to PXE boot your bare metal servers from &lt;code>$SIDERO_ENDPOINT&lt;/code> (see &lt;a href="../bootstrapping/">Bootstrapping guide&lt;/a> on DHCP configuration).&lt;/p>
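As a minimal sketch, again assuming dnsmasq as the DHCP server (the option names below are dnsmasq's; the Bootstrapping guide covers DHCP configuration in full), UEFI PXE clients can be pointed at Sidero's iPXE binary like so:

```
# /etc/dnsmasq.conf -- serve ipxe.efi to UEFI x86-64 PXE clients
# from the Sidero endpoint (replace 192.168.x.x with $SIDERO_ENDPOINT)
dhcp-match=set:efi64,option:client-arch,7
dhcp-boot=tag:efi64,ipxe.efi,,192.168.x.x
```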
&lt;h2 id="backup-and-recovery">Backup and Recovery&lt;/h2>
&lt;p>SD cards are not very reliable, so make sure you take regular &lt;a href="https://www.talos.dev/latest/advanced/disaster-recovery/#backup">etcd backups&lt;/a>
so that you can &lt;a href="https://www.talos.dev/latest/advanced/disaster-recovery/#recovery">recover&lt;/a> your Sidero installation in case of data loss.&lt;/p></description></item></channel></rss>