<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Sidero Metal – Getting Started</title><link>/v0.5/getting-started/</link><description>Recent content in Getting Started on Sidero Metal</description><generator>Hugo -- gohugo.io</generator><atom:link href="/v0.5/getting-started/index.xml" rel="self" type="application/rss+xml"/><item><title>V0.5: Prerequisite: CLI tools</title><link>/v0.5/getting-started/prereq-cli-tools/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/prereq-cli-tools/</guid><description>
&lt;p>You will need three CLI tools installed on your workstation in order to interact
with Sidero:&lt;/p>
&lt;ul>
&lt;li>&lt;code>kubectl&lt;/code>&lt;/li>
&lt;li>&lt;code>clusterctl&lt;/code>&lt;/li>
&lt;li>&lt;code>talosctl&lt;/code>&lt;/li>
&lt;/ul>
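Before installing anything, you can check which of these tools are already on your PATH; a small sketch:

```shell
# Report where each required tool is installed, or note that it is missing.
for tool in kubectl clusterctl talosctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: not found"
  fi
done
```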
&lt;h2 id="install-kubectl">Install &lt;code>kubectl&lt;/code>&lt;/h2>
&lt;p>Since &lt;code>kubectl&lt;/code> is the standard Kubernetes control tool, many distributions
already package it.
Feel free to check your own package manager to see if it is available natively.&lt;/p>
&lt;p>Otherwise, you may install it directly from the main distribution point.
The main article for this can be found
&lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl">here&lt;/a>.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>sudo curl -Lo /usr/local/bin/kubectl &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &lt;span style="color:#2aa198">&amp;#34;https://dl.k8s.io/release/&lt;/span>&lt;span style="color:#719e07">$(&lt;/span>&lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> curl -L -s https://dl.k8s.io/release/stable.txt&lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &lt;span style="color:#719e07">)&lt;/span>&lt;span style="color:#2aa198">/bin/linux/amd64/kubectl&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>sudo chmod +x /usr/local/bin/kubectl
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="install-clusterctl">Install &lt;code>clusterctl&lt;/code>&lt;/h2>
&lt;p>The &lt;code>clusterctl&lt;/code> tool is the standard control tool for ClusterAPI (CAPI).
It is less common, so it is also less likely to be in package managers.&lt;/p>
&lt;p>The main article for installing &lt;code>clusterctl&lt;/code> can be found
&lt;a href="https://cluster-api.sigs.k8s.io/user/quick-start.html#install-clusterctl">here&lt;/a>.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>sudo curl -Lo /usr/local/bin/clusterctl &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &lt;span style="color:#2aa198">&amp;#34;https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.1.1/clusterctl-&lt;/span>&lt;span style="color:#719e07">$(&lt;/span>uname -s | tr &lt;span style="color:#2aa198">&amp;#39;[:upper:]&amp;#39;&lt;/span> &lt;span style="color:#2aa198">&amp;#39;[:lower:]&amp;#39;&lt;/span>&lt;span style="color:#719e07">)&lt;/span>&lt;span style="color:#2aa198">-amd64&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>sudo chmod +x /usr/local/bin/clusterctl
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>Note: This version of Sidero is only compatible with CAPI v1beta1,
so please install the latest version of &lt;code>clusterctl&lt;/code> v1.x.&lt;/p>
&lt;/blockquote>
&lt;h2 id="install-talosctl">Install &lt;code>talosctl&lt;/code>&lt;/h2>
&lt;p>The &lt;code>talosctl&lt;/code> tool is used to interact with the API of Talos, our Kubernetes-focused
operating system.
The latest version can be found on our
&lt;a href="https://github.com/talos-systems/talos/releases">Releases&lt;/a> page.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>sudo curl -Lo /usr/local/bin/talosctl &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &lt;span style="color:#2aa198">&amp;#34;https://github.com/talos-systems/talos/releases/latest/download/talosctl-&lt;/span>&lt;span style="color:#719e07">$(&lt;/span>uname -s | tr &lt;span style="color:#2aa198">&amp;#39;[:upper:]&amp;#39;&lt;/span> &lt;span style="color:#2aa198">&amp;#39;[:lower:]&amp;#39;&lt;/span>&lt;span style="color:#719e07">)&lt;/span>&lt;span style="color:#2aa198">-amd64&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>sudo chmod +x /usr/local/bin/talosctl
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>V0.5: Prerequisite: Kubernetes</title><link>/v0.5/getting-started/prereq-kubernetes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/prereq-kubernetes/</guid><description>
&lt;p>In order to run Sidero, you first need a Kubernetes &amp;ldquo;cluster&amp;rdquo;.
There is nothing special about this cluster.
It can be, for example:&lt;/p>
&lt;ul>
&lt;li>a Kubernetes cluster you already have&lt;/li>
&lt;li>a single-node cluster running in Docker on your laptop&lt;/li>
&lt;li>a cluster running inside a virtual machine stack such as VMware&lt;/li>
&lt;li>a Talos Kubernetes cluster running on a spare machine&lt;/li>
&lt;/ul>
&lt;p>Two important things are needed in this cluster:&lt;/p>
&lt;ul>
&lt;li>Kubernetes &lt;code>v1.19&lt;/code> or later&lt;/li>
&lt;li>Ability to expose TCP and UDP Services to the workload cluster machines&lt;/li>
&lt;/ul>
&lt;p>For the purposes of this tutorial, we will create this cluster in Docker on a
workstation, perhaps a laptop.&lt;/p>
&lt;p>If you already have a suitable Kubernetes cluster, feel free to skip this step.&lt;/p>
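If you plan to reuse an existing cluster, a guarded sketch to check the server version against the v1.19 requirement (requires `kubectl` and a reachable cluster; degrades to a message otherwise):

```shell
# Print the cluster's server version if reachable; otherwise explain why not.
if command -v kubectl >/dev/null 2>&1; then
  kubectl version -o yaml 2>/dev/null | grep gitVersion || echo "no cluster reachable"
else
  echo "kubectl not installed"
fi
```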
&lt;h2 id="create-a-local-management-cluster">Create a Local Management Cluster&lt;/h2>
&lt;p>The &lt;code>talosctl&lt;/code> CLI tool has built-in support for spinning up Talos in Docker containers.
Let&amp;rsquo;s use this to our advantage as an easy Kubernetes cluster to start from.&lt;/p>
&lt;p>Issue the following to create a single-node Docker-based Kubernetes cluster:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">HOST_IP&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#34;192.168.1.150&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>talosctl cluster create &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --name sidero-demo &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> -p 69:69/udp,8081:8081/tcp,51821:51821/udp &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --workers &lt;span style="color:#2aa198">0&lt;/span> &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --config-patch &lt;span style="color:#2aa198">&amp;#39;[{&amp;#34;op&amp;#34;: &amp;#34;add&amp;#34;, &amp;#34;path&amp;#34;: &amp;#34;/cluster/allowSchedulingOnMasters&amp;#34;, &amp;#34;value&amp;#34;: true}]&amp;#39;&lt;/span> &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --endpoint &lt;span style="color:#268bd2">$HOST_IP&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The &lt;code>192.168.1.150&lt;/code> IP address should be changed to the IP address of your Docker
host.
This is &lt;em>not&lt;/em> the Docker bridge IP but the standard IP address of the
workstation.&lt;/p>
&lt;p>Note that three ports are mentioned in the command above.
The first (69) is for TFTP.
The second (8081) is for the web server (which serves netboot
artifacts and configuration).
The third (51821) is for the SideroLink WireGuard network.&lt;/p>
&lt;p>Exposing them here allows us to access the services that will get deployed on this node.
In turn, we will be running our Sidero services with &lt;code>hostNetwork: true&lt;/code>,
so the Docker host will forward these to the Docker container,
which will in turn be running in the same namespace as the Sidero Kubernetes components.
A full separate management cluster will likely approach this differently,
with a load balancer or a means of sharing an IP address across multiple nodes (such as with MetalLB).&lt;/p>
&lt;p>Finally, the &lt;code>--config-patch&lt;/code> is optional,
but since we are running a single-node cluster in this tutorial,
adding it allows Sidero to run on the control plane.
Otherwise, you would need to add worker nodes to this management cluster in
order to run the Sidero components on it.&lt;/p>
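Once you have the kubeconfig (retrieved in the next section), you can confirm that the patch worked: a schedulable control-plane node carries no `NoSchedule` taint. A guarded sketch:

```shell
# List each node with its taints; an empty taint list means workloads can schedule there.
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}' \
  2>/dev/null || echo "cluster not reachable"
```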
&lt;h2 id="access-the-cluster">Access the cluster&lt;/h2>
&lt;p>Once the cluster create command is complete, you can retrieve the kubeconfig for it using the Talos API:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl kubeconfig
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>Note: by default, Talos will merge the kubeconfig for this cluster into your
standard kubeconfig under a context name matching the cluster name you
created above.
If that name conflicts, it will be given a &lt;code>-1&lt;/code>, a &lt;code>-2&lt;/code>, and so
on, so it is generally safe to run.
However, if you would prefer not to modify your standard kubeconfig, you can
supply a directory name as the third parameter, which will cause a new
kubeconfig to be created there instead.
Remember that if you choose not to use the standard location, you should set
your &lt;code>KUBECONFIG&lt;/code> environment variable or pass the &lt;code>--kubeconfig&lt;/code> option to
tell the &lt;code>kubectl&lt;/code> client the name of the &lt;code>kubeconfig&lt;/code> file.&lt;/p>
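As a hedged sketch, keeping the kubeconfig out of your standard location might look like this (the file path is illustrative; it degrades gracefully when the cluster is not available):

```shell
# Write the kubeconfig to a local file instead of merging into ~/.kube/config,
# then point kubectl at it via the KUBECONFIG environment variable.
talosctl kubeconfig ./sidero-demo-kubeconfig 2>/dev/null || echo "talosctl/cluster not available"
export KUBECONFIG="$PWD/sidero-demo-kubeconfig"
kubectl get nodes 2>/dev/null || echo "cluster not reachable"
```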
&lt;/blockquote></description></item><item><title>V0.5: Prerequisite: DHCP service</title><link>/v0.5/getting-started/prereq-dhcp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/prereq-dhcp/</guid><description>
&lt;p>In order to network boot Talos, we need to set up our DHCP server to supply the
network boot parameters to our servers.
For maximum flexibility, Sidero makes use of iPXE to be able to reference
artifacts via HTTP.
Some modern servers support direct UEFI HTTP boot, but most existing servers
still rely on the old, slow TFTP-based PXE boot first.
Therefore, we need to tell our DHCP server to find the iPXE binary on a TFTP
server.&lt;/p>
&lt;p>Conveniently, Sidero comes with a TFTP server which will serve the appropriate
files.
We need only set up our DHCP server to point to it.&lt;/p>
&lt;p>The tricky bit is that at different phases, we need to serve different assets,
but they all use the same DHCP metadata key.&lt;/p>
&lt;p>In fact, we have as many as six different client types:&lt;/p>
&lt;ul>
&lt;li>Legacy BIOS-based PXE boot (undionly.kpxe via TFTP)&lt;/li>
&lt;li>UEFI-based PXE boot (ipxe.efi via TFTP)&lt;/li>
&lt;li>UEFI HTTP boot (ipxe.efi via HTTP URL)&lt;/li>
&lt;li>iPXE (boot.ipxe via HTTP URL)&lt;/li>
&lt;li>UEFI-based PXE arm64 boot (ipxe-arm64.efi via TFTP)&lt;/li>
&lt;li>UEFI HTTP boot on arm64 (ipxe-arm64.efi via HTTP URL)&lt;/li>
&lt;/ul>
&lt;h2 id="common-client-types">Common client types&lt;/h2>
&lt;p>If you are lucky and all of the machines in a given DHCP zone can use the same
network boot client mechanism, your DHCP server only needs to provide two
options:&lt;/p>
&lt;ul>
&lt;li>&lt;code>Server-Name&lt;/code> (option 66) with the IP of the Sidero TFTP service&lt;/li>
&lt;li>&lt;code>Bootfile-Name&lt;/code> (option 67) with the appropriate value for the boot client type:
&lt;ul>
&lt;li>Legacy BIOS PXE boot: &lt;code>undionly.kpxe&lt;/code>&lt;/li>
&lt;li>UEFI-based PXE boot: &lt;code>ipxe.efi&lt;/code>&lt;/li>
&lt;li>UEFI HTTP boot: &lt;code>http://sidero-server-url/tftp/ipxe.efi&lt;/code>&lt;/li>
&lt;li>iPXE boot: &lt;code>http://sidero-server-url/boot.ipxe&lt;/code>&lt;/li>
&lt;li>arm64 UEFI PXE boot: &lt;code>ipxe-arm64.efi&lt;/code>&lt;/li>
&lt;li>arm64 UEFI HTTP boot: &lt;code>http://sidero-server-url/tftp/ipxe-arm64.efi&lt;/code>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>In the ISC DHCP server, these options look like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span>next-server 172.16.199.50;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>filename &amp;#34;ipxe.efi&amp;#34;;
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="multiple-client-types">Multiple client types&lt;/h2>
&lt;p>Any given server will usually use only one of those, but if you have a mix of
machines, you may need a combination of them.
In this case, you would need a way to provide different images for different
client or machine types.&lt;/p>
&lt;p>Both ISC DHCP server and dnsmasq provide ways to supply such conditional responses.
In this tutorial, we are working with ISC DHCP.&lt;/p>
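For reference, a roughly equivalent dnsmasq configuration might look like the following (a sketch, not from the Sidero docs; the tag-matching lines are standard dnsmasq idioms, and `172.16.199.50` is the example Sidero IP used below):

```text
# Tag iPXE clients (they send DHCP option 175) and UEFI x86-64 clients (arch 7).
dhcp-match=set:ipxe,175
dhcp-match=set:efi,option:client-arch,7

# Legacy BIOS PXE clients chain-load undionly.kpxe from the Sidero TFTP server.
dhcp-boot=tag:!ipxe,tag:!efi,undionly.kpxe,,172.16.199.50
# UEFI PXE clients chain-load ipxe.efi instead.
dhcp-boot=tag:efi,tag:!ipxe,ipxe.efi,,172.16.199.50
# Clients already running iPXE fetch the boot script over HTTP.
dhcp-boot=tag:ipxe,http://172.16.199.50/boot.ipxe
```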
&lt;p>For modularity, we are breaking the conditional statements into a separate file
and using the &lt;code>include&lt;/code> statement to load them into the main &lt;code>dhcpd.conf&lt;/code> file.&lt;/p>
&lt;p>In our example below, &lt;code>172.16.199.50&lt;/code> is the IP address of our Sidero service.&lt;/p>
&lt;p>&lt;code>ipxe-metal.conf&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span>allow bootp;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>allow booting;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span># IP address for PXE-based TFTP methods
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>next-server 172.16.199.50;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span># Configuration for iPXE clients
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>class &amp;#34;ipxeclient&amp;#34; {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> match if exists user-class and (option user-class = &amp;#34;iPXE&amp;#34;);
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &amp;#34;http://172.16.199.50/boot.ipxe&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>}
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span># Configuration for legacy BIOS-based PXE boot
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>class &amp;#34;biosclients&amp;#34; {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> match if not exists user-class and substring (option vendor-class-identifier, 15, 5) = &amp;#34;00000&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &amp;#34;undionly.kpxe&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>}
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span># Configuration for UEFI-based PXE boot
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>class &amp;#34;pxeclients&amp;#34; {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> match if not exists user-class and substring (option vendor-class-identifier, 0, 9) = &amp;#34;PXEClient&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &amp;#34;ipxe.efi&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>}
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span># Configuration for UEFI-based HTTP boot
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>class &amp;#34;httpclients&amp;#34; {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> match if not exists user-class and substring (option vendor-class-identifier, 0, 10) = &amp;#34;HTTPClient&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> option vendor-class-identifier &amp;#34;HTTPClient&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> filename &amp;#34;http://172.16.199.50/tftp/ipxe.efi&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>}
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Once this file is created, we can include it from our main &lt;code>dhcpd.conf&lt;/code> inside a
&lt;code>subnet&lt;/code> section.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span>shared-network sidero {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> subnet 172.16.199.0 netmask 255.255.255.0 {
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> option domain-name-servers 8.8.8.8, 1.1.1.1;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> option routers 172.16.199.1;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> include &amp;#34;/config/ipxe-metal.conf&amp;#34;;
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> }
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>}
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Since we use a number of Ubiquiti EdgeRouter devices, especially in our home test
networks, it is worth mentioning the curious syntax gymnastics required there.
Essentially, the quotes around the included path need to be entered as HTML entities:
&lt;code>&amp;amp;quot;&lt;/code>.&lt;/p>
&lt;p>Ubiquiti EdgeRouter configuration statement:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-text" data-lang="text">&lt;span style="display:flex;">&lt;span>set service dhcp-server shared-network-name sidero \
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> subnet 172.16.199.1 \
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> subnet-parameters &amp;#34;include &amp;amp;quot;/config/ipxe-metal.conf&amp;amp;quot;;&amp;#34;
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Also note that there are two semicolons at the end of the line.
The first is part of the HTML-encoded &lt;strong>&amp;quot;&lt;/strong> (&lt;code>&amp;amp;quot;&lt;/code>) and the second is the actual terminating semicolon.&lt;/p>
&lt;h2 id="troubleshooting">Troubleshooting&lt;/h2>
&lt;p>Getting the netboot environment working is tricky, and debugging it is difficult.
Once running, it will generally stay running;
the problem is nearly always a missing or incorrect configuration, since
the process involves several different components.&lt;/p>
&lt;p>We are working toward integrating as much of this as possible into Sidero, to provide as
much intelligence and automation as we can, but until then you will likely
need to hunt down problems yourself.&lt;/p>
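A reasonable first step is checking the Sidero components themselves (a guarded sketch; `sidero-system` is the default install namespace, and the deployment name assumes a default install):

```shell
# Inspect the Sidero pods and recent controller-manager logs, if the cluster is reachable.
kubectl -n sidero-system get pods 2>/dev/null || echo "cluster not reachable"
kubectl -n sidero-system logs deploy/sidero-controller-manager --tail=50 2>/dev/null || true
```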
&lt;p>See the Sidero &lt;a href="../troubleshooting">Troubleshooting&lt;/a> guide for more assistance.&lt;/p></description></item><item><title>V0.5: Install Sidero</title><link>/v0.5/getting-started/install-clusterapi/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/install-clusterapi/</guid><description>
&lt;p>Sidero is included as a default infrastructure provider in &lt;code>clusterctl&lt;/code>, so the
installation of both Sidero and the Cluster API (CAPI) components is as simple
as using the &lt;code>clusterctl&lt;/code> tool.&lt;/p>
&lt;blockquote>
&lt;p>Note: Because Cluster API upgrades are &lt;em>stateless&lt;/em>, it is important to keep all Sidero
configuration for reuse during upgrades.&lt;/p>
&lt;/blockquote>
&lt;p>Sidero has a number of configuration options which should be supplied at install
time, kept, and reused for upgrades.
These can also be specified in the &lt;code>clusterctl&lt;/code> configuration file
(&lt;code>$HOME/.cluster-api/clusterctl.yaml&lt;/code>).
You can reference the &lt;code>clusterctl&lt;/code>
&lt;a href="https://cluster-api.sigs.k8s.io/clusterctl/configuration.html#clusterctl-configuration-file">docs&lt;/a>
for more information on this.&lt;/p>
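For example, the same settings used later in this tutorial could be kept in `$HOME/.cluster-api/clusterctl.yaml` instead of environment variables (the values here match the tutorial's example endpoint):

```yaml
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK: true
SIDERO_CONTROLLER_MANAGER_API_ENDPOINT: 192.168.1.150
SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT: 192.168.1.150
```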
&lt;p>For our purposes, we will use environment variables for our configuration
options.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_HOST_NETWORK&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#b58900">true&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>192.168.1.150
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>192.168.1.150
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>clusterctl init -b talos -c talos -i sidero
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>First, we are telling Sidero to use &lt;code>hostNetwork: true&lt;/code> so that it binds its
ports directly to the host, rather than being available only from inside the
cluster.
There are many ways of exposing the services, but this is the simplest
path for the single-node management cluster.
When you scale the management cluster, you will need to use an alternative
method, such as an external load balancer or something like
&lt;a href="https://metallb.universe.tf">MetalLB&lt;/a>.&lt;/p>
&lt;p>The &lt;code>192.168.1.150&lt;/code> value is the IP address or DNS hostname of the management
node as seen from the workload clusters.
In our case, this should be the main IP address of your Docker
workstation.&lt;/p>
&lt;blockquote>
&lt;p>Note: If you encounter the following error, this is caused by a rename of our GitHub org from &lt;code>talos-systems&lt;/code> to &lt;code>siderolabs&lt;/code>.&lt;/p>
&lt;/blockquote>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ clusterctl init -b talos -c talos -i sidero
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Fetching providers
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Error: failed to get provider components &lt;span style="color:#719e07">for&lt;/span> the &lt;span style="color:#2aa198">&amp;#34;talos&amp;#34;&lt;/span> provider: target namespace can&amp;#39;t be defaulted. Please specify a target namespace
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;blockquote>
&lt;p>This can be worked around by adding the following to &lt;code>~/.cluster-api/clusterctl.yaml&lt;/code> and rerunning the init command:&lt;/p>
&lt;/blockquote>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-yaml" data-lang="yaml">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#268bd2">providers&lt;/span>:
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">name&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;talos&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">url&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;https://github.com/siderolabs/cluster-api-bootstrap-provider-talos/releases/latest/bootstrap-components.yaml&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">type&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;BootstrapProvider&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">name&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;talos&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">url&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;https://github.com/siderolabs/cluster-api-control-plane-provider-talos/releases/latest/control-plane-components.yaml&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">type&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;ControlPlaneProvider&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> - &lt;span style="color:#268bd2">name&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;sidero&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">url&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;https://github.com/siderolabs/sidero/releases/latest/infrastructure-components.yaml&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> &lt;span style="color:#268bd2">type&lt;/span>: &lt;span style="color:#2aa198">&amp;#34;InfrastructureProvider&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>V0.5: Expose Sidero Services</title><link>/v0.5/getting-started/expose-services/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/expose-services/</guid><description>
&lt;blockquote>
&lt;p>If you built your cluster as specified in the &amp;ldquo;Prerequisite: Kubernetes&amp;rdquo; section of this tutorial, your services are already exposed and you can skip this section.&lt;/p>
&lt;/blockquote>
&lt;p>There are three external Services which Sidero serves and which must be made
reachable by the servers which it will be driving.&lt;/p>
&lt;p>For most servers, TFTP (port 69/udp) will be needed.
This is used for PXE booting, both BIOS and UEFI.
Because TFTP is a primitive UDP protocol, many load balancers do not support it.
Instead, a solution such as &lt;a href="https://metallb.universe.tf">MetalLB&lt;/a> may be used to expose TFTP over a known IP address.
For servers which support UEFI HTTP Network Boot, TFTP is not needed.&lt;/p>
&lt;p>The kernel, initrd, and all configuration assets are served from the HTTP service
(port 8081/tcp).
It is needed for all servers, but since it is HTTP-based, it
can be easily proxied, load balanced, or run through an ingress controller.&lt;/p>
&lt;p>The SideroLink WireGuard overlay network requires UDP port 51821 to be open.
As with TFTP, many load balancers do not support the WireGuard UDP protocol;
a solution such as MetalLB may be used instead.&lt;/p>
&lt;p>The main thing to keep in mind is that the services &lt;strong>MUST&lt;/strong> match the IP or
hostname specified by the &lt;code>SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/code> and
&lt;code>SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT&lt;/code> environment
variables (or configuration parameters) when you installed Sidero.&lt;/p>
&lt;p>It is a good idea to verify that the services are exposed as you think they
should be.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ curl -I http://192.168.1.150:8081/tftp/ipxe.efi
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>HTTP/1.1 &lt;span style="color:#2aa198">200&lt;/span> OK
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Accept-Ranges: bytes
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Content-Length: &lt;span style="color:#2aa198">1020416&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>Content-Type: application/octet-stream
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>V0.5: Import Workload Machines</title><link>/v0.5/getting-started/import-machines/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/import-machines/</guid><description>
&lt;p>At this point, any servers on the same network as Sidero should network boot from Sidero.
To register a server with Sidero, simply turn it on and Sidero will do the rest.
Once the registration is complete, you should see the servers registered with &lt;code>kubectl get servers&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ kubectl get servers -o wide
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>NAME HOSTNAME ACCEPTED ALLOCATED CLEAN
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>00000000-0000-0000-0000-d05099d33360 192.168.1.201 &lt;span style="color:#b58900">false&lt;/span> &lt;span style="color:#b58900">false&lt;/span> &lt;span style="color:#b58900">false&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="accept-the-servers">Accept the Servers&lt;/h2>
&lt;p>Note in the output above that the newly registered servers are not &lt;code>accepted&lt;/code>.
In order for a server to be eligible for consideration, it &lt;em>must&lt;/em> be marked as &lt;code>accepted&lt;/code>.
Before a &lt;code>Server&lt;/code> is accepted, no write action will be performed against it.
This default is for safety (don&amp;rsquo;t accidentally delete something just because it
was plugged in) and security (make sure you know the machine before it is given
credentials to communicate).&lt;/p>
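Accepting a server is a single-field change on the `Server` resource; a sketch using `kubectl patch` (the server name is the example from the listing above; guarded for when no cluster is reachable):

```shell
# Mark the server as accepted so Sidero may allocate and write to it.
kubectl patch server 00000000-0000-0000-0000-d05099d33360 \
  --type merge -p '{"spec": {"accepted": true}}' \
  2>/dev/null || echo "cluster/server not reachable"
```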
&lt;blockquote>
&lt;p>Note: if you are running in a safe environment, you can configure Sidero to
automatically accept new machines.&lt;/p>
&lt;/blockquote>
&lt;p>For more information on server acceptance, see the &lt;a href="../../resource-configuration/servers/#server-acceptance">server docs&lt;/a>.&lt;/p>
&lt;h2 id="create-serverclasses">Create ServerClasses&lt;/h2>
&lt;p>By default, Sidero comes with a single ServerClass &lt;code>any&lt;/code> which matches any
(accepted) server.
This is sufficient for this demo, but you may wish to define your own
ServerClasses for more flexibility.&lt;/p>
&lt;p>ServerClasses allow you to group machines which are sufficiently similar to
allow for unnamed allocation.
This is analogous to cloud providers using such classes as &lt;code>m3.large&lt;/code> or
&lt;code>c2.small&lt;/code>, but the names are free-form and only need to make sense to you.&lt;/p>
&lt;p>For more information on ServerClasses, see the &lt;a href="../../resource-configuration/serverclasses/">ServerClass
docs&lt;/a>.&lt;/p>
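&lt;p>As a sketch, a custom ServerClass might select servers by a hardware qualifier; the qualifier fields shown here are illustrative, so check the ServerClass docs for the schema your Sidero version supports:&lt;/p>

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: big-servers              # free-form name; only needs to make sense to you
spec:
  qualifiers:
    cpu:                         # illustrative qualifier; verify against the docs
      - manufacturer: Intel(R) Corporation
```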
&lt;h2 id="hardware-differences">Hardware differences&lt;/h2>
&lt;p>In bare-metal systems, there are commonly small features and
configurations which are unique to the hardware.
In many cases, such small variations require no special configuration, but
some do.&lt;/p>
&lt;p>If hardware-specific differences do mandate configuration changes, we need a way
to keep those changes local to the hardware specification so that at the higher
level, a Server is just a Server (or a server in a ServerClass is just a Server
like all the others in that Class).&lt;/p>
&lt;p>The most common variations seem to be the installation disk and the console
serial port.&lt;/p>
&lt;p>Some machines have NVMe drives, which show up as something like &lt;code>/dev/nvme0n1&lt;/code>.
Others may be SATA or SCSI, which show up as something like &lt;code>/dev/sda&lt;/code>.
Some machines use &lt;code>/dev/ttyS0&lt;/code> for the serial console; others &lt;code>/dev/ttyS1&lt;/code>.&lt;/p>
&lt;p>Configuration patches can be applied to either Servers or ServerClasses, and
those patches will be applied to the final machine configuration for those
nodes without having to know anything about those nodes at the allocation level.&lt;/p>
&lt;p>For examples of install disk patching, see the &lt;a href="../../resource-configuration/servers/#installation-disk">Installation Disk
doc&lt;/a>.&lt;/p>
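&lt;p>For example, an install-disk patch on a &lt;code>Server&lt;/code> resource might look like the following sketch; the UUID and disk path are placeholders, and the exact patch syntax should be checked against the docs linked above:&lt;/p>

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
metadata:
  name: 00000000-0000-0000-0000-d05099d33360   # hypothetical server UUID
spec:
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/nvme0n1        # this particular machine uses an NVMe install disk
```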
&lt;p>For more information about patching in general, see the &lt;a href="../../guides/patching">Patching
Guide&lt;/a>.&lt;/p></description></item><item><title>V0.5: Create a Workload Cluster</title><link>/v0.5/getting-started/create-workload/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/create-workload/</guid><description>
&lt;p>Once your servers are registered and accepted, you should see them appear as &amp;ldquo;available&amp;rdquo; in their ServerClasses:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>$ kubectl get serverclass
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>NAME AVAILABLE IN USE
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>any &lt;span style="color:#719e07">[&lt;/span>&lt;span style="color:#2aa198">&amp;#34;00000000-0000-0000-0000-d05099d33360&amp;#34;&lt;/span>&lt;span style="color:#719e07">]&lt;/span> &lt;span style="color:#719e07">[]&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="generate-cluster-manifests">Generate Cluster Manifests&lt;/h2>
&lt;p>We are now ready to generate the configuration manifest templates for our first workload
cluster.&lt;/p>
&lt;p>There are several configuration parameters that should be set in order for the templating to work properly:&lt;/p>
&lt;ul>
&lt;li>&lt;code>CONTROL_PLANE_ENDPOINT&lt;/code>: The endpoint used for the Kubernetes API server (e.g. &lt;code>https://1.2.3.4:6443&lt;/code>).
This is the equivalent of the &lt;code>endpoint&lt;/code> you would specify in &lt;code>talosctl gen config&lt;/code>.
There are a variety of ways to configure a control plane endpoint.
Some common ways for an HA setup are to use DNS, a load balancer, or BGP.
A simpler method is to use the IP of a single node.
This has the disadvantage of being a single point of failure, but it can be a simple way to get running.&lt;/li>
&lt;li>&lt;code>CONTROL_PLANE_SERVERCLASS&lt;/code>: The server class to use for control plane nodes.&lt;/li>
&lt;li>&lt;code>WORKER_SERVERCLASS&lt;/code>: The server class to use for worker nodes.&lt;/li>
&lt;li>&lt;code>KUBERNETES_VERSION&lt;/code>: The version of Kubernetes to deploy (e.g. &lt;code>v1.21.1&lt;/code>).&lt;/li>
&lt;li>&lt;code>CONTROL_PLANE_PORT&lt;/code>: The port used for the Kubernetes API server (usually &lt;code>6443&lt;/code>).&lt;/li>
&lt;/ul>
&lt;p>For instance:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">CONTROL_PLANE_SERVERCLASS&lt;/span>&lt;span style="color:#719e07">=&lt;/span>any
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">WORKER_SERVERCLASS&lt;/span>&lt;span style="color:#719e07">=&lt;/span>any
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">TALOS_VERSION&lt;/span>&lt;span style="color:#719e07">=&lt;/span>v0.14.0
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">KUBERNETES_VERSION&lt;/span>&lt;span style="color:#719e07">=&lt;/span>v1.22.2
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">CONTROL_PLANE_PORT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">6443&lt;/span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">CONTROL_PLANE_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>1.2.3.4
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>clusterctl generate cluster cluster-0 -i sidero &amp;gt; cluster-0.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Take a look at this new &lt;code>cluster-0.yaml&lt;/code> manifest and make any changes as you
see fit.
Feel free to adjust the &lt;code>replicas&lt;/code> field of the &lt;code>TalosControlPlane&lt;/code> and &lt;code>MachineDeployment&lt;/code> objects to match the number of machines you want in your controlplane and worker sets, respectively.
&lt;code>MachineDeployment&lt;/code> (worker) count is allowed to be 0.&lt;/p>
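&lt;p>The relevant fields look roughly like this abbreviated sketch; the actual names and API versions come from the manifest &lt;code>clusterctl&lt;/code> generated for you:&lt;/p>

```yaml
kind: TalosControlPlane
metadata:
  name: cluster-0-cp             # name as generated by clusterctl
spec:
  replicas: 3                    # number of controlplane machines
---
kind: MachineDeployment
metadata:
  name: cluster-0-workers       # name as generated by clusterctl
spec:
  replicas: 0                    # worker count; 0 is allowed
```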
&lt;p>Of course, these may also be scaled up or down &lt;em>after&lt;/em> they have been created.&lt;/p>
&lt;h2 id="create-the-cluster">Create the Cluster&lt;/h2>
&lt;p>When you are satisfied with your configuration, go ahead and apply it to Sidero:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl apply -f cluster-0.yaml
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>At this point, Sidero will allocate Servers according to the requests in the
cluster manifest.
Once allocated, each of those machines will have Talos installed, receive its
configuration, and join together to form a cluster.&lt;/p>
&lt;p>You can watch the progress of the Servers being selected:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>watch kubectl --context&lt;span style="color:#719e07">=&lt;/span>sidero-demo &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> get servers,machines,clusters
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>First, you should see the Cluster created in the &lt;code>Provisioning&lt;/code> phase.
Once the Cluster is &lt;code>Provisioned&lt;/code>, a Machine will be created in the
&lt;code>Provisioning&lt;/code> phase.&lt;/p>
&lt;p>&lt;img src="/images/sidero-cluster-start.png" alt="machine provisioning">&lt;/p>
&lt;p>During the &lt;code>Provisioning&lt;/code> phase, a Server will become allocated, the hardware
will be powered up, Talos will be installed onto it, and it will be rebooted
into Talos.
Depending on the hardware involved, this may take several minutes.&lt;/p>
&lt;p>Eventually, the Machine should reach the &lt;code>Running&lt;/code> phase.&lt;/p>
&lt;p>&lt;img src="/images/sidero-cluster-up.png" alt="machine_running">&lt;/p>
&lt;p>The initial controlplane Machine will always be started first.
Any additional nodes will be started after that and will join the cluster when
they are ready.&lt;/p>
&lt;h2 id="retrieve-the-talosconfig">Retrieve the Talosconfig&lt;/h2>
&lt;p>In order to interact with the new machines (outside of Kubernetes), you will
need to obtain the &lt;code>talosctl&lt;/code> client configuration, or &lt;code>talosconfig&lt;/code>.
You can do this by retrieving the secret from the Sidero
management cluster:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl --context&lt;span style="color:#719e07">=&lt;/span>sidero-demo &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> get secret &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> cluster-0-talosconfig &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> -o &lt;span style="color:#268bd2">jsonpath&lt;/span>&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">&amp;#39;{.data.talosconfig}&amp;#39;&lt;/span> &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> | base64 -d &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> &amp;gt; cluster-0-talosconfig
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="retrieve-the-kubeconfig">Retrieve the Kubeconfig&lt;/h2>
&lt;p>With the talosconfig obtained, the workload cluster&amp;rsquo;s kubeconfig can be retrieved in the normal Talos way:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl --talosconfig cluster-0-talosconfig --nodes &amp;lt;CONTROL_PLANE_IP&amp;gt; kubeconfig
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="check-access">Check access&lt;/h2>
&lt;p>Now, you should have two clusters available: your management cluster
(&lt;code>sidero-demo&lt;/code>) and your workload cluster (&lt;code>cluster-0&lt;/code>).&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl --context&lt;span style="color:#719e07">=&lt;/span>sidero-demo get nodes
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl --context&lt;span style="color:#719e07">=&lt;/span>cluster-0 get nodes
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>V0.5: Scale the Workload Cluster</title><link>/v0.5/getting-started/scale-workload/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/scale-workload/</guid><description>
&lt;p>If you have more machines available, you can scale both the controlplane
(&lt;code>TalosControlPlane&lt;/code>) and the workers (&lt;code>MachineDeployment&lt;/code>) for any cluster
after it has been deployed.
This is done just as you would scale normal Kubernetes &lt;code>Deployments&lt;/code>.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>kubectl scale taloscontrolplane cluster-0-cp --replicas&lt;span style="color:#719e07">=&lt;/span>&lt;span style="color:#2aa198">3&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>V0.5: Optional: Pivot management cluster</title><link>/v0.5/getting-started/pivot/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/pivot/</guid><description>
&lt;p>Having the Sidero cluster running inside a Docker container is not the most
robust place for it, but it did make for an expedient start.&lt;/p>
&lt;p>Conveniently, you can create a Kubernetes cluster in Sidero and then &lt;em>pivot&lt;/em> the
management plane over to it.&lt;/p>
&lt;p>Start by creating a workload cluster as you have already done.
In this example, this new cluster is called &lt;code>management&lt;/code>.&lt;/p>
&lt;p>After the new cluster is available, install Sidero onto it as we did before,
making sure to set all the environment variables or configuration parameters for
the &lt;em>new&lt;/em> management cluster first.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>sidero.mydomain.com
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#b58900">export&lt;/span> &lt;span style="color:#268bd2">SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT&lt;/span>&lt;span style="color:#719e07">=&lt;/span>sidero.mydomain.com
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>clusterctl init &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --kubeconfig-context&lt;span style="color:#719e07">=&lt;/span>management &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> -i sidero -b talos -c talos
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Now, you can move the database from &lt;code>sidero-demo&lt;/code> to &lt;code>management&lt;/code>:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>clusterctl move &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --kubeconfig-context&lt;span style="color:#719e07">=&lt;/span>sidero-demo &lt;span style="color:#cb4b16">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#cb4b16">&lt;/span> --to-kubeconfig-context&lt;span style="color:#719e07">=&lt;/span>management
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h2 id="delete-the-old-docker-management-cluster">Delete the old Docker Management Cluster&lt;/h2>
&lt;p>If you created your &lt;code>sidero-demo&lt;/code> cluster using Docker as described in this
tutorial, you can now remove it:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>talosctl cluster destroy --name sidero-demo
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>V0.5: Troubleshooting</title><link>/v0.5/getting-started/troubleshooting/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.5/getting-started/troubleshooting/</guid><description>
&lt;p>The first thing to do in troubleshooting problems with the Sidero installation
and operation is to figure out &lt;em>where&lt;/em> in the process the failure is occurring.&lt;/p>
&lt;p>Keep in mind the general flow of the pieces.
For instance:&lt;/p>
&lt;ol>
&lt;li>A server is configured by its BIOS/CMOS to attempt a network boot using the PXE firmware on
its network card(s).&lt;/li>
&lt;li>That firmware requests network and PXE boot configuration via DHCP.&lt;/li>
&lt;li>DHCP points the firmware to the Sidero TFTP or HTTP server (depending on the firmware type).&lt;/li>
&lt;li>The second-stage bootloader, iPXE, is loaded and makes an HTTP request to the
Sidero metadata server for its configuration, which contains the URLs for
the kernel and initrd images.&lt;/li>
&lt;li>The kernel and initrd images are downloaded by iPXE and boot into the Sidero
agent software (if the machine is not yet known and assigned by Sidero).&lt;/li>
&lt;li>The agent software reports the machine&amp;rsquo;s hardware information to the Sidero metadata server via HTTP.&lt;/li>
&lt;li>A (usually human or external API) operator verifies and accepts the new
machine into Sidero.&lt;/li>
&lt;li>The agent software reboots and wipes the newly-accepted machine, then powers
off the machine to wait for allocation into a cluster.&lt;/li>
&lt;li>The machine is allocated by Sidero into a Kubernetes Cluster.&lt;/li>
&lt;li>Sidero tells the machine, via IPMI, to boot into the OS installer
(following all the same network boot steps above).&lt;/li>
&lt;li>The machine downloads its configuration from the Sidero metadata server via
HTTP.&lt;/li>
&lt;li>The machine applies its configuration, installs a bootloader, and reboots.&lt;/li>
&lt;li>The machine, upon reboot from its local disk, joins the Kubernetes cluster
and continues until Sidero tells it to leave the cluster.&lt;/li>
&lt;li>Sidero tells the machine to leave the cluster and reboots it into network
boot mode, via IPMI.&lt;/li>
&lt;li>The machine netboots into wipe mode, wherein its disks are again wiped to
come back to the &amp;ldquo;clean&amp;rdquo; state.&lt;/li>
&lt;li>The machine again shuts down and waits to be needed.&lt;/li>
&lt;/ol>
&lt;h2 id="device-firmware-pxe-boot">Device firmware (PXE boot)&lt;/h2>
&lt;p>The worst place to fail is also, unfortunately, the most common.
This is the firmware phase, where the network card&amp;rsquo;s built-in firmware attempts
to initiate the PXE boot process.
This is the worst place because the firmware is completely opaque, with very
little logging, and what logging &lt;em>does&lt;/em> appear is frequently wiped from the
console faster than you can read it.&lt;/p>
&lt;p>If you fail here, the problem will most likely be with your DHCP configuration,
though it &lt;em>could&lt;/em> also be in the Sidero TFTP service configuration.&lt;/p>
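&lt;p>As a point of reference, a minimal ISC &lt;code>dhcpd&lt;/code> configuration pointing PXE clients at Sidero looks roughly like this sketch; the addresses are placeholders, and your DHCP server&amp;rsquo;s syntax may differ:&lt;/p>

```text
# Hand PXE clients the Sidero TFTP server and the iPXE binary.
# 172.16.199.50 is a placeholder for your Sidero endpoint.
subnet 172.16.199.0 netmask 255.255.255.0 {
  range 172.16.199.100 172.16.199.199;
  next-server 172.16.199.50;   # Sidero TFTP service
  filename "ipxe.efi";         # UEFI clients; legacy BIOS clients need a different binary
}
```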
&lt;h2 id="validate-sidero-tftp-service">Validate Sidero TFTP service&lt;/h2>
&lt;p>The easiest check is to use a &lt;code>tftp&lt;/code> client to validate that the Sidero
TFTP service is available at the IP you are advertising via DHCP.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span> $ atftp 172.16.199.50
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span> tftp&amp;gt; get ipxe.efi
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>TFTP is an old, slow protocol with very little feedback or checking.
Your only real way of telling if this fails is by timeout.
Over a local network, this &lt;code>get&lt;/code> command should take a few seconds.
If it takes longer than 30 seconds, it is probably not working.&lt;/p>
&lt;p>Success is also not usually indicated:
you just get a prompt returned, and the file should show up in your current
directory.&lt;/p>
&lt;p>If you are failing to connect to TFTP, the problem is most likely with your
Sidero Service exposure:
how are you exposing the TFTP service in your management cluster to the outside
world?
This normally involves either setting host networking on the Deployment or
installing and using something like MetalLB.&lt;/p></description></item></channel></rss>