<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Sidero Metal – Overview</title><link>/v0.4/overview/</link><description>Recent content in Overview on Sidero Metal</description><generator>Hugo -- gohugo.io</generator><atom:link href="/v0.4/overview/index.xml" rel="self" type="application/rss+xml"/><item><title>V0.4: Introduction</title><link>/v0.4/overview/introduction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.4/overview/introduction/</guid><description>
&lt;p>Sidero (&amp;ldquo;Iron&amp;rdquo; in Greek) is a project created by the &lt;a href="https://www.SideroLabs.com/">Sidero Labs&lt;/a> team.
Sidero Metal provides lightweight, composable tools that can be used to create bare-metal &lt;a href="https://www.talos.dev">Talos Linux&lt;/a> + Kubernetes clusters.
These tools are built around the Cluster API project.&lt;/p>
&lt;p>Because of the design of Cluster API, there is inherently a &amp;ldquo;chicken and egg&amp;rdquo; problem: you need an existing Kubernetes cluster in order to provision the management plane, which can then provision more clusters.
The initial management plane cluster that runs the Sidero Metal provider does not need to be based on Talos Linux, although that is recommended for security and stability reasons.
The &lt;a href="../../getting-started/">Getting Started&lt;/a> guide will walk you through installing Sidero Metal either on an existing cluster, or by quickly creating a Docker-based cluster used to bootstrap the process.&lt;/p>
&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Sidero Metal is currently made up of two components:&lt;/p>
&lt;ul>
&lt;li>Metal Controller Manager: Provides custom resources and controllers for managing the lifecycle of metal machines, as well as the iPXE server, metadata service, and gRPC API service&lt;/li>
&lt;li>Cluster API Provider Sidero (CAPS): A Cluster API infrastructure provider that makes use of the pieces above to spin up Kubernetes clusters&lt;/li>
&lt;/ul>
&lt;p>Sidero Metal also needs these co-requisites in order to be useful:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/kubernetes-sigs/cluster-api">Cluster API&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/talos-systems/cluster-api-control-plane-provider-talos">Cluster API Control Plane Provider Talos&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/talos-systems/cluster-api-bootstrap-provider-talos">Cluster API Bootstrap Provider Talos&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>All components mentioned above can be installed using Cluster API&amp;rsquo;s &lt;code>clusterctl&lt;/code> tool.
See the &lt;a href="../../getting-started/">Getting Started&lt;/a> guide for more details.&lt;/p></description></item><item><title>V0.4: Installation</title><link>/v0.4/overview/installation/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.4/overview/installation/</guid><description>
&lt;p>As of Cluster API version 0.3.9, Sidero is included as a default infrastructure provider in &lt;code>clusterctl&lt;/code>.&lt;/p>
&lt;p>To install Sidero and the other Talos providers, simply issue:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="color:#93a1a1;background-color:#002b36;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-bash" data-lang="bash">&lt;span style="display:flex;">&lt;span>clusterctl init -b talos -c talos -i sidero
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Sidero supports several variables to configure the installation. These variables can be set either as environment
variables or as variables in the &lt;code>clusterctl&lt;/code> configuration:&lt;/p>
&lt;ul>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_HOST_NETWORK&lt;/code> (&lt;code>false&lt;/code>): run &lt;code>sidero-controller-manager&lt;/code> on host network&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/code> (empty): specifies the IP address the controller manager can be reached on; defaults to the node IP&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_API_PORT&lt;/code> (8081): specifies the port the controller manager can be reached on&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_CONTAINER_API_PORT&lt;/code> (8081): specifies the controller manager&amp;rsquo;s internal container port&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_EXTRA_AGENT_KERNEL_ARGS&lt;/code> (empty): specifies additional Linux kernel arguments for the Sidero agent (for example, different console settings)&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_AUTO_ACCEPT_SERVERS&lt;/code> (&lt;code>false&lt;/code>): automatically accept discovered servers; by default, &lt;code>.spec.accepted&lt;/code> must be changed to &lt;code>true&lt;/code> to accept the server&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_AUTO_BMC_SETUP&lt;/code> (&lt;code>true&lt;/code>): automatically attempt to configure the BMC with a &lt;code>sidero&lt;/code> user that will be used for all IPMI tasks.&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_INSECURE_WIPE&lt;/code> (&lt;code>true&lt;/code>): when &lt;code>true&lt;/code>, wipe only the first megabyte of each disk on the server; when &lt;code>false&lt;/code>, wipe the full disk&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_SERVER_REBOOT_TIMEOUT&lt;/code> (&lt;code>20m&lt;/code>): timeout for the server reboot (how long it might take for the server to be rebooted before Sidero retries an IPMI reboot operation)&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_IPMI_PXE_METHOD&lt;/code> (&lt;code>uefi&lt;/code>): IPMI boot from PXE method: &lt;code>uefi&lt;/code> for UEFI boot or &lt;code>bios&lt;/code> for BIOS boot&lt;/li>
&lt;li>&lt;code>SIDERO_CONTROLLER_MANAGER_BOOT_FROM_DISK_METHOD&lt;/code> (&lt;code>ipxe-exit&lt;/code>): configures the way Sidero forces a server to boot from disk when it hits the iPXE server after the initial install: &lt;code>ipxe-exit&lt;/code> returns an iPXE script with the &lt;code>exit&lt;/code> command, &lt;code>http-404&lt;/code> returns an HTTP 404 Not Found error, and &lt;code>ipxe-sanboot&lt;/code> uses the iPXE &lt;code>sanboot&lt;/code> command to boot from the first hard disk&lt;/li>
&lt;/ul>
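&lt;p>As an example, the variables above can be exported before running the installation (the values shown, including the endpoint address, are placeholders for your environment):&lt;/p>
&lt;pre>&lt;code class="language-bash"># run the controller manager on the host network and auto-accept discovered servers
export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=192.168.1.150
export SIDERO_CONTROLLER_MANAGER_AUTO_ACCEPT_SERVERS=true

clusterctl init -b talos -c talos -i sidero
&lt;/code>&lt;/pre>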
&lt;p>Sidero provides two endpoints which should be made available to the infrastructure:&lt;/p>
&lt;ul>
&lt;li>TCP port 8081, which provides the combined iPXE, metadata, and gRPC services (the external endpoint should be passed to Sidero as &lt;code>SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/code> and &lt;code>SIDERO_CONTROLLER_MANAGER_API_PORT&lt;/code>)&lt;/li>
&lt;li>UDP port 69 for the TFTP service (the DHCP server should point the nodes to PXE boot from that IP)&lt;/li>
&lt;/ul>
&lt;p>These endpoints can be exposed to the infrastructure using different strategies:&lt;/p>
&lt;ul>
&lt;li>running &lt;code>sidero-controller-manager&lt;/code> on the host network.&lt;/li>
&lt;li>using Kubernetes load balancers (e.g. MetalLB), ingress controllers, etc.&lt;/li>
&lt;/ul>
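&lt;p>As a sketch, assuming MetalLB is installed, the TCP endpoint could be exposed with a &lt;code>LoadBalancer&lt;/code> Service such as the following (the namespace and selector labels are illustrative and must match your actual deployment; the UDP TFTP port would need to be exposed separately):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: sidero-api
  namespace: sidero-system           # illustrative; use the namespace Sidero is installed in
spec:
  type: LoadBalancer
  ports:
    - name: api
      port: 8081
      protocol: TCP
      targetPort: 8081
  selector:
    app: sidero-controller-manager   # illustrative; match the controller manager's labels
&lt;/code>&lt;/pre>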
&lt;blockquote>
&lt;p>Note: If you want to run &lt;code>sidero-controller-manager&lt;/code> on the host network using a port different from &lt;code>8081&lt;/code>, you should set both &lt;code>SIDERO_CONTROLLER_MANAGER_API_PORT&lt;/code> and &lt;code>SIDERO_CONTROLLER_MANAGER_CONTAINER_API_PORT&lt;/code> to the same value.&lt;/p>
&lt;/blockquote></description></item><item><title>V0.4: Architecture</title><link>/v0.4/overview/architecture/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.4/overview/architecture/</guid><description>
&lt;p>The overarching architecture of Sidero centers around a &amp;ldquo;management plane&amp;rdquo;.
This plane is expected to serve as a single interface upon which administrators can create, scale, upgrade, and delete Kubernetes clusters.
At a high level view, the management plane + created clusters should look something like:&lt;/p>
&lt;p>&lt;img src="./images/dc-view.png" alt="Datacenter view of the management plane and its created clusters">&lt;/p></description></item><item><title>V0.4: Resources</title><link>/v0.4/overview/resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.4/overview/resources/</guid><description>
&lt;p>Sidero, the Talos bootstrap/controlplane providers, and Cluster API each provide several custom resources (CRDs) to Kubernetes.
These CRDs are crucial to understanding the connections between each provider and in troubleshooting problems.
It may also help to look at the &lt;a href="https://github.com/talos-systems/sidero/blob/master/templates/cluster-template.yaml">cluster template&lt;/a> to get an idea of the relationships between these.&lt;/p>
&lt;hr>
&lt;h2 id="cluster-api-capi">Cluster API (CAPI)&lt;/h2>
&lt;p>It&amp;rsquo;s worth defining the most basic resources that CAPI provides first, as they are related to several subsequent resources below.&lt;/p>
&lt;h3 id="cluster">&lt;code>Cluster&lt;/code>&lt;/h3>
&lt;p>&lt;code>Cluster&lt;/code> is the highest level CAPI resource.
It allows users to specify things like the network layout of the cluster, and contains references to the infrastructure and control plane resources that will be used to create the cluster.&lt;/p>
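&lt;p>A minimal &lt;code>Cluster&lt;/code> might look like the following sketch (the names, CIDRs, and &lt;code>apiVersion&lt;/code>s are illustrative and may differ between releases; see the cluster template linked above for an authoritative layout):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: cluster.x-k8s.io/v1alpha3   # illustrative; use the version matching your CAPI release
kind: Cluster
metadata:
  name: workload-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 10.244.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: TalosControlPlane
    name: workload-cluster-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalCluster
    name: workload-cluster
&lt;/code>&lt;/pre>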
&lt;h3 id="machines">&lt;code>Machines&lt;/code>&lt;/h3>
&lt;p>&lt;code>Machine&lt;/code> represents an infrastructure component hosting a Kubernetes node.
It allows for specification of things like the Kubernetes version, and contains a reference to the infrastructure resource that relates to this machine.&lt;/p>
&lt;h3 id="machinedeployments">&lt;code>MachineDeployments&lt;/code>&lt;/h3>
&lt;p>&lt;code>MachineDeployments&lt;/code> are to &lt;code>Machines&lt;/code> what &lt;code>Deployments&lt;/code> are to &lt;code>Pods&lt;/code> in Kubernetes primitives.
A &lt;code>MachineDeployment&lt;/code> allows for specification of a number of &lt;code>Machine&lt;/code> replicas with a given specification.&lt;/p>
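&lt;p>As a sketch, a worker &lt;code>MachineDeployment&lt;/code> might look like the following (all names, versions, and &lt;code>apiVersion&lt;/code>s are illustrative):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: cluster.x-k8s.io/v1alpha3   # illustrative
kind: MachineDeployment
metadata:
  name: workload-cluster-workers
spec:
  clusterName: workload-cluster
  replicas: 2
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: workload-cluster
      version: v1.22.2                  # Kubernetes version; illustrative
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: TalosConfigTemplate
          name: workload-cluster-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: MetalMachineTemplate
        name: workload-cluster-workers
&lt;/code>&lt;/pre>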
&lt;hr>
&lt;h2 id="cluster-api-bootstrap-provider-talos-cabpt">Cluster API Bootstrap Provider Talos (CABPT)&lt;/h2>
&lt;h3 id="talosconfigs">&lt;code>TalosConfigs&lt;/code>&lt;/h3>
&lt;p>The &lt;code>TalosConfig&lt;/code> resource allows a user to specify the type (init, controlplane, join) for a given machine.
The bootstrap provider will then generate a Talos machine configuration for that machine.
This resource also provides the ability to pass a full, pre-generated machine configuration.
Finally, users have the ability to pass &lt;code>configPatches&lt;/code>, which are applied to edit a generated machine configuration with user-defined settings.
The &lt;code>TalosConfig&lt;/code> corresponds to the &lt;code>bootstrap&lt;/code> section of &lt;code>Machines&lt;/code> and &lt;code>MachineDeployments&lt;/code>, and to the &lt;code>controlPlaneConfig&lt;/code> section of &lt;code>TalosControlPlanes&lt;/code>.&lt;/p>
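&lt;p>As a sketch, a &lt;code>TalosConfig&lt;/code> for a join node carrying a single patch might look like the following (the names and versions are illustrative):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3   # illustrative
kind: TalosConfig
metadata:
  name: example-worker-config
spec:
  generateType: join        # one of: init, controlplane, join
  talosVersion: v0.13       # illustrative
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/sda
&lt;/code>&lt;/pre>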
&lt;h3 id="talosconfigtemplates">&lt;code>TalosConfigTemplates&lt;/code>&lt;/h3>
&lt;p>&lt;code>TalosConfigTemplates&lt;/code> are similar to the &lt;code>TalosConfig&lt;/code> above, but used when specifying a bootstrap reference in a &lt;code>MachineDeployment&lt;/code>.&lt;/p>
&lt;hr>
&lt;h2 id="cluster-api-control-plane-provider-talos-cacppt">Cluster API Control Plane Provider Talos (CACPPT)&lt;/h2>
&lt;h3 id="taloscontrolplanes">&lt;code>TalosControlPlanes&lt;/code>&lt;/h3>
&lt;p>The control plane provider presents a single CRD, the &lt;code>TalosControlPlane&lt;/code>.
This resource is similar to &lt;code>MachineDeployments&lt;/code>, but is targeted exclusively at the Kubernetes control plane nodes.
The &lt;code>TalosControlPlane&lt;/code> allows for specification of the number of replicas, version of Kubernetes for the control plane nodes, references to the infrastructure resource to use (&lt;code>infrastructureTemplate&lt;/code> section), as well as the configuration of the bootstrap data via the &lt;code>controlPlaneConfig&lt;/code> section.
This resource is referred to by the CAPI Cluster resource via the &lt;code>controlPlaneRef&lt;/code> section.&lt;/p>
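&lt;p>As a sketch, a three-node &lt;code>TalosControlPlane&lt;/code> might look like the following (the names and versions are illustrative):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: controlplane.cluster.x-k8s.io/v1alpha3   # illustrative
kind: TalosControlPlane
metadata:
  name: workload-cluster-cp
spec:
  replicas: 3
  version: v1.22.2                     # Kubernetes version; illustrative
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalMachineTemplate
    name: workload-cluster-cp
  controlPlaneConfig:
    init:
      generateType: init
      talosVersion: v0.13              # illustrative
    controlplane:
      generateType: controlplane
      talosVersion: v0.13
&lt;/code>&lt;/pre>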
&lt;hr>
&lt;h2 id="sidero">Sidero&lt;/h2>
&lt;h3 id="cluster-api-provider-sidero-caps">Cluster API Provider Sidero (CAPS)&lt;/h3>
&lt;h4 id="metalclusters">&lt;code>MetalClusters&lt;/code>&lt;/h4>
&lt;p>A &lt;code>MetalCluster&lt;/code> is Sidero&amp;rsquo;s view of the cluster resource.
This resource allows users to define the control plane endpoint that corresponds to the Kubernetes API server.
This resource corresponds to the &lt;code>infrastructureRef&lt;/code> section of Cluster API&amp;rsquo;s &lt;code>Cluster&lt;/code> resource.&lt;/p>
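&lt;p>A &lt;code>MetalCluster&lt;/code> sketch (the endpoint host is a placeholder; it should be a VIP or load balancer fronting the control plane nodes):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3   # illustrative
kind: MetalCluster
metadata:
  name: workload-cluster
spec:
  controlPlaneEndpoint:
    host: 192.168.1.200   # placeholder control plane endpoint
    port: 6443
&lt;/code>&lt;/pre>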
&lt;h4 id="metalmachines">&lt;code>MetalMachines&lt;/code>&lt;/h4>
&lt;p>A &lt;code>MetalMachine&lt;/code> is Sidero&amp;rsquo;s view of a machine.
It allows referencing either a single &lt;code>Server&lt;/code> or a &lt;code>ServerClass&lt;/code> from which a physical server will be picked for bootstrapping.&lt;/p>
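&lt;p>As a sketch, a &lt;code>MetalMachine&lt;/code> drawing from a server class might look like the following (the names are illustrative):&lt;/p>
&lt;pre>&lt;code class="language-yaml">apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3   # illustrative
kind: MetalMachine
metadata:
  name: example-metalmachine
spec:
  serverClassRef:
    apiVersion: metal.sidero.dev/v1alpha1
    kind: ServerClass
    name: default
&lt;/code>&lt;/pre>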
&lt;h4 id="metalmachinetemplates">&lt;code>MetalMachineTemplates&lt;/code>&lt;/h4>
&lt;p>A &lt;code>MetalMachineTemplate&lt;/code> is similar to a &lt;code>MetalMachine&lt;/code> above, but serves as a template that is reused for resources like &lt;code>MachineDeployments&lt;/code> or &lt;code>TalosControlPlanes&lt;/code> that allocate multiple &lt;code>Machines&lt;/code> at once.&lt;/p>
&lt;h4 id="serverbindings">&lt;code>ServerBindings&lt;/code>&lt;/h4>
&lt;p>&lt;code>ServerBindings&lt;/code> represent a one-to-one mapping between a Server resource and a &lt;code>MetalMachine&lt;/code> resource.
A &lt;code>ServerBinding&lt;/code> is used internally to keep track of servers that are allocated to a Kubernetes cluster and used to make decisions on cleaning and returning servers to a &lt;code>ServerClass&lt;/code> upon deallocation.&lt;/p>
&lt;h3 id="metal-controller-manager">Metal Controller Manager&lt;/h3>
&lt;h4 id="environments">&lt;code>Environments&lt;/code>&lt;/h4>
&lt;p>These define a desired deployment environment for Talos, including things like which kernel to use, kernel args to pass, and the initrd to use.
Sidero allows you to define a default environment, as well as other environments that may be specific to a subset of nodes.
The environment can be overridden at the &lt;code>ServerClass&lt;/code> or &lt;code>Server&lt;/code> level if you have requirements for different kernels or kernel parameters.&lt;/p>
&lt;p>See the &lt;a href="../../resource-configuration/environments/">Environments&lt;/a> section of our Configuration docs for examples and more detail.&lt;/p>
&lt;h4 id="servers">&lt;code>Servers&lt;/code>&lt;/h4>
&lt;p>These represent physical machines as resources in the management plane.
These &lt;code>Servers&lt;/code> are created when the physical machine PXE boots and completes a &amp;ldquo;discovery&amp;rdquo; process in which it registers with the management plane and provides SMBIOS information such as the CPU manufacturer and version, and memory information.&lt;/p>
&lt;p>See the &lt;a href="../../resource-configuration/servers/">Servers&lt;/a> section of our Configuration docs for examples and more detail.&lt;/p>
&lt;h4 id="serverclasses">&lt;code>ServerClasses&lt;/code>&lt;/h4>
&lt;p>&lt;code>ServerClasses&lt;/code> group the &lt;code>Servers&lt;/code> mentioned above into classes based on memory, CPU, or other attributes.
These can be used to compose a bank of &lt;code>Servers&lt;/code> that are eligible for provisioning.&lt;/p>
&lt;p>See the &lt;a href="../../resource-configuration/serverclasses/">ServerClasses&lt;/a> section of our Configuration docs for examples and more detail.&lt;/p>
&lt;h3 id="sidero-controller-manager">Sidero Controller Manager&lt;/h3>
&lt;p>While the controller does not present unique CRDs within Kubernetes, it&amp;rsquo;s important to understand the metadata resources that are returned to physical servers during the boot process.&lt;/p>
&lt;h4 id="metadata">Metadata&lt;/h4>
&lt;p>The Sidero controller manager&amp;rsquo;s metadata server may be familiar to you if you have used cloud environments previously.
Using Talos machine configurations created by the Talos Cluster API bootstrap provider, along with patches specified by editing &lt;code>Server&lt;/code>/&lt;code>ServerClass&lt;/code> resources or &lt;code>TalosConfig&lt;/code>/&lt;code>TalosControlPlane&lt;/code> resources, metadata is returned to servers that query the controller manager at boot time.&lt;/p>
&lt;p>See the &lt;a href="../../resource-configuration/metadata/">Metadata&lt;/a> section of our Configuration docs for examples and more detail.&lt;/p></description></item><item><title>V0.4: System Requirements</title><link>/v0.4/overview/minimum-requirements/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>/v0.4/overview/minimum-requirements/</guid><description>
&lt;h2 id="system-requirements">System Requirements&lt;/h2>
&lt;p>Most of the time, Sidero does very little, so it needs very few resources.
However, since it is in charge of any number of workload clusters, it &lt;strong>should&lt;/strong>
be built with redundancy.
It is also common, if the cluster is single-purpose,
to combine the controlplane and worker node roles.
Virtual machines are also
perfectly well-suited for this role.&lt;/p>
&lt;p>Minimum suggested dimensions:&lt;/p>
&lt;ul>
&lt;li>Node count: 3&lt;/li>
&lt;li>Node RAM: 4GB&lt;/li>
&lt;li>Node CPU: ARM64 or x86-64 class&lt;/li>
&lt;li>Node storage: 32GB storage on system disk&lt;/li>
&lt;/ul></description></item></channel></rss>