
Commit

Update main docs

actions-user committed Mar 12, 2024
1 parent 4d0764d commit d9730c8
Showing 3 changed files with 87 additions and 42 deletions.
66 changes: 47 additions & 19 deletions static/docs/main/_sources/contents/glossary.rst.txt

Glossary
========

Container
Warewulf containers are the node images that Warewulf manages and provisions.
The term "container" alludes to Warewulf's support for importing OCI containers, OCI container archives, and Apptainer sandboxes to initialize its node images.

Warewulf containers are maintained as an uncompressed "virtual node file system" (VNFS), sometimes also referred to as a "chroot".
These containers are then built into images which are used to provision nodes.

It is important to note, however, that Warewulf does not provision virtualized or nested "containers" in the common sense;
Warewulf nodes run a decompressed image on "bare metal", loaded directly into system memory.
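
A container is typically initialized by importing an existing image with ``wwctl container import``. A minimal sketch follows; the registry URI and the local name ``rockylinux-9`` are illustrative placeholders, not a recommended source::

   wwctl container import docker://ghcr.io/warewulf/warewulf-rockylinux:9 rockylinux-9
   wwctl container list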

Controller
The Warewulf controller runs the Warewulf daemon (``warewulfd``) and is responsible for the management, control, and administration of the cluster.
This system is also sometimes referred to as the "master," "head," or "admin" node.

A typical Warewulf controller also runs a DHCP service and a TFTP service, and often an NFS service,
though these services may instead be managed separately or hosted on separate servers.
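
On a systemd-based controller, these services might be inspected as follows; the service names ``dhcpd``, ``tftp``, and ``nfs-server`` are examples and vary by distribution and site configuration::

   systemctl status warewulfd
   systemctl status dhcpd tftp nfs-server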

Kernel
In addition to a container, Warewulf also requires a kernel (typically a Linux kernel) in order to provision a node.

Kernels may be imported into Warewulf independently, either from the controller or from a container;
however, recent versions of Warewulf (after v4.3.0) can automatically provision a node with a kernel detected in and extracted from the container itself.
In most cases, kernels may be installed in the container using normal system packages, and no special consideration is necessary.
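
For releases that still use an explicitly imported kernel, an import from the controller's running system might look like the sketch below; the ``wwctl kernel`` subcommands shown are assumptions that depend on the installed Warewulf version::

   wwctl kernel import $(uname -r)
   wwctl kernel list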

Node
Warewulf nodes are the systems that Warewulf provisions.
These systems may serve roles such as "compute", "storage", "GPU", or "IO".
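
Nodes are added and configured with ``wwctl node`` subcommands. A brief sketch; the node name, addresses, and flag spellings are illustrative and may differ between releases::

   wwctl node add n0001 --ipaddr 10.0.2.1 --hwaddr 00:00:00:00:00:01
   wwctl node list n0001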

nodes.conf
One of the two primary Warewulf configuration files (the other being ``warewulf.conf``), ``nodes.conf`` is a YAML document that records all configuration parameters for Warewulf's nodes and profiles.
It does not contain the containers or overlays themselves, but refers to them by name.
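
The sketch below illustrates the general shape of such a file; the keys and values shown (profile and node names, container name, addresses) are illustrative and should not be read as an authoritative schema for any particular Warewulf version::

   nodeprofiles:
     default:
       comment: A default profile applied to all nodes
       container name: rockylinux-9
   nodes:
     n0001:
       profiles:
       - default
       network devices:
         default:
           ipaddr: 10.0.2.1
           hwaddr: 00:00:00:00:00:01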

Overlay
Warewulf overlays provide customization for the provisioned container image.
Overlays may be configured on nodes or profiles, as either **system** or **runtime** overlays.

**System overlays** are applied only once, when a node is first provisioned.

**Runtime overlays** are applied when a node is first provisioned and periodically during the runtime of the node. (The default period is 1 minute.)
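
Overlays are managed with ``wwctl overlay`` subcommands. A short sketch; the overlay name and file path are illustrative, and templated files conventionally carry a ``.ww`` suffix::

   wwctl overlay list
   wwctl overlay edit wwinit /etc/motd.ww
   wwctl overlay build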

Profile
Warewulf profiles are abstract node definitions: they carry the same configuration attributes as nodes but do not themselves provision any specific system.
Warewulf nodes may then refer to one or more such profiles for their configuration.
In this way, profiles provide a simple mechanism for applying configuration to a group of nodes,
and this configuration may be mixed with configuration from other profiles.
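
For example, a shared container might be set on a profile so that every node referencing that profile inherits it; the profile and container names below are illustrative::

   wwctl profile set default --container rockylinux-9
   wwctl profile list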

wwctl
The main administrative interface for Warewulf is the ``wwctl`` command, which provides subcommands to manage nodes, profiles, containers, overlays, kernels, and more.
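
A few representative invocations; the available subcommands and their output depend on the installed Warewulf version::

   wwctl --help
   wwctl node list
   wwctl container list
   wwctl overlay list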

wwinit
Warewulf performs some setup during the provisioning process before control is passed to the provisioned operating system.
This process is referred to as "wwinit," and is implemented and configured by a script and overlay of the same name.

wwclient
Warewulf adds a ``wwclient`` daemon to provisioned nodes.
This daemon is responsible for periodically fetching and applying runtime overlays.
61 changes: 39 additions & 22 deletions static/docs/main/contents/glossary.html
(HTML rendering of the same glossary content shown above.)
2 changes: 1 addition & 1 deletion static/docs/main/searchindex.js

Large diffs are not rendered by default.
