Red Hat Network Satellite Server, as well as its upstream project Katello, enables you to easily manage registered client hosts using Puppet: for this to work, you must first install the Puppet agent on the client host and register it to the Puppet master instance running on the Satellite (or on the Capsule).

This post is a step-by-step guide that not only shows you how to install and configure the Puppet agent on the client host: it also thoroughly describes how to create the Puppet product on the Red Hat Network Satellite Server 6 (or Katello), add repositories for the relevant architecture families, assign them to the right Content View and publish them into the right Lifecycle Environment.

The Linux distribution used in the examples is CentOS 7, but you can of course easily adapt the procedure to any other Red Hat or derived Linux distribution.

Mind that configuration management using Puppet is the oldest way of managing registered hosts on Satellite: the current way is using Ansible. In 2020 Red Hat announced the deprecation of Puppet as of Red Hat Network Satellite Server 7.0. For this reason my suggestion is to migrate from Puppet to Ansible as soon as possible. If you want to learn more about using Ansible with Satellite, please read Enable And Configure Ansible On Red Hat Network Satellite.

Prerequisites

The obvious prerequisite is having the client host already registered on the Red Hat Network Satellite Server 6 or Katello. If you do not know how to do this, or simply want to learn more about this topic, please read Register Clients To Satellite Server 6 Or Katello before going on with this post.
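
Just as a quick, hedged reminder (the Satellite FQDN, organization label and activation key name below are only placeholders for this example), on a CentOS 7 host the registration typically boils down to something like:

# install the CA consumer package published by the Satellite itself
sudo rpm -Uvh http://satellite.mgmt.carcano.local/pub/katello-ca-consumer-latest.noarch.rpm
# register using an activation key belonging to the target organization
sudo subscription-manager register --org="Carcano_CH" --activationkey="centos7-lab"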

In order to avoid redundancy, I write "on Satellite" every time the described procedure is the same on both Red Hat Network Satellite Server 6 and Katello; otherwise I explicitly specify the procedure required on Katello.
As per best practices, we work as an unprivileged user and use "sudo" only when we need administrative rights. Hammer commands are instead issued as the "katello" user: the rationale is having this user's environment already set up at login time, so that "hammer" targets our Satellite by default.
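
For the sake of clarity, this is a minimal sketch of what the hammer configuration of the "katello" user may look like (the URL and the credentials are of course just placeholders):

cat ~/.hammer/cli.modules.d/foreman.yml
:foreman:
  :host: 'https://satellite.mgmt.carcano.local'
  :username: 'admin'
  :password: 'changeme'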

Create the Puppet Product

The very first thing to do is to create the Puppet product on the Satellite, so that the client hosts registered to the Satellite can attach to the related subscription and become entitled to install it.

Please note that in this post everything is always supposed to belong to the "Carcano CH" Organization, so if you are working using the web UI of the Satellite please select the "Carcano CH" organization.

Login to the Satellite using SSH and switch to the user you decided to use to issue hammer commands:

sudo su - katello

Create The Product

Let's create the "Puppet" product:

hammer product create \
  --organization "Carcano CH" \
  --name "Puppet" \
  --description "Puppet infrastructure automation and delivery suite"

the output must be as follows:

Product created.
If you prefer to use the Web UI of the Satellite, you can create a product for the Organization you have already selected by opening the path "Content" / "Products" and clicking on the "Create Product" button.
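
If you want to double check the result from the command line, you can list the products of the organization:

hammer product list --organization "Carcano CH"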

Attach Clients To the Puppet Subscription

Now that the product has been created, the registered client hosts must be able to see it in the list of available subscriptions: login to an already registered client host using SSH and issue the following command:

sudo subscription-manager list --available

the output is a list that, among its various members, must contain the subscription to the "Puppet" product:

+-------------------------------------------+
Available Subscriptions
+-------------------------------------------+
Subscription Name: Puppet
Provides: 
SKU: 768103798701
Contract: 
Pool ID: 8a8180827e828d29017e836cd1c4005f
Provides Management: No
Available: Unlimited
Suggested: 1
Service Type: 
Roles: 
Service Level: 
Usage: 
Add-ons: 
Subscription Type: Standard
Starts: 01/22/2022
Ends: 12/01/2049
Entitlement Type: Physical

Mind that simply creating a product only means that the product, with its related subscription(s), is available on the Satellite: although the client hosts can see it among the available subscriptions (and can even attach to it), they are still not able to install the product, since the subscription does not provide any repository yet, nor has any repository been linked to a Content View and published to a Lifecycle Environment.
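
You can easily verify this from an already registered client host: at this stage, listing the repositories made available by the attached subscriptions should not return anything Puppet related:

sudo subscription-manager repos --list | grep -i puppet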

Create The Puppet5 YUM Repository

We can now create the "Puppet 5" repositories that a client host gets access to after attaching to the "Puppet" subscription.
Be aware that, since the RPM packages contained in the online upstream Puppet repositories are GPG signed, we must publish their GPG key on the Satellite, so that clients automatically download and install it and use it to verify the integrity of the packages before installing them from the local repositories provided by the Satellite itself.

Publish the GPG Signing Key

Let's download the GPG key from the online official Puppet repository:

wget https://yum.puppet.com/RPM-GPG-KEY-puppet-20250406

then we can push to the Satellite the GPG key file we have just downloaded:

hammer gpg create \
--key RPM-GPG-KEY-puppet-20250406 \
--name "RPM-GPG-KEY-puppet-20250406" \
--organization "Carcano CH"

the output must be as follows:

GPG Key created.
If you prefer to use the Web UI of the Satellite, you can publish the GPG key into the Organization you have already selected by opening the path "Content" / "Content Credentials" and clicking on the "Create Content Credential" button.
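
If you want to double check that the key has actually been uploaded, you can list the GPG keys (content credentials) of the organization:

hammer gpg list --organization "Carcano CH"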

Create The Puppet5 YUM Repository For CentOS 7

We are now ready to create the local yum repositories for the "Puppet 5" version of the "Puppet" product hosted on the Satellite: in this post I show you only how to create the repository for the "el7" Linux family (that means "Red Hat Enterprise Linux 7", "CentOS 7" and derivatives).

hammer repository create \
  --organization "Carcano CH" \
  --product "Puppet" \
  --name "Puppet 5 for CentOS 7 RPMs x86_64" \
  --content-type yum \
  --url "http://yum.puppetlabs.com/puppet5/el/7/x86_64" \
  --gpg-key "RPM-GPG-KEY-puppet-20250406"

the output must be as follows:

Repository created.

Please note how we supplied both:

  • the URL of the upstream online repository from which to download the packages
  • the name of the GPG key we just published on the Satellite.
If you prefer to use the Web UI of the Satellite, you can create the YUM repository by opening the path "Content" / "Products" and clicking on the "Puppet" product in the table that lists the available products. Then click on the "New Repository" button.
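
Mind that creating the repository does not download any package by itself: unless you rely on a sync plan assigned to the "Puppet" product, you should also trigger an initial synchronization, for example as follows:

hammer repository synchronize \
  --organization "Carcano CH" \
  --product "Puppet" \
  --name "Puppet 5 for CentOS 7 RPMs x86_64"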

Publish The Puppet5 Repository

The prerequisite for publishing a repository into a Lifecycle Environment is adding it to the Content View of the operating system family the repository belongs to: since we are using the "el7" family, we add the repository to the "CentOS 7" Content View.

Add The Puppet5 Repository To The CentOS 7 Content View

We now add the "Puppet 5 for CentOS 7 RPMs x86_64" YUM repository to the "CentOS 7" Content View (we are of course assuming that the Content View already exists):

hammer content-view add-repository \
--organization "Carcano CH" \
--name "CentOS 7" \
--product "Puppet" \
--repository "Puppet 5 for CentOS 7 RPMs x86_64"

the output is as follows:

The repository has been associated.
If you prefer to use the Web UI of the Satellite, you can add the local yum repository to an existing Content View by opening the path "Content" / "Content Views" and clicking on the "CentOS 7" Content View in the table that lists the available Content Views. Then click on the "Yum Content" tab, select the "Repositories" menu item and click on the "Add" link: this displays the list of yum repositories that can be added to the Content View: pick the ones you need and click on the "Add Repositories" button.

Publish A New Version Of The Content View

A Content View is published into an environment using versions: each time anything changes (for example, when new software has been downloaded from the upstream repository, or a new upstream repository has been added) we must publish a new version of the Content View.
So, since we added a new repository, we must publish a new version of the "CentOS 7" Content View:

hammer content-view publish \
--organization "Carcano CH" \
--name "CentOS 7" \
--description "Added Puppet5 repository"

the output is as follows:

[...................] [100%]
If you prefer to use the Web UI of the Satellite, you can publish a new version of the Content View by opening the path "Content" / "Content Views" and click on the "CentOS 7" Content View in the table that lists the available Content Views. Then click on the "Publish New Version" button.
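
If you want to double check which version has just been created (and, later on, into which Lifecycle Environments each version has been promoted), you can list the versions of the Content View:

hammer content-view version list \
  --organization "Carcano CH" \
  --content-view "CentOS 7"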

Publish The New Version Of The Content View To A Lifecycle Environment

Every new version of a Content View is first published into the "Library": since the client hosts are usually bound to a particular Lifecycle Environment rather than to the "Library" (although binding them to the "Library" is possible, if necessary), we must promote the new version of the "CentOS 7" Content View to the target environment.

In this example, the client hosts are bound to the "Lab" environment: we can promote this version of the Content View as follows:

hammer content-view version promote \
--organization "Carcano CH" \
--content-view "CentOS 7" \
--to-lifecycle-environment Lab

the output is as follows:

[...................] [100%]
If you prefer to use the Web UI of the Satellite, you can promote the new version of the Content View by opening the path "Content" / "Content Views" and clicking on the "CentOS 7" Content View in the table that lists the available Content Views: a table with the list of available versions of the Content View is displayed. Now simply click on the "Promote" button of the version you want to promote, and pick the environments you want to promote the version into.

Install Puppet5 On Clients

We are eventually ready to install the "Puppet 5" RPM packages on the client hosts.

Attach Clients To the Puppet Subscription

Login to an already registered client host using SSH and issue the following command to list the available subscriptions:

sudo subscription-manager list --available

the "Puppet" subscription must be shown among the other subscriptions:

+-------------------------------------------+
Available Subscriptions
+-------------------------------------------+
Subscription Name: Puppet
Provides: 
SKU: 768103798701
Contract: 
Pool ID: 8a8180827e828d29017e836cd1c4005f
Provides Management: No
Available: Unlimited
Suggested: 1
Service Type: 
Roles: 
Service Level: 
Usage: 
Add-ons: 
Subscription Type: Standard
Starts: 01/22/2022
Ends: 12/01/2049
Entitlement Type: Physical

To go on we need to know the "Pool ID" of the "Puppet" subscription, since we must specify it in the command that attaches the client host to that subscription.
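
If you prefer not to copy and paste it by hand, you can also try to fetch the pool ID programmatically; the following one-liner is just a sketch that relies on the "--matches" and "--pool-only" options of subscription-manager:

PUPPET_POOL_ID=$(sudo subscription-manager list --available --matches "Puppet" --pool-only | head -1)
echo ${PUPPET_POOL_ID}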

In this example, the command to attach the client host to the "Puppet" subscription is:

sudo subscription-manager attach --pool=8a8180827e828d29017e836cd1c4005f

the output is as follows:

Successfully attached a subscription for: Puppet
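
After the attach, the Puppet repository we promoted to the "Lab" environment should become visible to the host; mind that the repository label used below is only a guess of how Katello typically derives labels from the organization, product and repository names, so replace it with whatever "repos --list" actually reports on your host:

sudo subscription-manager repos --list | grep -B 1 -A 3 -i puppet
# only needed if the repository is not already enabled by default
sudo subscription-manager repos --enable=Carcano_CH_Puppet_Puppet_5_for_CentOS_7_RPMs_x86_64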

Install puppet-agent

Now that we have successfully attached the client host to the "Puppet" subscription, we can install the puppet-agent RPM package (that is part of the "Puppet" product):

sudo yum install -y puppet-agent

Configure Puppet-Agent

Once installed, the Puppet agent must be configured so that it connects to the Satellite (or to the Capsule): this can easily be achieved by running the following commands:

PUPPET_ENVIRONMENT="production"
PUPPET_RUNINTERVAL_MINUTES="180"
# derive the Satellite (or Capsule) FQDN from the subscription-manager settings
SATELLITE_FQDN=$(awk -F "=" '/baseurl[ ]*/ {print $2}' /etc/rhsm/rhsm.conf | cut -d / -f 3)

# write the agent settings - sudo is needed since we are working as an unprivileged user
cat << EOF | sudo tee /etc/puppetlabs/puppet/puppet.conf
[agent]
    server = ${SATELLITE_FQDN}
    certname = ${HOSTNAME}
    runinterval = ${PUPPET_RUNINTERVAL_MINUTES}m
    environment = ${PUPPET_ENVIRONMENT}
    listen = false
    pluginsync = true
    report = true
EOF

please note how:

  • we automatically guess the FQDN of the Satellite from the registration settings stored in the "/etc/rhsm/rhsm.conf" file
  • we set the Puppet environment using the "PUPPET_ENVIRONMENT" variable
  • we use the "PUPPET_RUNINTERVAL_MINUTES" variable, with the "m" suffix appended in the template, to set the interval in minutes between puppet-agent runs
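
Before the first run, you may also want to double check that the agent actually picks up the settings we just wrote; a quick way is asking Puppet itself to print them:

sudo /opt/puppetlabs/bin/puppet config print server runinterval environment --section agent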

Our first run of puppet-agent is a verbose foreground launch in test mode:

sudo /opt/puppetlabs/bin/puppet agent -tv

the output must be similar to the following (cut) snippet:

Info: Creating a new SSL key for srv-ci-up3a002.mgmt.carcano.local
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for srv-ci-up3a002.mgmt.carcano.local
Info: Certificate Request fingerprint (SHA256): 8D:0E:85:5C:5E:1A:34:8F:F9:9D:BE:1E:B6:40:62:B2:EF:4F:73:28:AD:E3:E5:0A:B6:C5:47:79:77:87:D5:8B
Info: Caching certificate for ca
Exiting; no certificate found and waitforcert is disabled

as you see, puppet-agent creates a Certificate Signing Request (CSR), submits it to the Puppet master and then immediately exits, since its certificate has not been signed yet and waiting for the signed certificate ("waitforcert") is disabled.
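
If you prefer, the agent can instead keep polling the Puppet master until the certificate gets signed: this is just an alternative way of performing the first run, using the "--waitforcert" option (the value is the polling interval in seconds):

sudo /opt/puppetlabs/bin/puppet agent -tv --waitforcert 60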

Sign The Client Certificate Of Puppet Agent

We must now login to the Satellite using SSH so as to have a look at the pending CSRs: we can list them using the puppetserver command line as follows:

sudo /opt/puppetlabs/bin/puppetserver ca list

the output must be pretty similar to the following (cut) snippet:

Requested Certificates:
    srv-ci-up3a002.mgmt.carcano.local       (SHA256)  8D:0E:85:5C:5E:1A:34:8F:F9:9D:BE:1E:B6:40:62:B2:EF:4F:73:28:AD:E3:E5:0A:B6:C5:47:79:77:87:D5:8B

as we see from the FQDN ("srv-ci-up3a002.mgmt.carcano.local") and the fingerprint, the pending request is actually the one sent by our client host - let's sign it as follows:

sudo /opt/puppetlabs/bin/puppetserver ca sign --certname srv-ci-up3a002.mgmt.carcano.local

the output is similar to the following (cut) snippet:

Successfully signed certificate request for srv-ci-up3a002.mgmt.carcano.local
If you prefer to use the Web UI of the Satellite, you can sign the pending CSR by opening the path "Infrastructure" / "Smart Proxies" and picking "Certificates" from the actions drop-down of the smart proxy you registered the client onto. When the list of certificates is shown, simply pick the "Sign" action from the drop-down of the certificates you want to sign.
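
Back on the client host, you can now re-run the agent in the foreground: this time it should download the signed certificate, request the catalog from the Puppet master and apply it:

sudo /opt/puppetlabs/bin/puppet agent -tv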

Start The Puppet-Agent Service

Now that the client certificate of the Puppet node has been issued, we only have to:

  • login again on the client using SSH
  • enable the puppet service to run at boot
  • start the puppet service

sudo systemctl enable puppet
sudo systemctl start puppet

the client host is now configured to be managed by Puppet.
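
As a last check, you can verify that the service is enabled and running and, after the first scheduled run, have a look at the summary of the last catalog application (the path below is the default state directory of the AIO puppet-agent package, so adjust it if your setup differs):

systemctl status puppet
sudo cat /opt/puppetlabs/puppet/cache/state/last_run_summary.yaml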

Footnotes

Here ends this tutorial on how to set up the Puppet agent on client hosts registered to Red Hat Network Satellite Server 6 or Katello: we did not only learn how to install and configure it, we also learned how to create the Puppet product on the Red Hat Network Satellite Server 6 (or Katello), how to add repositories for the relevant architecture families, assign them to the right Content View and publish them into the right Lifecycle Environment. I hope that you enjoyed it, but be aware that, as I warned you, Puppet will be deprecated as of Red Hat Network Satellite Server 7.

Writing a post like this takes hours. I'm doing it for the sole pleasure of sharing knowledge and thoughts, but all of this does not come for free: it is a time-consuming volunteering task. This blog is not affiliated with anybody, does not show advertisements nor sell visitors' data. The only goal of this blog is to make ideas flow. So please, if you liked this post, spend a little of your time to share it on LinkedIn or Twitter using the buttons below: seeing that posts are actually read is the only way I have to understand if I'm really sharing thoughts or if I'm just wasting time and I'd better give up.
