Ansible is a powerful datacenter automation tool that enables nearly declarative automations. "Ansible playbooks, ansible-galaxy, roles and collections" is a primer on Ansible, gradually introducing concepts that we elaborate on in the posts following this one: as we already said, Ansible is a powerful tool, and like many powerful tools it can cause more pain than benefit if improperly managed. The aim of this post is to provide a good baseline that quickly enables you to operate Ansible: running ad hoc statements, running playbooks and using Ansible Galaxy with shelf roles and collections.

In this post we write a playbook that prepares hosts for being managed by Ansible, and we learn how to use Ansible Galaxy to download and install shelf Ansible roles and collections. The outcome will be a running PostgreSQL instance we will use as the DB engine in the next post of the series.

This post begins where we left off with the "Ansible Tutorial – Ansible Container How-To" post: I strongly suggest you read it if you haven't done so already, or it will be hard to make sense of the containerized Ansible environment we are using.

Why Use A Containerized Ansible

In my working experience the best way to use Ansible is running it inside a container image - this approach provides lots of benefits:

  • it demands very little setup and maintenance effort (you don't need to install or patch it at all)
  • you can very easily switch between different Ansible versions - it is just a matter of specifying the container image you want to run
  • it provides an out of the box way to always have the development and the operational environments aligned - it is just a matter of running the same container image
  • it is very easy to integrate not only within an existing CI/CD suite, but also to migrate to a different CI/CD suite when necessary.
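
Just to make this concrete, here is a minimal sketch of how such a containerized Ansible can be started - mind that the image name and the bind mount are assumptions of mine; adjust them to match the container built in the "Ansible Tutorial – Ansible Container How-To" post:

podman run --rm -ti -v $(pwd)/ansible:/ansible your-ansible-image:latest /bin/sh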

Ansible Ad Hoc Statements

Ansible leverages its own idempotent Python modules, providing an easy way to inject them into the target hosts as temporary files and run them.

The most basic way of running Ansible is the so-called ad hoc mode: in this mode Ansible connects to the remote target and runs just the specified module with the supplied arguments.
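
For instance, the classic smoke test is invoking the "ping" module - a minimal sketch, reusing this lab's connection user and target host:

ansible -u vagrant -k pgsql-ca-up1a001 -m ping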

Now, as a more concrete example, let's set the "System in use by the Ansible Lab" login banner on the "pgsql-ca-up1a001" target host by using Ansible ad hoc statements.

First we need to add "System in use by the Ansible Lab" to the "/etc/issue.net" file.

The full statement to run is:

ansible -b -u vagrant -k pgsql-ca-up1a001 --ssh-extra-args='-o StrictHostKeyChecking=no' -m copy -a "content='System in use by the Ansible Lab' dest=/etc/issue.net"

the above statement runs the actual Ansible ad hoc statement, invoking the "copy" module and passing the following arguments ("-a" option):

  • content='System in use by the Ansible Lab'
  • dest=/etc/issue.net

we had to provide the following Ansible command line parameters:

  • -u vagrant: tells Ansible to connect to the target system as the "vagrant" user
  • -k: tells Ansible to prompt for the user's password
  • -b stands for "become", which in Ansible terms means: right after connecting to the target system, become another user (the "root" user by default)
  • --ssh-extra-args='-o StrictHostKeyChecking=no': this option disables the check of the target host's host key

After this we must of course configure SSHD to use the "/etc/issue.net" file as the banner file, by adding the "Banner /etc/issue.net" line to the "/etc/ssh/sshd_config" file:

ansible -b -u vagrant -k pgsql-ca-up1a001 -m lineinfile -a "path=/etc/ssh/sshd_config regexp='^[#][ ]*Banner .*' line='Banner /etc/issue.net'"

Lastly, we must restart the SSHD service to apply the change:

ansible -b -u vagrant -k pgsql-ca-up1a001 -m service -a "name=sshd state=restarted"

It is clear that if the ad hoc mode were the only way of running Ansible statements, Ansible wouldn't be a very handy tool.

Ad hoc mode is suitable only when dealing with single statements to run massively, such as a mass restart of services or target systems.

When dealing with a long list of tasks, you must run Ansible using playbooks.

Ansible Playbooks

Ansible provides a much more clever and handy way of running a long list of tasks: you can list the tasks within YAML formatted files, grouping them into what is called a "play". Since these files can actually contain more than one play, they are called playbooks.

An Ansible play is not just a list of tasks specifying modules and their settings: it has its own syntax, supporting control structures such as loops and conditionals. In addition to that, it supports the Jinja2 templating language.
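
Just to give a taste of it before digging deeper, here is a hypothetical task making use of a loop, a conditional and Jinja2 variable interpolation - the paths and the condition are made up for illustration purposes only:

    - name: create a directory for each application
      ansible.builtin.file:
        path: "/opt/{{ item }}"
        state: directory
      loop:
        - app1
        - app2
      when: ansible_os_family == 'RedHat'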

Playbooks are processed by Ansible using the "ansible-playbook" command line utility.

Writing An Ansible Playbook

This post is just a primer on Ansible, so we are about to see a quite basic playbook (it's missing control structures and conditionals): we will dig into the Ansible playbook syntax in another post following this one.

The most immediate use case to address is "preparing" target hosts for being managed by Ansible: for this specific use case, we will use the "join.yml" playbook.

Create the "ansible/playbooks/engine/join.yml" playbook file with the following contents:

---
- name: prepare the Ansible Container
  hosts: localhost
  become: true
  tasks:
    - name: Update repositories and install the sshpass package
      community.general.apk:
        name: sshpass
        update_cache: true

- name: prepare targets for being managed by Ansible
  hosts: "{{ targets }}"
  vars:
    ansible_host_key_checking: false
  tasks:
    - name: create the Ansible service user
      become: true
      remote_user: vagrant
      ansible.builtin.user:
        name: "{{ ansible_svc_user }}"
    - name: set the Ansible service user's password
      become: true
      ansible.builtin.shell: |
        echo "{{ ansible_svc_user }}:{{ ansible_svc_password }}" | chpasswd -c SHA512
    - name: set the Ansible service user's public key
      become_user: "{{ ansible_svc_user }}"
      become: true
      ansible.builtin.authorized_key:
        user: "{{ ansible_svc_user }}"
        key: "{{lookup('file', '/ansible/environment/ansible.pub')}}"
    - name: grant sudo without password to the Ansible service user
      become: true
      ansible.builtin.copy:
        content: "{{ ansible_svc_user }} ALL=(ALL)NOPASSWD: ALL"
        dest: /etc/sudoers.d/{{ ansible_svc_user }}
    - name: making sure the Ansible service user's sudo grant is effective
      become: true
      become_user: "{{ ansible_svc_user }}"
      ansible.builtin.command: "sudo -l"

The playbook contains two plays: the first is from line 2 to line 9, the second from line 11 to the end of the file.

The first play just installs the "sshpass" package on the container itself - we already explained why this is needed in the "Ansible Tutorial – Ansible Container How-To" post. As for the use of the "apk" Ansible module, that is because Ansible is running on an Alpine Linux container.

The second play:

  • creates the local service user used by Ansible (lines 16-20) and sets its password (lines 21-24)
  • configures SSH public key authentication for that local service user (lines 25-30)
  • configures a sudo rule enabling that local service user to run every command as any user without being asked for a password (lines 31-35)
  • runs the "sudo -l" command to make sure the configured sudo rule is actually effective (lines 36-39)

Ansible playbooks are YAML formatted, and working with Ansible in general requires solid YAML skills - YAML is not as trivial as it may look at first glance: my heartfelt advice is to also read the "YAML in a Nutshell" post, since it provides everything you must know to properly work with YAML.

To avoid having the password in the shell history, we read it from the TTY and store it into the "ANSIBLE_SVC_PASSWORD" variable:

read -s ANSIBLE_SVC_PASSWORD

We are now ready for our first go with our Ansible Playbook:

ansible-playbook -u vagrant -k -e targets=pgsql-ca-up1a001 -e ansible_svc_user=ansible -e ansible_svc_password=${ANSIBLE_SVC_PASSWORD} playbooks/engine/join.yml

the above statement runs the "ansible-playbook" command line tool to:

  • connect to the target systems as the "vagrant" user, prompting for its password ("-k" option)
  • configure the Ansible local service user to be created on the target host as "ansible", with the password defined in the "ANSIBLE_SVC_PASSWORD" variable
  • limit the run to the "pgsql-ca-up1a001" target host only
  • run the "playbooks/engine/join.yml" playbook

In order for Ansible's fact gathering to be able to detect all the available facts on the target host, on Red Hat family target hosts it is necessary to install the "redhat-lsb-core" RPM package. But there is still a problem with Oracle Linux: that RPM package is provided by the "distro_builder" repository, which is disabled by default. This means that on Oracle Linux it is necessary to enable it first, then install the package and lastly disable it again. Ansible manages the enabling and disabling of DNF repositories using the "dnf_config_manager" module available in the "community.general" Ansible collection. This is definitively a good use case for going on and seeing how to deal with shelf Ansible collections.

Remember to unset the "ANSIBLE_SVC_PASSWORD" variable:

unset ANSIBLE_SVC_PASSWORD

Ansible Namespaces

As we have seen so far, Ansible is a very modular system leveraging Python modules, plugins and playbooks.

A problem every modular system has to address is avoiding naming collisions, such as different entities writing modules or playbooks with the same name.

Ansible addresses the naming collision problem in the most common and trivial way: when invoking objects, such as modules or playbooks, other than the ones provided by the Ansible distribution itself, it is mandatory to prefix them with their namespace; if the namespace part is omitted, Ansible looks for the referenced object in the default namespace ("ansible.builtin").

So, the syntax to be used when referring to objects is:

<namespace>.<objectname>
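
For example, the "copy" module we used earlier belongs to the default namespace, so both of the following forms reference the same module, whereas modules shipped in a collection must always be fully qualified:

ansible.builtin.copy    # fully qualified name
copy                    # short form, resolved to ansible.builtin.copy
community.general.apk   # module shipped in the "community.general" collection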

Ansible Roles And Collections

Ansible of course also fosters the Don't Repeat Yourself paradigm: it is indeed possible to import task lists from existing YAML files, so as to reuse them as necessary.
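
For example, a play can pull in a reusable task list as follows - a minimal sketch, where "common-tasks.yml" is a made-up file name:

    - name: import a reusable task list
      ansible.builtin.import_tasks: common-tasks.yml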

In addition to that, Ansible provides two delivery formats that go far beyond simply providing reusable task lists.

Ansible Roles

Ansible roles are a convenient format enabling the delivery, within a single package, of:

  • reusable task lists
  • resource files and templates
  • handlers
  • var files and defaults

all of these are packaged within a gzipped tarball, along with some metadata useful for determining the role's version, requirements (for example the minimum supported Ansible version) and operating environment (for example the supported operating systems).
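
A typical role layout reflects the list above - this is a sketch of the conventional skeleton, as generated by the "ansible-galaxy role init" command:

myrole/
├── defaults/main.yml    # defaults (lowest precedence variables)
├── files/               # static resource files
├── handlers/main.yml    # handlers
├── meta/main.yml        # metadata: version, requirements, platforms
├── tasks/main.yml       # the reusable task list
├── templates/           # Jinja2 templates
└── vars/main.yml        # var files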

As you can easily guess, namespacing applies to Ansible roles as well: when dealing with Ansible roles delivered alone (that is, not within an Ansible collection), the best practice is to always include the namespace within the Ansible role's name.

For example:

<namespace>.<role_name>

Ansible Collections

Ansible collections go far beyond Ansible roles, enabling the delivery, within a single package, of:

  • playbooks
  • roles
  • var files
  • Ansible modules
  • Ansible plugins

This enables third parties to develop and deliver their own Ansible contents, managing versioning at their own pace.

As you can easily guess, namespacing applies to objects shipped within Ansible collections.

That means that invoking objects inside a collection requires prepending the collection's name.

For example:

<namespace>.<collection>.module
<namespace>.<collection>.role
<namespace>.<collection>.playbook
...

Ansible Galaxy

Both Ansible roles and collections can be delivered through a distribution server and managed by the "ansible-galaxy" command line tool.

You can of course use the online Ansible Galaxy repository, or run your own, for example using Pulp 3.

If you are interested in setting up an on-premises local Galaxy repository using Pulp 3, you may find it useful to read "Installing Pulp3 As A Container" and of course also "Pulp 3 As A Caching Proxy Of The Online Ansible Galaxy".

The very first thing to do when dealing with any use case is to have a look at the online Ansible Galaxy repository to see if a role for our specific use case already exists: the online Ansible Galaxy indeed provides a convenient web UI for quick lookups.

Working With Ansible Roles

It has finally come the time to see an Ansible role in action. 

This post is just a primer on Ansible, so we are about to see only how to use shelf roles: we will dig into how to write a custom Ansible role in another post.

Install An Ansible Role

As an example of using an Ansible role, we will consider the use case of installing a PostgreSQL server on the "pgsql-ca-up1a001" target host. As we said, instead of just starting to write a playbook, the best practice is to first have a look at the online Ansible Galaxy.

A quick check shows that the "galaxyproject.postgresql" Ansible role already exists: since it looks like an official (and so well maintained) one, we can just use this shelf Ansible role - this spares us from spending a lot of time writing a custom role, and even better, we don't even have to maintain it.

Let's install the "galaxyproject.postgresql" Ansible role using the "ansible-galaxy" command line utility as follows:

ansible-galaxy role install -p /ansible/roles galaxyproject.postgresql

the above statement runs the "ansible-galaxy" command line tool, specifying to install the downloaded role in the "/ansible/roles" directory within the container ("-p" command line parameter).
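
If you want to double-check the outcome, you can list the roles installed in that path as follows:

ansible-galaxy role list -p /ansible/roles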

Although it is technically possible to run a role using an Ansible ad hoc statement (you must pass the role as a module and specify to include roles when looking up modules), as we saw that is a very uncomfortable way of running Ansible, suitable only for massive single-statement actions.

Write A Playbook Invoking The Role

As we said, the best way of running a role is including it in a playbook. Some roles provide, beneath the "tests" subdirectory, a sample playbook used for running unit tests, but unfortunately that is not the case for the "galaxyproject.postgresql" Ansible role.

That means we must create our own playbook from scratch - create the "ansible/playbooks/postgresql.yml" playbook with the following contents:

---
- hosts: pgsql-ca-up1a001
  become: true
  roles:
    - galaxyproject.postgresql

it contains the bare minimum statements necessary to run the role:

  • the "hosts" dictionary item provides the target host - mind it can be used to provide also a list of hosts, a group of hosts, a list of group of hosts or a mixture of all of this
  • the "become" dictionary item tells Ansible to become another user ("root" is the default become user) for running the tasks
  • the "roles" dictionary provides the list of roles to load - in this case it of course contains just the "galaxyproject.postgresql" Ansible role

Write A Var File

As you can easily guess, running the role alone is not enough: we also need to provide the necessary values to process the configuration.

The easiest way is to specify a var file - that is, a file containing variables - while running the "ansible-playbook" statement. Create the "ansible/environment/postgresql-instance.yml" variable file with the following contents:

postgresql_version: 14
postgresql_conf:
  - listen_addresses: "'*'"
  - max_connections: 50
postgresql_pg_hba_conf:
  - host all all all md5

Run A Play

We are now ready to have a go with our playbook.

In my lab (I'm using Oracle Linux) the Ansible role was unable to verify the GPG key of the PostgreSQL repository's RPM package: this actually prevents the role from properly running.

To address it, I had to first install the RPM GPG key from "https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-AARCH64-RHEL"; I did it by using an ad hoc Ansible statement such as:

ansible pgsql-ca-up1a001 -b -m rpm_key -a 'key=https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY-AARCH64-RHEL'

Please note how I downloaded the GPG key for the ARM architecture - you must of course download the one suitable for your target hosts' architecture.

As for running the playbook we just wrote, the statement is:

ansible-playbook -e @environment/postgresql-instance.yml -l pgsql-ca-up1a001 playbooks/postgresql.yml

the above statement runs the "ansible-playbook" command line tool to:

  • connect to the target systems, limiting the run to the "pgsql-ca-up1a001" target host only
  • load the variables from the "environment/postgresql-instance.yml" var file
  • run the "playbooks/postgresql.yml" playbook

Working With Ansible Collections

The last step of this post is showing Ansible collections in action.

This post is just a primer on Ansible, so we are about to see only how to use shelf collections: we will dig into how to write a custom Ansible collection in another post.

Install An Ansible Collection

As the use case for playing with a shelf Ansible collection, we will complete the "join.yml" playbook we previously wrote by adding its missing bits.

As we previously said, in order for Ansible's fact gathering to be able to detect all the available facts on the target host, on Red Hat family target hosts it is necessary to install the "redhat-lsb-core" RPM package. But there is still a problem with Oracle Linux: that RPM package is provided by the "distro_builder" repository, which is disabled by default. This means that on Oracle Linux it is necessary to enable it first, then install the package and lastly disable it again.

Ansible manages the enabling and disabling of DNF repositories using the "dnf_config_manager" module available in the "community.general" Ansible collection.

So, let's install the "community.general" Ansible collection as follows:

ansible-galaxy collection install --force -p /ansible/collections community.general

the above statement runs the "ansible-galaxy" command line tool, specifying to install the downloaded collection in the "/ansible/collections" directory within the container ("-p" command line parameter).

We also had to force the install by providing the "--force" flag: that was necessary since the Ansible version provided by the Alpine Linux container comes from the Python PyPI - that kind of Ansible distribution already ships a few Ansible collections, so installing others using the "ansible-galaxy" command line tool may break compatibility. Mind that the force option in this specific case is just to be intended as "I know what I'm doing".
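
You can verify which collections (and versions) are now available in that path as follows:

ansible-galaxy collection list -p /ansible/collections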

Invoke Modules From A Collection Into A Playbook

We can now complete the "ansible/playbooks/engine/join.yml" playbook we just created by appending the following tasks:

    - name: temporarily enable the distro builder repo
      become: true
      community.general.dnf_config_manager:
        name: ol9_distro_builder
        state: enabled
      when: ansible_distribution == 'OracleLinux'
    - name: install the redhat-lsb-core RPM package
      become: true
      ansible.builtin.package:
        name: redhat-lsb-core
        state: present
    - name: disable the distro builder repo
      become: true
      community.general.dnf_config_manager:
        name: ol9_distro_builder
        state: disabled
      when: ansible_distribution == 'OracleLinux'


as you see, this snippet provides three tasks:

  • enable the "distro_builder" repository using the "dnf_config_manager" module available in the "community.general" Ansible collection (lines 1 - 6)
  • install the "redhat-lsb-core" package using the "package" module available in the "ansible.builtin" namespace - that is, the default Ansible distribution (lines 7 - 11)
  • disable the "distro_builder" repository again using the "dnf_config_manager" module available in the "community.general" Ansible collection (lines 12 - 17)

As you see, this was also an opportunity to introduce the "when" conditional in the tasks invoking the "dnf_config_manager" module (lines 6 and 17): the effect is to run the task only when the "ansible_distribution" fact is "OracleLinux".

Run A Play

Let's try running the playbook again: first, read the password to be set for the Ansible local service user from the TTY and store it into the "ANSIBLE_SVC_PASSWORD" variable, as we already did:

read -s ANSIBLE_SVC_PASSWORD

then run the playbook:

ansible-playbook -u vagrant -k -e targets=pgsql-ca-up1a001 -e ansible_svc_user=ansible -e ansible_svc_password=${ANSIBLE_SVC_PASSWORD} playbooks/engine/join.yml

the play must complete successfully, this time also installing the "redhat-lsb-core" package.

Again, remember to unset the "ANSIBLE_SVC_PASSWORD" variable:

unset ANSIBLE_SVC_PASSWORD

Ansible And Katello / Red Hat Network Satellite Server

Ansible is also tightly integrated into Katello, the upstream project of the Red Hat Network Satellite Server: if you are interested in that, you might find it interesting to read "Install Katello Using Ansible", "Enable And Configure Ansible On Red Hat Network Satellite" and "Install Foreman-proxy Using Ansible".

Footnotes

As you see, it is not so hard to start working with Ansible: in this post we very quickly saw how to run it in a pure operational scenario, writing small playbooks reusing shelf Ansible roles and collections.

Mind anyway that the real power of Ansible can be unleashed only after learning how to write your own Ansible roles and Ansible collections, and of course setting up an enterprise class automation suite.

We will gradually go through all of this - in the next post, "Ansible inventory best practices: caveats and pitfalls", for example, we will see how to structure an Ansible inventory the proper way, avoiding messing things up.

