Ansible is an extremely powerful data center automation tool: most of its power comes from it not being too strict in defining a structure - this enables it to be used in extremely complex scenarios as well as to be set up very quickly in quite trivial ones.

But this is a double-edged sword: too many times I have seen PoCs for adopting it performed with requirements that were too poor, with teams thinking they could reuse what they experimented with as a baseline for structuring Ansible: this is a very harmful error that quickly leads to unmaintainable real-life environments with duplicated code and settings, often stored in structures without a consistent logic or naming, losing most of the benefits of such a great automation tool.

Ansible playbooks best practices: caveats and pitfalls picks up from where we left off with Ansible inventory best practices: caveats and pitfalls, exploring how to properly deal with writing playbooks, structuring things both to promote maintainability and to ease operation and configuration tasks.

Playbook Types

What a playbook is has already been discussed in the "Ansible playbooks, ansible-galaxy, roles and collections" post, so I won't repeat those concepts: here I just provide some guidelines for classifying playbooks, so that we can later apply the most suitable design pattern.

As we said at the end of the Ansible inventory best practices: caveats and pitfalls post, Ansible playbooks can be classified by grouping them by purpose.

There are:

  • delivery targeted playbooks - these are playbooks aimed at delivering configurations (such as firewall rules) or configuration items (such as database schemas) to already existing services - the peculiarity of delivery targeted playbooks is that very often they contain task lists that can be reused by other playbooks, such as blueprint based playbooks. For this reason, the best approach is to group these task lists by purpose and store them in separate files, so as to easily import them from other playbooks.
  • deployment targeted playbooks - these are playbooks aimed at deploying services

These playbooks can be further classified as follows:

    • simple services playbooks
    • solutions playbooks

The latter are aimed at deploying a service that depends on other configuration items - for example a web application that requires a database schema and a virtual host on the load balancer. Because of this complexity, which involves distributed changes, it is very convenient to develop them so that they are configured using a blueprint: this eases configuration management, since everything is in the same place (the blueprint) and the configuration structure itself enables an easy understanding of the overall service details. In addition, it makes it very easy to deploy other service instances, by simply creating a copy of the blueprint and modifying the settings as necessary.
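
Just to anticipate the idea with a minimal, purely illustrative sketch (we develop a real blueprint later in this post - every name below is hypothetical), a blueprint is simply a vars_file declaring the host groups and the configuration items of the whole deliverable:

deliverable:
  label: myapp_p1_0
  hosts_groups:
    db_servers:
      members:
        - name: db-ca-up1a001
      databases:
        - name: myapp_p1_0
    app_servers:
      members:
        - name: app-ca-up1a001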

The Playbooks Directory Tree

One of the very first standards to agree on is the structure to use for storing playbooks: this helps both the development and the operation processes, since this way everyone knows how to behave and what to expect.

The Root Directories

Within the "playbooks" directory, we create "basic" and "solutions" directory trees:

mkdir -m 755 ansible/playbooks/infra ansible/playbooks/solutions
  • "infra" is used to contain both delivery targeted playbooks as well as simple services deployment playbooks
  • "solutions" is used to store solutions deployment playbooks

This initial split is also good from a separation-of-duties perspective, since:

  • the "infra" tree contains playbooks that are mostly developed, maintained and operated by the IT support teams (SysAdmins/Engineers), by the DBAs and by the Networking team
  • the "solutions" contains playbooks that are often developed, maintained and operated by applications specialists, sometimes leveraging on step lists imported from the "basic" directory tree.

The Infra Directory Tree

In my experience a good playbook classification pattern is:

  • first by "family" (such as "db" or "linux")
  • and then by "implementation/technology" (such as "postgresql" or "firewall")

Linux Firewall Playbooks

Since in this post we'll see how to write playbooks for managing the Linux system firewall, create the directory tree where to store them as follows:

mkdir -m 755 ansible/playbooks/infra/linux ansible/playbooks/infra/linux/firewall

Since, as we said, playbooks often make use of task lists that can be shared with other playbooks, it is best to put these shared task lists into a dedicated directory.

Create the "tasks" sub directory:

mkdir -m 755 ansible/playbooks/infra/linux/firewall/tasks

Linux OS Playbooks

Since playbooks are very handy also for managing the operating system, let's create the directory tree where to store the OS related ones as follows:

mkdir -m 755 ansible/playbooks/infra/linux/os

Even here, it is best to put the shared task lists into a dedicated directory.

Create the "tasks" sub directory:

mkdir -m 755 ansible/playbooks/infra/linux/os/tasks

PostgreSQL Playbooks

Since in this post we'll also see how to write playbooks for managing PostgreSQL, create the directory tree where to store them as follows:

mkdir -m 755 ansible/playbooks/infra/db ansible/playbooks/infra/db/postgresql

Since, as we said, playbooks often make use of task lists that can be shared with other playbooks, it is best to put these shared task lists into a dedicated directory.

Create the "tasks" sub directory:

mkdir -m 755 ansible/playbooks/infra/db/postgresql/tasks
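
At this point the resulting layout beneath "ansible/playbooks/infra" should look like this:

ansible/playbooks/infra
├── db
│   └── postgresql
│       └── tasks
└── linux
    ├── firewall
    │   └── tasks
    └── os
        └── tasks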

The Solutions Directory Trees

Solutions deployment playbooks actually require two distinct directory trees:

  • "environment/blueprints" - it is used to store all the blueprints consumed by the solutions deployment playbooks
  • "solutions/<solution_label>" - it is used to store every playbook of a specific solution

since the "<solution_label>" is available only when developing a solution, for now we can just create the "environment/blueprints" directory:

mkdir -m 755 ansible/environment/blueprints

The Secrets Directory Tree

Secrets are just vars_files encrypted with Ansible Vault - they provide a very basic way of implementing security for sensitive data (Ansible Vault is not the only way of implementing it - in this post we use it for the sake of simplicity).

It is convenient to have all these secrets files grouped beneath the same directory - create the "ansible/secrets" directory as follows:

mkdir -m 755 ansible/secrets

Lab's Prerequisites

The lab shown in this post requires the "galaxyproject.postgresql" role from Ansible Galaxy - install it as follows:

ansible-galaxy role install -p /ansible/roles galaxyproject.postgresql

The above statement runs "ansible-galaxy", instructing it to install the downloaded role into the "/ansible/roles" directory within the container (the "-p" command line parameter).

After installing it, have a look at the "ansible/roles/galaxyproject.postgresql/README.md" file to see the variables that can be passed to the role and that we will use.

Deployment Targeted Playbooks

This is the most straightforward and common use case of Ansible playbooks: playbooks of this kind are used to deploy services without actual dependencies (apart from the topological ones, of course).

A good standard is to always call this kind of playbook "deploy.yml", so that everything is predictable.
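
This predictability means that, whatever the technology, you always know the deployment entry point without having to look it up - for example (the second path is purely illustrative, we will not create it in this post):

ansible/playbooks/infra/db/postgresql/deploy.yml
ansible/playbooks/infra/web/nginx/deploy.yml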

The PostgreSQL Deploy Playbook

The first playbook we create is the one for deploying PostgreSQL on the target hosts - create the "ansible/playbooks/infra/db/postgresql/deploy.yml" playbook file with the following contents:

---
- hosts: all
  become: true
  pre_tasks:
    - name: import the PGDG repository GPG key
      ansible.builtin.rpm_key:
        key: https://download.postgresql.org/pub/repos/yum/keys/PGDG-RPM-GPG-KEY{{ '-AARCH64' if ansible_facts['architecture'] == 'aarch64' else '' }}-RHEL
      when: "'postgresql' in host_labels and ansible_facts['os_family'] == 'RedHat'"
  roles:
    - role: galaxyproject.postgresql
      vars:
        #postgresql_version: "{{ ansible_facts['ansible_local']['postgresql']['version'] }}"
        postgresql_conf: "{{ ansible_facts['ansible_local']['postgresql']['conf'] }}"
        postgresql_pg_hba_conf: "{{ ansible_facts['ansible_local']['postgresql']['pg_hba_conf'] }}"
      when: "'postgresql' in host_labels"

This playbook contains the bare minimum statements necessary to run the "galaxyproject.postgresql" Ansible role:

  • the "become" dictionary item tells Ansible to become another user ("root" is the default become user) for running the tasks
  • the "pre_tasks" list provides the list of tasks to run before running the contents of the "roles" list: in this case we are exploiting it to load the GPG key used to sign the RPM package providing the settings to setup the PostgreSQL repository in the target hosts. Here it is interesting to note the "when" clause, that limits the tasks to the Red Hat family only targets, and how the architecture specific GPG key's filename is generated using a JINJA2 inline if block
  • the "roles" list provides the list of roles to load - in this case it of course contains just the "galaxyproject.postgresql" Ansible role.

The "vars" dictionary item in each listed role is used to provide the list of variable to pass to the role itself - overriding its internal defaults. Since the values we want to pass to the"galaxyproject.postgresql" Ansible role are contained into variables different from the ones the role expects, we define them on the fly, mapping the values from the local facts.

More specifically:

  • postgresql_conf - we use the "conf" list defined in the JSON Ansible local fact file "/etc/ansible/facts.d/postgresql.fact" stored on the target host itself
  • postgresql_pg_hba_conf - we use the "pg_hba_conf" list defined in the same local fact file
We do not need to map the "postgresql_version" variable since it is already defined with the correct name as a group_var.
Another important thing to note is the use of "when", which contains the conditional to run the tasks and the role only on hosts having the "postgresql" label in the "host_labels" list - this is a lifesaver: playbooks are run by human beings, and human beings sometimes make mistakes.
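
For reference, here is a hedged sketch of what the "/etc/ansible/facts.d/postgresql.fact" local fact file might contain on a target host - mind this is just an assumption for illustration: the actual options are up to you, and the structure must match what the "galaxyproject.postgresql" role expects for "postgresql_conf" and "postgresql_pg_hba_conf":

{
  "conf": [
    { "listen_addresses": "'*'" },
    { "max_connections": 100 }
  ],
  "pg_hba_conf": [
    "host all all 192.168.254.0/24 scram-sha-256"
  ]
}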

We can now run the "deploy.yml" playbook as follows:

ansible-playbook -l pgsql-ca-up1a001 /ansible/playbooks/infra/db/postgresql/deploy.yml
You are certainly wondering why the playbook does not configure PostgreSQL's firewall exception: this is a matter of best practices - granting access to a service from any source is certainly not a good security practice in the first place. Depending on your security rules and regulations, you may be allowed to grant exceptions with a source subnet or a single host as the granularity: that means firewall exceptions must be delivered only when use cases are clearly defined, a moment that often does not match the time you deploy the service. For this reason the firewall exception must be managed by another kind of playbook that we are about to see: the delivery targeted playbooks.

Delivery Targeted Playbooks

As we said, delivery targeted playbooks are used to deliver configurations on demand to already existing services - example use cases are:

  • adding or removing firewall rules in the system firewall
  • adding or removing database instances on existing database engines
  • adding or removing virtual hosts on web servers, reverse proxies or load balancers
  • ...

The Firewall Configuration Delivery Playbook

The most commonly used (and also easiest to implement) delivery targeted playbook is the one that delivers firewall rules to the Linux firewall.

In this example, we create the purpose-specific tasks list "rich-rules.yml" aimed at managing the Linux firewall's rich rules - create the "ansible/playbooks/infra/linux/firewall/tasks/rich-rules.yml" tasks list file with the following contents:

- name: Firewall rules
  become: true
  ansible.posix.firewalld:
    rich_rule: rule family={{ item['family'] | default('ipv4') }} source address={{ item['src_ip'] }} service name={{ item['service'] }} {{ item['action'] }}
    zone: "{{ item['zone'] | default('public') }}"
    permanent: true
    immediate: true
    state: "{{ item['state'] }}"
  loop: "{{ firewall | default([])}}"
  when:
    - "item['rule'] == ''+firewall_rule|default('')+'' or firewall_rule is undefined"

this steps list file can then be imported by the "delivery.yml" playbook, that actually implements the delivery targeted playbook - create the "ansible/playbooks/infra/linux/firewall/delivery.yml" playbook file with the following contents:

- hosts: all
  gather_facts: false
  tasks:
    - name: create rich rules in the Linux system firewall
      ansible.builtin.import_tasks:
        file: tasks/rich-rules.yml

This is a dual personality playbook: if you just run it, it delivers every rule defined in the target host's "firewall" list:

ansible-playbook -l pgsql_ca_up1 /ansible/playbooks/infra/linux/firewall/delivery.yml

This flavor is handy if the target host has been re-created, or just to perform a rerun to make sure that every applied firewall rule is as it is supposed to be.
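
For reference, here is a hedged sketch of what the "firewall" list consumed by the task list might look like, for example in the target's host_vars or group_vars (the source address is illustrative; the rule label is the one used in the next example):

firewall:
  - rule: apps_p1_a_s0_to_pgsql
    src_ip: 192.168.254.0/24
    service: postgresql
    action: accept
    state: enabled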

But the playbook can also deliver only specific rules - for example, if you want to deliver only the rule labeled "apps_p1_a_s0_to_pgsql" (application servers in subnet 0 of security tier 1), just type:

ansible-playbook -l pgsql_ca_up1 -e firewall_rule=apps_p1_a_s0_to_pgsql /ansible/playbooks/infra/linux/firewall/delivery.yml

The use of the "firewall_rule" variable as filtering criteria can be exploited also when importing this playbook from other playbooks, for example by the "deploy" playbook of solutions deployment playbooks: the rules to deliver can simply be specified by the delivery-specific blueprint.

The Linux OS Update Playbook

One of the very first playbooks to implement is the one that updates the systems: in this example, we create the purpose-specific task list "update.yml", aimed at managing the update process, restarting the system only if necessary.

Create the "ansible/playbooks/infra/linux/os/tasks/update.yml" tasks list file with the following contents:

- block:
    - name: install yum-utils
      ansible.builtin.yum:
        name: yum-utils
        state: latest
    - name: yum update
      ansible.builtin.yum:
        name: '*'
        state: latest
      register: output
    - name: print the output of the update process
      ansible.builtin.debug:
        var: output
        verbosity: 1
    - name: run needs-restarting -r
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting
      changed_when: needs_restarting.rc == 1
      failed_when: needs_restarting.rc != 0 and needs_restarting.rc != 1
    - name: reboot
      ansible.builtin.reboot:
        reboot_timeout: 300
      when: needs_restarting.rc == 1
  become: true
  when: ansible_os_family == "RedHat"

this steps list file can then be imported by the "update.yml" playbook, that actually implements the update playbook - create the "ansible/playbooks/infra/linux/os/update.yml" playbook file with the following contents:

- name: linux os update
  hosts: all
  gather_facts: true
  tasks:
    - name: import tasks from tasks/update.yml
      ansible.builtin.import_tasks:
        file: tasks/update.yml

As an example, run it limiting it to the "pgsql-ca-up1a001" target host only:

ansible-playbook -v --ask-vault-pass -l pgsql-ca-up1a001 /ansible/playbooks/infra/linux/os/update.yml

The Database Instances Delivery Playbook

As we said, another typical use case of deliverable configuration items are database instances: the same way firewall rules are delivered to the system firewall, database instances are created on already installed database engines. Database instances are consumed by specific services, so they are created only when necessary (on demand or using a blueprint describing the whole deliverable - we will see an example of this soon).

In this example, we create the purpose-specific tasks list "create-dbs-and-users.yml" aimed at managing the database instances served by the database engine - create the "ansible/playbooks/infra/db/postgresql/tasks/create-dbs-and-users.yml" tasks list file with the following contents:

- block:
    - name: postgresql | create databases
      become_user: postgres
      no_log: true
      community.postgresql.postgresql_db:
        name: "{{ item['name'] }}"
      loop: "{{ pgsql_databases | default(deliverable['hosts_groups']['pgsql_servers']['databases']) }}"
    - name: postgresql | create users
      become_user: postgres
      no_log: true
      community.postgresql.postgresql_user:
        db: "{{ item['name'] }}"
        name: "{{ item['dbo_username'] }}"
        password: "{{ item['dbo_password'] }}"
        priv: "{{ item['priv'] | default('ALL')}}"
      loop: "{{ pgsql_databases | default(deliverable['hosts_groups']['pgsql_servers']['databases']) }}"
  become: true
  become_user: postgres
  when: "'postgresql' in host_labels"

In the above block there are two tasks:

  • the first (lines 2-7) creates the databases
  • the second (lines 8-16) creates the users and assigns them to the databases

Please note the use of the "no_log" flag (lines 4 and 10) to prevent Ansible from displaying the details in the console log - we use it since both tasks contain sensitive data (the usernames and passwords).

Note also that the tasks run as the "postgres" user and only on hosts having the "postgresql" label in the "host_labels" host_var.

This task list file can then be imported by the "delivery.yml" playbook, which actually implements the delivery targeted playbook - create the "ansible/playbooks/infra/db/postgresql/delivery.yml" playbook file with the following contents:

- hosts: all
  gather_facts: true
  tasks:
    - name: create databases and users
      ansible.builtin.import_tasks:
        file: tasks/create-dbs-and-users.yml

This playbook expects the "pgsql_databases" list of dictionaries or, in its absence, the "databases" list beneath the "deliverable" dictionary - since lists of dictionaries are "complex" objects, this playbook can only be run by passing a vars_file through the command line - for example:

ansible-playbook -e @vars/database.yml -l pgsql-ca-up1a001 playbooks/infra/db/postgresql/delivery.yml

Of course, just as an example, the "ansible/vars/database.yml" file must contain a list like the following one:

pgsql_databases:
  - name: foo_database
    dbo_username: foo_user
    dbo_password: foo_password
  - name: bar_database
    dbo_username: bar_user
    dbo_password: bar_password
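
Since this file contains credentials, in real life you would not keep it in clear text: for example, you can encrypt it in place with Ansible Vault (remembering to then add "--ask-vault-pass" to the "ansible-playbook" statement above):

ansible-vault encrypt ansible/vars/database.yml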

The Database Backup Playbook

Since another broadly used capability when dealing with databases is backup and restore, we also implement the database backup playbook - in order to promote reusability, we create the purpose-specific task list "backup.yml" - create the "ansible/playbooks/infra/db/postgresql/tasks/backup.yml" file with the following contents:

- block:
    - name: install python3-psycopg2 if necessary
      ansible.builtin.package:
        name: python3-psycopg2
        state: present
    - name: create the directory for storing the backup
      ansible.builtin.file:
        path: "{{ postgresql_backups_dir }}"
        mode: "0750"
        owner: postgres
        state: directory
    - name: perform the backup
      community.postgresql.postgresql_db:
        state: dump
        name: "{{ postgresql_dbname }}"
        target: "{{ postgresql_backups_dir }}/{{ postgresql_backup_file|default(postgresql_dbname+'-'+ansible_date_time['epoch']+'.gz')}}"
      become_user: postgres
  become: true
  when: 
    - "'postgresql' in host_labels"
    - postgresql_dbname is defined


then we create the actual "ansible/playbooks/infra/db/postgresql/backup.yml" playbook that imports it:

---
- name: postgresql server backup
  hosts: all
  tasks:
    - name: import tasks/backup.yml
      ansible.builtin.import_tasks:
        file: tasks/backup.yml

As you can see from the "when" clause, running this playbook requires at least setting the "postgresql_dbname" variable - for example:

ansible-playbook --ask-vault-pass -l pgsql-ca-up1a001 -e postgresql_dbname=gitea_p1_0 /ansible/playbooks/infra/db/postgresql/backup.yml

Optionally, you can also pass the "postgresql_backup_file" variable with the file name you want to assign to the backup.
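
For example (the backup file name here is purely illustrative):

ansible-playbook --ask-vault-pass -l pgsql-ca-up1a001 -e postgresql_dbname=gitea_p1_0 -e postgresql_backup_file=gitea_p1_0-before-upgrade.gz /ansible/playbooks/infra/db/postgresql/backup.yml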

The Restore Database Playbook

Of course we also implement the playbook for restoring a backup - again, in order to promote reusability, we create the purpose-specific task list "restore.yml" - create the "ansible/playbooks/infra/db/postgresql/tasks/restore.yml" file with the following contents:

- block:
    - name: install python3-psycopg2 if necessary
      ansible.builtin.package:
        name: python3-psycopg2
        state: present
    - name: perform the restore
      community.postgresql.postgresql_db:
        state: restore
        name: "{{ postgresql_dbname }}"
        target: "{{ postgresql_backups_dir }}/{{ postgresql_backup_file }}"
      become_user: postgres
  become: true
  when:
    - "'postgresql' in host_labels"
    - postgresql_dbname is defined
    - postgresql_backup_file is defined

then we create the actual "ansible/playbooks/infra/db/postgresql/restore.yml" playbook that imports it:

---
- name: postgresql server restore
  hosts: all
  tasks:
    - name: import tasks/restore.yml
      ansible.builtin.import_tasks:
        file: tasks/restore.yml

An example statement for running this playbook is:

ansible-playbook --ask-vault-pass -l pgsql-ca-up1a001 -e postgresql_dbname=gitea_p1_0 -e postgresql_backup_file=gitea_p1_0-24061201.gz /ansible/playbooks/infra/db/postgresql/restore.yml

This time it is obviously mandatory to specify the "postgresql_backup_file" variable with the file name of the backup you want to restore.

The "postgresql_db" Ansible module, when used with the "restore" state, requires the database to be restored to already exist - if you are restoring on a new server, you must first create the database using the "ansible/playbooks/infra/db/postgresql/delivery.yml"playbook.

Solutions Deployment Playbooks

We have finally reached the hottest topic of this post - solutions deployment playbooks: this kind of playbook is used for seamlessly deploying and configuring complex services that deeply depend on the rest of the infrastructure - for example, an application that requires a database (an upstream dependency), maybe in a scenario with replicated instances (which makes the service instance a downstream dependency, as a backend of a load balancer or simply as an instance of a round-robin service).

In such a scenario, having a blueprint that governs the deployment is very handy, since every upstream and downstream dependency is clearly declared in the blueprint: this makes things much easier and more straightforward not only for the configuration management team, but also for the IT operations team, since they can very easily have a look at the blueprint to immediately make sense of the topology of the whole service and its dependencies.

By the way, this very clean approach is compliant with both the Infrastructure as Code and Configuration as Code paradigms.

In this example lab we are deploying a mock project of a Gitea service backed by PostgreSQL, assigning it the "git-p1-0" deliverable label (mind how it contains every summary information needed to uniquely identify the service - the "git" service in the production environment, security tier 1, cluster 0).

This post shows only the first part of the project: I want to keep the second part for another post where we explore the best practices for developing Ansible roles.

The Blueprint

The very first thing to do is writing the blueprint - create the "ansible/environment/blueprints/git-p1-0.yml" file with the following contents:

deliverable:
  label: git_p1_0
  description: Gitea based Git service, production security tier 1, instance 0
  hosts_groups:
    pgsql_servers:
      members:
        - name: pgsql-ca-up1a001
        #- name: pgsql-ca-up1b002
        #- name: pgsql-ca-up1c003 
      databases:
        - name: gitea_p1_0
          dbo_username: "{{ deliverables['git_p1_0']['pgsql_databases']['gitea_p1_0']['dbo_username'] }}"
          dbo_password: "{{ deliverables['git_p1_0']['pgsql_databases']['gitea_p1_0']['dbo_password'] }}"
      firewall:
        - rule: git-ca-up1a001_to_pgsql
          src_ip: 192.168.254.15/24
          service: postgresql
          action: accept
          state: enabled
        - rule: git-ca-up1b002_to_pgsql
          src_ip: 192.168.253.15/24
          service: postgresql
          action: accept
          state: enabled
    git_servers:
      members:
        - name: git-ca-up1a001
        #- name: git-ca-up1b002
    load_balancers:
      members:
        - name: lb-ca-up1a001
        - name: lb-ca-up1b002
You must never enter sensitive data into a blueprint - as you can see, here the username and password of the PostgreSQL database are fetched from other variables: these variables will of course come from another vars_file, but that file is encrypted using Ansible Vault.
Some of the hosts listed in this blueprint are commented out - I've put them there as placeholders to show where they must go if you want to implement redundant Git and PostgreSQL hosts in different availability zones to achieve high availability. Mind anyway that this post is just about Ansible playbook best practices, so showing how to deploy a PostgreSQL HA cluster would bring it out of scope. But it could of course be the topic for another post.

The Secrets Vars File

The vars_files can be easily secured by using the "ansible-vault" command line utility.
For example, to create the "ansible/secrets/blueprints.yml" secret, just type:

ansible-vault create /ansible/secrets/blueprints.yml

Pick a good password and type it when requested - mind you will need that password each time you need to decrypt that file, including when running the "ansible-playbook" statement (we will see this very soon).

The above statement opens your default editor - probably "vi": just add the following contents and then exit saving the changes:

deliverables:
  git_p1_0:
    pgsql_databases:
      gitea_p1_0:
        dbo_username: gitea_p1_0
        dbo_password: g1t-G6.lP-1!
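
Whenever you later need to inspect or change these credentials, use the same utility (it will ask for the password you picked):

ansible-vault view /ansible/secrets/blueprints.yml
ansible-vault edit /ansible/secrets/blueprints.yml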

The Directory For The Solutions Deployment Playbook

In my experience it is best to create a directory for storing the solutions deployment playbooks, grouping them by technology - since in this case we are deploying Gitea instances, the technology is "gitea" (the other involved technologies, such as PostgreSQL or HAProxy, are just CI dependencies), so we need to create the "ansible/playbooks/solutions/gitea" directory as follows:

mkdir -m 755 ansible/playbooks/solutions/gitea

We can now develop the actual solutions deployment playbook - create the "ansible/playbooks/solutions/gitea/play.yml" file with the following contents:

- name: generating dynamic hostgroups
  hosts: all
  gather_facts: false
  become: false
  run_once: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit) }}"
  tasks:
    - debug:
        var: deliverable_vars_file
    - name: composing the pgsql_servers dynamic hostgroup
      add_host:
        groups: pgsql_servers
        hostname: "{{ item['name'] }}"
      loop: "{{ deliverable['hosts_groups']['pgsql_servers']['members'] }}"
    - name: composing the git_servers dynamic hostgroup
      add_host:
        groups: git_servers
        hostname: "{{ item['name'] }}"
      loop: "{{ deliverable['hosts_groups']['git_servers']['members'] }}"
    - name: composing the load_balancers dynamic hostgroup
      add_host:
        groups: load_balancers
        hostname: "{{ item['name'] }}"
      loop: "{{ deliverable['hosts_groups']['load_balancers']['members'] }}"
- name: pinging the PostgreSQL servers
  hosts: pgsql_servers
  gather_facts: false
  become: false
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit) }}"
  tasks:
    - name: ping member
      ping:
- name: pinging the Git servers
  hosts: git_servers
  gather_facts: false
  become: false
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit) }}"
  tasks:
    - name: ping member
      ping: 
- name: common tasks for PostgreSQL and Git
  hosts:
    - git_servers
    - pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit) }}"
  tasks:
    - name: configuring linux firewall rich rules
      ansible.builtin.import_tasks:
        file: ../../infra/linux/firewall/tasks/rich-rules.yml
- name: PostgreSQL tasks
  hosts: pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit) }}"
  tasks:
    - name: set the firewall fact
      set_fact: 
        firewall: "{{ deliverable['hosts_groups']['pgsql_servers']['firewall'] }}"
    - name: configure solution-specific firewall rules
      ansible.builtin.import_tasks:
        file: ../../infra/linux/firewall/tasks/rich-rules.yml
    - name: create databases and users
      ansible.builtin.import_tasks:
        file: ../../infra/db/postgresql/tasks/create-dbs-and-users.yml
Please note how each play loads two vars_files: the first is the Ansible Vault encrypted "blueprints.yml" containing the credentials, while the other one is loaded using the value of the "deliverable_vars_file" variable, which is unset by default: this is a trick to have this playbook work in two distinct ways. The first is a direct run, loading the vars_file using the "-e" command line parameter (you will see this in action soon); the second is importing this playbook from the "site.yml" playbook, passing it the path of the vars_file to load using the "deliverable_vars_file" variable. We'll see this second way of usage in detail later on.

The above playbook consists of several plays:

  • the first play (lines 1-26) is used to dynamically build the host groups: by having the "add_host" module (lines 13, 18 and 23) loop over the different targets by type (lines 16, 21 and 26), the target hosts are added to the proper host group ("pgsql_servers", "git_servers" or "load_balancers")
  • the next two plays (lines 27-46) are used to ping each host of each host group one by one - this is not really necessary, but I saw that with some Ansible versions the playbook gets stuck in the next plays if this is not performed - I haven't investigated this further, but if any of you knows why, please add a comment and I'll integrate it into this post.
  • the fourth play (lines 47-58) actually starts doing something - it contains tasks to be performed on both the PostgreSQL hosts and the Git hosts, such as applying firewall rules. Please note how we are reusing "ansible/playbooks/infra/linux/firewall/tasks/rich-rules.yml" by importing it: this approach promotes maintainability, since changes made to "ansible/playbooks/infra/linux/firewall/tasks/rich-rules.yml" are immediately applied to this playbook too.
  • the fifth play (lines 59-74) is used to deliver changes only to the PostgreSQL server hosts.

More specifically:

    • additional firewall rules, specific to this solution deployment - as you can see, we are reusing "ansible/playbooks/infra/linux/firewall/tasks/rich-rules.yml"
    • the database used by this solution deployment - here too, please note how we are reusing "ansible/playbooks/infra/db/postgresql/tasks/create-dbs-and-users.yml"

We can use this playbook to instantiate the "git-p1-0" deployment as follows:

ansible-playbook --ask-vault-pass -e@/ansible/environment/blueprints/git-p1-0.yml /ansible/playbooks/solutions/gitea/play.yml

The Site Playbook

We have finally come to the creation of the "site.yml" playbook - this is a very important playbook that enables you to deploy the entire Ansible managed site from scratch with just a single "ansible-playbook" statement.

The golden rule is to simply import every playbook - in the right execution order, so first the ones that deploy the infrastructure, and last the ones that deploy the solutions, taking into account their interdependencies.

In our example lab, just create the "ansible/playbooks/site.yml" with the following contents:

- name: Deploy every PostgreSQL server
  ansible.builtin.import_playbook: infra/db/postgresql/deploy.yml
- name: Deploy the Git instance
  ansible.builtin.import_playbook: solutions/gitea/play.yml
  vars:
    deliverable_vars_file: "../../../environment/blueprints/git-p1-0.yml"

Mind how we are passing, as the "deliverable_vars_file" variable, the relative path to the blueprint to be deployed by the solutions deployment playbook.

If you keep your "site.yml" up to date, adding the playbooks one by one whenever you develop new ones, you will be able to deploy the whole datacenter by just running:

ansible-playbook --ask-vault-pass /ansible/playbooks/site.yml

Footnotes

I think that after these initial posts on Ansible you are starting to feel the huge power that comes with it - and consider that this is only the beginning: we have not yet started talking about how to develop your own Ansible roles and collections.

The next post of this set is "Ansible roles best practices: practical example gitea role": in that post we will see not only a non-trivial hands-on lab explaining how to write a custom Ansible role in a clean and tidy way, but also how to write the playbooks to use it.

If you liked this post and the other ones, just share them on LinkedIn - sharing and comments are an inexpensive way to push me to keep on writing - this blog makes sense only if it gets visited.

