Ansible roles are reusable objects that provide specialized tasks lists, handlers, templates and resource files within a single delivery unit: these objects can be directly accessed from the filesystem, downloaded from Git, from the online Ansible Galaxy or from an Ansible Galaxy-compatible local service, such as Pulp 3. Anyway, writing custom roles is a really challenging task, especially designing them to be as easy to use and maintain as possible.

The “Ansible roles best practices: practical example gitea role” post guides you through developing a custom Ansible role using a clean and tidy design that you can use as a reference to develop other custom roles.

As a use case, we see how to deploy Gitea, a renowned full-featured Git web UI supporting multiple organizations, providing authentication and authorization facilities that enable protecting repositories and branches, supporting Merge Requests and a lot of other advanced features, with even a powerful and well standardized API that can be easily exploited by your automations. And, last but not least, ... it is even Java-free.

Designing The Gitea Role

In this post we pretend someone really requested us to develop an Ansible role, so, as best practices dictate, we must start from gathering the requirements.

The Business Requirements

The requestor just asks for an Ansible role for providing Gitea - we are keeping this a near real-life example: do not expect much more than this level of detail from users' requests.

Refining The Business Requirements

After a quick interview, it turned out that the role must provide the following capabilities:

  • install Gitea
  • update Gitea - that means also being able to downgrade when necessary
  • uninstall Gitea
  • provide the initial setup: once delivered, the Gitea instance must have an initial configuration

The service can tolerate short downtimes, but it must be available 24x7, and it contains confidential data.

Combining The Requirements With The Enterprise Software Definition Document (SDD)

After gathering the above requirements, we must match them with the Enterprise-wide requirements claimed in the Enterprise-wide Software Definition Document - SDD.

Your Enterprise Architect, or if your corporation is small, somebody entitled to do it, MUST have defined the Enterprise-wide Software Definition Document - SDD, integrating it (not changing it!) from time to time as necessary.

If you don't have it you must be really sad (I'd also be angry), because without it things tend to quickly become very messy, especially with Ansible.

This not only causes a lot of frustration to people when developing and operating, but also money loss: a messy environment is:

  • hard to be expanded - that means delay when delivering new services
  • not very stable - the messy complexity leads to human errors and unpredictable events causing service outages

Daily firefighting is not good for anybody, and just increases turnover, often leading the most valuable professionals to leave looking for less messy working environments.

In our example scenario, the SDD document, concerning custom services and solutions, claims:

  • services managing confidential data must implement TLS using an X.509 certificate: the certificate's CN must match the host's FQDN. In addition to that, the certificate must mandatorily contain in the subjectAltName the host's FQDN, the host's IP address and the FQDN of the service itself.
  • certificates must be stored beneath the "/etc/pki/tls/certs" directory, whereas their private key must be stored beneath the "/etc/pki/tls/private" directory. File names must be the certificate's CN with a trailing ".crt" for certificates and ".key" for private keys.
  • certificates must be managed by Cloudflare's certmgr, and enrolled using the PKI of the same security tier assigned to the specific environment the host belongs to

concerning services' endpoints, the SDD claims that:

  • when an HTTPS service providing a Web UI is directly accessed, it must be bound to the default HTTPS port (TCP/443) - exceptions to this rule apply only to administrative-only web UIs, such as administrative interfaces for configuring appliances, for configuring the host itself, for configuring cluster suites, etc.

concerning databases - Gitea makes use of a database backend - the SDD claims that:

  • the preferred database engine must always be PostgreSQL, using MariaDB as second choice only if the application does not support PostgreSQL
  • applications must use the corporate-wide database engine instances - the only exception is the lab environment, where it is possible to have collapsed installations with the application and the database engine on the same host

concerning the platforms, the SDD claims:

  • the preferred platform for services is Oracle Linux, either "x86_64" or "aarch64", although exceptions can apply only if the service does not officially support that platform. Linux is always the preferred choice, falling back to Microsoft Windows only as a last resort.

concerning Ansible roles, the SDD states:

  • custom roles must be developed only if off-the-shelf ones are not already available
  • custom roles must be bound to a single application only - it is forbidden to develop a role covering more than one application
  • the role's name must match the application name or be a short nick that enables easily guessing the related application
  • it is forbidden to split the logic of a role for an application into multiple roles: the only exception to this rule is when having to deal with multi-tier applications. An example of this exceptional scenario is a "foo" two-tier application providing a web UI front-end and a REST API backend: in this case both the "foo-ui" and "foo-api" roles must be developed
  • custom roles must take care of installing every application dependency - such as installing libraries, frameworks or command-line tools.
  • custom roles must provide an initial configuration whenever it is technically possible
  • custom roles must provide a way to easily update the application, as well as a health-check facility
  • custom roles must have the application uninstallation capability
  • custom roles must be generated using the "ansible-galaxy" command line tool
  • custom roles must provide the following metadata: "author", "description", "company", "license", "min_ansible_version", "platforms": "author" must be set with the name and corporate email address of who developed the role (e.g "John Doe <john.doe@carcano.corp>"), whereas "company" must be set to "Carcano SA"
  • custom roles must belong to the "carcano" namespace - the short code is "ca" - and, unless specifically told, be licensed as "license (GPL-2.0-or-later, MIT, etc)"
  • custom roles must not implement Ansible tags, since tags are used only at the playbook level.

Create The Gitea Role

After collecting all the above information we are ready to design and implement our Gitea role without making a mess.

Initialize The Role Using Ansible Galaxy

Accordingly with the requirements, we generate the Gitea Ansible role by using the "ansible-galaxy" command-line tool, putting it inside the "carcano" namespace - in our case, the statement to run is:

ansible-galaxy role init --offline --init-path /ansible/roles carcano.gitea

the output is as follows:

- Role carcano.gitea was created successfully

once created, change to the "ansible/roles/carcano.gitea" directory, that is the role's root directory:

cd ansible/roles/carcano.gitea

Fill in The Role's Metadata

First and foremost, we must populate the Ansible role metadata that is used by Ansible Galaxy to classify the role: these attributes are used when downloading roles, to automatically resolve roles' dependencies (downloading the required ones), as well as when performing queries using a web UI - such as the online Ansible Galaxy web UI - which uses them as matching criteria for the search.

Modify the "meta/main.yml" file until it looks as follows - I omitted the comment lines to keep it short:

galaxy_info:
  author: Marco Carcano <marco.carcano@carcano.corp>
  description: Gitea Ansible Role
  company: Carcano SA
  namespace: carcano
  license: license (GPL-2.0-or-later, MIT, etc)
  min_ansible_version: 2.7
  platforms:
    - name: oracle
      versions:
        - 9
        - 8
  galaxy_tags:
    - git 
  dependencies: []

In the above metadata file we set all the metadata requested by the requirements - mind the "namespace" attribute!

Configure Role's Defaults

When dealing with roles, it is very important to provide, inside the "defaults/main.yml" file, reasonable defaults that can be used to instantiate the role - these are used both:

  • for running unit tests, since they provide the necessary values for a complete successful run of the role
  • as documentation that developers can use to understand which settings are available for configuring the role - by using host_vars, group_vars or var-files, these settings can then be overridden at runtime in the playbooks importing the role.

The best practice for variables' names is to always prefix them with the role's name: this way you should be safe from collisions with other variables defined in playbooks or in any other Ansible roles. To keep the prefix short, we use "ca_" as a code to refer to the "carcano" namespace. Since this role's name is "gitea", we prefix each variable's name with "ca_gitea_".
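As an example of such an override, a playbook importing the role can redefine any of these defaults at runtime - here is a minimal sketch, where the "git_servers" host group and the overridden values are illustrative assumptions only:

---
# example playbook overriding some of the role's defaults
# the "git_servers" group and the values below are just examples
- name: Deploy Gitea with custom settings
  hosts: git_servers
  roles:
    - role: carcano.gitea
      vars:
        ca_gitea_version: 1.21.11
        ca_gitea_install_dir: /opt

mind that overriding a whole dictionary such as "ca_gitea_settings" replaces every key, so when overriding it you must supply the full dictionary again.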

Having said that, modify the defaults settings file "defaults/main.yml" created by "ansible-galaxy" so that it looks as follows:

---
# defaults file for carcano.gitea
ca_gitea_git_home_dir: /var/lib/git
ca_gitea_version: 1.21.10
ca_gitea_download_url: https://dl.gitea.io
ca_gitea_install_dir: /opt
ca_gitea_data_dir: "{{ ca_gitea_git_home_dir }}/data"
ca_gitea_repos_dir: "{{ ca_gitea_git_home_dir }}/repositories"
ca_gitea_log_dir: "/var/log/gitea"

ca_gitea_group:
  name: git
  gid: 987
ca_gitea_user:
  name: git
  uid: 990
  home: "{{ ca_gitea_git_home_dir }}"
  gecos: Gitea System User
  shell: /bin/bash
  groups:
      - "{{ ca_gitea_group.name }}"

ca_gitea_settings:
  db_schema: ""
  db_type: "postgres"
  db_host: "localhost"
  db_url: "localhost:5432"
  db_user: "gitea"
  db_passwd: "giteapwd123-"
  db_name: "gitea"
  ssl_mode: "disable"
  charset: "utf8"
  db_path: "{{ ca_gitea_data_dir }}/gitea.db"
  app_name: "Gitea: Instance For Running Unit Tests"
  repo_root_path: "{{ ca_gitea_repos_dir }}"
  lfs_root_path: "{{ ca_gitea_data_dir }}/lfs"
  run_user: "git"
  domain: "git1.lab1.foo.corp"
  ssh_port: "22"
  http_port: "3000"
  app_url: "http://git1.lab1.foo.corp:3000/"
  log_root_path: "{{ ca_gitea_log_dir }}"
  smtp_addr: "mail.lab1.foo.corp"
  smtp_port: "587"
  smtp_from: "git1-l1@foo.corp"
  smtp_user: "git1-l1"
  smtp_passwd: "blabla"
  enable_federated_avatar: "on"
  disable_registration: "on"
  require_sign_in_view: "on"
  default_allow_create_organization: "on"
  default_enable_timetracking: "on"
  password_algorithm: "pbkdf2"
  no_reply_address: "git1-l1@foo.corp"
  admin_name: "administrator"
  admin_passwd: "foo-123"
  admin_confirm_passwd: "foo-123"
  admin_email: "lab@foo.corp"

The bare minimum to adjust at runtime is:

  • ca_gitea_version - the Gitea's version to install or to update to
  • ca_gitea_group::gid the GID to assign to Gitea's local OS group
  • ca_gitea_user::uid the UID to assign to Gitea's local OS user
  • ca_gitea_settings::db_type the Database Engine used as DB backend
  • ca_gitea_settings::db_host the hostname or FQDN of the DB server
  • ca_gitea_settings::db_name the name of the database to connect on the DB server
  • ca_gitea_settings::db_user the username used to access the database backend - mind it must have administrative rights on the database
  • ca_gitea_settings::db_passwd the password of the above user
  • ca_gitea_settings::domain: the FQDN that Git clients will use to reach the Gitea instance
  • ca_gitea_settings::app_url: the HTTP URL the Git clients will use to reach the Gitea instance when using the HTTP transport
  • ca_gitea_settings::no_reply_address: the sender email address Gitea will use when Gitea sends emails
  • ca_gitea_settings::admin_name: the username of the Gitea Web UI's administrative user
  • ca_gitea_settings::admin_passwd: the password of the Gitea Web UI's administrative user
  • ca_gitea_settings::admin_confirm_passwd - the web form's password confirmation field - it obviously must match ca_gitea_settings::admin_passwd
  • ca_gitea_settings::admin_email: the email of the Gitea Web UI's administrative user

I won't provide a detailed explanation of the above settings file since the variables' purposes are easily inferred from their straightforward names. For the sake of completeness, mind that the "ca_gitea_settings" dictionary items are the exact input fields of the form you get when configuring Gitea for the first time using its web UI.

Configure Role's Variables

Role's variables are like classes' private attributes when dealing with object oriented programming - they are attributes that are intended to be managed only inside the role itself.

As already explained for the defaults, variables' names must be prefixed by the role's name to avoid collisions, so here too each variable's name is prefixed with "ca_gitea_".

Modify the file "vars/main.yml" to have the following contents:

---
# vars file for carcano.gitea
ca_gitea_packages:
  - git

as you see, here we just define the "ca_gitea_packages" list, with "git" as the only entry: this list provides the required packages to be installed on the target host before going on with installing and configuring Gitea.

Create Role's JINJA2 Templates

As we saw in the preceding posts, Ansible can generate contents on the fly by merging variables into JINJA2 templates.

Since the Gitea's Systemd service unit file contents depend on the settings provided by some variables, we generate it from a JINJA2 template.

Create the "templates/gitea.service.jn2" file with the following contents:

[Unit]
Description=Gitea (Git with a cup of tea)
After=syslog.target
After=network.target
After=postgresql.service

[Service]
RestartSec=2s
Type=simple
User={{ ca_gitea_user['name'] }}
Group={{ ca_gitea_group['name'] }}
WorkingDirectory={{ ca_gitea_install_dir }}/gitea
ExecStart={{ ca_gitea_install_dir }}/gitea/gitea web
Restart=always
Environment=USER={{ ca_gitea_user['name'] }} HOME={{ ca_gitea_user['home'] }}

[Install]
WantedBy=multi-user.target

the markers "{{" and "}}" enclose the expressions that are replaced by the variables' values when the template is merged to generate the final contents.

Configure The Role's Handlers

Handlers are special tasks that run:

  • when the role run is complete (so after running every task in the tasks list)
  • only if they have been notified to run

This kind of task is typically used to perform actions on services, such as start, stop or restart.

Mind it is possible to force additional runs of the notified handlers before reaching the end of the role's tasks list if necessary: we'll see an example later on.

Since this role creates the "gitea" Systemd service unit, we add a handler for managing its restarts - modify the "handlers/main.yml" file to look as follows:

---
# handlers file for gitea
- name: gitea_restart
  become: true
  service:
    enabled: true
    name: gitea
    state: restarted

Configure The Role's Tasks

When running a role, Ansible runs the tasks listed in the "tasks/main.yml" file - anyway, mind that putting all the tasks there means having roles with just one capability: it is much more handy to have multiple entry points dedicated to each capability - for example "tasks/update.yml" for updating the application provisioned by the role, "tasks/uninstall.yml" to uninstall the application and so on.
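For instance, a playbook can invoke a single capability by loading the matching entry point through the "tasks_from" parameter - a minimal sketch, assuming the role is resolvable from the configured roles path and that the "git_servers" host group exists:

---
# example: run only the update capability of the carcano.gitea role
- name: Update Gitea
  hosts: git_servers
  tasks:
    - name: run the role's update entry point
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: update.yml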

Create A Library Directory

Since these entry points may share some tasks, it is worth the effort to create a "lib" directory used to store all the shared tasks lists - create the "tasks/lib" directory:

mkdir tasks/lib

Systemd Related Tasks

The first tasks list we create is the one used to generate the Gitea Systemd service unit from the JINJA2 template we configured a few moments ago - create the "tasks/lib/systemd.yml" Systemd unit tasks file:

---
- name: carcano.gitea::systemd | systemd related block of tasks
  become: yes
  block:
  - name: carcano.gitea::systemd | create gitea's service systemd unit file
    ansible.builtin.template:
       src: gitea.service.jn2
       dest: /etc/systemd/system/gitea.service
       mode: 0644
    notify: gitea_restart
    register: ca_gitea_systemd_unit_created
  - name: carcano.gitea::systemd | reload systemd
    ansible.builtin.command: systemctl daemon-reload
    when: ca_gitea_systemd_unit_created is changed

The tasks list:

  • generates the "gitea" Systemd unit file from the "gitea.service.jn2" JINJA2 template file, registering the result in the "ca_gitea_systemd_unit_created" variable - if this actually results in a change (the Systemd unit file did not exist or had different contents), it notifies the "gitea_restart" handler - mind that this does not mean it immediately restarts the service!
  • if the "ca_gitea_systemd_unit_created" registered variable reports a change, it reloads Systemd so to apply the freshly created or modified "gitea" Systemd unit file
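As a side note, on recent ansible-core versions the explicit "systemctl daemon-reload" command could also be replaced with the "ansible.builtin.systemd_service" module (called "ansible.builtin.systemd" on older ones), which performs the reload natively - a sketch of how the second task may look using that module instead:

  - name: carcano.gitea::systemd | reload systemd
    ansible.builtin.systemd_service:
      daemon_reload: true
    when: ca_gitea_systemd_unit_created is changed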

OS Related Tasks

Since there are operating system related tasks, a clean design is grouping them within the same tasks file - create the "tasks/lib/os.yml":

---
- name: carcano.gitea::os | block with operating system related tasks
  become: yes
  block:
  - name: carcano.gitea::os | creating gitea's operating system group
    ansible.builtin.group:
      name: "{{ ca_gitea_group['name'] }}" 
      gid: "{{ ca_gitea_group['gid'] }}"
  - name: carcano.gitea::os | creating gitea's operating system users
    ansible.builtin.user:
      name: "{{ ca_gitea_user['name'] }}"
      uid: "{{ ca_gitea_user['uid'] }}"
      home: "{{ ca_gitea_user['home'] }}"
      comment: "{{ ca_gitea_user['gecos'] }}"
      groups: "{{ ca_gitea_user['groups'] }}"
      shell: "{{ ca_gitea_user['shell'] }}"

The tasks list:

  • creates the operating system group that will be used as an additional group for the Gitea system user
  • creates Gitea's operating system user

Gitea's Initial Configuration Tasks

We can now focus on the Gitea's initial configuration related tasks: create the "tasks/lib/initial-config.yml" task file with the following contents:

---
- name: carcano.gitea::initial-config | block with configuration tasks
  become: yes
  block:
  - name: carcano.gitea::initial-config | wait for the http service to start
    uri:
      url: http://127.0.0.1:3000/
      method: GET
    register: ca_gitea_login_response
    until: ca_gitea_login_response.status == 200
    retries: 30
    delay: 1
  - name: carcano.gitea::initial-config | uploading the ca_gitea_settings document to the gitea service
    register: ca_gitea_install_response
    ansible.builtin.uri:
      url: "http://127.0.0.1:3000/"
      method: POST
      return_content: yes
      body_format: form-urlencoded
      status_code: 200
      body: "{{ ca_gitea_settings }}"
  - name: carcano.gitea::initial-config | print gitea service response
    ansible.builtin.debug:
      msg: "{{ ca_gitea_install_response }}"
      verbosity: 1
  - name: carcano.gitea::initial-config | enable TLS support on the gitea service
    notify: gitea_restart
    ansible.builtin.lineinfile:
      path: "{{ ca_gitea_install_dir }}/gitea/custom/conf/app.ini"
      regexp: '^[ ]*PROTOCOL[ ]*='
      insertafter: '^\[server\]$'
      line: "PROTOCOL  = https"
    when:
      - ca_gitea_tls_key_file is defined
      - ca_gitea_tls_cert_file is defined
  - name: carcano.gitea::initial-config | configure TLS private key
    notify: gitea_restart
    ansible.builtin.lineinfile:
      path: "{{ ca_gitea_install_dir }}/gitea/custom/conf/app.ini"
      regexp: '^[ ]*KEY_FILE[ ]*='
      insertafter: '^\[server\]$'
      line: "KEY_FILE = {{ ca_gitea_tls_key_file }}"
    when: ca_gitea_tls_key_file is defined
  - name: carcano.gitea::initial-config | configure TLS certificate
    notify: gitea_restart
    ansible.builtin.lineinfile:
      path: "{{ ca_gitea_install_dir }}/gitea/custom/conf/app.ini"
      regexp: '^[ ]*CERT_FILE[ ]*='
      insertafter: '^\[server\]$'
      line: "CERT_FILE = {{ ca_gitea_tls_cert_file }}"
    when: ca_gitea_tls_cert_file is defined
  - name: carcano.gitea::initial-config | redirect port 80 to {{ ca_gitea_settings.http_port }}
    ansible.builtin.firewalld:
      rich_rule: rule family={{ item }} forward-port port=80 protocol=tcp to-port={{ ca_gitea_settings.http_port }}
      zone:      public
      permanent: true
      immediate: true
      state:     enabled
    with_items:
      - ipv4
      - ipv6
    when:
      - ca_gitea_tls_key_file is not defined
      - ca_gitea_tls_cert_file is not defined
  - name: carcano.gitea::initial-config | redirect port 443 to {{ ca_gitea_settings.http_port }}
    ansible.builtin.firewalld:
      rich_rule: rule family={{ item }} forward-port port=443 protocol=tcp to-port={{ ca_gitea_settings.http_port }}
      zone:      public
      permanent: true
      immediate: true
      state:     enabled
    with_items:
      - ipv4
      - ipv6
    when:
      - ca_gitea_tls_key_file is defined
      - ca_gitea_tls_cert_file is defined
  - name: carcano.gitea::initial-config | flush handlers
    ansible.builtin.meta: flush_handlers

This tasks list:

  • calls Gitea's HTTP endpoint, retrying up to 30 times until it gets an answer
  • submits the "ca_gitea_settings" document as a web form, so to mimic the submission of the web UI's initial configuration form
  • if the TLS private key ("ca_gitea_tls_key_file") and certificate ("ca_gitea_tls_cert_file") have been configured, it configures the endpoint to use TLS and configures traffic redirection from the HTTPS port (443) to Gitea's endpoint port
  • if the TLS private key ("ca_gitea_tls_key_file") and certificate ("ca_gitea_tls_cert_file") have not been configured, it configures traffic redirection from the HTTP port (80) to Gitea's endpoint port
  • restarts the gitea service by flushing the notified handlers

Mind that the role assumes that certificates are already available on the Gitea hosts: by design, in accordance with the enterprise-wide SDD, we decided not to have Ansible managing the certificates' lifecycle - instead we are relying on Cloudflare's PKI and TLS toolkit, having the "certmgr" service enrolling certificates as necessary, as described in the "Cloudflare's Certmgr Tutorial – A Certmgr HowTo" post.
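To give an idea of how these TLS related variables fit together, a playbook delivering a TLS enabled instance may look like the following sketch - the host group, FQDN and certificate file names are illustrative assumptions, while the directories follow the SDD conventions described above:

---
# example: install Gitea with TLS enabled
# host group, FQDN and file names are illustrative only
- name: Deploy Gitea with TLS
  hosts: git_servers
  vars:
    ca_gitea_tls_cert_file: /etc/pki/tls/certs/git1.lab1.foo.corp.crt
    ca_gitea_tls_key_file: /etc/pki/tls/private/git1.lab1.foo.corp.key
  tasks:
    - name: run the role's install entry point
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: install.yml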

Gitea Installation Tasks

Going on with our clean structure, create the "tasks/install.yml" with the following contents:

- name: carcano.gitea::install | import lib/os.yml
  ansible.builtin.include_tasks: lib/os.yml
- name: carcano.gitea::install | import lib/systemd.yml
  ansible.builtin.include_tasks: lib/systemd.yml
- name: carcano.gitea::install | block with install tasks
  become: yes
  block:
    - name: carcano.gitea::install | install dnf versionlock plugin
      ansible.builtin.package:
        name: python3-dnf-plugin-versionlock
        state: present
    - name: carcano.gitea::install | install packages
      ansible.builtin.package:
        name: "{{ ca_gitea_packages }}"
        state: present
    - name: carcano.gitea::install | create {{ ca_gitea_git_home_dir }} directory
      ansible.builtin.file:
        path: "{{ ca_gitea_git_home_dir }}"
        state: directory
        mode: 0755
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
    - name: carcano.gitea::install | create {{ ca_gitea_install_dir }}/gitea directory
      ansible.builtin.file:
        path: "{{ ca_gitea_install_dir }}/gitea"
        state: directory
        mode: 0755
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
    - name: carcano.gitea::install | create {{ ca_gitea_settings['log_root_path'] }} directory
      ansible.builtin.file:
        path: "{{ ca_gitea_settings['log_root_path'] }}"
        state: directory
        mode: 0755
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
    - name: carcano.gitea::install | download gitea {{ ca_gitea_version }}
      ansible.builtin.get_url:
        url: "{{ ca_gitea_download_url }}/gitea/{{ ca_gitea_version }}/gitea-{{ ca_gitea_version }}-linux-{{ 'arm64' if ansible_facts['architecture'] == 'aarch64' else 'amd64' }}"
        dest: "{{ ca_gitea_install_dir }}/gitea/gitea-{{ ca_gitea_version }}"
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
        mode: 0755
    - name: carcano.gitea::install | symlink the current Gitea Version
      ansible.builtin.file:
        src: "{{ ca_gitea_install_dir }}/gitea/gitea-{{ ca_gitea_version }}"
        dest: "{{ ca_gitea_install_dir }}/gitea/gitea"
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
        state: link
      notify: gitea_restart
    - name: carcano.gitea::install | dnf version lock the ca_gitea_packages packages
      ansible.builtin.shell:
        cmd: "dnf versionlock add {{ ca_gitea_packages | join(' ') }}"
    - name: carcano.gitea::install | flush handlers
      ansible.builtin.meta: flush_handlers
    - name: carcano.gitea::install | include lib/initial-config.yml
      ansible.builtin.include_tasks: lib/initial-config.yml


Understanding this tasks list should not be hard; anyway, it roughly:

  • includes the tasks from the "lib/os.yml" and "lib/systemd.yml" tasks lists
  • installs the "versionlock" DNF plugin
  • installs the packages listed in the "ca_gitea_packages" list
  • creates the directory where to store the Gitea shared files (data, repositories, lfs)
  • creates the directory tree where to store the Gitea application and its settings files
  • creates the directory where to store the log files
  • downloads the Gitea version specified by "ca_gitea_version"
  • symlinks the current Gitea to the downloaded one matching the version specified by "ca_gitea_version"
  • sets the version lock for all the packages listed in the "ca_gitea_packages" list
  • flushes the handlers so to start the Gitea service as necessary
  • includes the tasks from the "lib/initial-config.yml" tasks list
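As a side note, if the "community.general" collection is available in your environment, the shell based version lock can be replaced with the idempotent "community.general.dnf_versionlock" module - a sketch of how that task may look:

    - name: carcano.gitea::install | dnf version lock the ca_gitea_packages packages
      community.general.dnf_versionlock:
        name: "{{ ca_gitea_packages }}"
        state: present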

HealthCheck Tasks

We also create a tasks list to perform the Gitea service health check - create the "tasks/lib/healthcheck.yml" file with the following contents:

- name: carcano.gitea::healthcheck | HTTP connect to :{{ ca_gitea_settings.http_port }}
  ansible.builtin.uri:
    url: "{{ 'https' if ca_gitea_tls_key_file is defined and ca_gitea_tls_cert_file is defined else 'http' }}://127.0.0.1:{{ ca_gitea_settings.http_port }}/api/healthz"
    method: GET
    # we are validating the Gitea service itself
    # there's no need for validating certificates here
    validate_certs: false
  register: ca_gitea_login_response
  until: ca_gitea_login_response.status == 200
  retries: 5
  delay: 1
- ansible.builtin.debug:
    msg: "{{ ca_gitea_login_response }}"
    verbosity: 1

this tasks list is straightforward - the only interesting part is the trick used to automatically set the "http" or "https" protocol depending on whether the TLS private key ("ca_gitea_tls_key_file") and certificate ("ca_gitea_tls_cert_file") files have been set.

Create then the  "tasks/healthcheck.yml" file that includes it:

- name: carcano.gitea::healthcheck | import lib/healthcheck.yml
  ansible.builtin.include_tasks: lib/healthcheck.yml

Service Start Task

It is also convenient to create a tasks list to perform the Gitea service start - create the "tasks/start.yml" file with the following contents:

---
- name: carcano.gitea | services start
  become: true
  service:
    name: gitea
    state: started

this tasks list can then be imported by playbooks for starting the Gitea service.

Service Stop Task

Another convenient tasks list to create is the one to perform the Gitea service stop - create the "tasks/stop.yml" file with the following contents:

---
- name: carcano.gitea | services stop
  become: true
  service:
    name: gitea
    state: stopped

this tasks list can then be imported by playbooks for stopping the Gitea service.

Service Restart Task

We can of course create the tasks list to perform the Gitea service restart - create the "tasks/restart.yml" file with the following contents:

---
- name: carcano.gitea | services restart
  become: true
  service:
    name: gitea
    state: restarted

this tasks list can then be imported by playbooks for restarting the Gitea service.
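As usual, these entry points are consumed through the "tasks_from" parameter - a minimal sketch of an operator playbook restarting Gitea this way, where the "git_servers" host group is just an illustrative assumption:

---
# example: restart Gitea using the role's restart entry point
- name: Restart Gitea
  hosts: git_servers
  tasks:
    - name: run the role's restart entry point
      ansible.builtin.import_role:
        name: carcano.gitea
        tasks_from: restart.yml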

Gitea Update Tasks

Another very useful capability to implement is the one that updates the service - create the "tasks/update.yml" tasks list with the following contents:

- name: carcano.gitea::update | get service facts
  ansible.builtin.service_facts:
- name: carcano.gitea::update | fail if the gitea service is still running
  ansible.builtin.fail:
     msg: exiting because the gitea service is still running - you must stop it before running the update
  when:
    - "'gitea.service' in ansible_facts.services"
    - ansible_facts.services['gitea.service']['state'] == 'running'
- name: carcano.gitea::update | block with update tasks
  become: yes
  block:
    - name: carcano.gitea::update | stop the gitea service
      ansible.builtin.service:
        name: gitea
        state: stopped
    - name: carcano.gitea::update | remove dnf version lock of the ca_gitea_packages packages
      ansible.builtin.shell:
        cmd: "dnf versionlock delete {{ ca_gitea_packages | join(' ') }}"
    - name: carcano.gitea::update | update packages
      ansible.builtin.package:
        name: "{{ ca_gitea_packages }}"
        state: present
    - name: carcano.gitea::update | download gitea {{ ca_gitea_version }}
      ansible.builtin.get_url:
        url: "{{ ca_gitea_download_url }}/gitea/{{ ca_gitea_version }}/gitea-{{ ca_gitea_version }}-linux-{{ 'arm64' if ansible_facts['architecture'] == 'aarch64' else 'amd64' }}"
        dest: "{{ ca_gitea_install_dir }}/gitea/gitea-{{ ca_gitea_version }}"
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
        mode: 0755
    - name: carcano.gitea::update | get info from the {{ ca_gitea_install_dir }}/gitea/gitea symlink
      ansible.builtin.stat:
        path: "{{ ca_gitea_install_dir }}/gitea/gitea"
      register: ca_gitea_current_version
    - name: carcano.gitea::update | print ca_gitea_current_version
      ansible.builtin.debug:
        var: ca_gitea_current_version
        verbosity: 1
    - name: carcano.gitea::update | symlink the current gitea Version
      ansible.builtin.file:
        src: "{{ ca_gitea_install_dir }}/gitea/gitea-{{ ca_gitea_version }}"
        dest: "{{ ca_gitea_install_dir }}/gitea/gitea"
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
        state: link
      notify: gitea_restart
    - name: carcano.gitea::update | add version lock to the packages
      ansible.builtin.shell:
        cmd: "dnf versionlock add {{ ca_gitea_packages | join(' ') }}"
    - name: carcano.gitea::update | flush handlers
      ansible.builtin.meta: flush_handlers
    - name: carcano.gitea::update | import lib/healthcheck.yml
      ansible.builtin.include_tasks: lib/healthcheck.yml
    - name: carcano.gitea::update | remove {{ ca_gitea_current_version.stat.lnk_target }}
      ansible.builtin.file:
        path: "{{ ca_gitea_current_version.stat.lnk_target }}"
        state: absent
      when: ca_gitea_version not in ca_gitea_current_version.stat.lnk_target.split('/')[-1]
  rescue:
    - name: carcano.gitea::update | restore version lock to the packages
      ansible.builtin.shell:
        cmd: "dnf versionlock add {{ ca_gitea_packages | join(' ') }}"
    - name: carcano.gitea::update | restore symlink the previous gitea Version
      ansible.builtin.file:
        src: "{{ ca_gitea_current_version.stat.lnk_target }}"
        dest: "{{ ca_gitea_install_dir }}/gitea/gitea"
        owner: "{{ ca_gitea_user['name'] }}"
        group: "{{ ca_gitea_group['name'] }}"
        state: link
    - name: carcano.gitea::update | remove {{ ca_gitea_install_dir }}/gitea/gitea-{{ ca_gitea_version }}
      ansible.builtin.file:
        path: "{{ ca_gitea_install_dir }}/gitea/gitea-{{ ca_gitea_version }}"
        state: absent
      ignore_errors: true
      when: ca_gitea_version not in ca_gitea_current_version.stat.lnk_target.split('/')[-1]
    - name: carcano.gitea::update | set fact ca_gitea_update_failed
      ansible.builtin.set_fact:
        ca_gitea_update_failed: true

understanding this tasks list should not be hard; roughly, it:

  • gets facts about the running services ( lines 1 - 2 ) and fails if the gitea service is running ( lines 3 - 6 )
  • stops the gitea service - just to be sure, in case anything went wrong in the previous lines ( lines 10 - 13 )
  • removes the "ca_gitea_packages" packages from versionlock ( lines 14 - 16 ) and updates them ( lines 17 - 20 )
  • downloads gitea ( lines 21 - 27 )
  • gets information from the symlink that points to the current version ( lines 28 - 31 )
  • modifies the symlink to point to the just downloaded version ( lines 36 - 43 )
  • sets the version lock for all the packages listed in the "ca_gitea_packages" list ( lines 44 - 46 )
  • flushes the handlers so that the Gitea service is started as necessary ( lines 47 - 48 )
  • includes the tasks from "lib/healthcheck.yml" ( lines 49 - 50 )
  • removes the old gitea application binary ( lines 51 - 55 )

the above tasks are in a block with a rescue list - if any of them fails, the following tasks run as rescue:

  • set the version lock back for all the packages listed in the "ca_gitea_packages" list ( lines 57 - 59 )
  • restore the symlink to point to the previous application binary ( lines 60 - 66 )
  • remove the just downloaded gitea file ( lines 67 - 72 )
  • set the "ca_gitea_update_failed" fact ( lines 73 - 75 )
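
For reference, the "lib/healthcheck.yml" tasks file imported above belongs to the role; a minimal version could look like the following sketch (assumed, not necessarily the actual file - the URL comes from the "ca_gitea_settings" variable, and certificate validation is disabled here only for simplicity):

```yaml
# lib/healthcheck.yml - assumed minimal sketch: poll the gitea web UI
# until it answers with HTTP 200, giving up after 10 attempts
- name: carcano.gitea::healthcheck | wait for the gitea web UI to answer
  ansible.builtin.uri:
    url: "{{ ca_gitea_settings['app_url'] }}"
    validate_certs: false
    status_code: 200
  register: ca_gitea_healthcheck
  retries: 10
  delay: 5
  until: ca_gitea_healthcheck.status == 200
```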

Uninstall Tasks

The last capability we implement is the one for uninstalling the application: create the "tasks/uninstall.yml" task file with the following contents:

---
- name: carcano.gitea::uninstall | block with uninstall related tasks
  become: yes
  block:
    - name: carcano.gitea::uninstall | stop the gitea service
      ansible.builtin.service:
        name: gitea
        enabled: false
        state: stopped
    - name: carcano.gitea::uninstall | remove gitea's systemd unit
      ansible.builtin.file:
        path: /etc/systemd/system/gitea.service
        state: absent
    - name: carcano.gitea::uninstall | reload systemd
      ansible.builtin.shell:
        cmd: systemctl daemon-reload
    - name: carcano.gitea::uninstall | remove installed files and directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - "{{ ca_gitea_data_dir }}"
        - "{{ ca_gitea_repos_dir }}"
        - "{{ ca_gitea_log_dir }}"
        - "{{ ca_gitea_install_dir }}/gitea"
    - name: carcano.gitea::uninstall | release version lock of the ca_gitea_packages packages
      ansible.builtin.shell:
        cmd: "dnf versionlock delete {{ ca_gitea_packages | join(' ') }}"
    - name: carcano.gitea::uninstall | remove the ca_gitea_packages packages
      ansible.builtin.package:
        name: "{{ ca_gitea_packages }}"
        state: absent

this tasks list:

  • stops the gitea service ( lines 5 - 9 )
  • removes the gitea systemd unit ( lines 10 - 13 ) and reloads systemd ( lines 14 - 16 )
  • removes every gitea related directory tree ( lines 17 - 25 )
  • removes from versionlock all the packages listed in the "ca_gitea_packages" list ( lines 26 - 28 )
  • uninstalls all the packages listed in the "ca_gitea_packages" list ( lines 29 - 32 )
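
As a side note, the shell call to "systemctl daemon-reload" could also be replaced with the dedicated systemd module, keeping the task list shell-free - a possible equivalent sketch:

```yaml
# equivalent of "systemctl daemon-reload" without resorting to the shell module
- name: carcano.gitea::uninstall | reload systemd
  ansible.builtin.systemd:
    daemon_reload: true
```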

The Role's Main Tasks File

Now that we have all the tasks files in place, we must configure the one that is loaded by default when the role is invoked. Since no capability of this role makes sense without Gitea being installed first, we make install the default capability.

It is enough to import it in the "tasks/main.yml" file, that is the role's default entrypoint:

---
# tasks file for carcano.gitea
- name: carcano.gitea::main - loading install.yml
  ansible.builtin.include_tasks: install.yml
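
The other capabilities are not wired into "tasks/main.yml": as we see in the deployment playbooks, they are invoked on demand by passing the "tasks_from" parameter to "ansible.builtin.include_role" - for example:

```yaml
# invoke the update capability of the role instead of the default install
- name: update gitea
  ansible.builtin.include_role:
    name: carcano.gitea
    tasks_from: update
```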

Integrate The Gitea Role Into Solution Deployment Playbooks

Now that we have completed the Gitea role, we must create a solution deployment playbook that includes it.

Add Sensitive Data To The Secrets Vars File

As we saw in the "Ansible playbooks best practices: caveats and pitfalls" post, sensitive data must be encrypted - in that post we saw how to deal with it using "ansible-vault", so we keep going that way in this post too: open the "ansible/secrets/blueprints.yml" secrets file for editing by typing:

ansible-vault edit /ansible/secrets/blueprints.yml

type the encryption password when prompted.

Once the file is open, modify it by:

  • adding the gitea web UI administrative credentials ("admin_username" and "admin_password")
  • adding the credentials for authenticating to the SMTP server to deliver email notifications ("smtp_username" and "smtp_password")

When done, it must look as follows:

deliverables:
  git_p1_0:
    pgsql_databases:
      gitea_p1_0:
        dbo_username: gitea_p1_0
        dbo_password: g1t-G6.lP-1!
    gitea:
      admin_username: administrator
      admin_password: grimoire
      smtp_username: gitea_p1_0
      smtp_password: Ag0.od3n1

Configure The Blueprints

As we saw in the "Ansible playbooks best practices: caveats and pitfalls" post,  deployment playbooks require a blueprint - we configured the  "ansible/environment/blueprints/git-p1-0.yml" with the following contents:

deliverable:
  label: git_p1_0
  description: Gitea based Git service, production security tier 1, instance 0
  hosts_groups:
    # add members and turn the targets into a list of members
    pgsql_servers:
      members:
        - name: pgsql-ca-up1a001
        #- name: pgsql-ca-up1b002
        #- name: pgsql-ca-up1c003 
      databases:
        - name: gitea_p1_0
          dbo_username: "{{ deliverables['git_p1_0']['pgsql_databases']['gitea_p1_0']['dbo_username'] }}"
          dbo_password: "{{ deliverables['git_p1_0']['pgsql_databases']['gitea_p1_0']['dbo_password'] }}"
      firewall:
        - rule: git-ca-up1a001_to_pgsql
          src_ip: 192.168.254.15/24
          service: postgresql
          action: accept
          state: enabled
        - rule: git-ca-up1a001_to_pgsql
          src_ip: 192.168.253.15/24
          service: postgresql
          action: accept
          state: enabled
    git_servers:
      members:
        - name: git-ca-up1a001
        #- name: git-ca-up1b002
    load_balancers:
      members:
        - name: lb-ca-up1a001
        - name: lb-ca-up1b002

In that post we had only a few infrastructural settings to manage, so a single file was enough. When dealing with roles, in real life the number of settings can quickly become huge, so it is best to keep the infrastructural settings and the application specific settings in two distinct settings files.

Create the "ansible/environment/blueprints/git-p1-0" directory and move the "ansible/environment/blueprints/git-p1-0.yml" file to "ansible/environment/blueprints/git-p1-0/infra.yml" as follows:

mkdir ansible/environment/blueprints/git-p1-0
mv ansible/environment/blueprints/git-p1-0.yml ansible/environment/blueprints/git-p1-0/infra.yml

then create the file with the application settings: using the contents of the "defaults/main.yml" file of the Gitea role as a reference, create the "ansible/environment/blueprints/git-p1-0/apps.yml" blueprint with the following contents:

---
ca_gitea_git_home_dir: /var/lib/git
ca_gitea_version: 1.21.10
ca_gitea_download_url: https://dl.gitea.io
ca_gitea_install_dir: /opt
ca_gitea_data_dir: "{{ ca_gitea_git_home_dir }}/data"
ca_gitea_repos_dir: "{{ ca_gitea_git_home_dir }}/repositories"
ca_gitea_log_dir: "/var/log/gitea"
ca_gitea_tls_cert_file: /etc/pki/tls/certs/{{ ansible_fqdn | split('.') | first }}.crt
ca_gitea_tls_key_file: /etc/pki/tls/private/{{ ansible_fqdn | split('.') | first }}.key
ca_gitea_backups_dir: "/srv/gitea-backups"

ca_gitea_group:
  name: git
  gid: 987
ca_gitea_user:
  name: git
  uid: 990
  home: "{{ ca_gitea_git_home_dir }}"
  gecos: Gitea System User
  shell: /bin/bash
  groups:
      - "{{ ca_gitea_group.name }}"

ca_gitea_settings:
  db_schema: ""
  db_type: "postgres"
  db_host: "pgsql-ca-up1a001"
  db_url: "pgsql-ca-up1a001:5432"
  db_user: "{{ deliverables['git_p1_0']['pgsql_databases']['gitea_p1_0']['dbo_username'] }}"
  db_passwd: "{{ deliverables['git_p1_0']['pgsql_databases']['gitea_p1_0']['dbo_password'] }}"
  db_name: "gitea_p1_0"
  ssl_mode: "disable"
  charset: "utf8"
  db_path: "{{ ca_gitea_data_dir }}/gitea.db"
  app_name: "Gitea: P1"
  repo_root_path: "{{ ca_gitea_repos_dir }}"
  lfs_root_path: "{{ ca_gitea_data_dir }}/lfs"
  run_user: "git"
  domain: "git0.p1.carcano.corp"
  ssh_port: "22"
  http_port: "3000"
  app_url: "https://git0.p1.carcano.corp:3000/"
  log_root_path: "{{ ca_gitea_log_dir }}"
  smtp_addr: "mail.p1.carcano.corp"
  smtp_port: "587"
  smtp_from: "git0-p1@carcano.corp"
  smtp_user: "{{ deliverables['git_p1_0']['gitea']['smtp_username'] }}"
  smtp_passwd: "{{ deliverables['git_p1_0']['gitea']['smtp_password'] }}"
  enable_federated_avatar: "on"
  disable_registration: "on"
  require_sign_in_view: "on"
  default_allow_create_organization: "on"
  default_enable_timetracking: "on"
  password_algorithm: "pbkdf2"
  no_reply_address: "git0-p1@carcano.corp"
  admin_name: "{{ deliverables['git_p1_0']['gitea']['admin_username'] }}"
  admin_passwd: "{{ deliverables['git_p1_0']['gitea']['admin_password'] }}"
  admin_confirm_passwd: "{{ deliverables['git_p1_0']['gitea']['admin_password'] }}"
  admin_email: "devops@carcano.corp"

please note how we are replacing sensitive data with variables contained in the vars file encrypted by "ansible-vault":

  • gitea web-ui administrative username ( line 57 )
  • gitea web-ui administrative password ( lines 58 - 59 )
  • username for authenticating on the SMTP server used for sending email notifications ( line 48 )
  • password for authenticating on the SMTP server used for sending email notifications ( line 49 )

Shared Playbooks

The deployment playbook we saw in the "Ansible playbooks best practices: caveats and pitfalls" post was intentionally kept simple for the purpose of that post. In order to work with the role we must refine things a little bit: more precisely, since we are going to implement playbooks for the different capabilities provided by the role (install, update and uninstall), we need to split it into several reusable plays that can be imported by each of these capability related playbooks.

First, we need to create the "lib" directory to use as a library:

mkdir -m 755 ansible/playbooks/solutions/gitea/lib

Dynamic Hostgroups Playbook

The first group of plays we move into the library is the one for generating the dynamic host groups: create the "ansible/playbooks/solutions/gitea/lib/dyn-hostgroups.yml" playbook with the following contents:

---
- name: solutions::gitea | guess dynamic hostgroups
  hosts: all
  gather_facts: false
  become: false
  run_once: true
  vars_files:
    - ../../../../secrets/blueprints.yml
    - "{{ '../'+deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ '../'+deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: compose pgsql_servers dynamic hostgroup
      ansible.builtin.add_host:
        groups: pgsql_servers
        hostname: "{{ item['name'] }}"
      loop: "{{ deliverable['hosts_groups']['pgsql_servers']['members'] }}"
    - name: compose git_servers dynamic hostgroup
      ansible.builtin.add_host:
        groups: git_servers
        hostname: "{{ item['name'] }}"
      loop: "{{ deliverable['hosts_groups']['git_servers']['members'] }}"
    - name: compose load_balancers dynamic hostgroup
      ansible.builtin.add_host:
        groups: load_balancers
        hostname: "{{ item['name'] }}"
      loop: "{{ deliverable['hosts_groups']['load_balancers']['members'] }}"
- name: solutions::gitea | ping the postgresql servers
  hosts: pgsql_servers
  gather_facts: false
  become: false
  vars_files:
    - ../../../../secrets/blueprints.yml
    - "{{ '../'+deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ '../'+deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: ping postgresql server
      ansible.builtin.ping:
- name: solutions::gitea | ping git servers
  hosts: git_servers
  gather_facts: false
  become: false
  vars_files:
    - ../../../../secrets/blueprints.yml
    - "{{ '../'+deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ '../'+deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: ping git server
      ansible.builtin.ping: 
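
As a side note, the three "add_host" tasks above follow the very same pattern, so they could also be collapsed into a single task - a possible sketch using the "dict2items" and "subelements" filters:

```yaml
# one task instead of three: iterate over every hostgroup and each of its members
- name: compose all dynamic hostgroups
  ansible.builtin.add_host:
    groups: "{{ item.0['key'] }}"
    hostname: "{{ item.1['name'] }}"
  loop: "{{ deliverable['hosts_groups'] | dict2items | subelements('value.members') }}"
```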

Inventory Level Firewall Rules Playbook

The second group of plays we need to move is the one dealing with inventory level system firewall rules - create the "ansible/playbooks/solutions/gitea/lib/firewall.yml" playbook with the following contents:

---
- name: solutions::gitea | implement inventory-level firewall rules
  hosts:
    - git_servers
    - pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../../secrets/blueprints.yml
    - "{{ '../'+deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ '../'+deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: implement inventory-level linux firewall rich rules
      ansible.builtin.import_tasks:
        file: ../../../infra/linux/firewall/tasks/rich-rules.yml
- name: solutions::gitea | implement blueprint-level firewall rules
  hosts: pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../../secrets/blueprints.yml
    - "{{ '../'+deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ '../'+deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: set blueprint-level rules as firewall fact
      ansible.builtin.set_fact: 
        firewall: "{{ deliverable['hosts_groups']['pgsql_servers']['firewall'] }}"
    - name: implement blueprint-level linux firewall rich rules
      ansible.builtin.import_tasks:
        file: ../../../infra/linux/firewall/tasks/rich-rules.yml
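
The "rich-rules.yml" tasks file is the one we developed in the previous post; just to recall what it does, its core can be roughly sketched as follows (assumed sketch - the field names match the firewall entries of the blueprint above):

```yaml
# assumed sketch: render each blueprint firewall entry as a firewalld rich rule
- name: implement firewall rich rules
  ansible.posix.firewalld:
    rich_rule: "rule family=ipv4 source address={{ item['src_ip'] }} service name={{ item['service'] }} {{ item['action'] }}"
    permanent: true
    immediate: true
    state: "{{ item['state'] }}"
  loop: "{{ firewall | default([]) }}"
```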

Deployment Playbook

The post "Ansible playbooks best practices: caveats and pitfalls" showed just a single playbook - "ansible/playbooks/solutions/gitea/play.yml". In this post, instead, we need a dedicated playbook for every capability of the role.

The must-have capability is "deploy" - without it there is nothing to operate on. Get rid of the "ansible/playbooks/solutions/gitea/play.yml" playbook:

rm -f ansible/playbooks/solutions/gitea/play.yml

and replace it with the "ansible/playbooks/solutions/gitea/deploy.yml" playbook with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | deliver linux firewall rich rules
  ansible.builtin.import_playbook: lib/firewall.yml
  tags:
    - firewall
- name: solutions::gitea | deliver database instances
  hosts: pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: create databases and users
      ansible.builtin.import_tasks:
        file: ../../infra/db/postgresql/tasks/create-dbs-and-users.yml
  tags:
    - db
- name: solutions::gitea | deploy the gitea application
  hosts: git_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  roles:
    - carcano.gitea
  tags:
    - application
    - gitea

understanding this playbook should not be hard; roughly, it:

  • imports the "lib/dyn-hostgroups.yml" ( lines 1 - 4 ) and "lib/firewall.yml" ( lines 5 - 8 ) playbooks
  • sets up the database instances on the postgresql servers ( lines 17 - 19 )
  • deploys gitea by including the role ( lines 29 - 30 )

Please note how we tagged the database related play with the "db" tag and the gitea deployment play with the "application" and "gitea" tags. This makes it possible to run the full playbook when no tags are specified, or to run only part of it as necessary - for example, if someone screws up the database permissions by mistake, it is possible to easily recover them by running the playbook with the "db" tag.
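
For example, to re-apply only the database related configuration, run the playbook limiting it to the "db" tag:

```
ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
--tags db \
/ansible/playbooks/solutions/gitea/deploy.yml
```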

We now have everything needed to deliver this Gitea instance - just type:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/deploy.yml

As you see we must explicitly pass both the "ansible/environment/blueprints/git-p1-0/infra.yml" and "ansible/environment/blueprints/git-p1-0/apps.yml" vars files.

Application Update Playbook

We intentionally did not deliver the latest Gitea version, so as to be able to perform an update. Of course all of this can be (actually, must be) implemented using a playbook.

In this post, to keep things short and simple, we are not implementing a highly available instance. When dealing with highly available instances, this playbook must interact with the load balancer in front of the instances and implement a rolling release: the playbook first marks the backend to update as offline, performs the update, checks the service health of the upgraded instance to make sure nothing broke, puts the updated backend back online, takes offline the one that still needs to be updated, updates it, checks its service health and, if nothing bad happened, enables it as a backend again. This is a very short description, in which I omitted the automatic rollback.
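
Just to give a rough idea, assuming an HAProxy based load balancer, such a rolling update could be sketched as follows (assumed sketch: the "gitea" backend name and the stats socket path are hypothetical, and the automatic rollback is omitted here as well):

```yaml
# assumed sketch of a rolling update: one git server at a time is disabled
# on the (hypothetical) "gitea" HAProxy backend, updated, health checked
# and then enabled again
- name: solutions::gitea | rolling update of the gitea backends
  hosts: git_servers
  serial: 1
  tasks:
    - name: disable this backend on the load balancer
      community.general.haproxy:
        state: disabled
        host: "{{ inventory_hostname }}"
        backend: gitea
        socket: /var/lib/haproxy/stats
      delegate_to: "{{ groups['load_balancers'][0] }}"
    - name: update gitea on this node
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: update
    - name: check the service health of the updated node
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: healthcheck
    - name: enable this backend again on the load balancer
      community.general.haproxy:
        state: enabled
        host: "{{ inventory_hostname }}"
        backend: gitea
        socket: /var/lib/haproxy/stats
      delegate_to: "{{ groups['load_balancers'][0] }}"
```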

Create the "ansible/playbooks/solutions/gitea/app-update.yml" playbook with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | check the state of the current gitea application
  hosts: git_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - application
    - gitea
  tasks:
    - name: check gitea liveness
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: healthcheck 
    - name: stop the gitea service
      become: true
      ansible.builtin.service:
        name: gitea
        state: stopped
- name: solutions::gitea | backup the gitea database
  hosts: pgsql_servers
  gather_facts: true
  run_once: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - application
    - gitea
  tasks:
    - name: set fact postgresql_backup_file
      ansible.builtin.set_fact:
        postgresql_backup_file: "{{ ca_gitea_settings['db_name']+'-upgrade-'+ansible_date_time['epoch'] }}.gz"
    - name: import tasks from infra/db/postgresql/tasks/backup.yml
      ansible.builtin.import_tasks:
        file: ../../infra/db/postgresql/tasks/backup.yml
      vars:
        postgresql_dbname: "{{ ca_gitea_settings['db_name'] }}"
- name: solutions::gitea | update the gitea application
  hosts: git_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - application
    - gitea
  tasks:
    - name: update gitea
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: update
- name: solutions::gitea | restore the gitea database
  hosts: pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - application
    - gitea
  tasks:
    - name: import tasks from infra/db/postgresql/tasks/restore.yml
      ansible.builtin.include_tasks:
        file: ../../infra/db/postgresql/tasks/restore.yml
      vars:
        postgresql_dbname: "{{ ca_gitea_settings['db_name'] }}"
      run_once: true
      when: hostvars[item]['ca_gitea_update_failed'] is defined
      with_items: "{{ groups['git_servers'] }}"
- name: solutions::gitea | mark the playbook as failed
  hosts: git_servers
  gather_facts: false
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - application
    - gitea
  tasks:
    - name: mark the update process as failed
      ansible.builtin.fail:
        msg: the update process has failed - gitea has automatically recovered as its previous version
      run_once: true
      when: ca_gitea_update_failed is defined

understanding this playbook should not be hard; roughly, it:

  • imports the "lib/dyn-hostgroups.yml" playbook ( lines 1 - 4 )
  • checks the status of the gitea application ( lines 16 - 19 ) and stops the service ( lines 20 - 24 )
  • makes a backup copy of the database ( lines 25 - 44 )
  • updates the application ( lines 45 - 59 )
  • if the update failed, restores the database backup previously made ( lines 60 - 78 ) and marks the playbook as failed ( lines 79 - 94 )

We are now ready to have a go with it as well - first, set the "ca_gitea_version" variable in the "ansible/environment/blueprints/git-p1-0/apps.yml" blueprint to the version you want to update to.

For example, to update to Gitea 1.21.11:

ca_gitea_version: 1.21.11

then just run the following statement:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/app-update.yml

As you see we must explicitly pass both the "ansible/environment/blueprints/git-p1-0/infra.yml" and "ansible/environment/blueprints/git-p1-0/apps.yml" vars files.

Operating System Update Playbook

As you have certainly noticed, the above "app-update.yml" playbook just updates the application and its dependencies, but does nothing to the operating system. That is because very often the corporate patching policies for applications and operating systems have very different frequencies (typically no more than 3 months between operating system patching sessions, and no more than 1 year for application patching - provided, of course, that any update is available).

For this reason we must now implement the operating system update playbook - create the "ansible/playbooks/solutions/gitea/os-update.yml" playbook with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | update the operating system
  hosts:
    - git_servers
    - pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: update the operating system
      ansible.builtin.import_tasks:
        file: ../../infra/linux/os/tasks/update.yml

As you see, this playbook, besides including the "lib/dyn-hostgroups.yml" playbook to compose the target hostgroups, just imports the tasks list of the operating system update playbook we developed in the "Ansible playbooks best practices: caveats and pitfalls" post.

The statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/os-update.yml

Most of the "old-school" guys usually approach patching by grouping hosts by general purpose and environment (for example "every tomcat server in the production environment"). While at first glance this approach can seem correct from the operating team's perspective, it is not from the service operation perspective - it indeed makes it hard to answer questions such as "was the OS of the hosts running the git-p1-0 Gitea deployment patched?". Answering such a question requires spending time checking infrastructural diagrams (which in real life are often outdated, by the way). A more effective approach is instead patching on a per deployment basis: the patching target is not the host groups by general purpose, but the hosts that are part of a deployment. A patching session having "git-p1-0" as its target can, for example, run the "os-update.yml" playbook monthly and the "app-update.yml" playbook every three months.

Solution Reconfigure Playbook

Being able to reset the deployment's configuration is a very important capability. In this specific use case, while it is not possible (nor wise) to reset Gitea's configuration itself, it is still possible to reset every dependency, such as firewall rules or database grants, to a known state.

Create the "ansible/playbooks/solutions/gitea/configure.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | deliver linux firewall rich rules
  ansible.builtin.import_playbook: lib/firewall.yml
  tags:
    - firewall
- name: solutions::gitea | deliver database instances
  hosts: pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: create databases and users
      ansible.builtin.import_tasks:
        file: ../../infra/db/postgresql/tasks/create-dbs-and-users.yml
  tags:
    - db

the statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/configure.yml

as you see we are explicitly passing both the "ansible/environment/blueprints/git-p1-0/infra.yml" and "ansible/environment/blueprints/git-p1-0/apps.yml" vars files.

You are probably asking yourself why you need such a playbook, since it does not actually reconfigure Gitea. What you are reconfiguring is not the Gitea application, but rather the whole solution - for example, you may want to add firewall exceptions, or just reset database privileges. As this is a lab, you may for example try the thrill of dropping the "gitea_p1_0" postgresql role: on the "pgsql-ca-up1a001" host, switch to the "postgres" user and launch the "psql" command line tool. Then, in the psql console, run:

REASSIGN OWNED BY gitea_p1_0 TO postgres;
DROP OWNED BY gitea_p1_0;

The outcome is that the "gitea_p1_0" user is prevented from accessing the "gitea_p1_0" database - you can have fun experimenting with the impact by restarting the Gitea service. In a real life scenario, if anybody does something like that by mistake, you can quickly and easily fix everything simply by running the "configure.yml" playbook.

Gitea Service Start Playbook

To ease service operations, it is always wise to provide a playbook that starts the service (or the services) - this playbook can for example be wrapped by a web UI that the sysops can run when necessary, sparing them from learning the technical details of starting the solution. Mind that in real life starting a solution is not always trivial - sometimes it requires starting the services in a specific sequence (in my experience I have seen several masterpieces made by psychopathic software geniuses who do not care at all about the poor guys having to run their software).

Create the "ansible/playbooks/solutions/gitea/start.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | start the gitea application
  hosts: git_servers
  gather_facts: false
  tasks:
    - name: start gitea
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: start

the statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/start.yml

Gitea Service Stop Playbook

In the same way, to ease service operations it is always wise to provide a playbook that stops the service (or the services) - just create the "ansible/playbooks/solutions/gitea/stop.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | stop the gitea application
  hosts: git_servers
  gather_facts: false
  tasks:
    - name: stop gitea
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: stop

the statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/stop.yml

Gitea Service Restart Playbook

In for a penny, in for a pound: create the "ansible/playbooks/solutions/gitea/restart.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | restart the gitea application
  hosts: git_servers
  gather_facts: false
  tasks:
    - name: restart gitea
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: restart

the statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/restart.yml

Healthcheck Playbook

In my experience, a capability that is very important to the operations team is being able to know whether, while performing their tasks, they have broken a service. For this reason, an invaluable playbook for them is the healthcheck playbook - create the "ansible/playbooks/solutions/gitea/healthcheck.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | check the gitea application
  hosts: git_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - healthcheck
  tasks:
    - name: check Gitea liveness
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: healthcheck

the statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/healthcheck.yml

As you can see, we are explicitly passing both the "ansible/environment/blueprints/git-p1-0/infra.yml" and "ansible/environment/blueprints/git-p1-0/apps.yml" vars files.

I always suggest that the operations team run such a playbook not only after performing their tasks, but also before: when dealing with clustered, highly available solutions, it is not uncommon to find improperly configured monitoring probes that fail to detect failures on single nodes. It is not fair to blame people for breaking a service that was already broken before they started their operations.
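The role's "healthcheck" tasks list itself is beyond this excerpt, but as a hint of what it may contain: Gitea exposes a "/api/healthz" endpoint, so a minimal liveness probe could be sketched as follows - note that the "ca_gitea_http_port" variable and the localhost URL are assumptions of this sketch:

# tasks/healthcheck.yml - hypothetical sketch of a liveness probe
- name: gitea | check the healthz endpoint
  ansible.builtin.uri:
    url: "http://localhost:{{ ca_gitea_http_port | default(3000) }}/api/healthz"
    status_code: 200
  register: gitea_healthz
  retries: 3
  delay: 5
  until: gitea_healthz.status == 200

The retry loop gives the service a grace period right after a restart, so the playbook does not report a false failure while Gitea is still starting up.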

Backup Playbook

While operating, a quite common need is being able to back up the solution, so as to be able to recover if anything goes wrong. But it is neither fair nor wise to demand that the operations team have a deep knowledge of every solution they operate: the best option is always to provide them with an automation they can just run.

In this specific case, the automation consists of:

  • backup the database
  • backup the git repositories on the filesystem

Create the "ansible/playbooks/solutions/gitea/backup.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | backup the database
  hosts: pgsql_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: set fact postgresql_backup_file
      ansible.builtin.set_fact:
        postgresql_backup_file: "{{ ca_gitea_settings['db_name']+'-backup-'+ansible_date_time['epoch'] }}.gz"
    - name: import tasks from infra/db/postgresql/tasks/backup.yml
      ansible.builtin.import_tasks:
        file: ../../infra/db/postgresql/tasks/backup.yml
      vars:
        postgresql_dbname: "{{ ca_gitea_settings['db_name'] }}"
- name: solutions::gitea | backup the filesystem
  hosts: git_servers
  gather_facts: true
  become: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tasks:
    - name: create the directory for storing the backup
      ansible.builtin.file:
        path: "{{ ca_gitea_backups_dir }}"
        mode: "0750"
        state: directory
    - name: backup the git repositories, gitea data and logs
      community.general.archive:
        path:
          - "{{ ca_gitea_data_dir }}"
          - "{{ ca_gitea_repos_dir }}"
          - "{{ ca_gitea_log_dir }}"
          - "{{ ca_gitea_install_dir }}/gitea"
        dest: "{{ ca_gitea_backups_dir + '/gitea-' + ansible_date_time['epoch'] + '.gz' }}"

the statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/backup.yml

As you can see, we are explicitly passing both the "ansible/environment/blueprints/git-p1-0/infra.yml" and "ansible/environment/blueprints/git-p1-0/apps.yml" vars files.

Of course, if you want a consistent backup you must run the service stop playbook, the backup playbook and the service start playbook in sequence. To preserve service availability, the backup playbook has been intentionally designed to keep the service online, even though this of course carries a small risk of producing an inconsistent backup.
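That stop / backup / start sequence can itself be automated with a tiny wrapper playbook that chains the existing ones - the following is only a sketch, assuming the file lives next to the other playbooks in "ansible/playbooks/solutions/gitea" (the "cold-backup.yml" name is an assumption):

# cold-backup.yml - hypothetical wrapper chaining stop, backup and start
- name: solutions::gitea | stop the gitea application
  ansible.builtin.import_playbook: stop.yml
- name: solutions::gitea | backup the gitea application
  ansible.builtin.import_playbook: backup.yml
- name: solutions::gitea | start the gitea application
  ansible.builtin.import_playbook: start.yml

This way the operations team still has a single command to run, while the hot (online) backup playbook remains available for the routine case.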

Application Uninstall Playbook

The last capability we implement as a playbook is uninstalling the solution - just create the "ansible/playbooks/solutions/gitea/undeploy.yml" with the following contents:

- name: solutions::gitea | compose dynamic hostgroups
  ansible.builtin.import_playbook: lib/dyn-hostgroups.yml
  tags:
    - always
- name: solutions::gitea | deliver linux firewall rich rules
  ansible.builtin.import_playbook: lib/firewall.yml
  tags:
    - firewall
- name: solutions::gitea | uninstall the gitea application
  hosts: git_servers
  gather_facts: true
  vars_files:
    - ../../../secrets/blueprints.yml
    - "{{ deliverable_vars_file|default(omit)+'/infra.yml'|default(omit) }}"
    - "{{ deliverable_vars_file|default(omit)+'/apps.yml'|default(omit) }}"
  tags:
    - application
    - gitea
  tasks:
    - name: uninstall Gitea
      ansible.builtin.include_role:
        name: carcano.gitea
        tasks_from: uninstall

As you can see, here too we include the gitea role, but we specify the "uninstall" tasks list as the entrypoint.
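The "uninstall" tasks list belongs to the role and is not shown in this excerpt; as a rough idea of its shape, it most likely stops the service and removes the installed artifacts. The following is only a sketch - the systemd unit name and the exact set of paths to remove are assumptions:

# tasks/uninstall.yml - hypothetical sketch of the uninstall entrypoint
- name: gitea | stop and disable the gitea service
  become: true
  ansible.builtin.systemd:
    name: gitea
    state: stopped
    enabled: false
- name: gitea | remove data, repositories, logs and binaries
  become: true
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - "{{ ca_gitea_data_dir }}"
    - "{{ ca_gitea_repos_dir }}"
    - "{{ ca_gitea_log_dir }}"
    - "{{ ca_gitea_install_dir }}/gitea"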

The statement for running this playbook is as follows:

ansible-playbook --ask-vault-pass \
-e@/ansible/environment/blueprints/git-p1-0/infra.yml \
-e@/ansible/environment/blueprints/git-p1-0/apps.yml \
/ansible/playbooks/solutions/gitea/undeploy.yml

Footnotes

This is certainly the longest post I have written on Ansible - I did my best to keep it short while avoiding the trivial examples you could guess by yourself or find in the official documentation. In this post I emphasize the correct way of structuring things rather than the technical features provided by the tool (you can easily find the features in the manual, so focusing on them would provide no benefit to you).

I hope you enjoyed this post, which cost me around 80 hours of hard work, mostly spent figuring out how to refine the examples to be clear but also as near real life as possible.

Ansible is a huge topic - if your corporation is interested in training, or even just in some help redesigning things in a clean and tidy way, you can contact me on Linkedin explaining the requirements, and we'll find a way to make it happen.

If you appreciate this effort and you like this post and the other ones, please share them on Linkedin - shares and comments are an inexpensive way to encourage me to keep writing: this blog makes sense only if it gets visited.

I hate blogs with pop-ups, ads and all the other (even worse) stuff that distracts from the topics you're reading and violates your privacy. I want to offer my readers the best experience possible for free, ... but please be aware that for me it is not really free: on top of the raw costs of running the blog, I usually spend on average 50-60 hours writing each post. I offer all of this for free because I think it is nice to help people, but if something in this blog has helped you professionally and you want to give concrete support, your contribution is very much appreciated: you can just use the button above.

