Ansible is a powerful datacenter automation tool that enables nearly declarative automations. "Ansible Tutorial - Ansible Container Howto" is the first of a series of posts dedicated to Ansible, paying particular attention to "doing it right": Ansible is a powerful tool, and like many powerful tools it can cause more pain than benefit if improperly managed.
In this post we see how to quickly set up a containerized Ansible on a workstation, configuring the environment so that it can be run from the shell without explicitly invoking Podman: this provides a very friendly user experience, letting you run statements as if Ansible were really installed on the system.
Why Use Containerized Ansible
In my working experience the best way to use Ansible is running it from a container image - this approach provides several benefits:
- it demands very little setup and maintenance effort (you don't need to install or patch Ansible at all)
- you can very easily switch between different Ansible versions - it is just a matter of specifying the container image you want to run
- it keeps the development and the operational environments aligned out of the box - it is just a matter of running the same container image
- it is very easy to integrate within an existing CI/CD suite, and just as easy to migrate to a different CI/CD suite when necessary.
Set Up The Container Environment
Let's start by installing Podman:
sudo dnf install -y podman
Running rootless Podman requires setting up a "subuid" and a "subgid" range for the user running the containers: in this example we are using the "vagrant" user, so:
sudo usermod --add-subuids 100000-165535 vagrant
sudo usermod --add-subgids 100000-165535 vagrant
of course, adjust the user name and the numeric values according to your current setup.
In my opinion, the most compact container image providing an up to date Ansible version is the Alpine Linux based one.
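You can verify that the ranges have been registered as follows:
grep vagrant /etc/subuid /etc/subgid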
Pull The Ansible Container
To speed up the Ansible container initialization, we can pre-fetch the container image as follows:
podman pull docker.io/alpine/ansible:latest
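If you are curious about which Ansible version the image ships, you can check it with a throwaway container:
podman run --rm docker.io/alpine/ansible:latest ansible --version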
Set Up The Ansible Directory Tree
Since it is obviously handier to work using the development tools installed on our workstation, we create a directory on its local file system, bind-mounting it into the Ansible container when needed.
Let's create the "ansible" directory in the current user's home directory:
mkdir -m 755 ~/ansible
Inside it, we create the following directories:
- environment - used to store the Ansible environment
- playbooks - used to store the Ansible playbooks
- roles - used to store the Ansible roles
- collections - used to store the Ansible collections
mkdir -m 755 ~/ansible/environment ~/ansible/playbooks \
~/ansible/roles ~/ansible/collections
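The resulting directory tree should look as follows:
find ~/ansible -type d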
Configure Ansible
Ansible reads its settings from the "ansible.cfg" configuration file. We can generate it by launching a one-shot Ansible container as follows:
podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z \
docker.io/alpine/ansible:latest \
bash -c "ansible-config init --disabled > ansible.cfg"
the above statement spun up a container and:
- bind-mounted the local "~/ansible" directory to the "/ansible" directory inside the container ("-v $HOME/ansible:/ansible:Z")
- used the "/ansible" directory inside the container as the working directory ("-w /ansible") for the "ansible-config" statement
- ran the "ansible-config" tool to initialize the "ansible.cfg" configuration file in the "/ansible" directory in the container
Since the "/ansible" directory in the container was bind-mounted to the "~/ansible" directory on the host (our workstation), the "ansible.cfg" configuration file is actually stored beneath the "~/ansible" directory.
We can verify it is actually in the workstation's filesystem as follows:
ls ~/ansible/ansible.cfg
Let's now adjust the generated file to suit our needs.
First, we configure Ansible to login to the target hosts as the "ansible" user:
sed -i 's/^[ ]*[;][ ]*remote_user[ ]*=.*/remote_user=ansible/' ~/ansible/ansible.cfg
In this post I assume only Linux target hosts: Ansible connects to Linux hosts using the so-called "smart" connection - that means it connects via SSH. Although Ansible supports password based SSH logins to the target hosts, the most convenient way for connecting is of course using public key authentication.
Let's configure Ansible to use the "environment/ansible.key" SSH private key:
sed -i 's/^[ ]*[;][ ]*private_key_file[ ]*=.*/private_key_file=environment\/ansible.key/' ~/ansible/ansible.cfg
We must of course generate the key - since we are on the host, the path to save it to is "~/ansible/environment/ansible.key":
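We can verify that both changes were applied:
grep -E '^(remote_user|private_key_file)' ~/ansible/ansible.cfg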
ssh-keygen -t rsa -b 4096 -f ~/ansible/environment/ansible.key
pick a good passphrase and enter it when requested.
We then rename "~/ansible/environment/ansible.key.pub" to "~/ansible/environment/ansible.pub":
mv ~/ansible/environment/ansible.key.pub \
~/ansible/environment/ansible.pub
We also create the "~/.ssh/known_hosts" file, if it does not exist yet:
[ -f ~/.ssh/known_hosts ] || touch ~/.ssh/known_hosts
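If the "ansible" user already exists on a target host (the next post shows how to automate provisioning it), you can authorize the freshly generated public key with, for example:
ssh-copy-id -i ~/ansible/environment/ansible.pub ansible@pgsql-ca-up1a001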
Ansible needs an inventory where the target hosts are listed, grouped and described by attributes - in this example we configure it to use the "environment/hosts" file:
sed -i 's/^[ ]*[;][ ]*inventory[ ]*=.*/inventory=environment\/hosts/' ~/ansible/ansible.cfg
Create the "~/ansible/environment/hosts" file with the following contents:
localhost ansible_connection=local
pgsql-ca-up1a001
This is a very minimal inventory, suitable for this post's purposes.
It contains only the following target hosts:
- localhost
- pgsql-ca-up1a001
For the "localhost" target host only, we also set the "ansible_connection" variable to "local" - this override makes Ansible operate directly on the container itself, without authenticating or connecting to anything.
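As the lab grows, the same inventory can also group hosts and carry per-host attributes; a purely illustrative sketch (the group name and the address below are made up):
[db_servers]
pgsql-ca-up1a001 ansible_host=192.168.0.10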
We must then configure Ansible to look in the "roles" directory for the Ansible roles to load:
sed -i 's/^[ ]*[;][ ]*roles_path[ ]*=.*/roles_path=roles/' ~/ansible/ansible.cfg
and in the "collections" directory for the Ansible collections to load:
sed -i 's/^[ ]*[;][ ]*collections_path[ ]*=.*/collections_path=collections/' ~/ansible/ansible.cfg
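We can double check the whole configuration by dumping only the settings that differ from the defaults, again using a one-shot container:
podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z \
docker.io/alpine/ansible:latest \
bash -c "ansible-config dump --only-changed"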
Run Ansible Within The Container
We are now ready to give the containerized Ansible a go: as an example, let's set the "System in use by the Ansible Lab" login banner on the "pgsql-ca-up1a001" target host by using an Ansible ad-hoc statement.
This means writing "System in use by the Ansible Lab" to the "/etc/issue.net" file on the target host:
podman run --rm -it -w /ansible \
-v $HOME/ansible:/ansible:Z \
-v $HOME/.ssh/known_hosts:/root/.ssh/known_hosts:Z \
docker.io/alpine/ansible:latest \
bash -c "apk add sshpass; ansible -b -u vagrant -k pgsql-ca-up1a001 --ssh-extra-args='-o StrictHostKeyChecking=no' -m copy -a \"content='System in use by the Ansible Lab' dest=/etc/issue.net\""
the above statement spun up a container and:
- bind-mounted the local "~/ansible" directory to the "/ansible" directory inside the container ("-v $HOME/ansible:/ansible:Z")
- used the "/ansible" directory inside the container as the working directory ("-w /ansible")
- ran the "apk add sshpass" statement: this step is necessary to enable Ansible to connect to the target hosts using password authentication. Of course, if the user you are connecting as has already authorized your public key, you can omit it, and in that case it is also unnecessary to provide the "-k" command line parameter.
- ran the actual Ansible ad hoc statement, invoking the "copy" module and passing the following arguments ("-a" option):
- content='System in use by the Ansible Lab'
- dest=/etc/issue.net
we had to provide the following parameters to the "ansible" command line tool:
- -u vagrant: tells Ansible to connect to the target system as the "vagrant" user
- -k: tells Ansible to prompt for the "vagrant" user's password
- -b: stands for "become", which in Ansible terms means: right after connecting to the target system, become another user (the "root" user by default)
- --ssh-extra-args='-o StrictHostKeyChecking=no': disables the check of the target host's SSH host key; on the first connection, the host key gets recorded into the bind-mounted "known_hosts" file
Launching The Containerized Ansible Directly From The Shell
Admittedly, running Ansible like that is very annoying, since it requires providing the podman statement with lots of arguments, and also paying attention to properly escape Ansible's parameters.
My advice is to create a shell function, so that you can launch it as if the actual "ansible" command line tool were installed on the system.
Add the following function to the "~/.bash_profile":
function ansible {
    # re-quote the arguments so that they survive the shell inside the container
    local statement
    statement=$(printf '%q ' "$@")
    ANSIBLE_CONTAINER="${ANSIBLE_CONTAINER:-docker.io/alpine/ansible}"
    ANSIBLE_CONTAINER_LABEL="${ANSIBLE_CONTAINER_LABEL:-latest}"
    podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z -v $HOME/.ssh/known_hosts:/root/.ssh/known_hosts:Z ${ANSIBLE_CONTAINER}:${ANSIBLE_CONTAINER_LABEL} bash -c "apk add sshpass; ansible $statement"
}
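Once the profile is reloaded (as shown below), a quick smoke test of the new function is pinging the "localhost" inventory entry (no SSH involved, thanks to the "local" connection override):
ansible localhost -m ping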
In the same way as we did for the "ansible" statement, it is certainly worth the effort to define an "ansible-playbook" function:
function ansible-playbook {
    local statement
    statement=$(printf '%q ' "$@")
    ANSIBLE_CONTAINER="${ANSIBLE_CONTAINER:-docker.io/alpine/ansible}"
    ANSIBLE_CONTAINER_LABEL="${ANSIBLE_CONTAINER_LABEL:-latest}"
    podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z -v $HOME/.ssh/known_hosts:/root/.ssh/known_hosts:Z ${ANSIBLE_CONTAINER}:${ANSIBLE_CONTAINER_LABEL} bash -c "apk add sshpass py3-netaddr; ansible-playbook $statement"
}
In this case, before running "ansible-playbook", we also install the "py3-netaddr" package, since it is required by filters such as "ipaddr" (we keep installing "sshpass" too, so that password based logins keep working with playbooks as well).
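As a quick try, we can write a minimal playbook that replicates the ad-hoc banner statement (a sketch for this lab only - the file name and the play name are arbitrary) and run it through the new function:
cat > ~/ansible/playbooks/banner.yml <<'EOF'
---
- name: Set the login banner
  hosts: pgsql-ca-up1a001
  become: true
  tasks:
    - name: Write the banner to /etc/issue.net
      ansible.builtin.copy:
        content: 'System in use by the Ansible Lab'
        dest: /etc/issue.net
EOF
ansible-playbook -u vagrant -k playbooks/banner.yml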
Let's also define the "ansible-galaxy" function:
function ansible-galaxy {
    local statement
    statement=$(printf '%q ' "$@")
    ANSIBLE_CONTAINER="${ANSIBLE_CONTAINER:-docker.io/alpine/ansible}"
    ANSIBLE_CONTAINER_LABEL="${ANSIBLE_CONTAINER_LABEL:-latest}"
    podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z ${ANSIBLE_CONTAINER}:${ANSIBLE_CONTAINER_LABEL} bash -c "ansible-galaxy $statement"
}
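For example, to install a community role from Ansible Galaxy into our "roles" directory (the role is just an illustration):
ansible-galaxy role install geerlingguy.postgresql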
the "ansible-inventory" function:
function ansible-inventory {
    local statement
    statement=$(printf '%q ' "$@")
    ANSIBLE_CONTAINER="${ANSIBLE_CONTAINER:-docker.io/alpine/ansible}"
    ANSIBLE_CONTAINER_LABEL="${ANSIBLE_CONTAINER_LABEL:-latest}"
    podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z ${ANSIBLE_CONTAINER}:${ANSIBLE_CONTAINER_LABEL} bash -c "ansible-inventory $statement"
}
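For example, to display the inventory as Ansible actually resolves it:
ansible-inventory --graph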
and the "ansible-vault" function:
function ansible-vault {
    local statement
    statement=$(printf '%q ' "$@")
    ANSIBLE_CONTAINER="${ANSIBLE_CONTAINER:-docker.io/alpine/ansible}"
    ANSIBLE_CONTAINER_LABEL="${ANSIBLE_CONTAINER_LABEL:-latest}"
    podman run --rm -it -w /ansible -v $HOME/ansible:/ansible:Z ${ANSIBLE_CONTAINER}:${ANSIBLE_CONTAINER_LABEL} bash -c "ansible-vault $statement"
}
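For example, to encrypt a secret so that it can safely be pasted into a playbook or inventory (the value and the variable name below are made up):
ansible-vault encrypt_string 'SuperSecret123' --name 'db_password'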
We must obviously reload the "~/.bash_profile" so that Bash processes the new functions:
source ~/.bash_profile
We are now able to re-run the previous Podman/Ansible statement as follows:
ansible -b -u vagrant -k pgsql-ca-up1a001 --ssh-extra-args='-o StrictHostKeyChecking=no' -m copy -a "content='System in use by the Ansible Lab' dest=/etc/issue.net"
using exactly the same syntax you would use if Ansible were actually installed on your workstation.
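We can also immediately verify that the banner actually landed on the target host:
ansible pgsql-ca-up1a001 -u vagrant -k -m command -a 'cat /etc/issue.net'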
Footnotes
As you can see, it is easy to set up a containerized Ansible environment that is also easy to maintain and operate. In the next post - Ansible playbooks, ansible-galaxy, roles and collections - we will have a primer on Ansible playbooks and Ansible Galaxy: we will write a very handy playbook you can run to automatically prepare your hosts for being managed by Ansible, and learn how to install and use off-the-shelf Ansible roles and collections.