DevOps is a methodology that aims to speed up application development and release by promoting:

  • fast development methodologies – Development teams
  • fast quality assurance methodologies – QA teams
  • fast deployment methodologies – System Operations teams
  • iteration and continuous feedback – Project Management teams

The aim is to achieve a faster time to market.

DevOps inherits Agile methods such as Scrum project management, but it is more focused on the tools necessary to achieve the goal. It is also possible to involve the Security team by delegating some of its rights to the other teams: this approach is called DevSecOps.

The tools DevOps brings with it were previously confined to the development field only: Source Code Management tools such as Git, branching models such as GitFlow, Continuous Integration and Continuous Delivery tools such as Jenkins or Drone, job schedulers such as Dkron, and code quality and compliance scanners such as SonarQube.

This means that professionals of every field, even system engineers and administrators, should have an understanding of these tools and models.

Interacting with Git using Python is a very common use case in the DevOps field: very often it is necessary to check out applications or scripts along with their configuration, or even just to check out versioned configurations. Although rarer, it is sometimes also necessary to update the checked-out contents and push the committed version back to the "origin" remote repository. In the "Git With Python HowTo GitPython Tutorial And PyGit2 Tutorial" post we play with the two most commonly used Python libraries for talking to Git: GitPython and pygit2.
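
As a quick taste of the first of the two, here is a minimal sketch using GitPython that clones a repository, commits a change and pushes it back to "origin"; the repository URL and file name are placeholders for illustration, not values from the post:

```python
# Minimal GitPython sketch (pip install GitPython); the URL and the
# file name below are illustrative placeholders.
from pathlib import Path

from git import Repo

# Clone the remote repository into a local working directory
repo = Repo.clone_from("https://example.com/acme/config.git", "/tmp/config")

# Modify a versioned configuration file in the working tree
conf = Path(repo.working_tree_dir) / "app.conf"
conf.write_text("log_level = debug\n")

# Stage the change, commit it, and push it back to the "origin" remote
repo.index.add(["app.conf"])
repo.index.commit("Raise log level to debug")
repo.remotes.origin.push()
```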

OAuth 2.0 and OpenID Connect are broadly used frameworks for delegating authentication and authorization. Despite their popularity, they are complex enough to be a tough nut to crack even for veterans: the scenarios and use cases they cover are broad and very security sensitive, so getting acquainted with them is certainly a huge challenge that very often causes a lot of pain and frustration.

The "OpenID Connect With Kratos And Hydra Tutorial - Gitea OAuth" post aim is to provide a good starting point for exploring this tough topic: after a short but comprehensive overview of them, we quickly focus on a real life scenario installing a full featured on premise suite made of Ory Kratos (the IDM), Ory Hydra (the OpenID Connect and OAuth 2 API) and the Ory Kratos Self Service UI node (the Resource Server - in this case it is just a demo).

Once the suite is up and running, we also explore a real-life use case, implementing the OAuth 2.0 Authorization Code grant by configuring OpenID Connect as an authentication source in a Gitea instance.
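
To give a feel for what the Authorization Code grant looks like from the client side, here is a minimal Python sketch; the Hydra public endpoint, client credentials, redirect URI and scopes below are illustrative assumptions, not the values used in the post:

```python
# Authorization Code grant sketch against an OAuth 2.0 / OpenID Connect
# provider such as Ory Hydra. Endpoint, client credentials and redirect
# URI are illustrative assumptions.
import urllib.parse

import requests

HYDRA_PUBLIC = "http://hydra.example.com:4444"  # assumed Hydra public endpoint
CLIENT_ID = "gitea"
CLIENT_SECRET = "changeme"
REDIRECT_URI = "http://gitea.example.com/user/oauth2/hydra/callback"

# Step 1: send the user's browser to the authorization endpoint
authorize_url = f"{HYDRA_PUBLIC}/oauth2/auth?" + urllib.parse.urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid profile email",
    "state": "an-opaque-anti-csrf-value",
})
print("Open in a browser:", authorize_url)

# Step 2: after login and consent, the provider redirects back to
# REDIRECT_URI with ?code=...; exchange that code for tokens.
code = input("Paste the authorization code: ")
token = requests.post(
    f"{HYDRA_PUBLIC}/oauth2/token",
    data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
    },
    auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
).json()
print(token.get("access_token"), token.get("id_token"))
```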

Ansible roles are reusable objects that provide specialized task lists, handlers, templates and resource files within a single delivery unit: these objects can be accessed directly from the filesystem, downloaded from Git, from the online Ansible Galaxy or from an Ansible Galaxy-compatible local service such as Pulp 3. Writing custom roles is nonetheless a really challenging task, especially designing them to be as easy to use and maintain as possible.
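
For reference, this is the conventional layout of a role as scaffolded by ansible-galaxy init; any directory that is not needed can simply be omitted:

```
myrole/
├── defaults/main.yml    # lowest-precedence default variables
├── files/               # static files delivered as-is
├── handlers/main.yml    # handlers notified by tasks
├── meta/main.yml        # role metadata and dependencies
├── tasks/main.yml       # the role's entry-point task list
├── templates/           # Jinja2 templates
├── tests/               # a minimal inventory and test playbook
└── vars/main.yml        # higher-precedence role variables
```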

The "Ansible roles best practices: practical example gitea role" post guides you through developing a custom Ansible role with a clean and tidy design that you can use as a reference for developing other custom roles.

As a use case, we see how to deploy Gitea, a renowned full-featured Git web UI supporting multiple organizations, providing authentication and authorization facilities that let you protect repositories and branches, supporting merge requests and a lot of other advanced features, along with a powerful and well-standardized API that can easily be exploited by your automations. And, last but not least, ... it is even Java-free.
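
As a small taste of that API, the following sketch lists the repositories visible to a personal access token; the instance URL and the token are placeholders you must replace with your own values:

```python
# Minimal sketch querying the Gitea REST API; base URL and token are
# illustrative placeholders (tokens are created under Settings -> Applications).
import requests

GITEA_URL = "http://gitea.example.com:3000"  # assumed instance URL
TOKEN = "your-personal-access-token"

# List the repositories owned by or accessible to the token's user
resp = requests.get(
    f"{GITEA_URL}/api/v1/user/repos",
    headers={"Authorization": f"token {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for repo in resp.json():
    print(repo["full_name"])
```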

Ansible is an extremely powerful data center automation tool: most of its power comes from not being too strict about defining a structure - this lets it be used in extremely complex scenarios as well as be set up very quickly in quite trivial ones.

But this is a double-edged sword: too many times I have seen adoption POCs performed with requirements that were too poor, in the belief that what was experimented with could be reused as a baseline for structuring Ansible. This is a very harmful error that quickly leads to unmaintainable real-life environments with duplicated code and settings, often stored in structures without a consistent logic or naming, losing most of the benefits of such a great automation tool.

The "Ansible playbooks best practices: caveats and pitfalls" post starts where we left off with "Ansible inventory best practices: caveats and pitfalls", exploring how to properly approach writing playbooks, structuring things both to promote maintainability and to ease operation and configuration tasks.

The "Ansible inventory best practices: caveats and pitfalls" post is where we begin exploring how to properly structure Ansible to get all of its power without compromises, structuring things in an easy and straightforward way that suits almost every operating scenario.

Ansible is a powerful data center automation tool that enables nearly declarative automations. "Ansible playbooks, ansible-galaxy, roles and collections" is a primer on Ansible that gradually introduces concepts we elaborate on in the posts that follow: as already said, Ansible is a powerful tool, and like many powerful tools it can cause more pain than benefit if improperly managed. The aim of this post is to provide a good baseline that quickly enables you to operate Ansible, running ad hoc statements and playbooks and using Ansible Galaxy with off-the-shelf roles and collections.

This post begins where we left off with the "Ansible Tutorial – Ansible Container How-To" post, writing a playbook that prepares hosts to be managed by Ansible and learning how to use Ansible Galaxy to download and install off-the-shelf Ansible roles and collections. The outcome is a running PostgreSQL instance that we will use as the DB engine in the next post of the series.
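
As a flavor of the commands involved, here is the kind of workflow the post covers; the geerlingguy.postgresql role and the community.postgresql collection are illustrative picks from Ansible Galaxy, not necessarily the ones used in the post:

```
# Run an ad hoc statement against every host in the inventory
ansible all -m ping

# Install an off-the-shelf role and collection from Ansible Galaxy
ansible-galaxy role install geerlingguy.postgresql
ansible-galaxy collection install community.postgresql

# Apply a playbook that uses them
ansible-playbook -i inventory site.yml
```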