TLS peers can verify whether a certificate has been revoked either by checking the Certificate Revocation List (CRL), an old and poorly performing method with several shortcomings, or by querying the OCSP endpoint of the CA that issued the certificate.
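As a rough sketch of what an OCSP query looks like, here is how you could check a certificate's status manually with the `openssl ocsp` subcommand; the file names and responder URL are placeholders, not values from the post:

```shell
# Manually query an OCSP responder for the revocation status of a certificate:
# - cert.pem:   the certificate to check (placeholder file name)
# - issuer.pem: the certificate of the CA that issued it (placeholder)
# - the -url value must be the OCSP responder listed in the certificate's
#   Authority Information Access extension
openssl ocsp -issuer issuer.pem -cert cert.pem \
    -url http://ocsp.example-ca.com -resp_text
```

The `-resp_text` flag prints the full decoded response, including the `Cert Status` field (good, revoked or unknown).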

However, this design still has a shortcoming: what happens if, for any reason, the OCSP endpoint is unreachable (by accident, or because of a deliberate attack)? Either way the outcome is a risk: if the policy is to deny the connection when the OCSP status cannot be checked, you risk a denial of service.

Conversely, if the policy is that the OCSP status check is merely a nice-to-have, there's a risk that, if a revoked certificate has been stolen, the attackers can simply prevent your client from querying the OCSP server and hijack the connection to a rogue TLS server they manage that presents the stolen, revoked certificate. To mitigate this you can set up OCSP stapling, which consists of the server prefetching signed OCSP responses and delivering ("stapling") them alongside the X.509 certificate during the TLS handshake.
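You can check from the client side whether a server staples an OCSP response; a minimal sketch, where the hostname is a placeholder:

```shell
# The -status flag asks the server to staple an OCSP response into the
# handshake; in the output, look for "OCSP Response Status: successful"
# (or "OCSP response: no response sent" when stapling is not configured).
openssl s_client -connect www.example.org:443 -status </dev/null 2>/dev/null \
    | grep -A 2 'OCSP response'
```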

The "Apache HTTPd With Mutual TLS and OCSP Stapling" post shows not only how to configure an Apache server to provide a stapled certificate, but also how to set up mutual TLS authentication, demonstrating in action what happens when a certificate is revoked.

Read more >

While running a Public Key Infrastructure (PKI), the maintenance workload due to enrolling new certificates and renewing the existing ones can quickly become overwhelming. Dealing with it manually is not only cumbersome: it is frustrating too. Luckily, there are ways to automate the enrollment process by providing online Registration Authority endpoints.

Cloudflare's PKI and TLS Toolkit provides both an online Registration Authority as well as the client software that can be used to automatically enroll new certificates or renew existing ones. The aim of the "Cloudflare's certmgr tutorial - a certmgr howto" blog post is to show how quick and easy it is to set up certmgr, the certificate monitoring and automatic enrolling facility provided by Cloudflare.
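To give an idea of what certmgr works with, here is an illustrative certificate spec: a small JSON file describing the certificate to maintain and the CA to enroll against. All hostnames, paths and the auth key below are placeholders, and the exact set of supported keys should be double-checked against the certmgr documentation:

```json
{
    "service": "httpd",
    "action": "reload",
    "request": {
        "CN": "www.example.org",
        "hosts": ["www.example.org"],
        "key": { "algo": "rsa", "size": 2048 }
    },
    "private_key": {
        "path": "/etc/pki/tls/private/www.key",
        "owner": "root", "group": "root", "mode": "0600"
    },
    "certificate": {
        "path": "/etc/pki/tls/certs/www.pem",
        "owner": "root", "group": "root"
    },
    "authority": {
        "remote": "ca.example.org:8888",
        "auth_key": "0123456789abcdef",
        "profile": "server"
    }
}
```

certmgr watches the certificate on disk and, when it approaches expiry, re-enrolls it against the configured authority and runs the `action` (here, reloading httpd).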

The operating environment used in this post is Red Hat Enterprise Linux 9 or Rocky Linux 9 - using a different environment may require you to adapt or even change things.

Read more >

Documenting the project is a critical part of the design process: since there are both business and technical audiences, it is not easy to draw up something that looks appealing and clear to the business side while also providing enough implementation details for the technical people.

For this reason, the traditional top-down approach is to write:

  • the High-Level Design Document: this document is mostly business-oriented, describing the project in business terms and highlighting costs and benefits, but it must also provide the bare minimum of technical detail to let business-oriented people figure out the final outcome, as well as the details necessary to start writing a more detailed technical document. Personally, I like the approach described in the NASA Systems Engineering Handbook: the High-Level Design Document gets improved and updated while the technical documents that cover and address the low-level requirements get written, since only during that stage are a lot of hidden problems unveiled and addressed; this also enables more accurate estimates, by the way.
  • the Software Design Document: this is the document actually used to implement the project, so it must be clear enough and provide enough detail to enable the people working on the project's implementation to work as independently as possible, avoiding pointless meetings that only waste their time and delay the project's final delivery. Mind that this document is quite deployment-agnostic and provides only a few hints about deployment - deployments are managed through a different document (each deployment is an instance, so a dedicated document is necessary). If the deployment is small, a diagram with a few details and a testing report are probably enough, but for huge deployments it is necessary to write a Deployment Plan Document.
Agile purists will object that the top-down approach has been superseded by bottom-up and that Agile transferred this part of the job to teams. That is the theory - in my personal experience the Agile bottom-up approach relies on a lot of meetings necessary to make people aware of what's happening. This works nicely with small projects, but it tends to become less effective with mid-size projects and does not work at all with big ones. The outcome of that approach is poorly written documentation and components designed in the differing styles of the people involved, which may hide unexpected later problems (especially from the operating or maintenance point of view) because of the limited point of view available to them. In addition to that, on big projects it is not as cost-effective as they say, if you consider the cost of the increased number of meetings and the number of participants - remember that the cost of a team member is their actual pay plus the earnings lost while they are not producing. So, long story short, ... I've nothing against using bottom-up design in Agile, if it is properly used in the correct use cases, but blindly using it will only lead your projects into a mess: as someone else already said, "Agile comes with no brain, please use yours".

Writing both of the above documents is not an easy task at all, so I wrote this post to provide a template along with a use case: the "SDD Template Software Design Document For Kubernetes Service" - more specifically, for a microservice of an existing service.

As you will see, this document leverages and complies with the company-wide Software Design Document: that global document provides the taxonomy, standards and best practices every project in the corporation must comply with.

Read more >

Overlay networking makes it possible to implement tunnels that interconnect networks defined inside a host (such as Docker/Podman private networks): for example, flannel-based Kubernetes uses VxLANs to interconnect the minions' private networks. Anyway, VxLAN is only one of the available technologies: others, such as GENEVE, STT or NVGRE, are also available.

In this post we set up a GENEVE tunnel with OpenVSwitch and Podman - the described setup goes beyond the simple interconnection of layer 3 network segments, interconnecting two Podman private networks configured with the same IP subnet (so they share the same broadcast domain): the layer 2 data is exchanged between the OpenVSwitch bridges on the two hosts through the GENEVE tunnel.
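The core of the tunnel creation can be sketched with `ovs-vsctl`; this assumes an OVS bridge (here hypothetically named `br-pods`) already exists on both hosts, and the bridge name, port name and peer address are placeholders:

```shell
# On host A: add a GENEVE tunnel port to the existing bridge, pointing
# at host B (192.0.2.2). Run the mirror command on host B, with
# remote_ip set to host A's address, to bring the tunnel up.
ovs-vsctl add-port br-pods geneve0 -- \
    set interface geneve0 type=geneve options:remote_ip=192.0.2.2

# Verify that the tunnel port has been created on the bridge
ovs-vsctl show
```

Once both ends are configured, broadcast traffic (ARP included) flows between the two bridges, which is what lets the two Podman networks share the same broadcast domain.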

Read more >

OpenVSwitch (OVS) is the pillar several prominent software products, such as OpenStack or Red Hat's OpenShift, rely on to set up their Software Defined Networks (SDN): it enables users to quickly and easily implement multiple bridges to which Virtual Machines or Containers can be connected.

These bridges can be left standalone, creating isolated networks, or interconnected to the machine's (or VM's) NICs, providing bidirectional access to the network segment each NIC is connected to. In addition to that, OVS also enables setting up a VxLAN Tunnel EndPoint (VTEP) on these bridges, making it possible to interconnect OVS bridges on different machines. Last but not least, it also enforces traffic policies defined using OpenFlow.
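To give a flavor of such a policy, here is a minimal, made-up example of installing OpenFlow rules with `ovs-ofctl`; the bridge name, port numbers and addresses are hypothetical:

```shell
# Allow ARP so hosts can resolve each other (standard L2 forwarding)
ovs-ofctl add-flow br0 "priority=10,arp,actions=normal"

# Permit IP traffic entering from port 1 and destined to 10.0.0.2,
# forwarding it out of port 2
ovs-ofctl add-flow br0 "priority=10,ip,in_port=1,nw_dst=10.0.0.2,actions=output:2"

# Lowest-priority catch-all rule: drop everything else
ovs-ofctl add-flow br0 "priority=0,actions=drop"

# List the installed flows, along with their packet/byte counters
ovs-ofctl dump-flows br0
```

Rules are matched by descending priority, so the catch-all drop only applies to traffic no higher-priority rule claimed.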

The "SDN tutorial - OpenFlow with OpenVSwitch on Oracle Linux" post starts from where we left off in the "Free Range Routing (FRR) And OpenVSwitch On OracleLinux" post, extending its lab and providing a practical guide on how to write and set OpenFlow rules on OpenVSwitch.

Read more >