HAProxy is certainly one of the most renowned, fast and efficient (in terms of processor and memory usage) open source load balancers and proxies, enabling both TCP and HTTP-based applications to spread requests across multiple servers. In this "High Available HA Proxy Tutorial With Keepalived" we see not only how to install it in a highly available fashion, but also how to keep its configuration clean and tidy, having it automatically fetched from a remote Git repository.

A Quick Overview Of HA-Proxy

HAProxy was written in 2000 by Willy Tarreau (a core contributor to the Linux kernel) as free and open source software; since 2013 a commercial product (HAProxy Enterprise) is also available, provided and supported by HAProxy Technologies LLC, which also provides appliance-based application delivery controllers named ALOHA.

Its most interesting features are:

  • Layer 4 (TCP) and Layer 7 (HTTP) load balancing
  • SSL/TLS termination proxy
  • Multi-factor stickiness
  • URL rewriting
  • Rate limiting
  • Gzip compression
  • Caching

When talking about the Linux platform, besides HAProxy it is certainly worth mentioning NGINX and Traefik as well - it is actually quite difficult to decide which one to use: for example, HAProxy supports more protocols than NGINX, but it lacks the mail protocols that NGINX supports; HAProxy provides exportable metrics also in the community edition, whereas NGINX provides them only in the commercial edition. This is just to remark that it is nearly impossible to say which is the best one - you can just try to figure out which one best suits your use case (and most of the time you wind up using at least a pair of them).

HA-Proxy provides availability to the applications behind it, but not to itself - this means that if it goes down, every highly available application configured behind it becomes unreachable as well.

Luckily it is possible to mitigate this by setting up HA-Proxy itself in a highly available fashion. This can be achieved in several ways: a classic one is to manage it, along with a floating IP address, as highly available resources of a Corosync / Pacemaker suite, but there is a much simpler way to achieve it by using Keepalived.

The Lab

To see HA-Proxy in action, we are about to set up a lab with 2 Active/Passive HA-Proxy services installed on Oracle Linux 9, sharing 1 floating IP address resolved by a High Available FQDN.

More specifically:

  • Hosts: haproxy-ca-up1a001.p1.carcano.corp (IP: 10.100.100.10) and haproxy-ca-up1a002.p1.carcano.corp (IP: 10.100.100.11)
  • Floating IP address: 10.100.100.250
  • High Available FQDN: haproxy-ca-0.p1.carcano.corp (resolving on the floating IP address above)

If necessary, it is possible to set up 2 Active/Active HA-Proxy instances with 2 floating IP addresses resolved by the same High Available FQDN. Despite the performance boost (the load is spread round-robin across both HA-Proxy instances), it is a much more complex configuration: for example, when dealing with backends requiring sticky sessions, it is necessary to set up stick tables and their synchronization between the nodes, otherwise the session state of a node dies with it when the node goes down. In addition to that, operating an Active/Passive cluster is much simpler: for example, to test a new configuration, you can just apply it to the passive node and perform all the required testing before applying it to the active node. With Active/Active, you must first release the floating IP address from the node where you want to apply the configuration under test, otherwise some regular traffic will keep reaching that node and suffer from the errors that may arise from the configuration under test. As usual - pros and cons.

To ease HA-Proxy's configuration management, we configure it so that it downloads its settings from a Git repository - this provides several benefits:

  • the configuration is shared, so there's no need to synchronise it between the HA-Proxy hosts
  • the configuration is versioned, enabling easy rollbacks when necessary and making auditing easier

Delivering The HA Proxy HA Cluster

In this scenario it may be interesting to use Ansible to install the software and keep it updated: in this post I won't provide playbooks and such, because I want to explain how things work - if you want to use this post as a reference for your own real-life service, feel free to write playbooks for the delivery (installation and upgrade), dismissal and operations (such as start, stop, restart) of this service.

Unless otherwise specified, all the next tasks must be performed on every host of the cluster - "haproxy-ca-up1a001.p1.carcano.corp" and "haproxy-ca-up1a002.p1.carcano.corp" in this case.

Operating System Prerequisites

As usual, the most straightforward prerequisite is to update the platform:

sudo dnf update -y

A load balancer is a critical infrastructure component - make sure that firewalld is running and enabled at boot:

sudo systemctl enable --now firewalld
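Being a load balancer, the firewall must also allow the frontends' traffic: as an example - assuming the listeners you will later configure bind to the standard HTTP and HTTPS ports, adjust the services or ports to your actual setup - you can allow them as follows:

sudo firewall-cmd --permanent --add-service=http --add-service=https
sudo firewall-cmd --reload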

Kernel Tunables

Since the workload of a heavily loaded balancer is far different from the one of a "regular" system, we must adjust some kernel tunables.

Enable Binding To Non Existent IP Addresses

It is mandatory to enable binding to non-existent IP addresses: this is needed to let HAProxy start when the floating IP is owned by the other node - it can be accomplished by adding the below setting to the "/etc/sysctl.d/99-haproxy.conf" file:

net.ipv4.ip_nonlocal_bind = 1

Networking Tunables

The default networking tunables are set for regular systems that are not supposed to deal with high connection rates. For this reason we must adjust them to bear the huge load typical of a load balancer, adding the below values to the "/etc/sysctl.d/99-haproxy.conf" file:

net.ipv4.tcp_max_syn_backlog = 100000
net.core.somaxconn = 100000
net.core.netdev_max_backlog = 100000

Mind that these values were estimated for a system with 4GB of RAM and a 10Gb network interface card: in real life you must adjust them to match your actual hardware resources.

  • "net.ipv4.tcp_max_syn_backlog" relates to the number of half-open TCP connection the system can bare: during load spikes it is legitimate to have half-open connections on a load balancer, so it is correct and advised to raise it
  • "net.core.somaxconn" is the maximum value "net.ipv4.tcp_max_syn_backlog" can have, so it must be raised accordingly or the connections will be silently truncated anyway
  • "net.core.netdev_max_backlog" is the maximum number of packets passed through the network interface the receive queue can hold, waiting to be processed by the kernel

Increase File Limits

Limits on the number of open files also have a huge impact: even under a regular workload, HAProxy opens a huge number of files, so we need to raise the limits to a value high enough not to cause truncation.

First we need to increase the open files limit system-wide - add the following entries to the "/etc/sysctl.d/99-haproxy.conf" file:

fs.file-max=262144
fs.nr_open=1048576
  • "fs.file-max" sets the maximum number of file-handles that the Linux kernel will allocate. A reasonable rule of the thumb is an amount of 256 file-handles every 4M of RAM. In this lab we are using VM with 4GB of RAM, so 4096/4=1024; 1024*256=262144
  • the maximum value of "fs.nr_open" is capped to "sysctl_nr_open_max" (this is hardcoded in the Kernel - on x86_64 it is 2147483584).
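The tunables set in the "/etc/sysctl.d/99-haproxy.conf" file are loaded at boot: if you don't want to wait for the reboot we perform later on, you can load them immediately and double-check the running values as follows:

sudo sysctl --system
sysctl net.ipv4.ip_nonlocal_bind net.core.somaxconn fs.file-max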

Then we have to address the "haproxy" and "root" users' limits: Red Hat family distributions use PAM, so we have to set the values in the "/etc/security/limits.d/haproxy.conf" file, raising them for example to 100000:

root soft nofile 100000
root hard nofile 100000
haproxy soft nofile 100000
haproxy hard nofile 100000
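As a quick sanity check - mind that the value you actually get depends on how PAM is invoked, so treat this only as an indication - you can print the hard and soft limits seen by the "haproxy" user as follows:

sudo -u haproxy bash -c 'ulimit -Hn; ulimit -Sn'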

We still need to raise the limit in the Systemd service unit - first create the directory where to store the override file:

sudo mkdir -m 755 /etc/systemd/system/haproxy.service.d

then create the "/etc/systemd/system/haproxy.service.d/override.conf" file with the following contents:

[Service]
LimitNOFILE=100000

Now we should reload Systemd and log off, but since we still have a pending reboot because of the system upgrade we performed at the beginning, we just reboot the system now:

sudo shutdown -r now

Keepalived

The first component we install and configure is Keepalived - install it as follows:

sudo dnf install -y keepalived

Our scenario requires us to configure just a single VRRP instance (VI_1), with a higher priority on the node with the role "master". On each node, the priority is then dynamically raised if the haproxy process is detected as running.

Let's start by configuring it on the "haproxy-ca-up1a001" host - after logging in to it, configure the "/etc/keepalived/keepalived.conf" file as follows:

! Configuration File for keepalived

global_defs {
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100

    authentication {
        auth_type PASS
        auth_pass H2dSf3gFdeGr
    }

    virtual_ipaddress {
        10.100.100.250/24
    }

    unicast_src_ip 10.100.100.10

    unicast_peer {
       10.100.100.11
    }

    track_script {
        chk_haproxy
    }
}

it is then time to configure the "haproxy-ca-up1a002" host - after logging in to it, configure the "/etc/keepalived/keepalived.conf" file as follows:

! Configuration File for keepalived

global_defs {
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 99

    authentication {
        auth_type PASS
        auth_pass H2dSf3gFdeGr
    }

    virtual_ipaddress {
        10.100.100.250/24
    }

    unicast_src_ip 10.100.100.11

    unicast_peer {
       10.100.100.10
    }

    track_script {
        chk_haproxy
    }
}

This setup assigns 10.100.100.250 as the High Available IP address shared between the two nodes, with a preference for "haproxy-ca-up1a001" (priority 100).

When Keepalived starts, both nodes get their initial priority ("haproxy-ca-up1a001" has priority 100, whereas "haproxy-ca-up1a002" has priority 99). The actual priority is then computed by checking the outcome of the "chk_haproxy" vrrp_script: the "killall -0 haproxy" statement returns a non-zero exit code if the "haproxy" process is not found. If the "chk_haproxy" vrrp_script returns no error, the actual priority is raised by 2 (weight 2) - this check is repeated every 2 seconds (interval 2).

This means that on each host running haproxy the actual priority is raised by 2 (weight 2): this brings "haproxy-ca-up1a001" to priority 102 and "haproxy-ca-up1a002" to priority 101.

So, during normal conditions (both HAProxy services running), "haproxy-ca-up1a001" has the higher priority and so "wins" the High Available IP address (10.100.100.250). If a failure occurs on it (a node crash or a failure of the HAProxy service), "haproxy-ca-up1a002" becomes the node with the higher priority, and the High Available IP address migrates to it until the "haproxy-ca-up1a001" node is restored. Once restored, since the "haproxy-ca-up1a001" node again has the higher priority, the High Available IP address (10.100.100.250) migrates back to it.

We can now enable the "keepalived" service at boot and immediately start it as follows:

sudo systemctl enable --now keepalived
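To quickly verify which node currently owns the floating IP address (assuming the "ens192" interface name used in the configuration above), run the following on both nodes:

# 10.100.100.250 must be listed only on the node currently holding it
ip -4 addr show dev ens192
# the journal shows the VRRP state transitions (MASTER / BACKUP)
sudo journalctl -u keepalived -n 20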

If, after starting "keepalived", the hostname gets reset to match the floating IP's FQDN, you can restore the original hostname as follows - on "haproxy-ca-up1a001" type:

sudo hostnamectl hostname haproxy-ca-up1a001.p1.carcano.corp

and, same way, on "haproxy-ca-up1a002" type:

sudo hostnamectl hostname haproxy-ca-up1a002.p1.carcano.corp

HAProxy

The HAProxy version provided by the usual repositories is quite out of date and is missing very important features, such as the automatic OCSP response retrieval for OCSP stapling that was added only in version 2.8. Percona provides HAProxy as part of its PostgreSQL cluster stack with Patroni: since the HAProxy version they ship is much more current than the one provided by Oracle Linux 9, it is certainly best to install the one provided by their repository.

If you want to know more about OCSP stapling, you may find interesting my post Apache HTTPd With Mutual TLS and OCSP Stapling.

First, install the RPM that configures Percona's repository:

sudo dnf install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

then enable the most current PostgreSQL version - in this example it is PostgreSQL 16:

sudo percona-release setup ppg16

We are of course not interested in the PostgreSQL RPM packages, but this step is necessary to enable one of the repositories providing Percona's HAProxy build.

We are now ready to install Percona's HAProxy as follows:

sudo dnf install -y percona-haproxy

In addition to that, since Keepalived is used here to support HAProxy, it is necessary to make the Keepalived service depend on HAProxy, to prevent the floating IP from being acquired before the HAProxy service is started:

sudo sed -i 's/^After=\(.*\)/After=\1 haproxy.service/' /usr/lib/systemd/system/keepalived.service

then reload Systemd to apply the new settings:

sudo systemctl daemon-reload 
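To double-check that the dependency has been added, you can print the "After" directive Systemd computed for the keepalived service - "haproxy.service" must now appear in the list:

systemctl show -p After keepalived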

Rsyslog

HAProxy's default configuration sends its log messages to rsyslog via UDP - since the UDP listener is most of the time disabled, we must enable it.

Edit the "/etc/rsyslog.conf" file so that the following directives look as follows:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

Mind that these directives are often already present in the file, but they are commented out - in this case just uncomment them.

By default HAProxy sends its log messages to the "local2" rsyslog facility - since a rule for it is often missing, we must add it to the "/etc/rsyslog.conf" file as follows:

local2.=info                                                 /var/log/haproxy/info.log
local2.notice                                                /var/log/haproxy/errors.log

The above two lines write log messages with severity equal to "info" to the "/var/log/haproxy/info.log" file, whereas "notice" and all the higher severities (critical, emergency and so on) are logged to the "/var/log/haproxy/errors.log" file.

Since we want to have HAProxy's log files stored beneath the "/var/log/haproxy" directory, we must of course create it:

sudo mkdir -m 750 /var/log/haproxy

Once done, restart "rsyslog" to apply the new settings:

sudo systemctl restart rsyslog
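To make sure everything is wired correctly, you can verify that the UDP listener is bound and send a test message to the "local2" facility - the message below is of course just an example:

sudo ss -ulnp | grep ':514'
logger -p local2.info "haproxy rsyslog test"
sudo tail -n 1 /var/log/haproxy/info.log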

we also need to adjust the logrotate configuration, since the one provided by Percona's HAProxy package does not match our log file layout - modify the "/etc/logrotate.d/haproxy" file until it looks as follows:

/var/log/haproxy/*.log {
    daily
    rotate 10
    missingok
    notifempty
    compress
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
        /bin/kill -HUP `cat /var/run/rsyslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
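You can verify the new logrotate configuration without actually rotating anything by running logrotate in debug mode:

sudo logrotate -d /etc/logrotate.d/haproxy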

Setting Up Version Controlled Configuration

As we said at the very beginning of this post, we want to version control the HAProxy configuration using Git.

The obvious requisite to go on is to set up an empty private bare Git repository on a remote Git server: in my case - I'm using Gitea - I created the empty private "haproxy-p1-0" Git repository belonging to the "infrastructure" organization. I also added the SSH public key of the user I want to use to push the changes, and granted that user read-write access to the repository. If you want to try Gitea as well, you may like the post "Ansible Roles Best Practices: Practical Example Gitea Role".

The other obvious requisite for a Git-based, version controlled configuration is to install Git:

sudo dnf install -y git

Copy The Original Configuration Into A Git Repository 

This step must be performed only once, on one host only and as a normal user - in my case I'm operating as "mcarcano" (my own user).

sudo su - mcarcano

if the current user has not yet configured Git's committer information, add it now as follows:

git config --global user.name "Marco Carcano"
git config --global user.email marco.carcano@carcano.corp

We now need to create a copy of the default HAProxy configuration directory tree as follows:

cp -dpR /etc/haproxy ~

then we change directory into the copied one:

cd ~/haproxy

we are ready to initialise the Git repository inside it:

git init

lastly, we must add the remote empty Git repository we created on Gitea, linking it as the "origin" remote repo:

git remote add origin git@git0.p1.carcano.corp:infrastructure/haproxy-p1-0.git

we can now commit the original HAProxy configuration - it is handy to keep it as the very first commit:

git add haproxy.cfg 
git commit -m 'default haproxy configuration'

and of course push to the remote Git repository:

git push -u origin master

If desired, you can now protect the "master" branch, also taking care to set an approval rule: this way the branch is protected from direct pushes, and the only way to change it is by raising Merge Requests (Pull Requests).

Customize The HA-Proxy Configuration

This step must be performed only once, on one host only and as a normal user - in my case I'm operating as "mcarcano" (my own user).

If the "master" branch is now protected, we can no longer commit and push to it - for this reason we checkout the new "devel" branch, that will be used as a working branch each time it is necessary to alter the HAProxy's configuration:

git checkout -b devel

we can now safely overwrite the default "haproxy.cfg" configuration file with the following contents:

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid

    maxconn     1000000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
    stats timeout 30s

    # utilize system-wide crypto-policies
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM

    tune.ssl.cachesize 1000000

#---------------------------------------------------------------------
# Common defaults
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 1000000

Most of the above settings are straightforward - it is just worth explaining that:

  • about the "maxconn" parameter, be wary that it is a per process setting: in this case, since "maxconn" is 100000 and we have 4 cores, HAProxy can manage 400000 connections
  • We also tuned the SSL cache to 1000000 of entries - mind, an entry is about 200 bytes, so in this setup the maximum amount of memory consumed by SSL cache is only 200MB.
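Once the service is up and running (we will start it later in this post), you can double-check the value HAProxy actually applied through the stats socket configured above - a quick check, assuming the "socat" package is installed (sudo dnf install -y socat):

echo "show info" | sudo socat stdio /var/lib/haproxy/stats | grep -i maxconn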

HAProxy supports TLS termination, so we must reserve a directory to store the certificates in - since the default configuration directory is "/etc/haproxy", a good spot is "/etc/haproxy/tls".

Since we are going to use Git inside "/etc/haproxy", we must exclude the "tls" directory from the Git-versioned files: certificate bundle files also contain the private key, so they are very sensitive security objects that must be stored in a security vault, or automatically enrolled and retrieved by certificate agents installed on the system, such as Cloudflare's certmgr.

If you want to know more about how to automatically enroll certificates and manage their lifecycle with Cloudflare's certmgr, you may find interesting Cloudflare's Certmgr - Tutorial A Certmgr Howto.

To exclude the "/etc/haproxy/tls" directory from the git repository, inside the repository directory, create the ".gitignore" file with the following contents:

tls

As you see, since the repository will be cloned into "/etc/haproxy" on both HAProxy hosts, we stripped the "/etc/haproxy" prefix from the path.

We can now commit these changes:

git add haproxy.cfg .gitignore
git commit -m "customized the global settings"

and push them:

git push -u origin devel

To complete the change we must now:

  • on the remote Git repository, raise the Merge Request for merging the "devel" branch into the "master" branch. 
  • have someone with the right privilege approving that Merge Request
  • complete the Merge Request by doing the merge

After these steps the changes will be merged into the "master" branch, and so will be available to HA-Proxy.

Clone The Remote HAProxy Repository Into HAProxy's Configuration Directory

So far we have pushed the HAProxy configuration to a remote repository. The missing bits are:

  • clone the remote Git repository into the HAProxy's configuration directory ("/etc/haproxy")
  • modify the HAProxy Systemd service unit so that it pulls the changes from the remote origin every time the service is started

Since the remote repository is private, cloning requires authentication: following the best practices for service users, we will clone using SSH as the transport with public key authentication.

Let's start by creating the SSH key pair - first create the directory to store it in as follows:

sudo mkdir -m 0700 /var/lib/haproxy/.ssh
sudo chown haproxy: /var/lib/haproxy/.ssh

then generate the SSH key pair:

sudo -u haproxy ssh-keygen -f /var/lib/haproxy/.ssh/id_rsa -N ''

we must now authorize the public key we just created to access the Git repository: this of course depends heavily on the software you are using as the remote Git repository - in this post I'm using Gitea.

From the Gitea's Web UI:

  • grant read-only access on the Git repository containing the HAProxy configuration to the user you use for this automation - in this example, "haproxy-p1-0" is the repository, belonging to the "infrastructure" organization, whereas the user to grant access to is called "automations".
  • add the contents of the public key file ("/var/lib/haproxy/.ssh/id_rsa.pub") to the "automations" user's authorized keys keyring.
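Before going on, it is worth verifying that the key is actually authorized: Gitea should reply to a plain SSH connection with a greeting confirming the authentication succeeded - a quick test using the key we just generated:

sudo ssh -i /var/lib/haproxy/.ssh/id_rsa -T git@git0.p1.carcano.corp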

Since the current contents of the HAProxy configuration directory will be replaced and refreshed each time from the Git repository, we must now remove them as follows:

sudo rm -rf /etc/haproxy/*

Once done, we can proceed to the cloning - just type the following two statements:

export GIT_SSH_COMMAND='ssh -i /var/lib/haproxy/.ssh/id_rsa'
sudo --preserve-env=GIT_SSH_COMMAND git clone git@git0.p1.carcano.corp:infrastructure/haproxy-p1-0.git /etc/haproxy

As you can see, before running the actual git clone statement, we first tell Git to use public key authentication with the "/var/lib/haproxy/.ssh/id_rsa" key: this is achieved by exporting the "GIT_SSH_COMMAND" variable with the SSH command to run when activating the SSH transport, and preserving it across sudo (cloning into "/etc/haproxy" requires root privileges).

If everything is properly set, the "/etc/haproxy" directory now contains the contents of the cloned Git repository.

Configure HAProxy For Retrieving The Configuration From Git

We still need to modify the "haproxy" service so that it pulls the changes from the remote origin every time the service is started or reloaded: since we are replacing a unit file provided by an RPM package, our copy must go beneath the "/etc/systemd/system" directory tree, which takes precedence over "/usr/lib/systemd/system".

Copy the current Systemd service unit as follows:

sudo cp /usr/lib/systemd/system/haproxy.service /etc/systemd/system

first we need to create a helper script that pulls the new changes before the service starts and before each reload - create the "/usr/local/bin/haproxy-cfg-checkout.sh" script with the following contents:

#!/bin/bash
set -e
set -o pipefail
# use the haproxy user's key for the SSH transport
export GIT_SSH_COMMAND='ssh -i /var/lib/haproxy/.ssh/id_rsa'
# fetch, then check out the branch set by the BRANCH variable (default "master") and pull it
/usr/bin/git --git-dir=/etc/haproxy/.git --work-tree=/etc/haproxy fetch origin
/usr/bin/git --git-dir=/etc/haproxy/.git --work-tree=/etc/haproxy checkout ${BRANCH:-master}
/usr/bin/git --git-dir=/etc/haproxy/.git --work-tree=/etc/haproxy pull
# make sure the expected configuration subdirectories exist
[ -d /etc/haproxy/listeners ] || mkdir /etc/haproxy/listeners
[ -d /etc/haproxy/proxies ] || mkdir /etc/haproxy/proxies
[ -d /etc/haproxy/maps ] || mkdir /etc/haproxy/maps
[ -d /etc/haproxy/stats ] || mkdir /etc/haproxy/stats
[ -d /etc/haproxy/backends ] || mkdir /etc/haproxy/backends
[ -d /etc/haproxy/certs ] || mkdir /etc/haproxy/certs

This script fetches the new contents from the origin repo, checks out the branch specified by the "BRANCH" variable (defaulting to "master"), pulls the latest changes, and makes sure the expected configuration subdirectories exist.

We must of course set the script as executable:

sudo chmod 755 /usr/local/bin/haproxy-cfg-checkout.sh

Now we can modify the "/etc/systemd/system/haproxy.service" contents until it looks as follows:

[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" "LISTENERS=/etc/haproxy/listeners" "PROXIES=/etc/haproxy/proxies" "BACKENDS=/etc/haproxy/backends" "STATS=/etc/haproxy/stats"
ExecStartPre=/usr/local/bin/haproxy-cfg-checkout.sh
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -f $LISTENERS -f $PROXIES -f $BACKENDS -f $STATS -c -q $OPTIONS
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -f $LISTENERS -f $PROXIES -f $BACKENDS -f $STATS -p $PIDFILE $OPTIONS
ExecReload=/usr/local/bin/haproxy-cfg-checkout.sh
ExecReload=/usr/sbin/haproxy -f $CONFIG -f $LISTENERS -f $PROXIES -f $BACKENDS -f $STATS -c -q $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
SuccessExitStatus=143
Type=notify

[Install]
WantedBy=multi-user.target

As you see, we not only added an "ExecStartPre" directive running the "/usr/local/bin/haproxy-cfg-checkout.sh" script before the service starts, but we also added the same script as an "ExecReload" directive, so that it runs before the service reloads as well: this is the helper script performing the actual checkout.

Reload Systemd to make it aware of the changes:

sudo systemctl daemon-reload

we can now enable the "haproxy" service at boot and immediately start it as follows:

sudo systemctl enable --now haproxy

since it is better safe than sorry, let's check the service status:

sudo systemctl status haproxy
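As an additional smoke test, you can validate the whole assembled configuration without impacting the running service, and then simulate a failure to watch the floating IP address migrate - the following commands are just an example of such a test:

sudo /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/listeners -f /etc/haproxy/proxies -f /etc/haproxy/backends -f /etc/haproxy/stats -c
# on haproxy-ca-up1a001: stop the service - the floating IP must migrate within a few seconds
sudo systemctl stop haproxy
# on haproxy-ca-up1a002: the floating IP should now be listed
ip -4 addr show dev ens192
# on haproxy-ca-up1a001: start the service again - the floating IP migrates back
sudo systemctl start haproxy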

Operating

Operating changes is now much safer and easier: from any computer, you can clone the repository and modify the configuration in the "devel" branch at will.

When you are ready to test it, just commit and push to the remote Gitea repository then, on the passive HAProxy node, set "BRANCH=devel" in the "/etc/sysconfig/haproxy" file and reload the haproxy service.

You can now perform every test you want, taking all the time you need, since real traffic does not reach the passive node. Once done, remove the "BRANCH=devel" entry from "/etc/sysconfig/haproxy" and raise a Merge Request for merging the "devel" branch into the "master" branch - the snippet right after this paragraph summarizes the workflow on the passive node.
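For example, the whole testing workflow on the passive node boils down to the following commands (assuming the "/etc/sysconfig/haproxy" file does not already contain a "BRANCH" entry):

# switch the passive node to the devel branch and reload the configuration
echo 'BRANCH=devel' | sudo tee -a /etc/sysconfig/haproxy
sudo systemctl reload haproxy
# perform the tests, then go back to the master branch
sudo sed -i '/^BRANCH=devel$/d' /etc/sysconfig/haproxy
sudo systemctl reload haproxy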

Once the Merge Request is approved and completed, just reload the haproxy service on both HAProxy nodes, one at a time.

Footnotes

Here ends our tutorial on how to deploy a highly available HAProxy with a version controlled configuration using Git: I hope you can use it as a starting point for deploying your own setups.

If you liked this post, you may be interested in the HAProxy Tutorial - A Clean And Tidy Configuration Structure post: it is an insight providing guidelines on how to structure the HAProxy configuration in an effective way.

If you appreciate this effort and you like this post and the other ones, please share them on LinkedIn - sharing and comments are an inexpensive way to encourage me to keep writing - this blog makes sense only if it gets visited.

I hate blogs with pop-ups, ads and all the (even worse) other stuff that distracts from the topics you're reading and violates your privacy. I want to offer my readers the best experience possible for free, ... but please be aware that for me it's not really free: on top of the raw costs of running the blog, I usually spend on average 50-60 hours writing each post. I offer all this for free because I think it's nice to help people, but if you think something in this blog has helped you professionally and you want to give concrete support, your contribution is very much appreciated.
