In an Infrastructure as Code (IaC) scenario, rather than provisioning a VM and installing a dedicated networking appliance, it is best to deploy something without a web UI that instead offers a good configuration API, or that sources its settings from files easily managed by automated configuration tools.

In such a scenario it is more convenient to just use one (or more) Linux VMs with a very basic installation and have them manage the infrastructural networking: these VMs can not only handle routing using protocols such as RIP, OSPF and even BGP, but also enforce security policies by dropping unauthorised traffic.

In this post we see Free Range Routing (FRR) and OpenVSwitch on Oracle Linux in action, setting up a Lab with two virtual machines that provide routing and share their routing tables using OSPF: we achieve this by installing Free Range Routing (FRR), a free and open source Internet routing protocol suite for Linux. The advanced setup shown in this Lab also makes use of OpenVSwitch, stacking FRR on top of it.

This dual-layer setup enables us to exploit the Software Defined Networking (SDN) features provided by OpenVSwitch, enhancing them with dynamic routing support, while also providing a compatibility layer with legacy bare metal devices such as "traditional" hardware routers.

Provision The Lab

The following table summarizes the networks we are about to set up:

| Name | Subnet CIDR | Domain | Description |
| --- | --- | --- | --- |
| Management Network | depends on your setup | mgmt-t1.carcano.local | We use the default network the VMs get attached to as a fictional management network - its actual configuration depends on the hypervisor you are using. |
| Core Network Testing Security Tier 1 | N.A. | N.A. | A trunked network used to transport the Testing VLANs: Vagrant sets it up as "192.168.253.0/24", but the VMs use it only as a network segment to transport VLANs. |
| Core Network Production Security Tier 1 | N.A. | N.A. | A trunked network used to transport the Production VLANs: Vagrant sets it up as "192.168.254.0/24", but the VMs use it only as a network segment to transport VLANs. |
| Infrastructural Network Security Tier 1 | 172.16.0.0/24 | netdevs-t1.carcano.local | The network used for interconnecting networking equipment - it is used purely for infrastructural purposes. |
| Application Servers Network Testing Security Tier 1 | 192.168.0.0/24 | as-t1.carcano.local | The network used for attaching the Application Server VMs of the Testing environment. |
| Database Network Testing Security Tier 1 | 192.168.6.0/24 | db-t1.carcano.local | The network used for attaching the Database Server VMs of the Testing environment. |
| Shared Services Network Security Tier 1 | 192.168.30.0/24 | shared-p1.carcano.local | The network used for attaching the Shared Services VMs of the Production environment, such as IdMs, Corporate Directory Servers and so on. |
| NAS Servers Network Security Tier 1 | 192.168.36.0/24 | nas-p1.carcano.local | The network used for attaching the NAS Server VMs of the Production environment. |

In real life, having a dedicated management network provides several security and availability benefits: it gives you a trusted network through which you can always reach your hosts, either physical or virtual, enabling you to operate using SSH and datacenter automation tools, PXE boot them, or run backups, with dedicated networking policies (for example traffic shaping), security policies and even dedicated firewalls. Mind that it is necessary to have a dedicated management network for each Security Tier - and if security is a concern, don't forget to have a couple of jump hosts for each management network. In my personal experience, using dedicated management networks is absolutely a best practice.

This table summarizes the VMs homing on the above networks:

| Hostname | Services Subnet/Domain(s) | Management Subnet/Domain | Description |
| --- | --- | --- | --- |
| gw-ca-ut1a001 | netdevs-t1.carcano.local, as-t1.carcano.local, db-t1.carcano.local | mgmt-t1.carcano.local | The Test Security Tier 1 environment's gateway: it provides routing and enforces network policies set through OpenFlow. |
| gw-ca-up1a001 | netdevs-p1.carcano.local, shared-p1.carcano.local, nas-p1.carcano.local | mgmt-t1.carcano.local | The Production Security Tier 1 environment's gateway: it provides routing and enforces network policies set through OpenFlow. |

When dealing with multi-homed host scenarios, the best practice is to register every FQDN in the DNS - for example "gw-ca-ut1a001.as-t1.carcano.local", "gw-ca-ut1a001.db-t1.carcano.local", "gw-ca-ut1a001.netdevs-t1.carcano.local" and "gw-ca-ut1a001.mgmt-t1.carcano.local".
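
For instance - a purely illustrative sketch, assuming BIND-style zone files and reusing the addresses of this Lab (the management IP depends on your hypervisor) - the records for the Test gateway would look like:

; one A record per network leg of the multi-homed host
gw-ca-ut1a001.as-t1.carcano.local.      IN A 192.168.0.254
gw-ca-ut1a001.db-t1.carcano.local.      IN A 192.168.6.254
gw-ca-ut1a001.netdevs-t1.carcano.local. IN A 172.16.0.11
gw-ca-ut1a001.mgmt-t1.carcano.local.    IN A 10.211.55.151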

Deploy The VM Using Vagrant

In order to ease the initial deployment of the VMs of this Lab, we use Vagrant: if it is not already installed on your computer, you must install it along with a Hypervisor supported by a Vagrant provider.

The most popular ones are Oracle's VirtualBox and Parallels Desktop - if you need guidelines on this, you may refer to my previous post "Vagrant - Installing And Operating".

This tutorial is based on Oracle Linux 9, so first we need to download the Oracle 9 official Vagrant box:

vagrant box add oraclelinux/9 --provider virtualbox \
https://oracle.github.io/vagrant-projects/boxes/oraclelinux/9.json

of course, after the "--provider" option you must specify the name of the provider matching the hypervisor you are actually using.
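
You can then verify the box has been registered by running:

vagrant box list

the output should contain an "oraclelinux/9" entry along with the provider name.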

For the sake of completeness, the list of URLs to download Oracle's OracleLinux official Vagrant boxes is available here.

Unfortunately, pre-packaged Vagrant boxes do not always exist for every distribution/hardware-architecture/Vagrant-provider combination: for example, when dealing with Oracle Linux, if you are on an ARM Mac you are out of luck - you have to create the Vagrant box by yourself. In this case you may find useful the post "Vagrant - Installing And Operating", which among other things also explains how to create a Vagrant box from scratch.

Next we need to create the Vagrantfile, containing the statements that describe our Lab and that we will use to manage the deployment of the above VMs. You can of course alter the CPU/RAM settings, but mind they are already set quite low.

This post has been designed to leverage Vagrant with "personal" hypervisors (it supports both VirtualBox and Parallels Desktop): mind that both Vagrant and this kind of hypervisor are targeted at personal use, so they poorly support advanced networking settings such as VLANs and in general manage networking in a very basic way. More specifically, they don't provide an advanced way to first create hosted networks, assigning a name and settings to them: all you can do is specify the IP address of each VM's NIC. For this reason we need to use an "odd" setup with a disposable initial IP configuration, used only to make Vagrant understand the topology of the hosted networks we need and to which hosted network to attach the VMs' NICs: at the end of the provisioning process, Vagrant takes care of getting rid of that networking configuration.

On your computer, create the "grimoire-lab" directory we will use for our Vagrant based Lab, and then change directory into it:

mkdir -m 755 ~/grimoire-lab
cd ~/grimoire-lab

then create a file called "Vagrantfile" with the following contents:

# -*- mode: ruby -*-
# vi: set ft=ruby :

$wipe_network_interface = <<-'SCRIPT'
# resolve the device name holding the disposable IP passed as first argument
DEV=$(ip -4 addr | grep -v "altname" | grep ${1} -B 1 | head -n 1 | awk '{print $2}' | tr -d :)
[[ -z "${DEV}" ]] && exit 0
# resolve the NetworkManager connection bound to that device, skipping OVS bridges
CONN_NAME=$(nmcli -t -f NAME,DEVICE con | grep "${DEV}" | grep -v "br-" | cut -d : -f 1)
# wipe the disposable IP configuration and keep the connection down at boot
nmcli conn modify "${CONN_NAME}" ipv4.method auto
nmcli conn modify "${CONN_NAME}" ipv4.address ''
nmcli conn modify "${CONN_NAME}" ipv4.method disabled
nmcli conn modify "${CONN_NAME}" autoconnect no
SCRIPT

$reboot = <<-'SCRIPT'
sudo shutdown -r now
SCRIPT

VAGRANTFILE_API_VERSION = "2"
Vagrant.require_version ">= 1.5"

host_vms=[
  {
    :hostname => "gw-ca-ut1a001",
    :domain => "netdevs.carcano.local",
    :infra_net_ip => "172.16.0.11",
    :core_net_temporary_ip => "192.168.253.253",
    :box => "oraclelinux/9",
    :ram => 2048,
    :cpu => 2,
    :service_class => "netdev"
  },
  {
    :hostname => "gw-ca-up1a001",
    :domain => "netdevs.carcano.local",
    :infra_net_ip => "172.16.0.12",
    :core_net_temporary_ip => "192.168.254.254",
    :box => "oraclelinux/9",
    :ram => 2048,
    :cpu => 2,
    :service_class => "netdev"
  },
]

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  host_vms.each do |machine|
    config.vm.define machine[:hostname] do |node |
      node.vm.box = machine[:box]
      node.vm.hostname = "#{machine[:hostname]}.#{machine[:domain]}"
      node.vm.network "private_network", ip:  machine[:core_net_temporary_ip]
      node.vm.network "private_network", ip: machine[:infra_net_ip]

      node.vm.provider :virtualbox do |vm|
        vm.name = "grimoire_#{machine[:hostname]}"
        vm.cpus = machine[:cpu]
        vm.customize [ "modifyvm", :id, "--memory", machine[:ram], "--nicpromisc2", "allow-vms" ]
      end

      node.vm.provider :parallels do |vm|
        vm.name = "grimoire_#{machine[:hostname]}"
        vm.memory =  machine[:ram]
        vm.cpus = machine[:cpu]
        vm.update_guest_tools = false
        vm.optimize_power_consumption = false
      end
      node.vm.provision :shell, :args => machine[:core_net_temporary_ip], inline: $wipe_network_interface, run: "always"
      node.vm.provision :shell, inline: $reboot, run: "always"
    end
  end
end


finally provision the VMs by simply running:

vagrant up
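
once provisioning completes, you can verify both VMs are up by running:

vagrant status

both "gw-ca-ut1a001" and "gw-ca-up1a001" should be reported in the "running" state.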

Update Everything

As best practices suggest, it is always best to provision systems that are as up to date as possible.

SSH connect to the gw-ca-ut1a001 VM as follows:

vagrant ssh gw-ca-ut1a001

then switch to the "root" user:

sudo su -

update the system using DNF:

dnf -y update

reboot the VM:

shutdown -r now

SSH connect to the gw-ca-up1a001 host and repeat all the above steps.

Install The OpenVSwitch (OVS) Kernel Module

Since the Vagrant box provided by Oracle is missing the OpenVSwitch kernel module, we must install it - SSH connect to the gw-ca-ut1a001 VM as follows:

vagrant ssh gw-ca-ut1a001

then switch to the "root" user again:

sudo su -

the OpenVSwitch kernel module is provided by two different RPM packages, depending on whether you are using the Unbreakable Enterprise Kernel (UEK) or the Red Hat Compatible Kernel (RHCK), so first we have to check the flavor of the currently running kernel:

uname -r  |grep --color 'el[a-z0-9_]*'

if the output, like the following one, contains the "uek" string, then it is an Unbreakable Enterprise Kernel (UEK):

5.15.0-101.103.2.1.el9uek.x86_64

in this case, install the "kernel-uek-modules" RPM package as follows:

dnf install -y kernel-uek-modules

otherwise, if the output, like the following one, does not contain the "uek" string, then it is a Red Hat Compatible Kernel (RHCK):

5.14.0-284.30.1.el9_2.x86_64

in this case, install the "kernel-modules" RPM package as follows:

dnf install -y kernel-modules

SSH connect to the gw-ca-up1a001 host and repeat all the above steps.
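
If you prefer to script this check-and-install step on both hosts - for example to later feed it to an automation tool - a minimal sketch of the same logic could be:

# install the OVS kernel module package matching the running kernel flavor
if uname -r | grep -q uek; then
    dnf install -y kernel-uek-modules   # Unbreakable Enterprise Kernel (UEK)
else
    dnf install -y kernel-modules       # Red Hat Compatible Kernel (RHCK)
fi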

Install The Software

OpenVSwitch is not shipped with Oracle Linux, but since Oracle Linux is binary compatible with Red Hat Enterprise Linux - and the same holds for CentOS Linux - we can download the pre-built RPM packages freely shipped by CentOS.

Mind you can browse the available packages from cbs.centos.org.

To be tidy, we create a directory tree into which to download the RPM packages we are about to install:

mkdir -m 755 /opt/rpms /opt/rpms/3rdpart
cd /opt/rpms/3rdpart

let's start by downloading the OpenVSwitch RPM packages of our desired version and build:

URL=https://cbs.centos.org/kojifiles/packages
OVSPACKAGE=openvswitch3.1
OVSVERSION=3.1.0
OVSBUILD=65.el9s
ARCH=$(uname -i)
wget ${URL}/${OVSPACKAGE}/${OVSVERSION}/${OVSBUILD}/${ARCH}/${OVSPACKAGE}-${OVSVERSION}-${OVSBUILD}.${ARCH}.rpm
wget ${URL}/${OVSPACKAGE}/${OVSVERSION}/${OVSBUILD}/${ARCH}/${OVSPACKAGE}-devel-${OVSVERSION}-${OVSBUILD}.${ARCH}.rpm
wget ${URL}/${OVSPACKAGE}/${OVSVERSION}/${OVSBUILD}/${ARCH}/${OVSPACKAGE}-ipsec-${OVSVERSION}-${OVSBUILD}.${ARCH}.rpm
wget ${URL}/${OVSPACKAGE}/${OVSVERSION}/${OVSBUILD}/${ARCH}/python3-${OVSPACKAGE}-${OVSVERSION}-${OVSBUILD}.${ARCH}.rpm

Of course, we actually need only the openvswitch3.1 RPM package, but it is always best to have the others at hand to cover future needs.

the OpenVSwitch RPM package depends on the "openvswitch-selinux-extra-policy" RPM package, so let's download it as well:

SELINUX_POLICY_PACKAGE=openvswitch-selinux-extra-policy
SELINUX_POLICY_VERSION=1.0
SELINUX_POLICY_BUILD=31.el9s
wget ${URL}/${SELINUX_POLICY_PACKAGE}/${SELINUX_POLICY_VERSION}/${SELINUX_POLICY_BUILD}/noarch/${SELINUX_POLICY_PACKAGE}-${SELINUX_POLICY_VERSION}-${SELINUX_POLICY_BUILD}.noarch.rpm

we can now install the software as follows:

dnf install -y ${OVSPACKAGE}-${OVSVERSION}-${OVSBUILD}.${ARCH}.rpm  ${SELINUX_POLICY_PACKAGE}-${SELINUX_POLICY_VERSION}-${SELINUX_POLICY_BUILD}.noarch.rpm NetworkManager-ovs frr net-tools

start OpenVSwitch and enable it at boot:

systemctl enable --now openvswitch
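
before going on, it is worth checking that the daemon is up and responding - at this stage it should print just an empty configuration with the database UUID:

ovs-vsctl show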

we also need to restart NetworkManager, so that it loads the OpenVSwitch (OVS) plugin we just installed:

systemctl restart NetworkManager

SSH connect to the gw-ca-up1a001 host and repeat all the above installation steps.

Configure The gw-ca-ut1a001 Host

SSH connect again to "gw-ca-ut1a001" using Vagrant:

vagrant ssh gw-ca-ut1a001

then, since we are configuring a fresh system and all the following statements require administrative privileges, switch to the root user:

sudo su -

Initial Networking Setup

Since we are about to run several statements, it is convenient to refer to resources using variables, so let's define SERVICES_NETWORKS_NIC: this is the trunked NIC, used to transport all the VLANs used by the other VMs you may create to provide end user services, such as HTTP, mail services, LDAP and so on.

In my setup, it is as follows:

SERVICES_NETWORKS_NIC=eth1

If you are using the Predictable Network Interface Device naming scheme, interface names will start with "enp" - for example, SERVICES_NETWORKS_NIC may have to be set to "enp0s6".
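
If you are unsure which interface is the trunked one, listing the devices known to NetworkManager can help - which name maps to which network depends on your hypervisor's NIC ordering:

nmcli -t -f DEVICE,TYPE,STATE device status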

Setup The OpenVSwitch Bridge

Since the SERVICES_NETWORKS_NIC is attached to a network trunk transporting VLANs, the best practice is to create a bridge, attach it to the SERVICES_NETWORKS_NIC, and then add to the bridge a dedicated port for each VLAN: the major advantage of this design pattern is that it enables enforcing traffic policies using OpenFlow rules.

So the very first networking object we set up is the "br-test" OVS bridge - we operate using the "nmcli" command line utility, so as to have it fully integrated with the Linux distribution's default network initialisation process:

nmcli conn add type ovs-bridge conn.interface br-test con-name br-test

Binding The OVS Bridge To a NIC

In order to get the services' tagged VLAN network traffic, we must bind the "br-test" OVS bridge to the SERVICES_NETWORKS_NIC.

First we create, on the "br-test" OVS bridge, the trunk port with the same name as the SERVICES_NETWORKS_NIC - a trunk port is a port carrying several VLANs:

nmcli conn add type ovs-port conn.interface ${SERVICES_NETWORKS_NIC} \
master br-test con-name br-test-trunk

then we create the "br-test-trunk-e1" connection, used to bind the trunk port of the "br-test" OVS bridge to the actual SERVICES_NETWORKS_NIC NIC:

nmcli conn add type ethernet conn.interface ${SERVICES_NETWORKS_NIC} \
master br-test-trunk con-name br-test-trunk-e1

let's have a look at the OVS setup we did so far by typing:

ovs-vsctl show

the outcome should be as follows:

a95bd233-2284-4c7c-b65f-af6ae213b55f
    Bridge br-test
        Port eth1
            Interface eth1
                type: system
    ovs_version: "3.1.4"

Adding Tagged VLAN Interfaces

We are finally ready to add to the "br-test" OVS bridge the interfaces using tagged VLANs: we start by creating an interface dedicated to application servers ("as") of the "test" environment, Security Tier 1. 

First we create the new connection "br-test-vlan100" for the "br-test" port called "as-test-tier1" of type OVS port, set to use VLAN tag 100:

nmcli conn add type ovs-port conn.interface as-test-tier1 \
master br-test ovs-port.tag 100 con-name br-test-vlan100

then we create the "br-test-vlan100-e0" connection for the "as-test-tier1" OVS interface connected to the  "as-test-tier1" port of the "br-test" OVS bridge, and assign the "192.168.0.254/24" IP/subnet mask to it:

nmcli conn add type ovs-interface slave-type ovs-port conn.interface as-t1-e0 \
master br-test-vlan100 con-name br-test-vlan100-e0 \
ipv4.method static ipv4.address 192.168.0.254/24

let's check the current bridge settings:

ovs-vsctl show

the outcome should be as follows:

a95bd233-2284-4c7c-b65f-af6ae213b55f
    Bridge br-test
        Port eth1
            Interface eth1
                type: system
        Port as-test-tier1
            tag: 100
            Interface as-t1-e0
                type: internal
    ovs_version: "3.1.4"

so, the port "as-test-tier1" gets VLAN 100's traffic, and the "as-t1-e0" interface is attached to it.

If we look at the current IP configurations:

ip -c -4 addr

we will see the "as-t1-e0" interface (from the OVS bridge) with IP address "192.168.0.254":

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 10.211.55.151/24 brd 10.211.55.255 scope global dynamic noprefixroute eth0
       valid_lft 1516sec preferred_lft 1516sec
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 172.16.0.11/24 brd 172.16.0.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
6: as-t1-e0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 192.168.0.254/24 brd 192.168.0.255 scope global noprefixroute as-t1-e0
       valid_lft forever preferred_lft forever

we can go on, and configure the "db-test-tier1" OVS interface, set it up to fetch traffic from VLAN 101, and assign IP/mask "192.168.6.254/24":

nmcli conn add type ovs-port conn.interface db-test-tier1 \
master br-test ovs-port.tag 101 con-name br-test-vlan101
nmcli conn add type ovs-interface slave-type ovs-port conn.interface db-t1-e0 \
master br-test-vlan101 con-name br-test-vlan101-e0 \
ipv4.method static ipv4.address 192.168.6.254/24

Enable IP Routing

Unlike a router, a Linux host is not supposed to route traffic across its network interfaces by default, so we must explicitly enable this behaviour.

The routing feature in Linux is called "IP forwarding", so let's enable it as follows:

sysctl net.ipv4.ip_forward=1

and set it to be enabled at system boot:

echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-frr.conf 

Disable Firewalld

Since this is a router, and we will enforce network policies using OpenFlow, we disable Firewalld:

systemctl stop firewalld
systemctl disable firewalld
systemctl mask firewalld

Configuring Routing Protocols

Enabling routing just means forwarding packets across network interfaces, but we are still missing a very important feature of routers: the routing protocols. In this post we enable the OSPF routing protocol by using the FRR software package.

Since Free Range Routing is a suite of routing protocols, you must explicitly enable the components related to the protocol you want to use.

Enable Zebra

Unlike in the past, recent FRR versions automatically load Zebra, so there's no need to explicitly enable it.

If you are running an old FRR release, you can enable Zebra as follows:

sed -i 's/^[ ]*zebra[ ]*=.*/zebra=yes/' /etc/frr/daemons

Enable OSPF

We need to enable the OSPF routing protocol daemon in FRR - just set the "ospfd" directive to "yes" in the "/etc/frr/daemons" file as follows:

sed -i 's/^[ ]*ospfd[ ]*=.*/ospfd=yes/' /etc/frr/daemons

Configure FRR

We must now configure OSPF - in this basic setup we only set:

  • the router's ID - we use the IP of the NIC on the infrastructural network (172.16.0.11)
  • the advertised networks:
    • the router's infrastructural network (172.16.0.0/24)
    • the application servers test security tier 1 network (192.168.0.0/24)
    • the database servers test security tier 1 network (192.168.6.0/24)

Setup the "/etc/frr/frr.conf " file so to look like as follow:

!
frr version 8.3.1
frr defaults traditional
hostname gw-ca-ut1a001.netdevs.carcano.local
no ipv6 forwarding
!
router ospf
 ospf router-id 172.16.0.11
 network 172.16.0.0/24 area 0
 network 192.168.0.0/24 area 0
 network 192.168.6.0/24 area 0
exit
!

Start The FRR Service

Start the FRR service and enable it at boot:

 systemctl enable --now frr
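
as a quick sanity check, verify the service is active and the OSPF daemon is actually running (the exact pgrep output depends on your FRR build):

systemctl is-active frr
pgrep -a ospfd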

Configure The gw-ca-up1a001 Host

SSH connect again to "gw-ca-up1a001" using Vagrant:

vagrant ssh gw-ca-up1a001

then switch to the "root" user:

sudo su -

Initial Networking Setup

Since we are about to run several statements, it is convenient to refer to resources using variables, so let's define SERVICES_NETWORKS_NIC: this is the trunked NIC, used to transport all the VLANs used by the other VMs you may create to provide end user services, such as HTTP, mail services, LDAP and so on.

In my setup, it is as follows:

SERVICES_NETWORKS_NIC=eth1

If you are using the Predictable Network Interface Device naming scheme, interface names will start with "enp" - for example, if using Parallels, SERVICES_NETWORKS_NIC must be set to "enp0s6".

Setup The OpenVSwitch Bridge

Since the SERVICES_NETWORKS_NIC is attached to a network trunk transporting VLANs, the best practice is to create a bridge, attach it to the SERVICES_NETWORKS_NIC, and then add to the bridge a dedicated port for each VLAN: the major advantage of this design pattern is that it enables enforcing traffic policies using OpenFlow rules.

So the very first networking object we set up is the "br-prod" OVS bridge - we operate using the "nmcli" command line utility, so as to have it fully integrated with the Linux distribution's default network initialisation process:

nmcli conn add type ovs-bridge conn.interface br-prod con-name br-prod

Binding The OVS Bridge To a NIC

In order to get the services' tagged VLAN network traffic, we must bind the "br-prod" OVS bridge to the SERVICES_NETWORKS_NIC.

First we create, on the "br-prod" OVS bridge, the trunk port with the same name as the SERVICES_NETWORKS_NIC - a trunk port is a port carrying several VLANs:

nmcli conn add type ovs-port conn.interface ${SERVICES_NETWORKS_NIC} \
master br-prod con-name br-prod-trunk

then we create the "br-prod-trunk-e1" connection, used to bind the trunk port of the "br-prod" OVS bridge to the actual SERVICES_NETWORKS_NIC NIC:

nmcli conn add type ethernet conn.interface ${SERVICES_NETWORKS_NIC} \
master br-prod-trunk con-name br-prod-trunk-e1

Adding Tagged VLAN Interfaces

We are finally ready to add to the "br-prod" OVS bridge the interfaces using tagged VLANs: we start by creating an interface dedicated to the shared services servers ("shared"), Security Tier 1. 

First we add the new connection "br-prod-vlan130" for the "br-prod" port called "shared-tier1" of type OVS port, set to use VLAN tag 130:

nmcli conn add type ovs-port conn.interface shared-tier1 \
master br-prod ovs-port.tag 130 con-name br-prod-vlan130

then we create the "br-prod-vlan130-e0" connection for the "shared-tier1" OVS interface to connect to the "shared-tier1" interface of the "br-prod" OVS bridge, and assign the "192.168.30.254/24" IP/subnet mask to it:

nmcli conn add type ovs-interface slave-type ovs-port conn.interface shared-t1-e0 \
master br-prod-vlan130 con-name br-prod-vlan130-e0 \
ipv4.method static ipv4.address 192.168.30.254/24

we can go on, and configure the "nas-tier1" OVS interface, set it up to fetch traffic from VLAN 131, and assign IP/mask "192.168.36.254/24"

nmcli conn add type ovs-port conn.interface nas-tier1 \
master br-prod ovs-port.tag 131 con-name br-prod-vlan131
nmcli conn add type ovs-interface slave-type ovs-port conn.interface nas-t1-e0 \
master br-prod-vlan131 con-name br-prod-vlan131-e0 \
ipv4.method static ipv4.address 192.168.36.254/24

let's check the current bridge settings:

ovs-vsctl show

the outcome should be as follows:

fefff595-d586-42eb-a149-eb5d3a7f9a65
    Bridge br-prod
        Port shared-tier1
            tag: 130
            Interface shared-t1-e0
                type: internal
        Port eth1
            Interface eth1
                type: system
        Port nas-tier1
            tag: 131
            Interface nas-t1-e0
                type: internal
    ovs_version: "3.1.4"

If we look at the current IP configurations:

ip -c -4 addr

we see both the "shared-t1-e0" interface (from the OVS bridge) with IP address "192.168.30.254", and the "nas-t1-e0" interface with IP address "192.168.36.254":

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 10.211.55.150/24 brd 10.211.55.255 scope global dynamic noprefixroute eth0
       valid_lft 1177sec preferred_lft 1177sec
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 172.16.0.12/24 brd 172.16.0.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
6: shared-t1-e0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 192.168.30.254/24 brd 192.168.30.255 scope global noprefixroute shared-t1-e0
       valid_lft forever preferred_lft forever
7: nas-t1-e0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 192.168.36.254/24 brd 192.168.36.255 scope global noprefixroute nas-t1-e0
       valid_lft forever preferred_lft forever

Enable IP Routing

Unlike a router, a Linux host is not supposed to route traffic across its network interfaces by default, so we must explicitly enable this behaviour.

The routing feature in Linux is called "IP forwarding", so let's enable it as follows:

sysctl net.ipv4.ip_forward=1

and set it to be enabled at system boot:

echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-frr.conf 

Disable Firewalld

Since this is a router, and we will enforce network policies using OpenFlow, we disable Firewalld:

systemctl stop firewalld
systemctl disable firewalld
systemctl mask firewalld

Configuring Routing Protocols

Enabling routing just means forwarding packets across network interfaces, but we are still missing a very important feature of routers: the routing protocols. In this post we enable the OSPF routing protocol by using the FRR software package.

Since Free Range Routing is a suite of routing protocols, you must explicitly enable the components related to the protocol you want to use.

Enable Zebra

Unlike in the past, recent FRR versions automatically load Zebra, so there's no need to explicitly enable it.

If you are running an old FRR release, you can enable Zebra as follows:

sed -i 's/^[ ]*zebra[ ]*=.*/zebra=yes/' /etc/frr/daemons

Enable OSPF

We need to enable the OSPF routing protocol daemon in FRR - just set the "ospfd" directive to "yes" in the "/etc/frr/daemons" file as follows:

sed -i 's/^[ ]*ospfd[ ]*=.*/ospfd=yes/' /etc/frr/daemons

Configure FRR

We must now configure OSPF - in this basic setup we only set:

  • the router's ID - we use the IP of the NIC on the infrastructural network (172.16.0.12)
  • the advertised networks:
    • the router's infrastructural network (172.16.0.0/24)
    • the shared services security tier 1 network (192.168.30.0/24)
    • the nas security tier 1 network (192.168.36.0/24)

Setup the "/etc/frr/frr.conf " file so to look like as follow:

!
frr version 8.3.1
frr defaults traditional
hostname gw-ca-up1a001.netdevs.carcano.local
no ipv6 forwarding
!
router ospf
 ospf router-id 172.16.0.12
 network 172.16.0.0/24 area 0
 network 192.168.30.0/24 area 0
 network 192.168.36.0/24 area 0
exit
!

Start The FRR Service

Start the FRR service and enable it at boot:

 systemctl enable --now frr

A First Go On FRR

The time has finally come to play a little bit with the FRR command line.

The FRR console is provided by the "vtysh" command line tool - launch it as follows:

vtysh

The vtysh command line syntax is very close to Cisco's Internetworking Operating System (IOS).

For example, try running the same statement you would run on a Cisco device to show the running configuration:

show run

the outcome should be as follows:

Hello, this is FRRouting (version 8.3.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

gw-ca-up1a001.netdevs.carcano.local# show run
Building configuration...

Current configuration:
!
frr version 8.3.1
frr defaults traditional
hostname gw-ca-up1a001.netdevs.carcano.local
no ipv6 forwarding
!
router ospf
 ospf router-id 172.16.0.12
 network 172.16.0.0/24 area 0
 network 192.168.30.0/24 area 0
 network 192.168.36.0/24 area 0
exit
!
end

let's now have a look at the interfaces' status:

show int

the outcome is:

Interface eth0 is up, line protocol is up
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 2 metric 0 mtu 1500 speed 4294967295 
  flags: <UP,BROADCAST,RUNNING,MULTICAST>
  Type: Ethernet
  HWaddr: 08:00:27:82:4b:f9
  inet 10.0.2.15/24
  inet6 fe80::ed57:f769:b01e:8b78/64
  Interface Type Other
  Interface Slave Type None
  protodown: off 
Interface eth1 is up, line protocol is up
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 3 metric 0 mtu 1500 speed 4294967295 
  flags: <UP,BROADCAST,RUNNING,MULTICAST>
  Type: Ethernet
  HWaddr: 08:00:27:39:4a:30
  Interface Type Other
  Interface Slave Type Other
  protodown: off 
Interface eth2 is up, line protocol is up
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 4 metric 0 mtu 1500 speed 4294967295 
  flags: <UP,BROADCAST,RUNNING,MULTICAST>
  Type: Ethernet
  HWaddr: 08:00:27:f3:1c:71
  inet 172.16.0.12/24
  inet6 fe80::cd73:3b9:8c90:d7a6/64
  Interface Type Other
  Interface Slave Type None
  protodown: off 
Interface lo is up, line protocol is up
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 1 metric 0 mtu 65536 speed 0 
  flags: <UP,LOOPBACK,RUNNING>
  Type: Loopback
  Interface Type Other
  Interface Slave Type None
  protodown: off 
Interface nas-t1-e0 is up, line protocol is up
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 8 metric 0 mtu 1500 speed 0 
  flags: <UP,BROADCAST,RUNNING,MULTICAST>
  Type: Ethernet
  HWaddr: f6:af:a7:91:ea:c5
  inet 192.168.36.254/24
  inet6 fe80::ed88:781b:f5e9:2714/64
  Interface Type Other
  Interface Slave Type None
  protodown: off 
Interface ovs-system is down
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 5 metric 0 mtu 1500 speed 0 
  flags: <BROADCAST,MULTICAST>
  Type: Ethernet
  HWaddr: a2:c3:85:77:68:a0
  Interface Type Other
  Interface Slave Type None
  protodown: off 
Interface shared-t1-e0 is up, line protocol is up
  Link ups:       0    last: (never)
  Link downs:     0    last: (never)
  vrf: default
  index 7 metric 0 mtu 1500 speed 0 
  flags: <UP,BROADCAST,RUNNING,MULTICAST>
  Type: Ethernet
  HWaddr: ca:5b:a9:80:43:b2
  inet 192.168.30.254/24
  inet6 fe80::bed7:63a1:2822:cc5f/64
  Interface Type Other
  Interface Slave Type None
  protodown: off  

as you'd do on a Cisco, you can display the OSPF neighbors as follows:

show ip ospf neighbor

the output is:

Neighbor ID     Pri State           Up Time         Dead Time Address         Interface                        RXmtL RqstL DBsmL
172.16.0.11       1 Full/DR         3m14s             36.711s 172.16.0.11     eth2:172.16.0.12                     0     0     0

and again, as you'd do on a Cisco, you can display the IP routes as follows:

show ip route

the output is:

Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/100] via 10.0.2.2, eth0, src 10.0.2.15, 00:01:15
C>* 10.0.2.0/24 is directly connected, eth0, 00:01:15
O   172.16.0.0/24 [110/1] is directly connected, eth2, weight 1, 00:01:07
C>* 172.16.0.0/24 is directly connected, eth2, 00:01:15
O>* 192.168.0.0/24 [110/11] via 172.16.0.11, eth2, weight 1, 00:01:03
O>* 192.168.6.0/24 [110/11] via 172.16.0.11, eth2, weight 1, 00:01:03
O   192.168.30.0/24 [110/10] is directly connected, shared-t1-e0, weight 1, 00:01:15
C>* 192.168.30.0/24 is directly connected, shared-t1-e0, 00:01:15
O   192.168.36.0/24 [110/10] is directly connected, nas-t1-e0, weight 1, 00:01:15
C>* 192.168.36.0/24 is directly connected, nas-t1-e0, 00:01:15

As you can see from the lines starting with the letter "O", we have received the "192.168.0.0/24" and "192.168.6.0/24" routes from the other VM via the OSPF protocol.
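
As a further check - assuming the gw-ca-ut1a001 host has been configured as shown earlier - you can verify the learned routes actually work by pinging, from this VM, the Test gateway's VLAN 100 interface:

ping -c 3 192.168.0.254

replies prove that packets are actually forwarded across the 172.16.0.0/24 infrastructural network.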

In the same way as on a Cisco, we can display some more details by running:

show ip ospf route

the outcome is:

============ OSPF network routing table ============
N    172.16.0.0/24         [1] area: 0.0.0.0
                           directly attached to eth2
N    192.168.0.0/24        [11] area: 0.0.0.0
                           via 172.16.0.11, eth2
N    192.168.6.0/24        [11] area: 0.0.0.0
                           via 172.16.0.11, eth2
N    192.168.30.0/24       [10] area: 0.0.0.0
                           directly attached to shared-t1-e0
N    192.168.36.0/24       [10] area: 0.0.0.0
                           directly attached to nas-t1-e0

============ OSPF router routing table =============

============ OSPF external routing table ===========

Exit from the vtysh console by typing "exit":

exit
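
Mind that you don't have to enter the interactive console every time: vtysh can also run statements non-interactively, which fits nicely with the IaC spirit of this post - for example:

vtysh -c 'show ip ospf neighbor'

this prints the neighbor table straight to your shell, ready to be consumed by automation tools.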

What About OpenFlow?

As we said, OpenVSwitch supports OpenFlow - we can easily see the configured policy by running the following statement:

ovs-ofctl -O OpenFlow13 dump-flows br-prod

the output is as follows:

cookie=0x0, duration=5943.914s, table=0, n_packets=1956, n_bytes=209514, priority=0 actions=NORMAL

so, the default OpenFlow rule (as you should expect) is to let any kind of traffic pass.
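
Just to give a taste of what is possible - the following is a purely illustrative sketch, not part of this Lab's setup - a higher priority rule like this one would drop traffic flowing from the NAS network to the shared services network:

ovs-ofctl -O OpenFlow13 add-flow br-prod "priority=100,ip,nw_src=192.168.36.0/24,nw_dst=192.168.30.0/24,actions=drop"

since its priority (100) is higher than the default rule's (0), it is evaluated first; you can later remove it by passing the same match fields to "ovs-ofctl del-flows".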

Footnotes

Here ends this post dedicated to FRR on top of OpenVSwitch. Of course there is a lot more we could say if you really want to deep dive, but I think what we saw is more than enough to let you continue exploring this amazing topic by yourself.

I hope you enjoyed it, and if you liked it please share this post on LinkedIn: if I see it arouses enough interest, we can stay on this topic, spending some time on a post explaining how to set OpenFlow rules.

