OpenVSwitch (OVS) is the pillar used by several prominent software stacks, such as OpenStack or Red Hat's OpenShift, to set up their Software Defined Networks (SDN): it enables users to quickly and easily implement multiple bridges to which Virtual Machines or Containers can be connected.

These bridges can be left standalone, creating isolated networks, or connected to the machine's (or VM's) NICs, providing bidirectional access to the network segment the NIC is attached to. In addition, OVS can set up a VxLAN Tunnel EndPoint (VTEP) on these bridges, making it possible to interconnect OVS bridges running on different machines. Last but not least, it can also enforce traffic policies defined using OpenFlow.
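
As a quick refresher, and only as a hedged sketch (the bridge, port and IP names below are illustrative, not part of this Lab), this is roughly how such a bridge, an uplink NIC and a VXLAN tunnel port are created with the "ovs-vsctl" command line tool:

# create a standalone OVS bridge (illustrative names, not used in this Lab)
ovs-vsctl add-br br-example
# attach a physical NIC to give the bridge access to its network segment
ovs-vsctl add-port br-example eth1
# add a VXLAN tunnel port (VTEP) towards another machine running OVS
ovs-vsctl add-port br-example vtep0 -- set interface vtep0 type=vxlan options:remote_ip=203.0.113.10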

This SDN tutorial - OpenFlow with OpenVSwitch on Oracle Linux - starts from where we left off in the "Free Range Routing (FRR) And OpenVSwitch On OracleLinux" post, extends its Lab, and provides a practical guide on how to write and apply OpenFlow rules on OpenVSwitch.

Provision The Lab

In this post we use the following networks, which we already set up in the "Free Range Routing (FRR) And OpenVSwitch On OracleLinux" post:

Name | Subnet CIDR | Domain | Description
---- | ----------- | ------ | -----------
Application Servers Security Tier 1 | 192.168.0.0/24 | as-t1.carcano.local | The network used to home the Test Security Tier 1 environment's Application Servers
Database Servers Security Tier 1 | 192.168.6.0/24 | db-t1.carcano.local | The network used to home the Test Security Tier 1 environment's Database Servers
Management Security Tier 1 | 10.0.0.0/8 | mgmt-t1.carcano.local | The network used to operate and manage every Test Security Tier 1 environment's server

Having a dedicated management network provides several security and availability benefits: it gives you a trusted network you can always use to reach your hosts, whether physical or virtual, so you can operate them using SSH and Datacenter Automation tools, PXE boot them, or run backups, applying dedicated networking policies (for example traffic shaping) as well as dedicated security policies and even dedicated firewalls. Mind that it is necessary to have a dedicated management network for each Security Tier - if security is a concern, don't forget to have a couple of jump hosts for each management network. In my personal experience, using dedicated management networks is absolutely a best practice.
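
As a purely illustrative sketch (the zone and interface names below are assumptions, not something configured in this Lab), such dedicated security policies for a management interface could be implemented with a firewalld zone that only allows SSH:

# hypothetical example: restrict a management interface to SSH only
firewall-cmd --permanent --new-zone=mgmt
firewall-cmd --permanent --zone=mgmt --add-interface=eth2
firewall-cmd --permanent --zone=mgmt --add-service=ssh
firewall-cmd --reload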

In order to play a little bit with OpenFlow, we add the following hosts:

  • as-ca-ut1a001
  • pg-ca-ut1a001

So the set of the Lab's hosts used in this post is:

Hostname | Services Subnet/Domain(s) | Management Subnet/Domain | Description
-------- | ------------------------- | ------------------------ | -----------
gw-ca-ut1a001 | as-t1.carcano.local, db-t1.carcano.local | mgmt-t1.carcano.local | the Test Security Tier 1 environment's gateway: it provides routing and enforces the network policies set through OpenFlow
as-ca-ut1a001 | as-t1.carcano.local | mgmt-t1.carcano.local | a Test Security Tier 1 environment's Application Server - in this post we don't actually install anything on it: we use it only as a source host that needs to connect to a database machine on the Database Servers Security Tier 1 network
pg-ca-ut1a001 | db-t1.carcano.local | mgmt-t1.carcano.local | a Test Security Tier 1 environment's PostgreSQL Database Server

When dealing with dual-homed host scenarios, the best practice is to register both the services FQDN and the management FQDN in the DNS - for example "as-ca-ut1a001.as-t1.carcano.local" and "as-ca-ut1a001.mgmt-t1.carcano.local".
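
For example, assuming the services IP 192.168.0.10 used later in this Lab and a purely hypothetical management IP, the records could look like the following BIND-style snippet (shown only to illustrate the naming convention):

; services FQDN, in the as-t1.carcano.local zone
as-ca-ut1a001.as-t1.carcano.local.      IN A 192.168.0.10
; management FQDN, in the mgmt-t1.carcano.local zone (hypothetical management IP)
as-ca-ut1a001.mgmt-t1.carcano.local.    IN A 10.0.100.10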

Deploying Using Vagrant

In order to add the two VMs above, it is necessary to extend the Vagrantfile shown in the previous post as follows:

  • add the setup_services_interface provisioning SHELL script - it is used to automatically configure networking, including the VLAN
  • add the install_postgresql_client and install_postgresql_server provisioning SHELL scripts - they are used to automatically install the PostgreSQL client and the PostgreSQL Server
  • add the "as-ca-ut1a001" and "pg-ca-ut1a001" VMs to the "host_vms" list of dictionaries
  • add some conditionals to the provisioning statements, so as to customize each VM properly, using the service_class property as the matching criterion

For your convenience, this is how the whole Vagrantfile looks after these changes:

# -*- mode: ruby -*-
# vi: set ft=ruby :

$wipe_network_interface = <<-'SCRIPT'
DEV=$(ip -4 addr | grep -v "altname" | grep ${1} -B 1 | head -n 1 | awk '{print $2}' |tr -d :)
[[ -z "${DEV}" ]] && exit 0
CONN_NAME=$(nmcli -t -f NAME,DEVICE con | grep "${DEV}" | grep -v "br-" | cut -d : -f 1)
nmcli conn modify "${CONN_NAME}" ipv4.method auto
nmcli conn modify "${CONN_NAME}" ipv4.address ''
nmcli conn modify "${CONN_NAME}" ipv4.method disabled
nmcli conn modify "${CONN_NAME}" autoconnect no
SCRIPT

$setup_services_interface = <<-'SCRIPT'
DEV=$(ip -4 addr | grep -v "altname" | grep ${1} -B 1 | head -n 1 | awk '{print $2}' |tr -d :)
VLAN=$2
CONN_UUID=$(nmcli -t -f UUID,DEVICE con | grep "${DEV}" | grep -v "\.${VLAN}" | cut -d : -f 1)
nmcli conn delete "${CONN_UUID}"
CONN_UUID=$(nmcli -t -f UUID,DEVICE con | grep "${DEV}" | grep -v "\.${VLAN}" | cut -d : -f 1)
[[ -n "${CONN_UUID}" ]] && nmcli conn delete "${CONN_UUID}"
[[ $(nmcli -t -f NAME conn | grep ${DEV}-vlan.${VLAN}) ]] && exit 0
IP=$3
MASK=$4
ROUTES=$5
nmcli con add type vlan con-name ${DEV}-vlan.${VLAN} ifname ${DEV}.${VLAN} dev ${DEV} id ${VLAN} ip4 ${IP}/${MASK}
nmcli con mod "${DEV}-vlan.${VLAN}" +ipv4.routes "${ROUTES}"
nmcli con up "${DEV}-vlan.${VLAN}"
[[ "$(systemctl is-enabled firewalld)" == "enabled" ]] || systemctl enable firewalld
[[ "$(systemctl is-active firewalld)" == "active" ]] || systemctl start firewalld
firewall-cmd --permanent --new-zone=services1
firewall-cmd --permanent --zone=services1 --add-interface=${DEV}.${VLAN}
firewall-cmd --reload
SCRIPT

$install_postgresql_client = <<-'SCRIPT'
ARCH=$(uname -i)
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-${ARCH}/pgdg-redhat-repo-latest.noarch.rpm
dnf install -y net-tools postgresql16
SCRIPT

$install_postgresql_server = <<-'SCRIPT'
ARCH=$(uname -i)
dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-${ARCH}/pgdg-redhat-repo-latest.noarch.rpm
dnf install -y net-tools postgresql16 postgresql16-server
firewall-cmd --permanent --zone=services1 --add-service postgresql
firewall-cmd --reload
/usr/pgsql-16/bin/postgresql-16-setup initdb
echo "listen_addresses = '*'" >> /var/lib/pgsql/16/data/postgresql.conf
echo "host    all             all             192.168.0.0/24         scram-sha-256" >> /var/lib/pgsql/16/data/pg_hba.conf
systemctl enable --now postgresql-16
sudo -u postgres psql -c "alter user postgres with password 'grimoire'"
SCRIPT

$reboot = <<-'SCRIPT'
sudo shutdown -r now
SCRIPT

VAGRANTFILE_API_VERSION = "2"
Vagrant.require_version ">= 1.5"

host_vms=[
  {
    :hostname => "gw-ca-ut1a001",
    :domain => "netdevs.carcano.local",
    :infra_net_ip => "172.16.0.11",
    :core_net_temporary_ip => "192.168.253.253",
    :box => "grimoire/ol92",
    :ram => 2048,
    :cpu => 2,
    :service_class => "netdev"
  },
  {
    :hostname => "gw-ca-up1a001",
    :domain => "netdevs.carcano.local",
    :infra_net_ip => "172.16.0.12",
    :core_net_temporary_ip => "192.168.254.254",
    :box => "grimoire/ol92",
    :ram => 2048,
    :cpu => 2,
    :service_class => "netdev"
  },
  {
    :hostname => "as-ca-ut1a001",
    :domain => "netdevs.carcano.local",
    :core_net_temporary_ip => "192.168.253.11",
    :services_net_ip => "192.168.0.10",
    :services_net_mask => "24",
    :services_net_vlan => "100",
    :summary_route => "192.168.0.0/16 192.168.0.254",
    :box => "grimoire/ol92",
    :ram => 2048,
    :cpu => 2,
    :service_class => "ws"
  },
  {
    :hostname => "pg-ca-ut1a001",
    :domain => "netdevs.carcano.local",
    :core_net_temporary_ip => "192.168.253.12",
    :services_net_ip => "192.168.6.10",
    :services_net_mask => "24",
    :services_net_vlan => "101",
    :summary_route => "192.168.0.0/16 192.168.6.254",
    :box => "grimoire/ol92",
    :ram => 2048,
    :cpu => 2,
    :service_class => "postgresql"
  }
]

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  host_vms.each do |machine|
    config.vm.define machine[:hostname] do |node |
      node.vm.box = machine[:box]
      node.vm.hostname = "#{machine[:hostname]}.#{machine[:domain]}"
      node.vm.network "private_network", ip:  machine[:core_net_temporary_ip]
      if machine[:service_class] == "netdev"
        node.vm.network "private_network", ip: machine[:infra_net_ip]
      end

      node.vm.provider :virtualbox do |vm|
        vm.name = "grimoire_#{machine[:hostname]}"
        vm.cpus = machine[:cpu]
        vm.customize [ "modifyvm", :id, "--memory", machine[:ram] ]
        if machine[:service_class] == "netdev"
          vm.customize [ "modifyvm", :id, "--nicpromisc2", "allow-vms" ]
        end
      end

      node.vm.provider :parallels do |vm|
        vm.name = "grimoire_#{machine[:hostname]}"
        vm.memory =  machine[:ram]
        vm.cpus = machine[:cpu]
        vm.update_guest_tools = false
        vm.optimize_power_consumption = false
      end

      if machine[:service_class] == "netdev"
        node.vm.provision :shell, :args => machine[:core_net_temporary_ip], inline: $wipe_network_interface, run: "always"
        node.vm.provision :shell, inline: $reboot, run: "always"
      end
      if machine[:service_class] == "ws"
        node.vm.provision :shell, :args => [ machine[:core_net_temporary_ip], machine[:services_net_vlan], machine[:services_net_ip], machine[:services_net_mask], machine[:summary_route] ], inline: $setup_services_interface, run: "always"
        node.vm.provision :shell, inline: $install_postgresql_client
      end
      if machine[:service_class] == "postgresql"
        node.vm.provision :shell, :args => [ machine[:core_net_temporary_ip], machine[:services_net_vlan], machine[:services_net_ip], machine[:services_net_mask], machine[:summary_route] ], inline: $setup_services_interface, run: "always"
        node.vm.provision :shell, inline: $install_postgresql_server
      end

    end
  end
end

we can now provision the new VMs by simply running:

vagrant up as-ca-ut1a001 pg-ca-ut1a001 gw-ca-ut1a001

The Example Use Case

Vagrant takes care of fully provisioning both the "as-ca-ut1a001" and "pg-ca-ut1a001" VMs: besides configuring networking,

  • on "pg-ca-ut1a001", it installs and configure PostgreSQL server
  • on "as-ca-ut1a001" it installs the PostgreSQL client components

Our example use case is configuring OpenFlow traffic policies that:

  • permit ARP traffic
  • permit ICMP (ping)
  • permit connecting from the "as-ca-ut1a001" host to the PostgreSQL instance on the "pg-ca-ut1a001" host

As we said, the default OpenFlow flow on OpenVSwitch permits everything: this means that the PostgreSQL connection is already permitted - we can easily check it by connecting via SSH to "as-ca-ut1a001" using Vagrant:

vagrant ssh as-ca-ut1a001

then connect to the PostgreSQL instance running on the "pg-ca-ut1a001" server:

psql -h 192.168.6.10 -U postgres 

when asked, just enter "grimoire" as the password.

Once connected, just try listing the available databases:

\l

the list is as follows:

   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | ICU Locale | ICU Rules |   Access privileges   
-----------+----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
 postgres  | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |            |           | 
 template0 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |            |           | =c/postgres          +
           |          |          |                 |             |             |            |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |            |           | =c/postgres          +
           |          |          |                 |             |             |            |           | postgres=CTc/postgres
(3 rows)

We successfully reached the PostgreSQL instance because the default OpenFlow flow in OpenVSwitch is set to permit everything, but this is not acceptable from a security perspective in a segregated networking environment.

Software Defined Networks

Before talking specifically about OpenFlow, since it is a protocol used by Software Defined Networks (SDN), it is certainly worth spending a few words on them too.

Software Defined Networking (SDN) is an architectural paradigm that separates the logic deciding how to forward traffic (the control plane) from the underlying system that actually forwards it (the data plane). Everything is then managed through the management plane.

Separating the control plane brings many benefits, such as:

  • Lightweight devices: since the intelligence lives on the controller side, network equipment such as switches and routers can be slimmed down, reducing CAPEX (CAPital EXpenses) compared to over-priced high-end routing and switching equipment
  • Central management: everything can be configured and monitored from a central controller: this also makes it easier to get a complete view and to troubleshoot, reducing OPEX (OPerational and maintenance EXpenses) too.

Lowering CAPEX and OPEX is obviously a good selling point when asking C-levels for budget.

As for the architectural separation, we distinguish three layers:

Data Plane

the networking equipment (routers, switches) that forms the network and actually forwards the traffic

Control Plane

it is the brain of the SDN infrastructure: it exchanges:

  • protocol updates
  • system management messages

it also maintains:

  • the Routing Information Base (RIB), that is, the routing table used to exchange route updates with the routers
  • the Forwarding Information Base (FIB), that is, an ordered list with the most specific route for each IP prefix at the top, built from the data of the stable RIB

Long story short, the control plane manages and maintains a lot of information, such as link state details, topology statistics, and more.

It is here that real-world network use cases such as switching, routing, L2 VPN, L3 VPN, firewall security rules, DNS, DHCP and clustering are implemented.

All of these are implemented in a component called the SDN controller, which exposes two types of interfaces:

  • Northbound interface: meant for communication with the upper, Application layer; it is in general realized through the (typically REST-based) APIs of the SDN controller
  • Southbound interface: meant for communication with the lower, Infrastructure layer of network elements; it is in general realized through southbound protocols such as OpenFlow, NETCONF, OVSDB, etc.

Examples of SDN controllers are OpenDaylight, ONOS (Open Network Operating System), NOX/POX and Floodlight - they are all open source. An example of a commercial SDN controller is Cisco Open SDN Controller, which is based on OpenDaylight.
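
Just to give an idea of what a northbound interface looks like, the following is a hedged sketch of a RESTCONF query against an OpenDaylight controller, listing the flows configured in table 0 of an OpenFlow switch - the exact URL, credentials and payload format depend on the controller and on its version, so take it only as an illustration:

# illustrative only: query table 0 of switch "openflow:1" through OpenDaylight's RESTCONF northbound API
curl -u admin:admin \
  http://172.16.10.11:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/0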

Application Layer

the area open to developing innovative network applications such as automation, configuration and management, monitoring, troubleshooting, policies and security

OpenFlow

Since SDN works with heterogeneous hardware from different vendors, a standard protocol is needed, and this is where OpenFlow comes into play: OpenFlow is a standard protocol (at the time of writing this post the current version is 1.5.1) maintained by the Open Networking Foundation (ONF) that specifies how SDN controllers and network equipment communicate: routing decisions are taken by the SDN controllers, and the resulting forwarding and security rules are pushed to the switches of the underlying network.

With OpenFlow, it is usually the switch that connects to the OpenFlow controller (the default port is TCP/6653), either in plain text or using TLS, although the opposite is also possible.

The following example shows the command you would issue to connect the br-test bridge created on the gw-ca-ut1a001 VM to an OpenDaylight SDN controller listening on port TCP/6640 of a host with IP 172.16.10.11.

ovs-vsctl set-controller br-test tcp:172.16.10.11:6640

Configuring flows on an SDN controller depends on the controller you use, so it is off-topic for this post: my aim instead (as often) is to show how things work under the hood, so we will learn how to add OpenFlow flows to OpenVSwitch simply by using the command line.
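
If you ever need to check or remove the controller configured on a bridge, ovs-vsctl provides the matching commands:

# show the controller currently configured for the bridge
ovs-vsctl get-controller br-test
# disconnect the bridge from any controller
ovs-vsctl del-controller br-test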

OpenFlow's Flows

Since the VM interconnecting the networks and implementing the network policies is the "gw-ca-ut1a001" host, connect to it via SSH using Vagrant:

vagrant ssh gw-ca-ut1a001

then switch to the "root" user:

sudo su -

Dumping The Currently Set Flows

Let's start by having a look at the default OpenFlow flow set on OVS - type the following statement:

ovs-ofctl -O OpenFlow13 dump-flows br-test

the outcome is:

cookie=0x0, duration=5943.914s, table=0, n_packets=1956, n_bytes=209514, priority=0 actions=NORMAL

as you probably guessed, this is the "permit all" flow we talked about a while ago: the NORMAL action tells OVS to process packets as a traditional MAC-learning switch would. We are about to configure a more restrictive policy.

Delete Every Flow

Before configuring a new policy, we must flush (delete) all the already defined flows:

ovs-ofctl del-flows --protocols=OpenFlow13 br-test
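
To double-check on the gateway, you can dump the flows again right away: with the table flushed, the command below should print no flow entries at all.

ovs-ofctl -O OpenFlow13 dump-flows br-test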

From now on, since there are no flows defined, OpenVSwitch stops forwarding packets - we can easily verify this from the "as-ca-ut1a001" host.

Once connected to it, flush the neighbor (ARP) cache:

sudo ip -s -s neigh flush all

then try to ping the IP (192.168.0.254) of the "gw-ca-ut1a001" gateway, which is on the same subnet where the "as-ca-ut1a001" host is homed:

ping -c 1 192.168.0.254

the outcome is as follows:

PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.

--- 192.168.0.254 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

as expected, every packet is lost.

Mind that not only IP packets are dropped: even Layer 2 packets, such as ARP, are.

Indeed, if we try to use the "arping" command line tool:

arping -c1 192.168.0.254

we receive no response:

ARPING 192.168.0.254 from 192.168.0.10 enp0s6.100
Sent 1 probes (1 broadcast(s))
Received 0 response(s)

and if we have a look at the ARP table:

arp -a

the outcome will look as follows:

? (192.168.0.254) at <incomplete> on enp0s6.100
? (10.211.55.2) at be:d0:74:a0:4b:64 [ether] on enp0s5
prl-local-ns-server.shared (10.211.55.1) at 00:1c:42:00:00:18 [ether] on enp0s5

The "<incomplete>" MAC address next to the "192.168.0.254" IP means that the ARP resolution is not working.

The Structure Of a Flow

Before writing flows, it is best to know the structure of a flow.

OpenFlow flows basically carry three types of information:

  1. Match fields: the criteria used to match packets based on header fields, such as
    1. ARP fields, ICMP fields, MPLS fields
    2. L2 fields (source/destination Ethernet addresses, VLAN ID, VLAN priority, etc.)
    3. L3 fields (IPv4/IPv6 source/destination addresses, protocol type, DSCP, etc.)
    4. L4 fields (TCP/UDP/SCTP source/destination ports)
  2. Actions: what to do with a packet matching the criteria, for example
    1. drop it
    2. forward it out of some port of the switch
    3. modify the packet (push/pop VLAN ID, push/pop MPLS label, increment/decrement IP TTL)
    4. forward it to a specific queue of a port, etc.
  3. Counters: track how many packets (and bytes) matched the flow
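
To make this concrete, here is one of the flows we will add later in this post, annotated field by field (the comments are just explanatory notes):

# "cookie=0x5, priority=65000, table=0, tcp, nw_src=192.168.0.0/24, nw_dst=192.168.6.0/24, tp_dst=5432, nw_proto=6, actions=normal"
#
# cookie=0x5             an opaque identifier, handy to later dump or delete this specific flow
# priority=65000         flows with a higher priority are evaluated first
# table=0                the OpenFlow table this flow belongs to
# tcp, nw_proto=6        match TCP traffic (IP protocol 6)
# nw_src=192.168.0.0/24  match packets coming from the Application Servers subnet
# nw_dst=192.168.6.0/24  match packets going to the Database Servers subnet
# tp_dst=5432            match the PostgreSQL destination port
# actions=normal         forward the packet using the switch's normal L2/L3 processing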

Adding Flows 

Let's see now how to add flows to OpenVSwitch using the "ovs-ofctl" command line utility:

Deny Everything Flow

Let's add an explicit drop flow for every TCP packet sent to any of the subnets within the "192.168.0.0/16" supernet:

ovs-ofctl -O OpenFlow13 add-flow br-test --protocols=OpenFlow13 \
"cookie=0x1, priority=1, table=0, tcp, nw_dst=192.168.0.0/16, nw_proto=6, actions=drop"

Permit ARP Flow

The very first thing to permit is ARP, otherwise the hosts will not be able to resolve MAC addresses from IP addresses and nothing will work.

Just add the following flow:

ovs-ofctl -O OpenFlow13 add-flow br-test --protocols=OpenFlow13 \
"cookie=0x2, priority=65535, table=0, priority=65535, arp,action=normal"

let's try again the same arping from the "as-ca-ut1a001" host:

arping -c1 192.168.0.254

this time we get 1 response:

ARPING 192.168.0.254 from 192.168.0.10 enp0s6.100
Unicast reply from 192.168.0.254 [0A:DE:DC:E1:27:8C]  3.634ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)

of course, for now, if we try to ping the IP "192.168.0.254":

ping -c 1 192.168.0.254

the outcome is:

PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.

--- 192.168.0.254 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

So, as expected, every packet is lost. But if this time we look at the ARP table:

arp -a

the outcome looks as follows:

? (192.168.0.254) at 0a:de:dc:e1:27:8c [ether] on enp0s6.100
? (10.211.55.2) at be:d0:74:a0:4b:64 [ether] on enp0s5
prl-local-ns-server.shared (10.211.55.1) at 00:1c:42:00:00:18 [ether] on enp0s5

so, as expected, the MAC address for the "192.168.0.254" IP is now resolved.

Permit ICMP Flows

Although we are setting up quite a strict environment, since this is a Lab it is safe to enable ICMP packets anyway - just add the following flows:

ovs-ofctl -O OpenFlow13 add-flow br-test --protocols=OpenFlow13 \
"cookie=0x3, priority=65535, table=0, icmp, icmp_type=0, icmp_code=0 actions=normal"
ovs-ofctl -O OpenFlow13 add-flow br-test --protocols=OpenFlow13 \
"cookie=0x4, priority=65535, table=0, icmp, icmp_type=8, icmp_code=0 actions=normal"

this time, from the "as-ca-ut1a001" host, we try pinging the "gw-ca-ut1a001" interface on the database subnet:

ping -c 1 192.168.36.254

as expected, this time it works:

PING 192.168.36.254 (192.168.36.254) 56(84) bytes of data.
64 bytes from 192.168.36.254: icmp_seq=1 ttl=63 time=1.31 ms

--- 192.168.36.254 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.312/1.312/1.312/0.000 ms

let's have a look at the flows defined on the "br-test" bridge of the gw-ca-ut1a001 host:

ovs-ofctl -O OpenFlow13 dump-flows br-test

the outcome is:

cookie=0x2, duration=17.119s, table=0, n_packets=4, n_bytes=176, priority=65535,arp actions=NORMAL
 cookie=0x3, duration=17.107s, table=0, n_packets=3, n_bytes=294, priority=65535,icmp,icmp_type=0,icmp_code=0 actions=NORMAL
 cookie=0x4, duration=17.095s, table=0, n_packets=3, n_bytes=306, priority=65535,icmp,icmp_type=8,icmp_code=0 actions=NORMAL
 cookie=0x1, duration=153.049s, table=0, n_packets=0, n_bytes=0, priority=1,tcp,nw_dst=192.168.0.0/16 actions=drop

Note how the output also displays the number of packets matching each flow, the total number of bytes, and the duration in seconds.

Permit PostgreSQL Flows

We are still missing a flow to enable access to the PostgreSQL service on the "pg-ca-ut1a001" host - from the "as-ca-ut1a001" host, we try running:

psql -h 192.168.6.10 -U postgres 

this time the connection hangs and we have to press "CTRL+C" to terminate it.

This is because of the "deny everything" flow we added.

To permit access again, on the gw-ca-ut1a001 host we must add the following pair of flows - since OpenFlow flows are stateless, the second one permits the return traffic coming from port 5432, restricted to packets with the ACK flag set:

ovs-ofctl -O OpenFlow13 add-flow br-test --protocols=OpenFlow13 \
"cookie=0x5, priority=65000, table=0, tcp, nw_src=192.168.0.0/24, nw_dst=192.168.6.0/24, tp_dst=5432, nw_proto=6, actions=normal"
ovs-ofctl -O OpenFlow13 add-flow br-test --protocols=OpenFlow13 \
"cookie=0x6, priority=65000, table=0, tcp, nw_src=192.168.6.0/24, nw_dst=192.168.0.0/24, tp_src=5432, tcp_flags=+ack, nw_proto=6, actions=normal"

on the "as-ca-ut1a001" host, run again the following statement:

psql -h 192.168.6.10 -U postgres 

this time you should be able to connect to the PostgreSQL service on the "pg-ca-ut1a001" host - the password is "grimoire".

OpenFlow Flows Lifetime

As you may have inferred, the flows added using the "ovs-ofctl" command line tool are ephemeral, so they are lost after a system restart: as we saw, in a real deployment OpenFlow flows are pushed by the configured SDN controller.

Indeed, if you restart the "gw-ca-ut1a001" VM:

shutdown -r now

then log in again, switch to the "root" user, and dump the currently set flows:

ovs-ofctl -O OpenFlow13 dump-flows br-test

you again get the default "permit all" flow:

cookie=0x0, duration=5943.914s, table=0, n_packets=1956, n_bytes=209514, priority=0 actions=NORMAL

A Script To Automatically Load Openflow Flows From A File

For the sake of completeness - but consider it just a learning exercise - here is an example of a script that loads OpenFlow flows from a file and that can even be set up as a one-shot Systemd service unit, so as to have them loaded at boot time.

This script and the related Systemd unit are just a toy - I wrote them only to enable my labs to implement network policies making use of OpenFlow without having to provision a VM dedicated to an SDN controller such as OpenDaylight.

Prerequisites

The script that loads the OpenFlow rules, the Systemd service unit and the configuration file must be created on a gateway VM running OpenVSwitch. In this Lab we set them up on the "gw-ca-ut1a001" host.

The script uses the "git" command line tool to detect the changes applied to the file containing the OpenFlow flows to be loaded.

Because of this requirement, it is necessary to install git as follows:

dnf install -y git

then just create the directories used to store the script and the file containing the OpenFlow flows:

mkdir -m 755 /opt/grimoire /opt/grimoire/bin /opt/grimoire/etc

OpenFlow Rules Loader Script

Create the "/opt/grimoire/bin/of-flow-load.sh" with the following contents:

#!/bin/bash
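# Learning-exercise loader: reads OpenFlow flows from ../etc/of-flows.txt and applies
# them to the OVS bridge passed as the first argument; a git repository in ../etc is
# used to detect which flows were added or removed since the last run (tracked through
# /tmp/openflow.lastcommitted), so only the difference is applied.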
BRIDGE="${1}"
SCRIPT=$(readlink -f $0)
SCRIPTPATH=$(dirname ${SCRIPT})
CFG_FILE="${SCRIPTPATH}/../etc/of-flows.txt"
cd ${SCRIPTPATH}/../etc
if [[ ! -d .git ]]; then
  echo "Initializing the Openflow configuration repository"
  [ -f /tmp/openflow.lastcommitted ] && rm /tmp/openflow.lastcommitted
  git init
  git config user.name "System"
  git config user.email "me@foo.org"
  echo "foo.sh" > .gitignore 
  git add .gitignore
fi
git add ${CFG_FILE}
git commit -m "fix"
CURRENT=$(git log -1 --pretty=format:"%h")
if [[ ! -f /tmp/openflow.lastcommitted ]]; then
  echo "Performing a full load ..."
  ovs-ofctl del-flows --protocols=OpenFlow10,OpenFlow13 ${BRIDGE}
  while IFS= read -r line
  do
    [[ "$line" =~ ^#.*$ ]] && continue
    COMMAND=$(echo $line | sed "s/^\"/ovs-ofctl add-flow ${BRIDGE} --protocols=OpenFlow10,OpenFlow13 \"/g")
    [ -n "${COMMAND}" ] && echo "Executing '${COMMAND}'"
    eval ${COMMAND}
  done < ${CFG_FILE}
else
  PREVIOUS=$(cat /tmp/openflow.lastcommitted)
  #echo PREVIOUIS=$PREVIOUS
  #=$(git log -2 --pretty=format:"%h"|tail -n 1)
  if [[ "${CURRENT}" == "$PREVIOUS" ]]; then
     echo "We are alread at $PREVIOUS, so there's nothing to do, ... exiting"
     exit 0
  fi
  ADD=$(git diff $PREVIOUS..$CURRENT |grep -- '+"')
  REMOVE=$(git diff $PREVIOUS..$CURRENT |grep -- '-"')
  #echo CURRENT=$CURRENT
  #echo PREVIOUS=$PREVIOUS
  while IFS= read -r line
  do
      COMMAND=$(echo $line| sed "s+^-\"\(cookie=0x[0-9]*\).*+ovs-ofctl del-flows ${BRIDGE} --protocols=OpenFlow10,OpenFlow13 \"\1/-1\"+")
      [ -n "${COMMAND}" ] && echo "Executing '${COMMAND}'"
      eval ${COMMAND}
  done <<< "$REMOVE"
  
  while IFS= read -r line
  do
    #echo $line
    COMMAND=$(echo $line | sed "s/^+\"/ovs-ofctl add-flow ${BRIDGE} --protocols=OpenFlow10,OpenFlow13 \"/g")
     [ -n "${COMMAND}" ] && echo "Executing '${COMMAND}'"
    eval ${COMMAND}
  done <<< "$ADD"
fi
echo ${CURRENT} > /tmp/openflow.lastcommitted

This script loads the flows defined in the "/opt/grimoire/etc/of-flows.txt" file into the OpenVSwitch bridge specified as the first argument when launching it; flows removed from the file get deleted by matching their cookie ("cookie=0xN/-1", where the "/-1" mask requires an exact cookie match). Then make the script executable:

chmod 755 /opt/grimoire/bin/of-flow-load.sh 
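
Once the flows file described below is in place, the loader can also be run by hand, exactly as the Systemd unit shown next will do at boot:

/opt/grimoire/bin/of-flow-load.sh br-test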

OpenFlow Service 

The next step is creating the "/etc/systemd/system/openflow.service" unit file with the following contents:

[Unit]
Description=Open vSwitch
Before=network.target network.service
After=network-pre.target ovsdb-server.service ovs-vswitchd.service openvswitch.service
PartOf=network.target
Requires=ovsdb-server.service
Requires=ovs-vswitchd.service
Requires=openvswitch.service

[Service]
Type=idle
ExecStartPre=-/usr/bin/rm -f /tmp/openflow.lastcommitted
ExecStart=/opt/grimoire/bin/of-flow-load.sh br-test

[Install]
WantedBy=multi-user.target

and of course reload Systemd:

systemctl daemon-reload

then, enable the "openflow" service to start at boot time:

systemctl enable openflow

OpenFlow Flows File

The last missing bit is just the "/opt/grimoire/etc/of-flows.txt" file - create it with the flows we saw so far:

# drop everything sent to our whole /16 private subnet
"cookie=0x1, priority=1, table=0, tcp, nw_dst=192.168.0.0/16, nw_proto=6, actions=drop"
# permit ARP 
"cookie=0x2, priority=65535, table=0, priority=65535, arp,action=normal"
# permit ICMP (PING)
"cookie=0x3, priority=65535, table=0, icmp, icmp_type=0, icmp_code=0 actions=normal"
"cookie=0x4, priority=65535, table=0, icmp, icmp_type=8, icmp_code=0 actions=normal"
# permit access to PostgreSQL service from 192.168.0.0/24 (as-t1) to 192.168.6.0/24 (db-t1)
"cookie=0x5, priority=65000, table=0, tcp, nw_src=192.168.0.0/24, nw_dst=192.168.6.0/24, tp_dst=5432, nw_proto=6, actions=normal"
"cookie=0x6, priority=65000, table=0, tcp, nw_src=192.168.6.0/24, nw_dst=192.168.0.0/24, tp_src=5432, tcp_flags=+ack, nw_proto=6, actions=normal"

then start the "openflow" service to have them loaded:

systemctl start openflow

Since the service has been enabled to start at boot, let's try a system reboot to see if it really works as expected:

shutdown -r now

Once logged in again and switched to the "root" user, if we dump the flows set on the "br-test" bridge:

ovs-ofctl -O OpenFlow13 dump-flows br-test

we will see the flows loaded by the script:

 cookie=0x2, duration=52.946s, table=0, n_packets=4, n_bytes=168, priority=65535,arp actions=NORMAL
 cookie=0x3, duration=52.933s, table=0, n_packets=0, n_bytes=0, priority=65535,icmp,icmp_type=0,icmp_code=0 actions=NORMAL
 cookie=0x4, duration=52.921s, table=0, n_packets=0, n_bytes=0, priority=65535,icmp,icmp_type=8,icmp_code=0 actions=NORMAL
 cookie=0x5, duration=52.909s, table=0, n_packets=0, n_bytes=0, priority=65000,tcp,nw_src=192.168.0.0/24,nw_dst=192.168.6.0/24,tp_dst=5432 actions=NORMAL
 cookie=0x6, duration=52.896s, table=0, n_packets=0, n_bytes=0, priority=65000,tcp,nw_src=192.168.6.0/24,nw_dst=192.168.0.0/24,tp_src=5432,tcp_flags=+ack actions=NORMAL
 cookie=0x1, duration=52.959s, table=0, n_packets=0, n_bytes=0, priority=1,tcp,nw_dst=192.168.0.0/16 actions=drop

OpenFlow Flows Sample Snippets

The last part of this post is a cheatsheet of snippets you can use as a reference when writing flows:

SSH From A Jump Station To A Subnet

A very common use case is to permit SSH access from a jump host to a subnet - the following snippet permits SSH from the host with IP 192.168.254.253 to the hosts in the 192.168.149.0/24 subnet. 

# SSH from jump-ci-upa002 to any host of 192.168.149.0/24 subnet
"cookie=0x11, priority=65000, table=0, tcp, nw_src=192.168.254.253, nw_dst=192.168.149.0/24, tp_dst=22, nw_proto=6, actions=normal"
"cookie=0x12, priority=65000, table=0, tcp, nw_src=192.168.149.0/24, nw_dst=192.168.254.253, tp_src=22, tcp_flags=+ack, nw_proto=6, actions=normal"

Please note the use of the "+ack" flag in the second flow: since OpenFlow flows are stateless, the return flow is restricted to packets with the ACK flag set, so that hosts in the 192.168.149.0/24 subnet cannot abuse source port 22 to open new connections towards the jump host.

Access To A FreeIPA (Red Hat Identity Management) Server

Another common use case is permitting any host to access a FreeIPA (Red Hat Identity Management) server - this time, given the number of services involved, more flows are necessary:

# FreeIPA Services on dir-ci-up3a001
"cookie=0x31, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=80, nw_proto=6, actions=normal"
"cookie=0x32, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=80, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x33, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=443, nw_proto=6, actions=normal"
"cookie=0x34, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=443, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x35, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=389, nw_proto=6, actions=normal"
"cookie=0x36, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=389, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x37, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=636, nw_proto=6, actions=normal"
"cookie=0x38, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=636, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x39, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=88, nw_proto=6, actions=normal"
"cookie=0x40, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=88, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x41, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=464, nw_proto=6, actions=normal"
"cookie=0x42, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=464, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x43, priority=65000, table=0, tcp, nw_dst=192.168.150.10, tp_dst=53, nw_proto=6, actions=normal"
"cookie=0x44, priority=65000, table=0, tcp, nw_src=192.168.150.10, tp_src=53, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x45, priority=65000, table=0, udp, nw_dst=192.168.150.10, tp_dst=53, nw_proto=17, actions=normal"
"cookie=0x46, priority=65000, table=0, udp, nw_src=192.168.150.10, tp_src=53, nw_proto=17, actions=normal"
"cookie=0x47, priority=65000, table=0, udp, nw_dst=192.168.150.10, tp_dst=88, nw_proto=17, actions=normal"
"cookie=0x48, priority=65000, table=0, udp, nw_src=192.168.150.10, tp_src=88, nw_proto=17, actions=normal"
"cookie=0x49, priority=65000, table=0, udp, nw_dst=192.168.150.10, tp_dst=464, nw_proto=17, actions=normal"
"cookie=0x50, priority=65000, table=0, udp, nw_src=192.168.150.10, tp_src=464, nw_proto=17, actions=normal"
"cookie=0x51, priority=65000, table=0, udp, nw_dst=192.168.150.10, tp_dst=123, nw_proto=17, actions=normal"
"cookie=0x52, priority=65000, table=0, udp, nw_src=192.168.150.10, tp_src=123, nw_proto=17, actions=normal"

the flows in the above snippet permit access from everywhere to the IPA services of the host with IP 192.168.150.10.

Access To A NFS Server

Another very common use case is permitting access to an NFS server:

# NFS Services on fss-ci-up3a001
"cookie=0x75, priority=65000, table=0, tcp, nw_dst=192.168.152.10, tp_dst=111, nw_proto=6, actions=normal"
"cookie=0x76, priority=65000, table=0, tcp, nw_src=192.168.152.10, tp_src=111, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x77, priority=65000, table=0, udp, nw_dst=192.168.152.10, tp_dst=111, nw_proto=17, actions=normal"
"cookie=0x78, priority=65000, table=0, udp, nw_src=192.168.152.10, tp_src=111, nw_proto=17, actions=normal"
"cookie=0x79, priority=65000, table=0, tcp, nw_dst=192.168.152.10, tp_dst=20048, nw_proto=6, actions=normal"
"cookie=0x80, priority=65000, table=0, tcp, nw_src=192.168.152.10, tp_src=20048, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x81, priority=65000, table=0, udp, nw_dst=192.168.152.10, tp_dst=20048, nw_proto=17, actions=normal"
"cookie=0x82, priority=65000, table=0, udp, nw_src=192.168.152.10, tp_src=20048, nw_proto=17, actions=normal"
"cookie=0x83, priority=65000, table=0, tcp, nw_dst=192.168.152.10, tp_dst=662, nw_proto=6, actions=normal"
"cookie=0x84, priority=65000, table=0, tcp, nw_src=192.168.152.10, tp_src=662, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x85, priority=65000, table=0, udp, nw_dst=192.168.152.10, tp_dst=662, nw_proto=17, actions=normal"
"cookie=0x86, priority=65000, table=0, udp, nw_src=192.168.152.10, tp_src=662, nw_proto=17, actions=normal"
"cookie=0x87, priority=65000, table=0, tcp, nw_dst=192.168.152.10, tp_dst=2049, nw_proto=6, actions=normal"
"cookie=0x88, priority=65000, table=0, tcp, nw_src=192.168.152.10, tp_src=2049, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x89, priority=65000, table=0, tcp, nw_dst=192.168.152.10, tp_dst=32803, nw_proto=6, actions=normal"
"cookie=0x90, priority=65000, table=0, tcp, nw_src=192.168.152.10, tp_src=32803, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x91, priority=65000, table=0, udp, nw_dst=192.168.152.10, tp_dst=32769, nw_proto=17, actions=normal"
"cookie=0x92, priority=65000, table=0, udp, nw_src=192.168.152.10, tp_src=32769, nw_proto=17, actions=normal"
"cookie=0x93, priority=65000, table=0, tcp, nw_dst=192.168.152.10, tp_dst=875, nw_proto=6, actions=normal"
"cookie=0x94, priority=65000, table=0, tcp, nw_src=192.168.152.10, tp_src=875, tcp_flags=+ack, nw_proto=6, actions=normal"
"cookie=0x95, priority=65000, table=0, udp, nw_dst=192.168.152.10, tp_dst=875, nw_proto=17, actions=normal"
"cookie=0x96, priority=65000, table=0, udp, nw_src=192.168.152.10, tp_src=875, nw_proto=17, actions=normal"

the flows in the above snippet permit access from everywhere to the NFS services of the host with IP 192.168.152.10: here too, because of the number of services involved, several flows are necessary.

Footnotes

This ends the post dedicated to configuring OpenFlow flows: we gradually learned how to set up flows using the "ovs-ofctl" command line tool, and even how to persist them for playing with our labs.

In the next post we will talk about VxLAN, going through the various available technologies, as usual seeing everything in action.

Writing a post like this takes a lot of hours. I'm doing it for the sole pleasure of sharing knowledge and thoughts, but all of this does not come for free: it is a time-consuming volunteering task. This blog is not affiliated with anybody, does not show advertisements, nor does it sell visitors' data. The only goal of this blog is to make ideas flow. So please, if you liked this post, spend a little of your time to share it on LinkedIn or Twitter using the buttons below: seeing that posts are actually read is the only way I have to understand whether I'm really sharing thoughts or just wasting time.
