Prereqs: General understanding of NetKAT syntax & semantics + software-defined networking (SDN)

Background: VNOs and the NetKAT pipeline

NetKAT is a network programming language designed by Prof. Nate Foster et al., based on Kleene algebra with tests (KAT), a system in which regular expressions describe network paths and boolean predicates test and process packets.
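
To get a feel for the syntax before diving in, here is a minimal, purely illustrative fragment (the field names and operators follow the examples later in this post; | unions policies together, and assignments like port:=5 rewrite a header field):

(* Illustrative: at switch 1, shuttle traffic between ports 2 and 5,
   dropping everything else. *)
(if switch=1 and port=2 then port:=5 else drop) |
(if switch=1 and port=5 then port:=2 else drop)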

Generally speaking, networks consist of programmable switches that route traffic (discrete packets of information) between nodes (hosts and other switches). Although we could program the physical network directly, it is useful to add a layer of abstraction, a virtual network (VN), so we can program and reason about the network without dealing with intermediate switches and nodes. Thus, we work with an abstraction called a Virtual Network Object (VNO), which sits on top of the physical topology and the physical ingress/egress predicates (which determine where packets enter and exit the physical network) and defines:

  1. a virtual topology (what the programmer desires to see the network as, not physically)
  2. virtual ingress/egress predicates (where packets may enter and exit the VN)
  3. a virtual ingress policy (determines what packets may enter the VN)
  4. a virtual policy (the packet processing directives that happen within the VN)
  5. a virtual relation (which maps the virtual topology to the physical one)

Case study: The Big Switch

Let's look at an example: the big switch. Consider the following physical topology, where [x] indicates a switch and y indicates one of the switch's ports.

       2[1]5
      /     \
     1       1
   3[2]4   6[5]7
  /    |   |    \
 2     2   5     5
[3]   [4] [6]   [7]
 0     0   0     0

This physical topology is represented by the following NetKAT expression, a union of directed links of the form sw_1@p_1 => sw_2@p_2, read as: switch sw_1 forwards out its port p_1 to port p_2 of switch sw_2.

(* Physical topology *)
1@2=>2@1 | 2@3=>3@2 | 2@4=>4@2 |
2@1=>1@2 | 3@2=>2@3 | 4@2=>2@4 |
1@5=>5@1 | 5@6=>6@5 | 5@7=>7@5 |
5@1=>1@5 | 6@5=>5@6 | 7@5=>5@7

Now that we have a map of the network, we need to specify where traffic comes in and goes out. The physical ingress and egress predicates take the form (switch = #_1 and port = #_2) or .... In this case, the two predicates are identical: traffic enters and exits at port 0 of each of switches 3, 4, 6, and 7.

(switch=3 and port=0) or
(switch=4 and port=0) or
(switch=6 and port=0) or
(switch=7 and port=0)

Now let's look at the virtual topology. It consolidates the entire physical network into a single switch whose ports correspond to the edge ports on switches 3, 4, 6, and 7, for programmatic convenience (the entire raison d'être of VNs). Going off of the name, we can see that there's one "big switch" in the middle that dispatches traffic to each of its ports.

  4
3[1]6
  7

Since the virtual network contains only one switch, there are no virtual links to describe, so the equivalent NetKAT topology expression simply drops all traffic.

(* Virtual topology *)
drop

As we did with the physical network, we have to define virtual ingress and egress predicates that specify where traffic enters and exits the VNO. Once again, the two predicates are identical. Notice that we use vswitch and vport to distinguish virtual locations from their physical counterparts.

(vswitch=1 and vport=3) or
(vswitch=1 and vport=4) or
(vswitch=1 and vport=6) or
(vswitch=1 and vport=7)

The virtual policy in this case routes traffic destined for certain IPs to their respective switches, fulfilling the "big switch" functionality that was desired.

(* Virtual policy *)
if ip4Dst=10.0.0.3 then vport:=3 else
if ip4Dst=10.0.0.4 then vport:=4 else
if ip4Dst=10.0.0.6 then vport:=6 else
if ip4Dst=10.0.0.7 then vport:=7 else drop
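
A note on the if ... then ... else form: it is syntactic sugar over the core KAT operators. A conditional is just the union of two guarded branches, where filter turns a predicate into a policy that passes matching packets and drops the rest (this desugaring follows the NetKAT papers; not is negation, in the same vein as the and/or above):

(* "if a then p else q" desugars to: *)
(filter a; p) | (filter (not a); q)

Nested conditionals like the virtual policy above desugar by applying this rule repeatedly.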

Last but not least, we need to define the relationship between the virtual and physical topologies; without it, none of this would work. Using conjunctions of virtual and physical predicates, we map each virtual switch and port to the physical switch and port it stands for.

(* Virtual relation *)
((vswitch=1 and vport=3) and (switch=3 and port=0)) or
((vswitch=1 and vport=4) and (switch=4 and port=0)) or
((vswitch=1 and vport=6) and (switch=6 and port=0)) or
((vswitch=1 and vport=7) and (switch=7 and port=0))
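
To make the relation concrete, here is an informal worked trace (my reading of the semantics, not tool output) of a packet destined for 10.0.0.7 arriving at switch 3:

(* 1. The packet enters physically at switch=3, port=0; the relation
      lifts it into the VN at vswitch=1, vport=3.
   2. The virtual policy matches ip4Dst=10.0.0.7 and sets vport:=7.
   3. The relation pins vswitch=1, vport=7 to switch=7, port=0, so the
      underlying fabric must carry the packet physically along
      3@2=>2@3, 2@1=>1@2, 1@5=>5@1, 5@7=>7@5, and finally out port 0
      of switch 7. *)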

In terms of execution, we can "compile" the VNO to generate OpenFlow forwarding tables for every physical switch and achieve the desired behavior. The question is: how are all these disparate NetKAT programs combined during compilation? To answer that, we must look at the compiler pipeline.

There are three types of programs we have to consider during compilation.

  • The most basic kind is a local NetKAT program, which describes the behavior of a physical network in terms of hop-by-hop processing. It is written in the form in . (p . t)* . p . out, where in and out are the physical ingress and egress predicates, and p and t are the policy and network topology, respectively. Of these, only p actually needs to be compiled: it directly implements the desired behavior of the network, while in, out, and t merely describe the context it runs in (see the sketch after this list).

  • A global program, on the other hand, may intertwine the policy and topology implementations and/or elide forwarding at intermediate switches, so it must be converted to a local program before it can be compiled directly. Note that global programs still operate on the physical topology.

  • Even more abstract than a global program, a virtual program implements processing in terms of a virtual topology, which was done in the VNO.
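
To make the local form concrete, here is a hand-written, illustrative fragment of a hop-by-hop policy p for the big switch network (an approximation for exposition, not compiler output); plugging p, the physical topology t, and the physical predicates into in . (p . t)* . p . out yields a local program:

(* Illustrative piece of p for switch 2: deliver traffic for locally
   attached destinations, and send everything else up toward switch 1
   (out port 1). A full p would have analogous cases for the other
   switches. *)
if switch=2 then
  (if ip4Dst=10.0.0.3 then port:=3 else
   if ip4Dst=10.0.0.4 then port:=4 else
   port:=1)
else drop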

The local compiler handles the first case by compiling local programs directly to OpenFlow forwarding tables via an intermediate representation called forwarding decision diagrams (FDDs). The global compiler generates a local program from a global program by constructing special NetKAT automata. The virtual compiler handles the last case: it takes a virtual program, virtual topology, and virtual relation and generates a global program that relies on a fabric for moving packets between virtual and physical locations. As we can see, all roads lead to the local compiler, so everything becomes a forwarding table in the end.

In terms of the VNO, the virtual compiler processes the virtual policy and predicates (given the virtual topology and relation), the global compiler then turns the result into a local program, namely the physical policy, and that policy, together with the physical predicates and physical topology, is handed to the local compiler. This explanation is of course oversimplified.

Now that we have a general sense of how the compilation process works, we can generate the flowtables for the big switch example: in the frenetic folder, start ./frenetic.native virtual-shell and input the components above. You should get a JSON dump of the flowtables.
