
Mixed Hypervisor with Openvswitch and Openflow Network virtualisation Part 2


This is part two of my blog post on my OpenFlow and Open vSwitch lab.
This will be my last blog post for some time until I settle into my new role.

 

In my previous blog post http://communities.vmware.com/blogs/kevinbarrass/2013/03/13/mixed-hypervisor-with-openvswitch-and-openflow-network-virtualisation I showed how you can build a lab using several mixed hypervisors (KVM, Xen and XenServer), all running Open vSwitch, and build a virtual network across all hosts using OpenFlow and GRE tunnels.

 

In this second part I will show how I used Wireshark and the OpenFlow Protocol (OFP) dissector to decode the OFP packets and get an idea of what is happening, as well as how to view the flow tables on each Open vSwitch (OVS).
You can find details of the OFP dissector on the website: http://www.noxrepo.org/2012/03/openflow-wireshark-dissector-on-windows/

To keep the number of captured packets manageable, I have reduced the number of hosts from four to two, with a single GRE tunnel, as shown in the lab diagram below:

[Image: OVS blog 2 hosts.gif]

We will need to know the OVS port-name to port-number mapping on each host to be able to interpret the OFP messages; you can get this by typing the command "sudo ovs-dpctl show br0" on each host. In the case of my lab it gives the port name to port number mappings below.

 

Host1 OVS port name to port number mappings.

[Image: host1 show port name to port number.png]

Host2 OVS port name to port number mappings.

[Image: host2 show port name to port number.png]
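
If you would rather pull these mappings out programmatically, a minimal Python sketch along the lines below can shell out to ovs-dpctl and build a lookup table. The exact output format of ovs-dpctl varies between OVS releases, so treat the parsing regex (and the helper name, which is mine) as illustrative rather than definitive:

    import re
    import subprocess

    def ovs_port_map(bridge="br0"):
        # Build a port number -> port name map from "ovs-dpctl show".
        out = subprocess.run(
            ["sudo", "ovs-dpctl", "show", bridge],
            capture_output=True, text=True, check=True,
        ).stdout
        # Lines look roughly like "  port 2: eth1" (format varies by version).
        return {int(num): name
                for num, name in re.findall(r"port (\d+): (\S+)", out)}

    print(ovs_port_map("br0"))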


As the above lab runs on VMware Workstation using VMnet8 (NAT), it is easy to run Wireshark on the same computer as Workstation and capture all traffic on that network, since it is flooded over VMnet8.

During this lab test I will send a ping from VM01 to VM02 and capture the OFP packets. I will then decode some of the OFP packets and try to explain what each packet is doing. There is some duplication of OFP packets, i.e. one set for the ICMP echo-request and then one for the ICMP echo-reply, so I will decode only the first for each flow.
Please bear in mind I'm very new to OVS and OpenFlow, so I may well have made mistakes in my interpretation of how this lab, the OpenFlow protocol and OVS work. I would recommend building a similar lab, reading the OpenFlow switch specification, and having a play.

So in the minimal lab I have started up the POX OpenFlow controller as before, running the forwarding.l2_learning module, this time with just two OVSs connected. I have a Windows VM on each host/OVS, one with the IP address 172.16.0.1 and the second with the IP address 172.16.0.2. I then run a ping with a single ICMP echo-request from 172.16.0.1 to 172.16.0.2. Below is the Wireshark capture of the OFP packets related to both the ARP exchange and the ICMP echo request/reply.

 

[Image: all OFP packets.png]

 

From the above OFP packet capture:
A. OFP packets 197 and 224 in block A relate to VM01 on host1/OVS sending an ARP request for VM02's MAC address.
B. OFP packets 234 and 235 in block B are the OFP packets related to the ARP reply from VM02 on host2/OVS.
C. OFP packets 239 and 240 in block C relate to the ICMP echo-request from VM01 on host1/OVS to VM02.

 

[Image: ARP request OFP Packet IN Host1.png]

When we start the ping from VM01 to VM02, VM01 first generates an ARP request for VM02's MAC address. This is received by host1's OVS, which has no matching flow for this packet, so it sends an OFP packet-in to the POX OpenFlow controller.

In the above decoded packet capture of the OFP packet-in for the ARP request:
A: The OFP message type, "packet-in", with a version of 0x01.
B: The buffer ID of the buffered packet that caused this OFP packet.
C: The OVS port the packet was received on, in this case port 2.
D: The reason the OFP message was generated; in this case no local flow entry matched this ARP request packet.
E: Frame data containing details of the received packet that can be used to construct a flow entry.
F: A summary of the OFP packet.
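
For reference, the fixed part of the packet-in message decoded above can be unpacked in a few lines of Python. This is a minimal sketch based on the ofp_packet_in layout in the OpenFlow 1.0 specification (version 0x01, as used in this lab); the constant values are from the spec, while the helper function name is mine:

    import struct

    OFPT_PACKET_IN = 10  # OpenFlow 1.0 message type for packet-in
    OFPR_NO_MATCH = 0    # reason code: no matching flow entry

    def parse_packet_in(msg):
        # Common OFP header: version(1) type(1) length(2) xid(4).
        version, msg_type, length, xid = struct.unpack_from("!BBHI", msg, 0)
        assert version == 0x01 and msg_type == OFPT_PACKET_IN
        # packet-in body: buffer_id(4) total_len(2) in_port(2) reason(1) pad(1).
        buffer_id, total_len, in_port, reason, _pad = struct.unpack_from(
            "!IHHBB", msg, 8)
        return {
            "buffer_id": buffer_id,             # field B above
            "in_port": in_port,                 # field C above
            "no_match": reason == OFPR_NO_MATCH,  # field D above
            "frame": msg[18:length],            # field E: the captured frame
        }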

 

[Image: ARP request OFP Packet OUT Host1.png]

The POX OpenFlow controller now receives the previous OFP packet and, using the forwarding.l2_learning module, makes a policy decision. In this case, as the ARP request is a broadcast, the controller instructs the OVS, via an OFP packet-out, to flood the packet out of all ports except the source OVS port and any blocked by spanning tree (STP, not used in this lab).

In the above decoded packet capture of the OFP packet-out for the ARP request:
A: The OFP message type, "packet-out", with a version of 0x01.
B: The buffer ID of the buffered packet on the OVS to which this packet-out relates.
C: The OVS port the packet was received on, in this case port 2.
D: The action type, in this case output to a switch port.
E: The action to take, in this case flood the packet in buffer ID 288 out of all ports except the input port and any disabled by STP.
F: A summary of the OFP packet.
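
Going the other way, building the kind of packet-out shown above is equally mechanical. Here is a minimal sketch following the OpenFlow 1.0 ofp_packet_out layout, assuming a single flood action (again, the constants are from the spec and the function name is mine):

    import struct

    OFPT_PACKET_OUT = 13
    OFPAT_OUTPUT = 0
    OFPP_FLOOD = 0xfffb  # all ports except the input port and STP-blocked ports

    def build_packet_out(xid, buffer_id, in_port):
        # One output action: type(2) len(2) port(2) max_len(2).
        action = struct.pack("!HHHH", OFPAT_OUTPUT, 8, OFPP_FLOOD, 0)
        # Body: buffer_id(4) in_port(2) actions_len(2), then the action(s).
        body = struct.pack("!IHH", buffer_id, in_port, len(action)) + action
        # Common header: version(1) type(1) length(2) xid(4).
        header = struct.pack("!BBHI", 0x01, OFPT_PACKET_OUT, 8 + len(body), xid)
        return header + body

Because the packet is already buffered on the OVS (buffer ID 288 here), the controller only has to reference the buffer rather than resend the frame data.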

At this point the ARP request from VM01 is flooded out of host1/OVS and received by host2/OVS. Host2/OVS then goes through the same process with this ARP request, but I will not decode those packets as we have already examined a similar OFP packet-in for an ARP request above.

 

[Image: ARP reply OFP Packet IN Host2.png]

At this point VM02 has received the ARP request and sends an ARP reply directly back to VM01's MAC address. This is received by host2's OVS, which has no matching flow for this packet, so it sends an OFP packet-in to the POX OpenFlow controller.

In the above decoded packet capture of the OFP packet-in for the ARP reply:
A: The OFP message type, "packet-in", with a version of 0x01.
B: The buffer ID of the buffered packet that caused this OFP packet.
C: The OVS port the packet was received on, in this case port 3.
D: The reason the OFP message was generated; in this case no local flow entry matched this ARP reply packet.
E: Frame data containing details of the received packet that can be used to construct a flow entry.
F: A summary of the OFP packet.

[Image: ARP reply OFP Flow Mod host2.png]

 

The POX OpenFlow controller receives the previous OFP packet from host2/OVS and, using the forwarding.l2_learning module, makes a policy decision. In this case, as the ARP reply is not a broadcast packet, instead of a packet-out to flood the packet the controller creates a specific flow entry and sends an OFP flow-mod to the OVS. The OVS installs this flow and then sends the buffered packet out according to the matching flow.

In the above decoded packet capture of the OFP flow-mod for the ARP reply:
A: The OFP message type, "flow-mod", with a version of 0x01.
B: The specific match details used to create the flow.
C: The idle timeout: once the flow has been inactive for this long it is discarded, and the next packet of this flow is punted back to the POX OpenFlow controller, which then decides whether or not to install a matching flow on the OVS again.
D: The buffer ID of the buffered packet that caused this OFP packet.
E: The action type, in this case output to a switch port.
F: The action to take, in this case send all packets matching this flow, including the one in buffer ID 289, out of OVS port 1.
G: A summary of the OFP packet.
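
The flow-mod fixed fields decoded above (timeouts, buffer ID and so on) sit directly after the 40-byte ofp_match structure in the OpenFlow 1.0 wire format. A minimal sketch that unpacks them, skipping the match itself for brevity (constants from the spec, helper name mine):

    import struct

    OFPT_FLOW_MOD = 14
    OFPFC_ADD = 0  # command: add a new flow

    def parse_flow_mod(msg):
        version, msg_type, length, xid = struct.unpack_from("!BBHI", msg, 0)
        assert version == 0x01 and msg_type == OFPT_FLOW_MOD
        # After the 8-byte header and 40-byte match: cookie(8) command(2)
        # idle_timeout(2) hard_timeout(2) priority(2) buffer_id(4)
        # out_port(2) flags(2), then the action list.
        (cookie, command, idle, hard, priority,
         buffer_id, out_port, flags) = struct.unpack_from("!QHHHHIHH", msg, 48)
        return {
            "add": command == OFPFC_ADD,
            "idle_timeout": idle,       # field C: seconds idle before removal
            "buffer_id": buffer_id,     # field D: buffered packet to release
            "actions": msg[72:length],  # fields E/F: the action list
        }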

As this ARP reply passes through host1/OVS, a similar flow is installed on host1/OVS by the POX OpenFlow controller, and the ARP reply eventually reaches VM01.

 

[Image: ICMP echo req OFP packet in host1.png]

 

VM01 then sends the ICMP echo-request. As before, this echo-request reaches host1/OVS, and as there is no matching flow on host1/OVS, the OVS sends an OFP packet-in to the POX OpenFlow controller.

In the above decoded packet capture of the OFP packet-in for the ICMP echo-request:
A: The OFP message type, "packet-in", with a version of 0x01.
B: The buffer ID of the buffered packet that caused this OFP packet.
C: The OVS port the packet was received on, in this case port 2.
D: The reason the OFP message was generated; in this case no local flow entry matched this ICMP echo-request packet.
E: Frame data containing details of the received packet that can be used to construct a flow entry.
F: A summary of the OFP packet.

 

[Image: ICMP echo req OFP Flow Mod host1.png]

 

The POX OpenFlow controller receives the previous OFP packet from host1/OVS and, using the forwarding.l2_learning module, makes a policy decision. The controller creates a specific flow entry and sends an OFP flow-mod to the OVS. The OVS installs this flow and then sends the buffered packet out according to the matching flow.

In the above decoded packet capture of the OFP flow-mod for the ICMP echo-request:
A: The OFP message type, "flow-mod", with a version of 0x01.
B: The specific match details used to create the flow.
C: The idle timeout: once the flow has been inactive for this long it is discarded, and the next packet of this flow is punted back to the POX OpenFlow controller, which then decides whether or not to install a matching flow on the OVS again.
D: The buffer ID of the buffered packet that caused the OFP packet-in.
E: The action type, in this case output to a switch port.
F: The action to take, in this case send all packets matching this flow, including the one in buffer ID 290, out of OVS port 1.
G: A summary of the OFP packet.

The ICMP echo-request is then tunnelled over to host2/OVS using GRE, and host2/OVS goes through the same process and has a similar flow installed by the POX OpenFlow controller. A similar process then happens in reverse for the ICMP echo-reply.

All the flows installed by POX here are reactive flows: POX did not determine the full network topology and install proactive flows in advance. Instead it reacts to each packet-in, reactively installing flows into the OVS bridge br0 that originated the OFP packet-in.
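
To make this reactive behaviour concrete, here is a paraphrased sketch of the logic inside POX's forwarding.l2_learning component. The names follow the POX API, but this is an illustration of the idea rather than the actual source:

    import pox.openflow.libopenflow_01 as of

    mac_to_port = {}  # learned MAC address -> switch port

    def handle_packet_in(event):
        packet = event.parsed
        mac_to_port[packet.src] = event.port  # learn source MAC -> input port
        if packet.dst not in mac_to_port:
            # Unknown or broadcast destination (e.g. an ARP request):
            # flood the buffered packet with a packet-out.
            msg = of.ofp_packet_out()
            msg.buffer_id = event.ofp.buffer_id
            msg.in_port = event.port
            msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
            event.connection.send(msg)
        else:
            # Known destination: install a reactive flow matching this
            # packet and release the buffered packet along with it.
            msg = of.ofp_flow_mod()
            msg.match = of.ofp_match.from_packet(packet, event.port)
            msg.idle_timeout = 10  # discard the flow after 10 idle seconds
            msg.buffer_id = event.ofp.buffer_id
            msg.actions.append(of.ofp_action_output(port=mac_to_port[packet.dst]))
            event.connection.send(msg)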

To view the flows that have been installed into the OVS datapath for bridge br0, you can run the command "sudo ovs-dpctl dump-flows br0", which dumps the installed flows as shown in the screenshot below:

 

[Image: ovs-dpctl dump flows.png]

 

You can also run the command "sudo ovs-dpctl show br0 -s" to get port statistics, such as received/transmitted packets, as shown in the screenshot below:

[Image: get port counters.png]


That is the end of my blog post on mixed hypervisors with Open vSwitch and OpenFlow network virtualisation. I hope it was of some use, and as before I'm open to feedback and any corrections on anything I may have misinterpreted.

Thanks for reading.

Kind Regards
Kevin Barrass

