OpFlex Is Not An OpenFlow Killer

There has been a flurry of press around Cisco’s release of OpFlex. If you want the nitty gritty details, please read the IETF draft available here. What exactly is OpFlex? The IETF draft sums it up nicely:

The OpFlex architecture provides a distributed control system based on a declarative policy information model. The policies are defined at a logically centralized policy repository (PR) and enforced within a set of distributed policy elements (PE). The PR communicates with the subordinate PEs using the OpFlex Control protocol. This protocol allows for bidirectional communication of policy, events, statistics, and faults.

It’s clear that Cisco intends to make OpFlex an open standard together with its partners in the vendor, provider, and Open Source communities, and we’re working hard to make that a reality. On the Open Source front, I’m leading a team working on the code around this new OpFlex Policy Agent. We intend to fully Open Source the OpFlex Policy Agent under the Apache 2.0 license. We’re excited to build an open community around this work, and in the coming months we’ll be looking to get more companies and individuals involved as we move this out into the open. As you can imagine, starting an Open Source project from scratch takes time and planning, and we’re doing our best on all of these fronts.

What Exactly Is OpFlex Not?

So, now that you know what OpFlex is, what exactly is it not? Well, if you’ve read the article here, you may think it’s a, quote, “OpenFlow Killer.” While a headline such as this may make for good SEO, it’s not true. OpFlex is meant to be embedded in the device or host via the Policy Agent. This could be a virtual switch such as Open vSwitch, a whitebox switch, a load balancer, a firewall, or any other element which can enforce policy. The following diagram shows how this all fits together:

+––––––––––––––+                                
|              |                                
|  Policy      |                                
|  Authority   |                                
|              |                                
|              |                                
|              |                                
+––––––+–––––––+                                
       |                                        
       | Policy                                 
       | Messaging                              
       |                                        
       |                                        
+––––––+–––––––+                   +––––––––––––––+
|              |                   |              |
| OpFlex       |                   |  Host or     |
| Policy       +––––––––––––––––---+  Device      |
| Agent        |  Device           |              |
|              |  Programmability  |              |
|              |  Functions        |              |
+––––––––––––––+                   +––––––––––––––+

As you can see from the diagram, the key pieces of OpFlex are the Policy Authority and the Policy Agent. Policies defined in the Policy Authority are resolved asynchronously by the Policy Agent and then rendered, in the system-specific implementation, into the programmability functions provided by the device or system. Examples of programmability functions include the OpenFlow and OVSDB protocols supported by Open vSwitch, the APIs provided by Arista’s EOS, the onePK APIs supported on Cisco gear, or even a CLI interface to a firewall device. OpFlex doesn’t replace the programmability mechanisms provided by the system; it works in tandem with them to enforce the policy. This is the key piece which many have missed.

In Conclusion

We’re excited about the potential of OpFlex and the Policy Agent. OpFlex is not meant to replace Open vSwitch, nor any other host or system programmability layer. Open vSwitch is a great Open Source project to which Cisco is a contributor. We plan to continue working closely with the Open vSwitch community, as well as with projects such as OpenCompute, OpenDaylight, and the ONF. And just as these are all vibrant Open Source projects and communities, we hope to get the OpFlex Policy Agent into a similar state over the coming months and build a community for people who want to enable Open Source policy.

Workaround for ODL in Neutron

With the Icehouse release of Neutron impending, we’ve unfortunately uncovered a bug which affects ODL integration with Neutron. The bug was introduced by this commit, and the reality is that better CI for the ODL plugin would have caught it. I’m going to work on enabling that better CI in the near future. The workaround is to add the following to your nova.conf:

vif_plugging_timeout = 10
vif_plugging_is_fatal = False
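
If you’d rather script the change, here is a minimal sketch that appends the two options under [DEFAULT]. It works on a scratch copy of nova.conf for illustration; on a real system the file is typically /etc/nova/nova.conf, and the path may vary with your install.

```shell
# Illustration only: build a scratch nova.conf. On a real system, append the
# two options under the existing [DEFAULT] section of your actual nova.conf.
printf '[DEFAULT]\n' > nova.conf
cat >> nova.conf <<'EOF'
vif_plugging_timeout = 10
vif_plugging_is_fatal = False
EOF
# Confirm the options are present.
grep vif_plugging nova.conf
```

You’ll need to restart the Nova services for the change to take effect.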

I’m working on a proper fix right now. In the meantime, adding the lines above to your nova.conf will get you running again with ODL and OpenStack Neutron. Thanks to Simon Pasquier for finding this bug and noting the workaround as well!

Engineering Artifacts Themselves Are No Longer the Source of Sustainable Advantage and/or Innovation

My good friend Dave Meyer just wrote a great blog post at SDN Central available here. A key point which Dave makes is this:

Now, if we step back and ask what is implied by these three observations, you begin to see an important and profound macro trend: Engineering artifacts themselves are no longer the source of sustainable advantage and/or innovation. Rather, sustainable advantage is achieved through engineering systems, organizational culture, and the people and process that comprise the community (and/or organization). Open-source community, code, and associated engineering systems are coming together in a way that is fundamentally transforming the network industry.

Dave is spot on with his analysis here. We need to stop thinking of artifacts as only things such as installable software, firmware images, or even pieces of hardware. In the Open Source world, the community is an artifact. Vibrant mailing lists filled with discussion are artifacts. The tool chains are artifacts. The CI/CD systems are artifacts. All of these become equally as important as the final result, because each of them can substantially affect it, and without them working in harmony, you can’t get the final result at all.

When you grasp this concept, it really blows your mind.

OpenDaylight Integration with OpenStack has merged into Icehouse!

As OpenStack marches towards its Icehouse release this spring, some work I’ve been doing has finally merged upstream. This week, both the OpenDaylight ML2 MechanismDriver and devstack support for OpenDaylight merged upstream. This was a huge effort spanning the work of many people, and it is the first step in solidifying the integration of OpenDaylight with OpenStack Neutron; there are many additional things we can do. To get a first taste of running the two together, please see the video of the OpenDaylight Summit presentation Madhu Venugopal, Brent Salisbury, and I did in early February.

Taking OpenDaylight For a Test Run With OpenStack Neutron

Now that the patches have merged upstream, trying this out is extremely simple. If you’re running a single node, you can simply set up the Neutron portion of your local.conf as follows and OpenDaylight will be downloaded and run as a top-level devstack service:

# ODL WITH ML2
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
enable_service odl-server odl-compute

That’s all that’s required. When you run “stack.sh”, you will see a screen called “odl-server”, which is OpenDaylight, and Neutron will use OpenDaylight to satisfy the requirements of virtual tenant networks.

If you’re trying this with a multi-node setup, you can use the above for your controller. And on your compute nodes, try this addition to local.conf:

enable_service odl-compute

That will configure the host to use Open vSwitch and set it up to point at OpenDaylight. It will also ensure Nova is set up to use Neutron for networking API calls.

Running OpenDaylight Outside of devstack

If you’re running OpenDaylight outside of devstack, you can configure your control node like this:

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
enable_service odl-compute
ODL_MGR_IP=x.x.x.x

Just replace x.x.x.x in the ODL_MGR_IP line with the IP of your running OpenDaylight instance.

For compute hosts, all you need to add is this:

enable_service odl-compute
ODL_MGR_IP=x.x.x.x

Again, just replace x.x.x.x in the ODL_MGR_IP line with the IP of your running OpenDaylight instance. Your control and compute nodes will now utilize an external OpenDaylight controller.

Additional Configuration Values

There are some additional things you can configure for OpenDaylight. They are:

  • ODL_ARGS: Options to pass to OpenDaylight. The default is "-XX:MaxPermSize=384m". An example:
    ODL_ARGS="-Xmx1024m -XX:MaxPermSize=512m"
  • ODL_BOOT_WAIT: How long to sleep after starting OpenDaylight before proceeding with the rest of devstack. The default is 60 seconds. An example:
    ODL_BOOT_WAIT=70
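
Putting these together, the OpenDaylight-related portion of a single-node local.conf might look like the following (the ODL_ARGS and ODL_BOOT_WAIT values here are illustrative, not recommendations):

```
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
enable_service odl-server odl-compute
ODL_ARGS="-Xmx1024m -XX:MaxPermSize=512m"
ODL_BOOT_WAIT=70
```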

The Future of Open Source SDN

OpenDaylight is progressing at a very fast pace. The current release being worked on (Helium) is going to stabilize and scale the platform even more. With features like Group Policy being added, the future looks increasingly awesome for OpenDaylight. And now you can use it to scale your Neutron networks as well. How cool is that?

Vote for my OpenStack Talks!

I am very lucky to be a part of six great presentation submissions for the upcoming OpenStack Summit in Atlanta. The OpenStack Foundation uses voting to help decide which of these talks, panels, and tutorials will be scheduled. I would appreciate your vote for my submissions! I’ll highlight them below.

  • Using OpenDaylight Within An OpenStack Environment: This talk will be similar to the tutorial Madhu, Brent, Ryan, and I did at the OpenDaylight Summit, except it will be less tutorial focused and more presentation based. We’ll show you how to use the latest OpenDaylight Hydrogen release with OpenStack Neutron. I’m hoping to add Madhu and Brent to present alongside Dave Meyer and myself.
  • OpenStack Integration with OpenDaylight: State of the Union and Future Directions: In this talk, Dave Meyer and I will go over OpenDaylight for people who are new to this Open Source SDN controller. We’ll cover a bit of history and go over what’s supported in the Hydrogen release. And we’ll cover a bit about the future direction. This session will be great for people new to OpenDaylight.
  • The Future of OpenStack Networking: This is a panel discussion we put together to discuss future directions of OpenStack Networking. Some great people are proposed on this panel with me: Lew Tucker, who leads Cloud Computing at Cisco and is one of the sharpest minds in the industry; Dan Dumitriu, CEO of Midokura and a trailblazer in the SDN industry; Chris Wright, Technical Director of SDN at Red Hat, and one of the best known names in Open Source networking; and Nils Swart from Plexxi, who is leading their Open Source strategy. A great group of people, this will be a fun panel if it’s accepted.
  • Network Policy Abstractions in Neutron: There is work going on in Neutron to add a new Policy API layer, described here. This will make it easier for application developers and deployers to work with the networking subsystem in OpenStack, and this talk will go into the details. I’m happy to be joined by Mohammad Banikazemi from IBM and Stephen Wong from Midokura. Anyone interested in advanced policy discussions will want to attend this session should it be approved.
  • An Overview of Open Source Backends for Neutron: This talk will walk people through the options available in the Open Source world for OpenStack Neutron networking. Salvatore Orlando, who is one of the original Neutron core team members, will be my co-presenter, should this talk get approved. This talk would be great for people looking at deployment options which are pure open source.
  • Hackathon Syllabus: This is a great talk submitted by Sean Roberts from Yahoo!, and we plan to cover how to hold a Hackathon Style user group meeting. We’ll be joined by Colin McNamara from Nexus IS and Mark Voelker from Cisco. This will be great for people who are looking at forming User Groups focused on hacking!

Please consider any or all of my talks above. I appreciate your votes and look forward to continuing to share my knowledge around Open Source networking!

TimeMachine on a QNAP TS-659

For the last few months, TimeMachine backups have been failing between my Macs and a QNAP TS-659 NAS I have. The NAS is running firmware 4.0.3, and I would consistently get a “Cannot connect” error when trying to connect. I was able to work around this with the following command run manually:

sudo tmutil setdestination -ap afp://TimeMachine@192.168.64.38/TMBackup

Running that in a terminal window allowed me to then use the GUI to select that disk for backups, and things started rolling again. In the above, make sure to replace “192.168.64.38” with the IP or hostname of your QNAP. It will ask you for your password when you run the command. For reference, I found all of this on the forum thread here.

Getting Started With OpenDaylight and OpenStack

If you’re a fan of networking, you are no doubt very excited by all of the recent innovation in the industry. And there is no larger area of innovation in networking at the moment than Open Source networking. Two of the projects at the forefront of Open Source networking innovation are OpenStack Neutron and OpenDaylight. OpenStack Neutron is driving an API around networking for Infrastructure as a Service clouds, and has been very successful at driving mindshare in this area. There are already a large number of plugins and ML2 MechanismDrivers for Neutron. However, so far there is no OpenDaylight integration with OpenStack, at least upstream. I am pleased to announce that a team of us is working on making this happen. We have a blueprint filed, and we are actively working on the support in OpenDaylight required to back the Neutron APIs. In this blog post, I’m going to show you how to take what we currently have for a test run and try it out yourself.

OpenDaylight Integration with OpenStack: The Details

OpenDaylight is a highly scalable controller written in Java, designed from the start to be modular. Perhaps the best way to understand the modular nature of OpenDaylight is to look at an architecture diagram of it:

OpenDaylight Hydrogen Release Architecture Diagram

You can see all the pieces of OpenDaylight, and there are quite a few. Because of its modular nature, OpenDaylight makes heavy use of the OSGi framework. I’m not going to go into extreme detail on how this works, but suffice it to say it allows anyone to write a bundle which can run and interact with other bundles in OpenDaylight.

As part of this, there exist a few bundles which are relevant to the OpenStack integration efforts:

Each of those bundles provides a necessary component of the OpenStack integration. The NeutronAPIService provides an abstraction of the Neutron APIs into OpenDaylight. It caches all of the Neutron objects inside OpenDaylight, providing access to this information to anything in OpenDaylight which requires it. The OVSDB and OpenFlow OSGi bundles provide the code which actually programs things on each compute host: they allow for the creation and deletion of tunnel ports, flow programming for ports as they come and go, and bridge creation and deletion on the host.

The main benefit of the above is that compute hosts no longer need an Open vSwitch Agent running on them: the combination of OpenFlow and OVSDB provides functionality equivalent to the Open vSwitch Agent.

OpenDaylight and OpenStack: Getting Started

To test out the latest OpenDaylight Modular Layer2 MechanismDriver, you will need the following:

  • A machine to run the OpenDaylight Controller
  • A machine to run the OpenStack control software
  • At least one machine to run the OpenStack Compute service to run virtual machines

Now, you can combine some of the things above, and you should most certainly run all of the above as virtual machines. I personally run everything as virtual machines on VMware Fusion: one VM in which I run OpenDaylight, one VM in which I run the OpenStack control software, and three more VMs in which I run OpenStack compute services. A fairly minimal setup would be 3 VMs, however: one to run the OpenDaylight controller, one to run OpenStack control and compute services, and another to run only OpenStack compute services.

In either case, your topology will look very similar to the following diagram:

OpenStack and OpenDaylight Integration

OpenDaylight and OpenStack: Building and Installing OpenDaylight

Let’s get started with the actual configuration of the system. The first piece is your OpenDaylight VM. To build and install it, follow the steps below. I should note that a much more complete view of building the controller is on the wiki page here; the instructions below are mostly meant to get you going quickly without having to read that wiki page in detail.

mkdir ~/odl
cd ~/odl
git clone https://git.opendaylight.org/gerrit/p/controller.git
cd controller/opendaylight/distribution/opendaylight/
mvn clean install
cd ~/odl
git clone https://git.opendaylight.org/gerrit/p/ovsdb.git
cd ovsdb

At this point, you can cut and paste the script below as “build_ovsdb.sh” and use that to build OVSDB and copy the bundles over to the controller:

#!/bin/sh
git pull
cd neutron
echo "Refreshing ovsdb/neutron.."
pwd
mvn clean install
cd ../northbound/ovsdb/
echo "Refreshing northbound/ovsdb.. "
pwd
mvn clean install
cd ../../ovsdb
echo "Refreshing ovsdb/ovsdb.."
pwd
mvn clean install
cd ..
cp ~/odl/ovsdb/neutron/target/ovsdb.neutron-0.5.0-SNAPSHOT.jar ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins/
cp ~/odl/ovsdb/northbound/ovsdb/target/ovsdb.northbound-0.5.0-SNAPSHOT.jar ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins/
cp ~/odl/ovsdb/ovsdb/target/ovsdb-0.5.0-SNAPSHOT.jar ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins/
echo "done!"

Once you’ve created the script, simply make sure it has execute permissions (chmod +x build_ovsdb.sh) and run it; you will then have the OVSDB bundles built and installed into the plugins directory. To verify they are there, look in the following location:

  • odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins

The next step is to set the “of.address” variable in the “configuration/config.ini” file. This file is relative to the odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage directory. Fire up vi and add the management IP address of your ODL instance as the value of of.address.
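
If you prefer to script this edit, a minimal sketch looks like the following. It works on a stand-in config.ini for illustration, and the IP is a placeholder for your ODL instance’s management address; edit the real file under the distribution.opendaylight-osgipackage directory instead.

```shell
ODL_IP=192.168.64.131   # placeholder: your ODL instance's management IP
# Stand-in config.ini for illustration; the real file lives in
# distribution.opendaylight-osgipackage/configuration/.
printf 'of.address=\n' > config.ini
# Set (or overwrite) the of.address line with the management IP.
sed -i "s/^of.address=.*/of.address=${ODL_IP}/" config.ini
grep '^of.address=' config.ini
```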

Now it’s time to fire up your controller! To do that, execute the following:

cd ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight
./run.sh

Once the controller is running, you will want to disable the SimpleForwarding Application, so do the following:

  • In the OSGI console, run “lb | grep simple” to find the bundle ID of the simpleforwarding application.
  • Run “stop <bundle ID>” to disable simpleforwarding.
  • Run “lb | grep simple” to verify it is in the “Resolved” state.

The entire interaction looks like this:

osgi> lb | grep simple
132|Active    | 4|samples.simpleforwarding (0.4.1.SNAPSHOT)
true
osgi> stop 132
osgi> lb | grep simple
132|Resolved  | 4|samples.simpleforwarding (0.4.1.SNAPSHOT)
true
osgi>

OpenStack and OpenDaylight: Readying the devstack nodes

At this point, you have an OpenDaylight controller running. Now it’s time to fire up your devstack nodes. You will need at least two virtual machines ready for this. They can run anything which devstack supports. I am an ardent user of Fedora Linux, so that’s what I use, but Ubuntu works fine as well. Note that Ubuntu 12.04 LTS ships OVS 1.4, which is quite old; Fedora 19 uses a much newer version of OVS.

One thing to note is that you should make sure you have passwordless “sudo” access set up for the account you’re running devstack as.

So, the next thing to do on each node is to checkout devstack:

cd ~/
git clone git://github.com/openstack-dev/devstack.git
cd devstack
git remote add opendaylight https://github.com/CiscoSystems/devstack.git
git fetch opendaylight
git checkout opendaylight

Run the above on each devstack node. It will check out the custom OpenDaylight devstack branch. Now to configure your local.conf files.

On the control node, your local.conf will look like the below:

[[local|localrc]]
LOGFILE=stack.sh.log
#SCREEN_LOGDIR=/opt/stack/data/log
#LOG_COLOR=False
#OFFLINE=True
RECLONE=yes

# Only include the below two lines if you are running on Fedora
disable_service rabbit
enable_service qpid
disable_service n-cpu
enable_service n-cond
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service quantum
enable_service tempest

Q_HOST=$SERVICE_HOST
HOST_IP=192.168.64.193

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
ENABLE_TENANT_TUNNELS=True
NEUTRON_REPO=https://github.com/CiscoSystems/neutron.git
NEUTRON_BRANCH=odl_ml2

VNCSERVER_PROXYCLIENT_ADDRESS=192.168.64.193
VNCSERVER_LISTEN=0.0.0.0

HOST_NAME=km-dhcp-64-193.kmestery.cisco.com
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=192.168.64.193

FLOATING_RANGE=192.168.210.0/24
PUBLIC_NETWORK_GATEWAY=192.168.75.254
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
QPID_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True

[ml2_odl]
url=http://192.168.64.131:8080/controller/nb/v2/neutron
username=admin
password=admin

You should note in the above you will want to change the following:

  • HOST_IP: This is the management IP of the control host itself.
  • VNCSERVER_PROXYCLIENT_ADDRESS: The management IP address of the control node itself.
  • HOST_NAME: The host name of the control node.
  • SERVICE_HOST: The management IP of the control node.
  • The “url” parameter in the ml2_odl section near the bottom: make sure the url and credentials match your OpenDaylight configuration. If you didn’t change the default username and password for ODL, you can leave those bits alone.

Once you have that done, the next step is to set up your local.conf for the compute nodes:

[[local|localrc]]
LOGFILE=stack.sh.log
#LOG_COLOR=False
#SCREEN_LOGDIR=/opt/stack/data/log
#OFFLINE=true
RECLONE=yes

disable_all_services
enable_service neutron nova n-cpu quantum n-novnc qpid

HOST_NAME=km-dhcp-64-197.kmestery.cisco.com
HOST_IP=192.168.64.197
SERVICE_HOST_NAME=km-dhcp-64-193.kmestery.cisco.com
SERVICE_HOST=192.168.64.193
VNCSERVER_PROXYCLIENT_ADDRESS=192.168.64.197
VNCSERVER_LISTEN=0.0.0.0

FLOATING_RANGE=192.168.210.0/24

NEUTRON_REPO=https://github.com/CiscoSystems/neutron.git
NEUTRON_BRANCH=odl_ml2
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,linuxbridge
ENABLE_TENANT_TUNNELS=True
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
QPID_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True

[ml2_odl]
url=http://192.168.64.131:8080/controller/nb/v2/neutron
username=admin
password=admin

Again, the parts to edit above on the compute nodes are:

  • HOST_NAME: The host name of each compute node.
  • HOST_IP: The management IP address of each host.
  • SERVICE_HOST_NAME: The hostname of the control node.
  • SERVICE_HOST: The management IP of the control node.
  • ml2_odl: Modify the IP address there for the ODL controller.

Each local.conf file should be saved in the ~/devstack directory on each control and/or compute host.

Now you should be able to run “stack.sh” on all of the nodes (control and each compute) by doing this:

  • cd ~/devstack
  • ./stack.sh

Once that completes, you should have a functioning OpenStack setup with OpenDaylight.

Possible Issues With devstack on Fedora

One possible issue you may hit with a fresh VM on Fedora is mysql errors: you will see keystone errors and mysql access errors in the stack.sh run. To get around this, follow the workaround listed in this post here; it’s worked for me every time I’ve hit this error running devstack on Fedora. One other issue with Fedora is that the latest devstack fails to kill all the nova processes when you run “unstack.sh”. To work around this, simply run the following after “unstack.sh”:

  • killall nova-api nova-cert nova-scheduler nova-consoleauth nova-conductor

OpenStack and OpenDaylight: Verifying The Install

At this point, you should have the entire system up and running. To verify this, you can do the following:

  • Point your web browser at the OpenStack Horizon GUI:
    • http://<control node IP>/auth/login/
    • Login using “admin/admin” and you can see your OpenStack install.
  • Point your web browser at the OpenDaylight GUI:
    • http://<odl IP>:8080/
    • Login using “admin/admin”

You can play around in the GUIs, launch VMs, and so on. As you launch VMs, you will see ODL create tunnel ports and links between compute hosts, which become visible with a refresh of the OpenDaylight GUI.
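
You can also verify things from the command line by querying the same northbound endpoint the ML2 driver uses (the “url” value in the ml2_odl section of local.conf). A quick sketch, with a placeholder IP, looks like this:

```shell
# Build the northbound URL that Neutron's ML2 driver talks to.
ODL_IP=192.168.64.131   # placeholder: match the ml2_odl url in your local.conf
NB_URL="http://${ODL_IP}:8080/controller/nb/v2/neutron"
echo "${NB_URL}/networks"
# With the controller running, the following should return the Neutron
# networks ODL knows about (default credentials are admin/admin):
#   curl -s -u admin:admin "${NB_URL}/networks"
```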

OpenStack and OpenDaylight: Getting Help

The most appropriate place to get help at this early stage is on #opendaylight-ovsdb on Freenode. A long list of OpenStack Neutron and OpenDaylight developers hang out there and can provide help. Besides myself (IRC nick “mestery”), you can also expect to find the following people online:

  • Madhu Venugopal (IRC Nick “Madhu”)
  • Brent Salisbury (IRC Nick “networkstatic”)
  • Keith Burns (IRC Nick “alagalah”)
  • Florian Otel (IRC Nick “FlorianOtel”)

You can ping any of us and we should be able to help you debug any issues. Florian in particular has some VM images which may expedite the process above for folks trying this out for the first time. The instructions above were meant to walk through all of the steps necessary to get this up and running from scratch.

OpenStack and OpenDaylight: What’s Next

In a future post, I will walk through debugging this setup and using it to see flows and how the different pieces interact. In particular, I’ll show exactly what happens when networks, subnets, ports, routers, and other OpenStack Neutron API objects are created, and how OpenDaylight handles programming them onto each host.

LinuxCon Wrapup

Last week I was in New Orleans for LinuxCon. This was my first LinuxCon event, and it was pretty awesome. The event was co-located with a smattering of other Open Technology events as well:

As you can see, that’s a lot of events to pack into a single week. I was focused on participating in the LinuxCon keynotes and sessions for the early part of the week. I also worked the OpenDaylight and OpenStack booths for a while on both Monday and Tuesday. It was great to meet so many people and to have amazing conversations around all of the spectacular Open Source technologies I have the pleasure of being involved with. People were very interested in talking about Open Source SDN technologies and how they relate to cloud environments. I had many great discussions around the integration of OpenDaylight with OpenStack Neutron as well.

Ian and Kyle with the Linux Penguin


Panels and Presentations

Wednesday was the day I was on the OpenDaylight panel, which opened the OpenDaylight Mini-Summit. I was joined by Chris Wright, Dave Meyer, James Bottomley, and Chiradeep Vittal. The topics during the panel were pretty awesome, and the audience had lots of great questions. I find I really enjoy panel discussions because they allow for maximum audience participation and deliver exactly the topics the audience wants to discuss. A great summary of the panel by the Enterprise Networking Planet site can be found here. The key takeaway for me, as captured by the summary, is this:

  • “We need to start thinking in terms of applications,” Mestery said. “But clearly we have a lot of work to do.”

OpenDaylight Panel

I also presented at the Linux Plumbers Conference on the topics of LISP and NSH in Open vSwitch; the slides are now on Slideshare. My co-presenter for this discussion was Vina Ermagan. This was a very technical discussion which took place during the net-virt track of the Plumbers Conference, and overall it was very well received. Not long afterwards, the work we’ve been doing on NSH in OVS was posted to the OVS mailing list by Pritesh Kothari, another member of my team at Cisco. We believe the work around NSH and LISP is just another example of open protocols being contributed back into Linux and Open vSwitch.

Podcasts

While at LinuxCon, I had the pleasure of being on two podcasts with the wonderful hosts of The Cloudcast, Aaron Delp and Brian Gracely. Aaron and Brian do a great job with The Cloudcast, and I felt the two podcasts were a great way to spread the word about both OpenStack Neutron and OpenDaylight. The podcasts I was a part of were:

  • OpenDaylight and SDN Evolution: This podcast was a great venue for myself, Chris Wright, and Brent Salisbury to talk about OpenDaylight for people who may not be familiar with it. We also talked a lot about how it will likely integrate into OpenStack Neutron; that work is ongoing right now. Below is a video of what this integration will look like:

  • OpenStack Neutron and SDN: In this podcast, the discussion revolved around what to expect from Neutron in the Havana release of OpenStack, along with the future of Neutron post-Havana. I was joined by Mark McClain, PTL for OpenStack Neutron, and Ian Wells, my colleague from Cisco.

OpenDaylight Roundtable

Summary

Overall, LinuxCon was a great event. Spending time with people who you normally interact with on IRC, mailing lists, and conference calls is always a good thing. Hashing out complex technical problems is always somehow easier when it’s done over beers at the end of a long day. I look forward to attending future LinuxCons!

Vote for my OpenStack Presentations for the Hong Kong Summit!

Voting for the OpenStack Summit is now open! To vote for OpenStack Presentations for the Summit in Hong Kong, use the link provided here. The presentations being voted on now are for the conference portion of the event. There are a lot of great presentations out there. I’d like to highlight the ones I am lucky enough to be a part of here.

  • OpenStack Neutron Modular Layer 2 Plugin Deep Dive: This is a presentation Robert Kukura from Red Hat and I are putting together. The Modular Layer 2 (ML2) Neutron plugin is new for the Havana release, and its main feature is the ability to run multiple MechanismDrivers at the same time. This talk will go into detail on ML2, including deploying it, running Neutron with it, and how it works with multiple network technologies underneath. Bob and I are hoping to do a live demo of a deployment with multiple MechanismDrivers as well!
  • OpenDaylight: An Open Source SDN for your OpenStack Cloud: I am very excited about this presentation. OpenDaylight is a brand new Open Source SDN controller. I’m putting this talk together with some great people from the Open Source SDN community: Chris Wright from Red Hat, Anees Shaikh from IBM, and Stephan Baucke from Ericsson. We hope to go over some background on OpenDaylight, and then talk about how we see it fitting into OpenStack deployments. This is a session not to miss!
  • Agile Network With OpenStack: This is a presentation I’m putting together with Rohit Agarwalla, and it will include information on the existing Cisco Neutron plugin and how it works with Nexus switches, Nexus 1000v, CSR 1000v, Dynamic Fabric Automation, and onePK. Rohit will be giving a demo of how the Cisco plugin can help provide automation and ease deployment of your OpenStack cloud.
  • Federating OpenStack User Groups: This is a panel discussion with my good friends and fellow OpenStack User Group founders Mark Voelker, Shannon McFarland, Colin McNamara, and Sean Roberts. This will be a continuation panel discussion from the Portland Summit and will focus on how User Groups can collaborate to extend the reach of OpenStack by sharing content, speakers, and other materials.
  • OpenStack Associate Engineer – Basic Install and Operate Workshop: This is a session I am proud to be a part of organized by Colin McNamara. It includes myself, Mark Voelker, Sean Roberts, and Shannon McFarland as well, and will be a two day course that will help to equip trainers with the skills necessary to install, deploy, and manage a three node OpenStack installation. This is an exciting offering and we hope it opens the doors for people new to OpenStack!

So please go ahead and vote for sessions for the upcoming Summit. There are a lot of great presentations out there, and I hope you’ll vote for the ones I’m a part of, in addition to many others!

I am a member of the OpenStack Neutron Core Team!

So, it’s now official: I am a member of the OpenStack Neutron core team. I was voted onto the team last week, and it was made official at the weekly Neutron meeting this past Monday. I will initially focus on the Open Source plugins (Open vSwitch, LinuxBridge) and the Modular Layer 2 (ML2) plugin. I want to thank Mark McClain for nominating me! The OpenStack Neutron core team is a great group of developers to work with, and I’m very excited to continue contributing to OpenStack Neutron going forward!