Vote for my OpenStack Talks!

I am very lucky to be a part of six great presentation submissions for the upcoming OpenStack Summit in Atlanta. The OpenStack Foundation uses voting to help decide which of these talks, panels, and tutorials will be scheduled. I would appreciate your vote for my submissions! I’ll highlight them below.

  • Using OpenStack Within An OpenStack Environment: This talk will be similar to the tutorial Madhu, Brent, Ryan and I did at the OpenDaylight Summit, except it will be less tutorial-focused and more presentation-based. We’ll show you how to use the latest OpenDaylight Hydrogen release with OpenStack Neutron. I’m hoping to add Madhu and Brent to present alongside Dave Meyer and myself.
  • OpenStack Integration with OpenDaylight: State of the Union and Future Directions: In this talk, Dave Meyer and I will introduce OpenDaylight for people who are new to this Open Source SDN controller. We’ll cover a bit of history, what’s supported in the Hydrogen release, and where the project is headed. This session will be great for people new to OpenDaylight.
  • The Future of OpenStack Networking: This is a panel discussion we put together to discuss future directions of OpenStack Networking. Some great people are proposed on this panel with me: Lew Tucker, who leads Cloud Computing at Cisco and is one of the sharpest minds in the industry; Dan Dumitriu, CEO of Midokura and a trailblazer in the SDN industry; Chris Wright, Technical Director of SDN at Red Hat, and one of the best known names in Open Source networking; and Nils Swart from Plexxi, who is leading their Open Source strategy. A great group of people, this will be a fun panel if it’s accepted.
  • Network Policy Abstractions in Neutron: There is work going on in Neutron to add a new Policy API layer, which will make it easier for application developers and deployers to work with the networking subsystem in OpenStack. This talk will go into the details of that work. I’m happy to be joined by Mohammad Banikazemi from IBM and Stephen Wong from Midokura. Anyone interested in advanced policy discussions will want to attend should it be approved.
  • An Overview of Open Source Backends for Neutron: This talk will walk people through the options available in the Open Source world for OpenStack Neutron networking. Salvatore Orlando, who is one of the original Neutron core team members, will be my co-presenter, should this talk get approved. This talk would be great for people looking at deployment options which are pure open source.
  • Hackathon Syllabus: This is a great talk submitted by Sean Roberts from Yahoo!, and we plan to cover how to hold a Hackathon Style user group meeting. We’ll be joined by Colin McNamara from Nexus IS and Mark Voelker from Cisco. This will be great for people who are looking at forming User Groups focused on hacking!

Please consider any or all of my talks above. I appreciate your votes and look forward to continuing to share my knowledge around Open Source networking!

TimeMachine on a QNAP TS-659

For the last few months, TimeMachine has been failing between my Macs and a QNAP TS-659 NAS I have. The NAS is running firmware 4.0.3, and I would consistently get a “Cannot connect” error when trying to connect. I was able to work around this by running the following command manually:

sudo tmutil setdestination -ap afp://TimeMachine@192.168.64.38/TMBackup

Running that in a terminal window allowed me to then use the GUI to select that disk for backups, and things started rolling again. In the above, make sure to replace “192.168.64.38” with the IP or hostname of your QNAP. You will be asked for your password when you run the command. I found all of this on the forum thread here, for reference.
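If you’d rather not hand-type the URL, a small sketch like this builds the command from variables. The user “TimeMachine” and share “TMBackup” are just the names from my QNAP setup; substitute your own:

```shell
#!/bin/sh
# Build the tmutil command from your QNAP's details (example values shown).
NAS_USER=TimeMachine              # AFP user configured on the QNAP
NAS_HOST=192.168.64.38            # QNAP IP address or hostname
SHARE=TMBackup                    # Time Machine share name
DEST="afp://${NAS_USER}@${NAS_HOST}/${SHARE}"
# Echoed as a dry run; remove the echo to actually set the destination.
echo sudo tmutil setdestination -ap "$DEST"
```

After setting the destination for real, `tmutil destinationinfo` should list the QNAP share.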

Getting Started With OpenDaylight and OpenStack

If you’re a fan of networking, you are no doubt excited by all of the recent innovation in the industry, and there is no larger area of innovation in networking at the moment than Open Source networking. Two of the projects at the forefront are OpenStack Neutron and OpenDaylight. OpenStack Neutron is driving an API around networking for Infrastructure as a Service clouds, and has been very successful at driving mindshare in this area. A large number of plugins and ML2 MechanismDrivers for Neutron exist already. However, so far there is no OpenDaylight integration with OpenStack, at least upstream. I am pleased to announce that a team of us is working on making this happen. We have a blueprint filed, and we are actively working on the support in OpenDaylight required for the Neutron APIs. In this blog post, I’m going to show you how to take what we currently have for a test run and try it out yourself.

OpenDaylight Integration with OpenStack: The Details

OpenDaylight is a highly scalable controller written in Java, designed from the start to be modular. Perhaps the best way to understand the modular nature of OpenDaylight is to look at an architecture diagram of it:

OpenDaylight Hydrogen Release

OpenDaylight Hydrogen Release Architecture Diagram

You can see all the pieces of OpenDaylight, and there are quite a few. Because of its modular nature, OpenDaylight makes heavy use of the OSGi framework. I’m not going to go into the details of how this works, but suffice it to say it allows anyone to write a bundle which can run and interact with other bundles in OpenDaylight.

As part of this, there exist a few bundles which are relevant to the OpenStack integration effort.

Each of those bundles provides a necessary component of the OpenStack integration. The NeutronAPIService provides an abstraction of the Neutron APIs inside OpenDaylight. It caches all of the Neutron objects in OpenDaylight, providing access to this information to anything in OpenDaylight which requires it. The OVSDB and OpenFlow OSGi bundles provide the code which actually programs things on each compute host. They allow for the creation and deletion of tunnel ports, flow programming for ports as they come and go, and bridge creation and deletion on the host.

The main benefit of the above is that compute hosts no longer need an Open vSwitch agent running on them. The combination of OpenFlow and OVSDB provides functionality equivalent to that of the Open vSwitch agent.
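Conceptually, what this boils down to on each host is pointing the local Open vSwitch at OpenDaylight for both management (OVSDB) and flow programming (OpenFlow). The sketch below is illustrative only: the IP is an example, 6640 and 6633 are the usual default ports, and the devstack integration described later performs the equivalent for you:

```shell
#!/bin/sh
# Point a host's Open vSwitch at the OpenDaylight controller (a sketch).
ODL_IP=192.168.64.131                      # example ODL management IP
OVSDB_CMD="ovs-vsctl set-manager tcp:${ODL_IP}:6640"
OF_CMD="ovs-vsctl set-controller br-int tcp:${ODL_IP}:6633"
# Echoed as a dry run; remove the echoes to apply on a real compute host.
echo "$OVSDB_CMD"
echo "$OF_CMD"
```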

OpenDaylight and OpenStack: Getting Started

To test out the latest OpenDaylight Modular Layer 2 MechanismDriver, you will need the following:

  • A machine to run the OpenDaylight Controller
  • A machine to run the OpenStack control software
  • At least one machine to run the OpenStack Compute service to run virtual machines

Now, you can combine some of the things above, and you can most certainly run all of them as virtual machines. I personally run everything as virtual machines on VMware Fusion: one VM in which I run OpenDaylight, one VM in which I run the OpenStack control software, and three other VMs in which I run OpenStack compute services. A fairly minimal setup would be three VMs, however: one to run the OpenDaylight controller, one to run OpenStack control and compute services, and another to run only OpenStack compute services.

In either case, your topology will look very similar to the following diagram:

OpenStack+OpenDaylight

OpenStack and OpenDaylight Integration

OpenDaylight and OpenStack: Building and Installing OpenDaylight

Let’s get started with the actual configuration of the system. The first piece is your OpenDaylight VM. To build and install it, follow the steps below. I should note that a much more detailed view of building the controller is on the wiki page here; the instructions below are mostly meant to get you going quickly without having to read that wiki page in detail.

mkdir ~/odl
cd ~/odl
git clone https://git.opendaylight.org/gerrit/p/controller.git
cd controller/opendaylight/distribution/opendaylight/
mvn clean install
cd ~/odl
git clone https://git.opendaylight.org/gerrit/p/ovsdb.git
cd ovsdb

At this point, you can cut and paste the script below as “build_ovsdb.sh” and use that to build OVSDB and copy the bundles over to the controller:

#!/bin/sh
# Build the OVSDB bundles and copy them into the controller's plugins directory.
# Run this from the ~/odl/ovsdb directory.
set -e
git pull
cd neutron
echo "Refreshing ovsdb/neutron.."
pwd
mvn clean install
cd ../northbound/ovsdb/
echo "Refreshing northbound/ovsdb.. "
pwd
mvn clean install
cd ../../ovsdb
echo "Refreshing ovsdb/ovsdb.."
pwd
mvn clean install
cd ..
cp ~/odl/ovsdb/neutron/target/ovsdb.neutron-0.5.0-SNAPSHOT.jar ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins/
cp ~/odl/ovsdb/northbound/ovsdb/target/ovsdb.northbound-0.5.0-SNAPSHOT.jar ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins/
cp ~/odl/ovsdb/ovsdb/target/ovsdb-0.5.0-SNAPSHOT.jar ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins/
echo "done!"

Once you’ve created the script, make sure it has execute permissions (chmod +x build_ovsdb.sh) and run it; the OVSDB bundles will be built and installed into the plugins directory. To verify they are there, look in the following location:

  • odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/plugins

The next step is to modify the “of.address” variable in the “configuration/config.ini” file. This file is relative to the odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage directory. Fire up vi and set of.address to the management IP address of your ODL instance.
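If you’d rather not open an editor, a sed one-liner works too. This is a sketch: the IP is an example, and the demo writes a scratch config.ini so it can run anywhere; on a real tree, skip the two setup lines and point it at the actual file:

```shell
#!/bin/sh
ODL_IP=192.168.64.131                       # your ODL management IP (example)
CONFIG=configuration/config.ini
# Scratch copy so the demo is self-contained; skip these two lines on a real tree.
mkdir -p configuration
printf '#of.address=\nof.listenPort=6633\n' > "$CONFIG"
# Set (or uncomment and set) the of.address line in place:
sed -i "s|^#\{0,1\}of\.address=.*|of.address=${ODL_IP}|" "$CONFIG"
grep '^of.address=' "$CONFIG"
```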

Now it’s time to fire up your controller! To do that, execute the following:

cd ~/odl/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight
./run.sh

Once the controller is running, you will want to disable the SimpleForwarding application, so do the following:

  • In the OSGI console, run “lb | grep simple” to find the bundle ID of the simpleforwarding application.
  • Run “stop <bundle ID>” to disable simpleforwarding.
  • Run “lb | grep simple” to verify it is in the “Resolved” state.

The entire thing looks like this:

osgi> lb | grep simple
132|Active | 4|samples.simpleforwarding (0.4.1.SNAPSHOT)
true
osgi> stop 132
osgi> lb | grep simple
132|Resolved | 4|samples.simpleforwarding (0.4.1.SNAPSHOT)
true
osgi>

OpenStack and OpenDaylight: Readying the devstack nodes

At this point, you have an OpenDaylight controller running. Now it’s time to fire up your devstack nodes. You will need at least two virtual machines ready for this. They can run anything which devstack supports. I am an ardent user of Fedora Linux, so that’s what I use, but Ubuntu works fine as well. Note that Ubuntu 12.04 LTS ships OVS 1.4, which is quite old; Fedora 19 uses a much newer version of OVS.

One thing to note is that you should make sure you have passwordless “sudo” access set up for the account you’re running devstack as.

So, the next thing to do on each node is to check out devstack:

cd ~/
git clone git://github.com/openstack-dev/devstack.git
cd devstack
git remote add opendaylight https://github.com/CiscoSystems/devstack.git
git fetch opendaylight
git checkout opendaylight

Run the above on each devstack node. It will check out the custom OpenDaylight devstack branch. Now to configure your local.conf files.

On the control node, your local.conf will look like the below:

[[local|localrc]]
LOGFILE=stack.sh.log
#SCREEN_LOGDIR=/opt/stack/data/log
#LOG_COLOR=False
#OFFLINE=True
RECLONE=yes

# The below two lines are only needed if you are running on Fedora
disable_service rabbit
enable_service qpid
disable_service n-cpu
enable_service n-cond
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service quantum
enable_service tempest

Q_HOST=$SERVICE_HOST
HOST_IP=192.168.64.193

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,logger
ENABLE_TENANT_TUNNELS=True
NEUTRON_REPO=https://github.com/CiscoSystems/neutron.git
NEUTRON_BRANCH=odl_ml2

VNCSERVER_PROXYCLIENT_ADDRESS=192.168.64.193
VNCSERVER_LISTEN=0.0.0.0

HOST_NAME=km-dhcp-64-193.kmestery.cisco.com
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=192.168.64.193

FLOATING_RANGE=192.168.210.0/24
PUBLIC_NETWORK_GATEWAY=192.168.75.254
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
QPID_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True

[ml2_odl]
url=http://192.168.64.131:8080/controller/nb/v2/neutron
username=admin
password=admin

You should note that in the above you will want to change the following:

  • HOST_IP: The management IP of the control host itself.
  • VNCSERVER_PROXYCLIENT_ADDRESS: The management IP address of the control node itself.
  • HOST_NAME: The host name of the control node.
  • SERVICE_HOST: The management IP of the control node.
  • The “url” parameter in the ml2_odl section near the bottom: Make sure the URL and credentials match your OpenDaylight configuration. If you didn’t change the default username and password for ODL, you can leave those bits alone.
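Before running stack.sh, it can save time to confirm the URL from your ml2_odl section actually answers. A hedged sketch (the IP and admin/admin credentials are the example values from above; /controller/nb/v2/neutron/networks is the networks endpoint of the northbound Neutron API):

```shell
#!/bin/sh
ODL_IP=192.168.64.131                       # ODL management IP (example)
ODL_USER=admin
ODL_PASS=admin
NEUTRON_NB="http://${ODL_IP}:8080/controller/nb/v2/neutron/networks"
# Echoed as a dry run; remove the echo to query the controller for real.
echo curl -u "${ODL_USER}:${ODL_PASS}" "$NEUTRON_NB"
```

An empty network list in the response is fine at this stage; an HTTP error or timeout means the controller or bundles need attention before devstack will work.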

Once you have that done, the next step is to setup your local.conf for the compute nodes:

[[local|localrc]]
LOGFILE=stack.sh.log
#LOG_COLOR=False
#SCREEN_LOGDIR=/opt/stack/data/log
#OFFLINE=true
RECLONE=yes

disable_all_services
enable_service neutron nova n-cpu quantum n-novnc qpid

HOST_NAME=km-dhcp-64-197.kmestery.cisco.com
HOST_IP=192.168.64.197
SERVICE_HOST_NAME=km-dhcp-64-193.kmestery.cisco.com
SERVICE_HOST=192.168.64.193
VNCSERVER_PROXYCLIENT_ADDRESS=192.168.64.197
VNCSERVER_LISTEN=0.0.0.0

FLOATING_RANGE=192.168.210.0/24

NEUTRON_REPO=https://github.com/CiscoSystems/neutron.git
NEUTRON_BRANCH=odl_ml2
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight,linuxbridge
ENABLE_TENANT_TUNNELS=True
Q_HOST=$SERVICE_HOST

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
QPID_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]
[agent]
minimize_polling=True

[ml2_odl]
url=http://192.168.64.131:8080/controller/nb/v2/neutron
username=admin
password=admin

Again, the parts to edit above on the compute nodes are:

  • HOST_NAME: The host name of each compute node.
  • HOST_IP: The management IP address of each host.
  • SERVICE_HOST_NAME: The hostname of the control node.
  • SERVICE_HOST: The management IP of the control node.
  • The “url” parameter in the ml2_odl section: Modify the IP address there to match your ODL controller.

Each local.conf file should be saved in the ~/devstack directory on each control and/or compute host.

Now you should be able to run “stack.sh” on all of the nodes (control and each compute) by doing this:

  • cd ~/devstack
  • ./stack.sh

Once that completes, you should have a functioning OpenStack setup with OpenDaylight.

Possible Issues With devstack on Fedora

One possible issue you may hit if you’re using a fresh VM on Fedora is mysql errors: you will see keystone errors and mysql access errors in the stack.sh run. To get around this, follow the workaround listed in this post here; it’s worked for me every time I’ve hit this error running devstack on Fedora. One other issue with Fedora is that the latest devstack fails to kill all the nova processes when you run “unstack.sh”. To work around this, simply run the following after “unstack.sh”:

  • killall nova-api nova-cert nova-scheduler nova-consoleauth nova-conductor

OpenStack and OpenDaylight: Verifying The Install

At this point, you should have the entire system up and running. To verify this, you can do the following:

  • Point your web browser at the OpenStack Horizon GUI:
    • http://<control node IP>/auth/login/
    • Login using “admin/admin” and you can see your OpenStack install.
  • Point your web browser at the OpenDaylight GUI:
    • http://<odl IP>:8080/
    • Login using “admin/admin”

You can play around in the GUIs, launch VMs, etc. As you launch VMs, you will see ODL create tunnel ports and links between compute hosts, which will become visible with a refresh in the OpenDaylight GUI.
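You can also check from the command line on a compute host. The sketch below parses `ovs-vsctl show`-style output for tunnel ports; the sample output and the gre/vxlan port naming are assumptions, and on a real host you would pipe `sudo ovs-vsctl show` into the function instead of the captured sample:

```shell
#!/bin/sh
# List GRE/VXLAN tunnel ports from `ovs-vsctl show`-style output on stdin.
list_tunnel_ports() {
  awk '/Port/ {port=$2; gsub(/"/,"",port)} /type: (gre|vxlan)/ {print port}'
}

# Demo against captured sample output, so this runs without OVS installed:
SAMPLE='    Port "gre-192.168.64.194"
        Interface "gre-192.168.64.194"
            type: gre'
TUNNELS=$(printf '%s\n' "$SAMPLE" | list_tunnel_ports)
echo "$TUNNELS"
```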

OpenStack and OpenDaylight: Getting Help

The most appropriate place to get help at this early stage is on #opendaylight-ovsdb on Freenode. A long list of OpenStack Neutron and OpenDaylight developers hang out there and can provide help. Besides myself (IRC nick “mestery”), you can also expect to find the following people online:

  • Madhu Venugopal (IRC Nick “Madhu”)
  • Brent Salisbury (IRC Nick “networkstatic”)
  • Keith Burns (IRC Nick “alagalah”)
  • Florian Otel (IRC Nick “FlorianOtel”)

You can ping any of us and we should be able to help you debug any issues. Florian in particular has some VM images which may expedite the process above for folks trying this out for the first time. The instructions above were meant to walk through all of the steps necessary to get this up and running from scratch.

OpenStack and OpenDaylight: What’s Next

In a future post, I will walk through debugging this setup and using it to see flows and how the different pieces interact. In particular, I’ll walk through debugging the system so you understand exactly what happens when networks, subnets, ports, routers, and other OpenStack Neutron API objects are created and how OpenDaylight handles programming them onto each host.

LinuxCon Wrapup

Last week I was in New Orleans for LinuxCon. This was my first LinuxCon event, and it was pretty awesome. The event was co-located with a smattering of other Open Technology events as well:

As you can see, that’s a lot of events to pack into a single week. I was focused on participating in the LinuxCon keynotes and sessions for the early part of the week. I also worked the OpenDaylight and OpenStack booths for a while on both Monday and Tuesday. It was great to meet so many people and to have amazing conversations around all of the spectacular Open Source technologies I have the pleasure of being involved with. People were very interested in talking about Open Source SDN technologies and how they relate to cloud environments. I had many great discussions around the integration of OpenDaylight with OpenStack Neutron as well.

Ian and Kyle with the Linux Penguin

Panels and Presentations

Wednesday was the day I was on the OpenDaylight panel, which opened the OpenDaylight Mini-Summit. I was joined by Chris Wright, Dave Meyer, James Bottomley, and Chiradeep Vittal. The topics during the panel were pretty awesome, and the audience had lots of great questions. I find I really enjoy panel discussions because they allow for maximum audience participation and deliver exactly the topics the audience wants to discuss. A great summary of the panel, by the Enterprise Networking Planet site, can be found here. The key takeaway from me, as captured by the summary, is this:

  • “We need to start thinking in terms of applications,” Mestery said. “But clearly we have a lot of work to do.”

OpenDaylight Panel

I also presented at the Linux Plumbers Conference on the topics of LISP and NSH in Open vSwitch; the slides are now on Slideshare. My co-presenter was Vina Ermagan. This was a very technical discussion which took place during the net-virt track of the Plumbers Conference, and overall it was very well received. Not long afterwards, the work we’ve been doing on NSH in OVS was posted to the OVS mailing list by Pritesh Kothari, another member of my team at Cisco. We believe the work around NSH and LISP is just another example of open protocols being contributed back into Linux and Open vSwitch.

Podcasts

While at LinuxCon, I had the pleasure of being on two podcasts with the wonderful hosts of The Cloudcast, Aaron Delp and Brian Gracely. Aaron and Brian do a great job with The Cloudcast, and I felt the two podcasts I was a part of were a great way to spread the word about both OpenStack Neutron and OpenDaylight. The podcasts were:

  • OpenDaylight and SDN Evolution: This podcast was a great venue for myself, Chris Wright, and Brent Salisbury to talk about OpenDaylight for people who may not be familiar with it. We also talked a lot about how it will likely integrate into OpenStack Neutron; that work is ongoing right now. Below is a video of what this integration will look like:

  • OpenStack Neutron and SDN: In this podcast, the discussion revolved around what to expect out of OpenStack Neutron in the Havana release of OpenStack, along with the future of OpenStack Neutron post Havana. I was joined by Mark McClain, PTL for OpenStack Neutron, and Ian Wells, my colleague from Cisco.
OpenDaylight Roundtable

Summary

Overall, LinuxCon was a great event. Spending time with people who you normally interact with on IRC, mailing lists, and conference calls is always a good thing. Hashing out complex technical problems is always somehow easier when it’s done over beers at the end of a long day. I look forward to attending future LinuxCons!

Vote for my OpenStack Presentations for the Hong Kong Summit!

Voting for the OpenStack Summit is now open! To vote for OpenStack Presentations for the Summit in Hong Kong, use the link provided here. The presentations being voted on now are for the conference portion of the event. There are a lot of great presentations out there. I’d like to highlight the ones I am lucky enough to be a part of here.

  • OpenStack Neutron Modular Layer 2 Plugin Deep Dive: This is a presentation Robert Kukura from Red Hat and I are putting together. The Modular Layer 2 (ML2) Neutron plugin is a new plugin for the Havana release. Its main feature is the ability to run multiple MechanismDrivers at the same time. This talk will go into detail on ML2, including deploying it, running Neutron with it, and how it works with multiple network technologies underneath. Bob and I are hoping to do a live demo of a deployment with multiple MechanismDrivers as well!
  • OpenDaylight: An Open Source SDN for your OpenStack Cloud: I am very excited about this presentation. OpenDaylight is a brand new Open Source SDN controller. I’m putting this talk together with some great people from the Open Source SDN community: Chris Wright from Red Hat, Anees Shaikh from IBM, and Stephan Baucke from Ericsson. We hope to go over some background on OpenDaylight, and then talk about how we see it fitting into OpenStack deployments. This is a session not to miss!
  • Agile Network With OpenStack: This is a presentation I’m putting together with Rohit Agarwalla, and it will include information on the existing Cisco Neutron plugin and how it works with Nexus switches, Nexus 1000v, CSR 1000v, Dynamic Fabric Automation, and onePK. Rohit will be giving a demo of how the Cisco plugin can help provide automation and ease deployment of your OpenStack cloud.
  • Federating OpenStack User Groups: This is a panel discussion with my good friends and fellow OpenStack User Group founders Mark Voelker, Shannon McFarland, Colin McNamara, and Sean Roberts. It will be a continuation of the panel discussion from the Portland Summit and will focus on how User Groups can collaborate to extend the reach of OpenStack by sharing content, speakers, and other materials.
  • OpenStack Associate Engineer – Basic Install and Operate Workshop: This is a session I am proud to be a part of organized by Colin McNamara. It includes myself, Mark Voelker, Sean Roberts, and Shannon McFarland as well, and will be a two day course that will help to equip trainers with the skills necessary to install, deploy, and manage a three node OpenStack installation. This is an exciting offering and we hope it opens the doors for people new to OpenStack!

So please go ahead and vote for sessions for the upcoming Summit. There are a lot of great presentations out there, and I hope you’ll vote for the ones I’m a part of, in addition to many others!

I am a member of the OpenStack Neutron Core Team!

So, it’s now official: I am a member of the OpenStack Neutron core team. I was voted onto the team last week and made official at the weekly Neutron meeting this past Monday. I will initially focus on the Open Source plugins (Open vSwitch, LinuxBridge) and the Modular Layer 2 (ML2) plugin. I want to thank Mark McClain for nominating me! The OpenStack Neutron core team is a great group of developers to work with, and I’m very excited to continue contributing to OpenStack Neutron going forward!

OpenStack Summit Portland Aftermath

Last week I attended the OpenStack Summit in Portland. This was my fifth OpenStack Summit, and a lot has changed since I attended my first OpenStack Summit in Santa Clara in 2011. Everything about this spring’s event was bigger: The crowds, the demos, the design summits. It was pretty awesome to see how far OpenStack has come, and even more exciting to see how much is left to be done. So many new ideas around virtual machine scheduling, orchestration, and automation were discussed this week. I thought I’d share some thoughts around the Summit now that things have really sunk in from last week.

Is It Time to Separate the Conference and the Design Summit?

OpenStack Networking Design Summit Session

With the growth of the conference, and the increased attendance by folks new to OpenStack, many people asked whether the time has come to split the event into a separate Conference and Design Summit. Particularly on Monday, the Design Summit rooms were packed with people, almost to the point of overflowing. The photo above was taken in the OpenStack Networking (formerly known as Quantum) session, but was fairly representative of most Design Summit sessions. For the most part, the design sessions withstood the influx of people and proceeded as they have in past conferences. And certainly having users participate in design sessions is a good thing. But the scale the conference has now attained means the organizers will need to keep a close eye on this going forward, to ensure relevant design sessions remain accessible to attendees interested in this portion of the event.

OpenStack Networking Is Still Hot

With regards to the design summit sessions and the conference in general, interest in networking in OpenStack is at an all-time high. The Networking Design Summit sessions were packed with attendees, and the discussions were very vibrant and exciting. For the most part, the discussions around Networking in OpenStack are moving beyond basic L2 networks and into higher-level items such as service insertion, VPNs, firewalls, and even L3 networks. There was a lot of good material discussed, and some great blueprints (see here and here, among others) are all set to land in Havana.

OpenStack Networking Design Summit Session

In addition to the design discussions around OpenStack Networking, there were panels, conference sessions, and plenty of hallway conversations on the topic. Almost all the networking vendors had a strong presence at the Summit including Cisco (disclosure: I work for Cisco), Brocade, Ericsson, VMware/Nicira, Big Switch, PLUMgrid, and others. The level of interest in networking around OpenStack was truly amazing.

Which leads me to my next observation.

How Many Panels on SDN Does a Single Conference Need?

It’s obvious Software Defined Networking is hot now. And per my prior observation, it’s obvious that OpenStack Networking is hot. So it would seem the two fit together nicely, and in fact, they do. But how many panel discussions around SDN and OpenStack does one conference need? There were at least two of these, and it seemed like there was a large amount of “SDN washing” going on at this conference. To some extent, this was bound to eventually happen. As technologies mature and more and more people and money are thrown at them, the hype level goes crazy. Trying to level set the conversation, especially in the Design Summit sessions, and ensure an even discourse will become increasingly challenging going forward.

Customers, Customers, and More Customers

This conference had the real feel of actual customers deploying OpenStack. Take a look at the video of the Day 2 Keynote which featured Bloomberg, Best Buy, and Comcast for a taste of how some large enterprise customers are deploying and using OpenStack. But even beyond those big three, it was easy to walk around the conference floor and bump into many other people who are in the process of deploying OpenStack into their own data centers. Most of these people come to the OpenStack party for one of two reasons: Price and scalability. But once they enter the ecosystem, they realize there is much more to OpenStack than simple economics and scalability. As I’ve written before, OpenStack is a community, and deploying OpenStack in your datacenter makes you an honorary member of that community. To some customers, the idea of open collaboration with vendors and solutions providers is a new idea. But this type of open collaboration is the way forward, and I think ultimately, this is what will help to keep customers utilizing OpenStack to solve their real business needs.

Some Thoughts Before the OpenStack Summit in Portland

As we get closer to the OpenStack Summit next week in Portland, I wanted to reflect back on the last six months of my community involvement with OpenStack. It was almost six months ago that I created the Minnesota OpenStack Meetup in an attempt to drive discussion, education, collaboration, and community around OpenStack in the Twin Cities. Since that time, the Minnesota OpenStack Meetup group has grown to over 120 members (127 at the time of this writing). We have members from all over the United States, as well as the world. I’ve been really happy to see people joining our discussions and sharing their interest and knowledge around OpenStack.

Minnesota OpenStack Meetup

We’ve had some really great discussions around topics like the following:

Our last Meetup was actually a combined Meetup with the local DevOps Meetup group, in which we spent some time mingling amongst group members and sharing ideas around different cloud platforms and how they relate to OpenStack as an on-premise IaaS cloud. This event in particular was eye-opening, in that it broadened our group’s local reach by opening up our Meetup group to some new members from the DevOps Meetup group.

Kyle Presenting at the combined OpenStack and DevOps Meetup

In addition to the OpenStack Meetups locally, I’ve also had the pleasure to participate in some Meetup groups in other cities as well. In early March, I was fortunate enough to be invited to the first ever Triangle OpenStack Meetup to present on OpenStack Networking. It was great to be a part of another group of people driving discussions and collaboration around OpenStack. Thanks to my friends Mark Voelker and Amy Lewis for having me!

Mark Voelker opening the inaugural Triangle OpenStack Meetup in North Carolina.

Related to OpenStack, I was happy to be in the Bay Area in March to participate in the Bay Area Network Virtualization Meetup meeting on Open vSwitch. My friend Ben Pfaff gave a great talk on the history of Open vSwitch and its future. In addition, he gave an eye-opening demo on programming Open vSwitch. His demo source is available here. Since Open vSwitch is typically the first plugin people use with OpenStack Networking, and since most of the open source plugins use it (in addition to some commercial ones), it has increasing relevance here. I hope to present at this Meetup in the future on integrating LISP with Open vSwitch and OpenStack!

Bay Area Network Virtualization Meetup talk on Open vSwitch

Looking back on all of these community events, it’s great to think back on all of the great discussions that have come up, all of the knowledge that has been shared, and all of the new friends I’ve met. Building communities around OpenStack has been a great experience. By bringing people together to share ideas and learn from each other, I hope that I’ve been able to open people’s eyes to the power of OpenStack, both from a technology perspective, as well as from a community perspective.

Looking forward to seeing everyone at the Summit next week!

Multi-node OpenStack Folsom devstack

Recently, I had a need to create a multi-node OpenStack Folsom deployment with Quantum. I needed to test out some deployment scenarios for a customer. To make things even more interesting, I wanted to test it out with the recent VXLAN changes in Open vSwitch which went upstream. I thought others may be interested in this as well. I’m planning to document this for Grizzly as well, but the steps should be mostly the same. Also, I’ve opened a blueprint for the Grizzly release to enable the selection of either GRE or VXLAN tunnels when using the Open vSwitch plugin with Quantum.

First Steps

To get started, you’ll need to setup two machines you can use for this. I chose Fedora 17, but Ubuntu 12.04 will work just as nicely. I also chose to install Fedora 17 into virtual machines. And just a quick plug for deployment options here: If you’re not using something like Cobbler in your lab to automate Linux installs, you really need to. I’ve got Cobbler setup to automate installs of Ubuntu 12.04, CentOS 6.3, and Fedora 17 in my lab. I can PXE boot VM images or physical machines and with a simple menu selection walk away and come back 30 minutes later to a fully installed system. When you’re spinning up a large number of devstack installs, this turns out to be very handy. Colin McNamara has a great blog post to get you started with Cobbler.

If you go the VM route, make sure to give each VM two virtual interfaces; otherwise, make sure your physical hosts have two interfaces. The first will be used for management traffic, and the second for the external network that provides access to your tenant VMs. I’ll assume eth0 and eth1 here.

At this point you should have your 2 VMs or physical hosts up and running with Fedora 17 or Ubuntu 12.04.

Upgrading Open vSwitch on Your Hosts

To enable VXLAN tunnels in Open vSwitch, you need to pull the latest from master, build it, and install it. I’ll show the instructions for Fedora 17 below, which include building RPMs; for Ubuntu the process is similar, minus the RPM building. I did this as root, which seems to work best for building the kernel module.

yum install rpm-build
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
git clone git://openvswitch.org/openvswitch
cd openvswitch
./configure --with-linux=/lib/modules/$(uname -r)/build
make dist
cp openvswitch-1.9.90.tar.gz ~/rpmbuild/SOURCES
rpmbuild -bb rhel/openvswitch.spec && rpmbuild -bb -D "kversion $(uname -r)" -D "kflavors default" rhel/openvswitch-kmod-rhel6.spec
rpm -Uhv ~/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.9.90-1.el6.x86_64.rpm ~/rpmbuild/RPMS/x86_64/openvswitch-1.9.90-1.x86_64.rpm

At this point, reboot your host and you should have the latest Open vSwitch installed. Copy the RPMs from this build host over to your other host, install them the same way, and reboot that host. On each host, the output of “ovs-vsctl show” should indicate 1.9.90 as below:

[root@km-dhcp-64-217 ~]# ovs-vsctl show
55bd458a-291b-4ee6-9ff1-1a3383779e02
    Bridge "br1"
        Port "eth1"
            Interface "eth1"
        Port "br1"
            Interface "br1"
                type: internal
    Bridge "br2"
        Port "vxlan3"
            Interface "vxlan3"
                type: vxlan
                options: {key=flow, remote_ip="192.168.1.13"}
        Port "br2"
            Interface "br2"
                type: internal
    ovs_version: "1.9.90"
[root@km-dhcp-64-217 ~]#

devstack

Getting devstack installed and running is pretty easy. Here’s how to do it. Make sure you do this as a non-root user, and add passwordless sudo access for that user (add “<username> ALL=(ALL) NOPASSWD: ALL” to /etc/sudoers before running devstack).
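As a sketch, you can stage that sudoers entry like this. The username “stack” is an assumption (substitute your own devstack user), and the entry is written to a temporary file here so the sketch is safe to run anywhere; on a real host you would write it to a file under /etc/sudoers.d/ as root instead.

```shell
# Stage the passwordless-sudo entry for the devstack user.
# "stack" is an assumed username; replace it with your own.
# On a real host, write this to /etc/sudoers.d/stack as root.
entry='stack ALL=(ALL) NOPASSWD: ALL'
tmpfile=$(mktemp)
echo "$entry" > "$tmpfile"
chmod 0440 "$tmpfile"   # sudoers fragments must not be group/world writable
cat "$tmpfile"
```

Using a sudoers.d fragment keeps the change self-contained and easy to remove later, rather than editing /etc/sudoers directly.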

git clone git://github.com/openstack-dev/devstack.git
cd devstack
git checkout stable/folsom

At this point, you should have a Folsom version of devstack checked out. You now need to populate the localrc files on both your control node and your compute node. See the examples below:

Control node localrc

#OFFLINE=True
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service quantum
#enable_service ryu

HOST_NAME=$(hostname)
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=192.168.64.188

FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=openvswitch

#LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

Q_HOST=$SERVICE_HOST
Q_USE_NAMESPACE=False
ENABLE_TENANT_TUNNELS=True
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

SCHEDULER=nova.scheduler.simple.SimpleScheduler

# compute service
NOVA_BRANCH=stable/folsom

# volume service
CINDER_BRANCH=stable/folsom

# image catalog service
GLANCE_BRANCH=stable/folsom

# unified auth system (manages accounts/tokens)
KEYSTONE_BRANCH=stable/folsom

# quantum service
QUANTUM_BRANCH=stable/folsom

# django powered web control panel for openstack
HORIZON_BRANCH=stable/folsom

Compute node localrc

#OFFLINE=true
disable_all_services
enable_service rabbit n-cpu quantum q-agt

HOST_NAME=$(hostname)
SERVICE_HOST_NAME=km-dhcp-64-188
SERVICE_HOST=192.168.64.188

FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=openvswitch

#LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

Q_HOST=$SERVICE_HOST
Q_USE_NAMESPACE=False
ENABLE_TENANT_TUNNELS=True
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

# compute service
NOVA_BRANCH=stable/folsom

# volume service
CINDER_BRANCH=stable/folsom

# image catalog service
GLANCE_BRANCH=stable/folsom

# unified auth system (manages accounts/tokens)
KEYSTONE_BRANCH=stable/folsom

# quantum service
QUANTUM_BRANCH=stable/folsom

# django powered web control panel for openstack
HORIZON_BRANCH=stable/folsom

In both localrc files, make sure you change SERVICE_HOST to the IP of your control node. Also pick an appropriate floating IP range if you want to use floating IP addresses. On the compute node, change SERVICE_HOST_NAME appropriately as well. Once you’ve run devstack on each host, you can uncomment “OFFLINE=True” to speed up subsequent runs.

Post devstack tasks

I had to perform the following tasks on my setup to work around a few issues. Fedora 17 does not come with nodejs installed by default, so Horizon will not work out of the box. To install nodejs, follow these instructions. I performed these steps as root, though using sudo for the “make install” step would also work.

yum install -y gcc-c++
git clone git://github.com/joyent/node.git
cd node
./configure
make && make install

Next, to work around a Nova metadata issue I was having, I added an IP address to eth1 with “sudo ifconfig eth1 up 169.254.169.254”. I also added eth1 to the external bridge on the control node. This is the interface that will be used for external access to your tenant VMs via their floating IP addresses.
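A consolidated sketch of those two workarounds follows. It assumes eth1 is your external interface and that the external bridge is named br-ex (devstack’s usual default; the name in your setup may differ). The commands are held in variables and echoed so the sketch is safe to run anywhere; run them directly to apply the changes for real.

```shell
# Post-devstack network workarounds (sketch only).
# Assumptions: eth1 is the external interface, br-ex is the
# external bridge created by devstack. Adjust both to your setup.
metadata_cmd='sudo ifconfig eth1 up 169.254.169.254'
bridge_cmd='sudo ovs-vsctl add-port br-ex eth1'

# Print the commands rather than executing them; run them
# directly on the control node to apply the workarounds.
echo "$metadata_cmd"
echo "$bridge_cmd"
```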

You will also need to apply a small patch to Quantum on the control node. This is to make Quantum create VXLAN tunnels instead of GRE tunnels. The patch is below and you should be able to apply it manually quite easily:

[kmestery@km-dhcp-64-188 quantum]$ git diff quantum/agent/linux/ovs_lib.py

diff --git a/quantum/agent/linux/ovs_lib.py b/quantum/agent/linux/ovs_lib.py
index ec4194d..a0f6bbf 100644
--- a/quantum/agent/linux/ovs_lib.py
+++ b/quantum/agent/linux/ovs_lib.py
@@ -159,7 +159,7 @@ class OVSBridge:

     def add_tunnel_port(self, port_name, remote_ip):
         self.run_vsctl(["add-port", self.br_name, port_name])
-        self.set_db_attribute("Interface", port_name, "type", "gre")
+        self.set_db_attribute("Interface", port_name, "type", "vxlan")
         self.set_db_attribute("Interface", port_name, "options:remote_ip",
                               remote_ip)
         self.set_db_attribute("Interface", port_name, "options:in_key", "flow")
[kmestery@km-dhcp-64-188 quantum]$

Running devstack

At this point, you should be ready to run devstack. Go ahead and run it on the control node first (cd devstack ; ./stack.sh). Next, run it on the compute host (cd devstack ; ./stack.sh).

To access the consoles of your devstack installs, execute “screen -r stack” on each host. This pops you into a screen session, with each window handling the output of a particular OpenStack component. Use “ctrl-a-p” and “ctrl-a-n” to move to the previous and next windows. “ctrl-a-ESC” freezes the window and lets you use vi commands to scroll back; “ESC” unfreezes it.

Summary: You’re a Cloud Master Now!

If you’ve followed this guide, you should have an OpenStack Folsom Cloud running in your lab now with the Open vSwitch Quantum plugin running and VXLAN tunnels between hosts! A followup post will show you how to create multiple tenants and verify Quantum is segregating traffic by utilizing VXLAN tunnels between hosts with a different VNI for each tenant.
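As a quick sanity check, you can look for a vxlan-typed port in the “ovs-vsctl show” output on each host. Here is a minimal sketch run against a captured sample of that output, so it is self-contained; the bridge and port names are illustrative, and on a real host you would pipe in the live command instead.

```shell
# Check `ovs-vsctl show` output for a VXLAN tunnel port.
# The sample below is embedded so the sketch is self-contained;
# on a real host, replace it with: sudo ovs-vsctl show
sample='    Bridge br-tun
        Port "vxlan1"
            Interface "vxlan1"
                type: vxlan
                options: {in_key=flow, remote_ip="192.168.64.189"}'

if echo "$sample" | grep -q 'type: vxlan'; then
    result="VXLAN tunnel port present"
else
    result="no VXLAN tunnel port found"
fi
echo "$result"
```

If no vxlan port appears, re-check that the patched ovs_lib.py was in place before running stack.sh, since the tunnel ports are created at agent startup.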

Welcome to the world of cloud computing on OpenStack!

Welcome to Cloud Computing!

OpenStack, Community, and You

Minnesota OpenStack Meetup

Yesterday I hosted the first Minnesota OpenStack Meetup at the local Cisco office in Bloomington. It was an event I had been planning for about two months, and I was very excited to meet with other Stackers in the Twin Cities. But the story starts well before this; I’m getting ahead of myself a bit. Let me back up and tell you the full story of how the Minnesota OpenStack Meetup came to be.

The Minnesota Tech Scene

As my friends and some readers may know, I work remotely for Cisco. I live in Minnesota, not in Silicon Valley. What most people outside of Minnesota likely don’t know is that there is a pretty thriving tech scene here. A lot of the roots of Minnesota’s tech scene, certainly the one I’ve grown up with, trace back to Cray Inc. and Control Data Corporation. From these early tech giants, many companies have grown in Minnesota over the last 30 years. Like any area, Minnesota has some sweet spots in specific areas of technology. One such area is storage, and in particular storage networking. Look no further than the companies that currently have offices in Minnesota with development happening in storage: Dell/Compellent, Symantec, EMC/Isilon, Quantum, Cray, SGI, QLogic. All of these companies have been doing great work in various areas of storage, storage networking, data protection, highly scalable filesystems, and other infrastructure-layer projects and products.

Minnesota OpenStack

I recently changed roles at Cisco, and my new role allows me increasing involvement in Open Source technologies. Specifically, I am becoming more involved with OpenStack. One of the things I wanted to do was find other people interested in OpenStack in the Minnesota area. So I went to meetup.com to try to find an OpenStack Meetup group. None existed at the time. Minnesota had other groups, some of which had hundreds of members, so I knew there was interest in technology meetups. At that point I set out to create the Minnesota OpenStack Meetup, hoping to find and grow interest in OpenStack in the Minnesota (and likely western Wisconsin) area.

Planning For the Initial Meetup

I had roughly two months to plan for the initial meeting. My initial focus was on securing a space to host it, which was made slightly difficult by not having a rough idea of how many people would attend. I made the call early on to secure a room at the local Cisco office which would hold around 40 people. Part of me thought having 40 people would be unrealistic for an initial meetup, while another part of me thought getting more than 40 people would be a good problem to have. With the room secured, I turned my attention to an agenda. I’m good friends on Twitter with Colin McNamara, and I had seen the spectacular presentation he gave at the San Diego OpenStack Summit, “Surviving Your First Checkin”. The presentation was exactly what you would want to show a new Meetup audience interested in participating in the OpenStack community. I reached out to Colin, and he was kind enough to fly out to Minnesota and give his presentation at our inaugural meeting. Colin and I talked about what to do after his presentation, and we decided the best thing would be to have everyone do a live devstack install (e.g. a devstack installfest).

Colin doing his thing as presenter

The Day of the Meetup

This way to the Minnesota OpenStack Meetup!

The day of the Meetup, I was able to get to the Cisco office well in advance and make sure the room was ready. Colin arrived early and was able to set up before folks started arriving. We ended up having around 20 people show up for the initial meeting. I provided drinks and pizza, made initial introductions, and Colin gave his presentation. Afterwards, we helped everyone get devstack up and running (despite the oddly flaky wireless at the Cisco office; who would have guessed?).

The Result

I have to say the inaugural Minnesota OpenStack Meetup was a success. It turns out we have a broad diversity of interest in OpenStack in the Minnesota area. We currently have 36 members in our Meetup group. There are people interested in developing OpenStack, people interested in deploying it in production, and people who have already deployed it in production. There were folks who had just heard of it and wanted to learn more. Others had customers asking about it and wanted to sharpen their own understanding. It was great to meet everyone who attended and plant the seeds of an OpenStack community in Minnesota.

Community Is Critical In Open Source

And this brings me to something very important to me: community. Read the definition from the Wikipedia article linked there, and let it sink in. Working on Open Source projects is about community. It’s about involvement. It’s about working for the greater good of something important to you. My experience in shepherding the Minnesota OpenStack Meetup has shown me that all it takes is one person to plant the seed. If one person does that, other people will help provide water and nourishment to help the flower grow. In Open Source, there are many ways to contribute and be a part of the community. You can write code. You can test code. You can write documentation. You can spread the word. You can start a Meetup. You can present at conferences. You can answer questions on mailing lists. You can edit a wiki. You can get excited and make something happen. It’s all about community, the power of Open Source, and sharing your experiences with the world.

The slide below from Colin’s presentation sums it all up nicely.

Giving back

So what are you waiting for? If there is no Meetup around OpenStack or other Open Source technology in your area, go ahead and start one. You’ll be surprised and encouraged by the response you will likely receive. And you will help to grow and strengthen an Open Source community in your area.