LinuxCon Wrapup

Last week I was in New Orleans for LinuxCon. This was my first LinuxCon event, and it was pretty awesome. The event was co-located with a smattering of other Open Technology events as well.

As you can see, that’s a lot of events to pack into a single week. I was focused on participating in the LinuxCon keynotes and sessions for the early part of the week. I also worked the OpenDaylight and OpenStack booths for a while on both Monday and Tuesday. It was great to meet so many people and to have amazing conversations around all of the spectacular Open Source technologies I have the pleasure of being involved with. People were very interested in talking about Open Source SDN technologies and how they relate to cloud environments. I had many great discussions around the integration of OpenDaylight with OpenStack Neutron as well.

Ian and Kyle with the Linux Penguin

Panels and Presentations

Wednesday was the day I was on the OpenDaylight Panel, which opened the OpenDaylight Mini-Summit. I was joined by Chris Wright, Dave Meyer, James Bottomley, and Chiradeep Vittal. The topics during the panel were pretty awesome, and the audience had lots of great questions. I find I really enjoy panel discussions because they allow for maximum audience participation and deliver exactly the topics the audience wants to discuss. A great summary of the panel by the Enterprise Networking Planet site can be found here. The key takeaway for me, as captured by the summary, is this:

  • “We need to start thinking in terms of applications,” Mestery said. “But clearly we have a lot of work to do.”

OpenDaylight Panel

I also presented at the Linux Plumbers Conference on the topics of LISP and NSH in Open vSwitch. The slides for this presentation are now on Slideshare. My co-presenter for this discussion was Vina Ermagan. This was a very technical discussion which took place during the net-virt track of the Plumbers Conference. Overall, the talk was very well received. Not long afterwards, the work we’ve been doing on NSH in OVS was posted to the OVS mailing list by Pritesh Kothari, another member of my team at Cisco. We believe the work around NSH and LISP is just another example of open protocols being contributed back into Linux and Open vSwitch.

Podcasts

While at LinuxCon, I had the pleasure of being on two podcasts with the wonderful hosts of The Cloudcast, Aaron Delp and Brian Gracely. Aaron and Brian do a great job with The Cloudcast, and I felt the two podcasts I was a part of were a great way to spread the word about both OpenStack Neutron and OpenDaylight. The podcasts were:

  • OpenDaylight and SDN Evolution: This podcast was a great venue for Chris Wright, Brent Salisbury, and me to talk about OpenDaylight for people who may not be familiar with it. We also talked a lot about how it will likely integrate with OpenStack Neutron. That work is ongoing right now. Below is a video of what this integration will look like:

  • OpenStack Neutron and SDN: In this podcast, the discussion revolved around what to expect out of OpenStack Neutron in the Havana release of OpenStack, along with the future of OpenStack Neutron post Havana. I was joined by Mark McClain, PTL for OpenStack Neutron, and Ian Wells, my colleague from Cisco.
OpenDaylight Roundtable

Summary

Overall, LinuxCon was a great event. Spending time with people who you normally interact with on IRC, mailing lists, and conference calls is always a good thing. Hashing out complex technical problems is always somehow easier when it’s done over beers at the end of a long day. I look forward to attending future LinuxCons!

Vote for my OpenStack Presentations for the Hong Kong Summit!

Voting for the OpenStack Summit is now open! To vote for OpenStack Presentations for the Summit in Hong Kong, use the link provided here. The presentations being voted on now are for the conference portion of the event. There are a lot of great presentations out there. I’d like to highlight the ones I am lucky enough to be a part of.

  • OpenStack Neutron Modular Layer 2 Plugin Deep Dive: This is a presentation Robert Kukura from Red Hat and I are putting together. The Modular Layer 2 (ML2) Neutron plugin is a new plugin for the Havana release. The main feature of this plugin is its ability to run multiple MechanismDrivers at the same time. This talk will go into detail on ML2, including deploying it, running Neutron with it, and how it works with multiple network technologies underneath. Bob and I are hoping to do a live demo of a deployment with multiple MechanismDrivers as well!
  • OpenDaylight: An Open Source SDN for your OpenStack Cloud: I am very excited about this presentation. OpenDaylight is a brand new Open Source SDN controller. I’m putting this talk together with some great people from the Open Source SDN community: Chris Wright from Red Hat, Anees Shaikh from IBM, and Stephan Baucke from Ericsson. We hope to go over some background on OpenDaylight, and then talk about how we see it fitting into OpenStack deployments. This is a session not to miss!
  • Agile Network With OpenStack: This is a presentation I’m putting together with Rohit Agarwalla, and it will include information on the existing Cisco Neutron plugin and how it works with Nexus switches, Nexus 1000v, CSR 1000v, Dynamic Fabric Automation, and onePK. Rohit will be giving a demo of how the Cisco plugin can help provide automation and ease deployment of your OpenStack cloud.
  • Federating OpenStack User Groups: This is a panel discussion with my good friends and fellow OpenStack User Group founders Mark Voelker, Shannon McFarland, Colin McNamara, and Sean Roberts. It will be a continuation of the panel discussion from the Portland Summit and will focus on how User Groups can collaborate to extend the reach of OpenStack by sharing content, speakers, and other materials.
  • OpenStack Associate Engineer – Basic Install and Operate Workshop: This is a session organized by Colin McNamara that I am proud to be a part of. It also includes Mark Voelker, Sean Roberts, and Shannon McFarland, and will be a two-day course that will help equip trainers with the skills necessary to install, deploy, and manage a three node OpenStack installation. This is an exciting offering and we hope it opens the doors for people new to OpenStack!

So please go ahead and vote for sessions for the upcoming Summit. There are a lot of great presentations out there, and I hope you’ll vote for the ones I’m a part of in addition to many others!

I am a member of the OpenStack Neutron Core Team!

So, it’s now official: I am a member of the OpenStack Neutron core team. I was voted onto the team last week and made official at the weekly Neutron meeting this past Monday. I will initially focus on the Open Source plugins (Open vSwitch, LinuxBridge) and the Modular Layer 2 (ML2) plugin. I want to thank Mark McClain for nominating me! The OpenStack Neutron core team is a great group of developers to work with, and I’m very excited to continue contributing to OpenStack Neutron going forward!

OpenStack Summit Portland Aftermath

Last week I attended the OpenStack Summit in Portland. This was my fifth OpenStack Summit, and a lot has changed since I attended my first OpenStack Summit in Santa Clara in 2011. Everything about this spring’s event was bigger: The crowds, the demos, the design summits. It was pretty awesome to see how far OpenStack has come, and even more exciting to see how much is left to be done. So many new ideas around virtual machine scheduling, orchestration, and automation were discussed this week. I thought I’d share some thoughts around the Summit now that things have really sunk in from last week.

Is It Time to Separate the Conference and the Design Summit?

OpenStack Networking Design Summit Session

With the growth of the conference, and the increased attendance by folks new to OpenStack, many people asked whether the time has come to split the event into a separate Conference and Design Summit. Particularly on Monday, the Design Summit rooms were packed with people, almost to the point of overflowing. The photo above was taken in the OpenStack Networking session (the project formerly known as Quantum), but was fairly representative of most Design Summit sessions. For the most part, the design sessions withstood the influx of people and proceeded as they have in past conferences. And certainly having users participate in design sessions is a good thing. But the scale the conference has now attained means the organizers will need to keep a close eye on this going forward to ensure relevant design sessions remain accessible to attendees interested in this portion of the event.

OpenStack Networking Is Still Hot

With regards to the design summit sessions and the conference in general, the interest in networking in OpenStack is at an all time high. The Networking Design Summit sessions were packed with attendees, and the discussions were very vibrant and exciting. For the most part, the discussions around Networking in OpenStack are all moving beyond basic L2 networks and into higher level items such as service insertion, VPNs, firewalls, and even L3 networks. There was a lot of good material discussed, and some great blueprints (see here and here, among others) are all set to land in Havana.

OpenStack Networking Design Summit Session

In addition to the design discussions around OpenStack Networking, there were panels, conference sessions, and plenty of hallway conversations on the topic. Almost all the networking vendors had a strong presence at the Summit including Cisco (disclosure: I work for Cisco), Brocade, Ericsson, VMware/Nicira, Big Switch, PLUMgrid, and others. The level of interest in networking around OpenStack was truly amazing.

Which leads me to my next observation.

How Many Panels on SDN Does a Single Conference Need?

It’s obvious Software Defined Networking is hot now. And per my prior observation, it’s obvious that OpenStack Networking is hot. So it would seem the two fit together nicely, and in fact, they do. But how many panel discussions around SDN and OpenStack does one conference need? There were at least two of these, and it seemed like there was a large amount of “SDN washing” going on at this conference. To some extent, this was bound to eventually happen. As technologies mature and more and more people and money are thrown at them, the hype level goes crazy. Trying to level set the conversation, especially in the Design Summit sessions, and ensure an even discourse will become increasingly challenging going forward.

Customers, Customers, and More Customers

This conference had the real feel of actual customers deploying OpenStack. Take a look at the video of the Day 2 Keynote which featured Bloomberg, Best Buy, and Comcast for a taste of how some large enterprise customers are deploying and using OpenStack. But even beyond those big three, it was easy to walk around the conference floor and bump into many other people who are in the process of deploying OpenStack into their own data centers. Most of these people come to the OpenStack party for one of two reasons: Price and scalability. But once they enter the ecosystem, they realize there is much more to OpenStack than simple economics and scalability. As I’ve written before, OpenStack is a community, and deploying OpenStack in your datacenter makes you an honorary member of that community. To some customers, the idea of open collaboration with vendors and solutions providers is a new idea. But this type of open collaboration is the way forward, and I think ultimately, this is what will help to keep customers utilizing OpenStack to solve their real business needs.

Some Thoughts Before the OpenStack Summit in Portland

As we get closer to the OpenStack Summit next week in Portland, I wanted to reflect back on the last 6 months of my community involvement with OpenStack. It was almost 6 months ago when I created the Minnesota OpenStack Meetup in an attempt to drive some discussions, education, collaboration, and community around OpenStack in the Twin Cities. Since that time, the Minnesota OpenStack Meetup group has grown to over 120 members (127 at the time of this writing). We have members from all over the United States, as well as the rest of the world. I’ve really been happy to see people joining our discussions and sharing their interest and knowledge around OpenStack.

Minnesota OpenStack Meetup

We’ve had some really great discussions around a wide range of topics.

Our last Meetup was actually a combined Meetup with the local DevOps Meetup group, in which we spent some time mingling amongst group members and sharing ideas around different cloud platforms and how they relate to OpenStack as an on-premises IaaS cloud. This event in particular was eye-opening, in that it broadened our group’s local reach by opening up our Meetup group to some new members from the DevOps Meetup group.

Kyle Presenting at the combined OpenStack and DevOps Meetup

In addition to the OpenStack Meetups locally, I’ve also had the pleasure to participate in some Meetup groups in other cities as well. In early March, I was fortunate enough to be invited to the first ever Triangle OpenStack Meetup to present on OpenStack Networking. It was great to be a part of another group of people driving discussions and collaboration around OpenStack. Thanks to my friends Mark Voelker and Amy Lewis for having me!

Mark Voelker opening the inaugural Triangle OpenStack Meetup in North Carolina.

Related to OpenStack, I was happy to be in the Bay Area in March to participate in the Bay Area Network Virtualization Meetup meeting on Open vSwitch. My friend Ben Pfaff gave a great talk on the history of Open vSwitch, as well as its future. In addition, he gave an eye-opening demo on programming Open vSwitch. His demo source is available here. Since Open vSwitch is typically the first plugin people use with OpenStack Networking, and since most of the open source plugins use it (in addition to some commercial ones), it has increasing relevance here. I hope to present at this Meetup in the future on integrating LISP with Open vSwitch and OpenStack!

Bay Area Network Virtualization Meetup talk on Open vSwitch

Looking back on all of these community events, it’s great to think back on all of the great discussions that have come up, all of the knowledge that has been shared, and all of the new friends I’ve met. Building communities around OpenStack has been a great experience. By bringing people together to share ideas and learn from each other, I hope that I’ve been able to open people’s eyes to the power of OpenStack, both from a technology perspective, as well as from a community perspective.

Looking forward to seeing everyone at the Summit next week!

Multi-node OpenStack Folsom devstack

Recently, I had a need to create a multi-node OpenStack Folsom deployment with Quantum. I needed to test out some deployment scenarios for a customer. To make things even more interesting, I wanted to test it out with the recent VXLAN changes in Open vSwitch which went upstream. I thought others may be interested in this as well. I’m planning to document this for Grizzly as well, but the steps should be mostly the same. Also, I’ve opened a blueprint for the Grizzly release to enable the selection of either GRE or VXLAN tunnels when using the Open vSwitch plugin with Quantum.

First Steps

To get started, you’ll need to set up two machines you can use for this. I chose Fedora 17, but Ubuntu 12.04 will work just as nicely. I also chose to install Fedora 17 into virtual machines. And just a quick plug for deployment options here: if you’re not using something like Cobbler in your lab to automate Linux installs, you really need to. I’ve got Cobbler set up to automate installs of Ubuntu 12.04, CentOS 6.3, and Fedora 17 in my lab. I can PXE boot VM images or physical machines and, with a simple menu selection, walk away and come back 30 minutes later to a fully installed system. When you’re spinning up a large number of devstack installs, this turns out to be very handy. Colin McNamara has a great blog post to get you started with Cobbler.

If you go the VM route, make sure to give each VM two virtual interfaces; otherwise, make sure your physical hosts have two interfaces. The first will be used for management traffic, the second for the external network to access your tenant VMs. I’ll assume eth0 and eth1 here.

At this point you should have your 2 VMs or physical hosts up and running with Fedora 17 or Ubuntu 12.04.

Upgrading Open vSwitch on Your Hosts

To enable VXLAN tunnels in Open vSwitch, you need to pull the latest from master, build it, and install it. I’ll show the instructions for Fedora 17 below, which include building RPMs; for Ubuntu the process should be similar, minus the RPM-building part. I did this as root, which seems to work best for building the kernel module.

yum install rpm-build
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
git clone git://openvswitch.org/openvswitch
cd openvswitch
./configure --with-linux=/lib/modules/$(uname -r)/build
make dist
cp openvswitch-1.9.90.tar.gz ~/rpmbuild/SOURCES
rpmbuild -bb rhel/openvswitch.spec && rpmbuild -bb -D "kversion $(uname -r)" -D "kflavors default" rhel/openvswitch-kmod-rhel6.spec
rpm -Uhv ~/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.9.90-1.el6.x86_64.rpm ~/rpmbuild/RPMS/x86_64/openvswitch-1.9.90-1.x86_64.rpm

At this point, reboot your host and you should have the latest Open vSwitch installed. Copy the RPMs from this build host over to your other host, install them the same way, and reboot that host. On each host, the output of “ovs-vsctl show” should indicate 1.9.90 as below:

[root@km-dhcp-64-217 ~]# ovs-vsctl show
55bd458a-291b-4ee6-9ff1-1a3383779e02
    Bridge "br1"
        Port "eth1"
            Interface "eth1"
        Port "br1"
            Interface "br1"
                type: internal
    Bridge "br2"
        Port "vxlan3"
            Interface "vxlan3"
                type: vxlan
                options: {key=flow, remote_ip="192.168.1.13"}
        Port "br2"
            Interface "br2"
                type: internal
    ovs_version: "1.9.90"
[root@km-dhcp-64-217 ~]#

devstack

Getting devstack installed and running is pretty easy. Here’s how to do it. Make sure you do this as a non-root user. Make sure you add passwordless sudo access for this user as well (add “<username> ALL=(ALL)      NOPASSWD: ALL” to /etc/sudoers before running devstack).

git clone git://github.com/openstack-dev/devstack.git
cd devstack
git checkout stable/folsom

At this point, you should have a Folsom version of devstack checked out. You now need to populate the localrc files for both your control node and your compute node. See the examples below:

Control node localrc

#OFFLINE=True
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service quantum
#enable_service ryu

HOST_NAME=$(hostname)
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=192.168.64.188

FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=openvswitch

#LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

Q_HOST=$SERVICE_HOST
Q_USE_NAMESPACE=False
ENABLE_TENANT_TUNNELS=True
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

SCHEDULER=nova.scheduler.simple.SimpleScheduler

# compute service
NOVA_BRANCH=stable/folsom

# volume service
CINDER_BRANCH=stable/folsom

# image catalog service
GLANCE_BRANCH=stable/folsom

# unified auth system (manages accounts/tokens)
KEYSTONE_BRANCH=stable/folsom

# quantum service
QUANTUM_BRANCH=stable/folsom

# django powered web control panel for openstack
HORIZON_BRANCH=stable/folsom

compute node localrc:

#OFFLINE=true
disable_all_services
enable_service rabbit n-cpu quantum q-agt

HOST_NAME=$(hostname)
SERVICE_HOST_NAME=km-dhcp-64-188
SERVICE_HOST=192.168.64.188

FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=openvswitch

#LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

Q_HOST=$SERVICE_HOST
Q_USE_NAMESPACE=False
ENABLE_TENANT_TUNNELS=True
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

# compute service
NOVA_BRANCH=stable/folsom

# volume service
CINDER_BRANCH=stable/folsom

# image catalog service
GLANCE_BRANCH=stable/folsom

# unified auth system (manages accounts/tokens)
KEYSTONE_BRANCH=stable/folsom

# quantum service
QUANTUM_BRANCH=stable/folsom

# django powered web control panel for openstack
HORIZON_BRANCH=stable/folsom

In both localrc files, change SERVICE_HOST to the IP address of your control node. Also, pick an appropriate floating IP range if you want to use floating IP addresses. On the compute node, make sure to change SERVICE_HOST and SERVICE_HOST_NAME appropriately. Finally, once you’ve run devstack on each host, you can uncomment “OFFLINE=True” to speed it up on subsequent runs.

Post devstack tasks

I had to do the following tasks on my setup to work around a few things. Fedora 17 does not come with nodejs installed by default, so Horizon will not work out of the box. To install nodejs, follow these instructions. I performed these as root as well, but using sudo for the “make install” step would also work.

yum install -y gcc-c++
git clone git://github.com/joyent/node.git
cd node
./configure
make && make install

Next, to work around a Nova metadata issue I was having, I added some IP configuration to eth1 by doing “sudo ifconfig eth1 up 169.254.169.254”. I also added eth1 to br-ext on the control node. This is the interface which will be used for external access to your tenant VMs via their floating IP addresses.

You will also need to apply a small patch to Quantum on the control node. This is to make Quantum create VXLAN tunnels instead of GRE tunnels. The patch is below and you should be able to apply it manually quite easily:

[kmestery@km-dhcp-64-188 quantum]$ git diff quantum/agent/linux/ovs_lib.py

diff --git a/quantum/agent/linux/ovs_lib.py b/quantum/agent/linux/ovs_lib.py
index ec4194d..a0f6bbf 100644
--- a/quantum/agent/linux/ovs_lib.py
+++ b/quantum/agent/linux/ovs_lib.py
@@ -159,7 +159,7 @@ class OVSBridge:

     def add_tunnel_port(self, port_name, remote_ip):
         self.run_vsctl(["add-port", self.br_name, port_name])
-        self.set_db_attribute("Interface", port_name, "type", "gre")
+        self.set_db_attribute("Interface", port_name, "type", "vxlan")
         self.set_db_attribute("Interface", port_name, "options:remote_ip",
                               remote_ip)
         self.set_db_attribute("Interface", port_name, "options:in_key", "flow")
[kmestery@km-dhcp-64-188 quantum]$
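To make the effect of that one-line change concrete, here is a small Python sketch. The FakeOVSBridge class is a stand-in I wrote for illustration (not the real Quantum OVSBridge); it mimics add_tunnel_port and records the ovs-vsctl calls it would issue:

```python
# Illustrative stand-in for Quantum's OVSBridge: it follows the same
# call sequence as the patched add_tunnel_port above, but run_vsctl
# records the commands instead of shelling out to ovs-vsctl.
class FakeOVSBridge:
    def __init__(self, br_name, tunnel_type="vxlan"):
        self.br_name = br_name
        self.tunnel_type = tunnel_type  # "gre" before the patch, "vxlan" after
        self.calls = []

    def run_vsctl(self, args):
        self.calls.append(["ovs-vsctl"] + args)

    def set_db_attribute(self, table, record, column, value):
        self.run_vsctl(["set", table, record, "%s=%s" % (column, value)])

    def add_tunnel_port(self, port_name, remote_ip):
        # Same sequence of calls as the patched add_tunnel_port.
        self.run_vsctl(["add-port", self.br_name, port_name])
        self.set_db_attribute("Interface", port_name, "type", self.tunnel_type)
        self.set_db_attribute("Interface", port_name, "options:remote_ip", remote_ip)
        self.set_db_attribute("Interface", port_name, "options:in_key", "flow")

br = FakeOVSBridge("br-tun")
br.add_tunnel_port("vxlan-1", "192.168.64.189")
for call in br.calls:
    print(" ".join(call))
```

With tunnel_type="gre" you get the stock Folsom behavior; with "vxlan" you get the tunnel ports shown in the ovs-vsctl output earlier.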

Running devstack

At this point, you should be ready to run devstack. Go ahead and run it on the control node first (cd devstack ; ./stack.sh). Next, run it on the compute host (cd devstack ; ./stack.sh).

To access the consoles of your devstack installs, execute “screen -r stack” on each host. This pops you into a screen session, with each window handling the output of a particular OpenStack component. To move around in the screen session, use “ctrl-a-p” and “ctrl-a-n” to move to the previous and next windows. “ctrl-a-ESC” will freeze the window and let you use vi commands to scroll back; “ESC” will unfreeze it.

Summary: You’re a Cloud Master Now!

If you’ve followed this guide, you should have an OpenStack Folsom Cloud running in your lab now with the Open vSwitch Quantum plugin running and VXLAN tunnels between hosts! A followup post will show you how to create multiple tenants and verify Quantum is segregating traffic by utilizing VXLAN tunnels between hosts with a different VNI for each tenant.

Welcome to the world of cloud computing on OpenStack!

Welcome to Cloud Computing!

OpenStack, Community, and You

Minnesota OpenStack Meetup

Yesterday I hosted the first Minnesota OpenStack Meetup at the local Cisco office in Bloomington. It was an event I had been planning for about two months, and I was very excited to meet with other Stackers in the Twin Cities. But the story starts well before this; I’m getting ahead of myself a bit. Let me back up and tell you the full story of how the Minnesota OpenStack Meetup came to be.

The Minnesota Tech Scene

As my friends and some readers may know, I work remotely for Cisco. I live in Minnesota, not in Silicon Valley. What most people outside of Minnesota likely don’t know is that there exists a pretty thriving tech scene here. A lot of Minnesota’s tech scene, certainly the one I’ve grown up with, traces its roots to Cray Inc. and Control Data Corporation. From these early tech giants, many companies have grown in Minnesota over the last 30 years. Like any area, Minnesota has some sweet spots with regards to specific areas of technology. One such area is storage, and in particular storage networking. Look no further than the companies who currently have offices in Minnesota with development happening in the storage area: Dell/Compellent, Symantec, EMC/Isilon, Quantum, Cray, SGI, QLogic. All of these companies have been doing great work in various areas around storage, storage networking, data protection, highly scalable filesystems, and other infrastructure layer projects and products.

Minnesota OpenStack

I recently changed roles at Cisco, and my new role allows me increasing involvement in Open Source technologies. Specifically, I am becoming more involved with OpenStack. One of the things I wanted to do was find other people interested in OpenStack in the Minnesota area, so I went to meetup.com to try and find an OpenStack Meetup group. None existed at the time. Minnesota had other groups, some with hundreds of members, so I knew there was interest in technology meetups. At that point I set out to create the Minnesota OpenStack Meetup, hoping to find and grow interest in OpenStack in Minnesota (and likely western Wisconsin).

Planning For the Initial Meetup

I had roughly two months to plan for the initial meeting. My initial focus was on securing a space to host it. This was made slightly difficult by not having a rough idea of how many people would attend. I made the call early on to secure a room at the local Cisco office which would hold around 40 people. Part of me thought having 40 people would be unrealistic for an initial meetup, while another part thought getting more than 40 people would be a good problem to have. With the room secured, I turned my attention to an agenda. I’m good friends on Twitter with Colin McNamara, and I had seen the spectacular presentation he gave at the San Diego OpenStack Summit, “Surviving Your First Checkin”. The presentation was exactly what you would want to show a new Meetup audience interested in participating in the OpenStack community. I reached out to Colin, and he was kind enough to fly out to Minnesota and give his presentation at our inaugural meeting. Colin and I talked about what to do after his presentation, and we decided the best thing would be to have everyone do a live devstack install (e.g. a devstack installfest).

Colin doing his thing as presenter

The Day of the Meetup

This way to the Minnesota OpenStack Meetup!

The day of the Meetup, I was able to get to the Cisco office well in advance and make sure the room was ready. Colin arrived early and was able to set up before folks started arriving. We ended up having around 20 people show up for the initial meeting. I provided drinks and pizza, made initial introductions, and Colin gave his presentation. Afterwards, we helped everyone get devstack up and running (despite the oddly flaky wireless at the Cisco office; who would have guessed?).

The Result

I have to say the inaugural Minnesota OpenStack Meetup was a success. It turns out we have a broad diversity of interest in OpenStack in the Minnesota area. We currently have 36 members in our Meetup group. There are people interested in developing OpenStack, people interested in deploying it in production, and people who have already deployed it in production. There were folks who had just heard of it and wanted to learn more. Other people had customers asking about it, so they wanted to sharpen their own understanding. It was great to meet everyone who attended and plant the seeds of an OpenStack community in Minnesota.

Community Is Critical In Open Source

And this brings me to something very important to me. Community. Read the definition from the Wikipedia article linked there, and let it sink in. Working on Open Source projects is about community. It’s about involvement. It’s about working for the greater good of something important to you. My experience in shepherding the Minnesota OpenStack Meetup has shown me that all it takes is one person to plant the seed. If one person does that, other people will help provide water and nourishment to help the flower grow. In Open Source, there are many ways to contribute and be a part of the community. You can write code. You can test code. You can write documentation. You can spread the word. You can start a Meetup. You can present at conferences. You can answer questions on mailing lists. You can edit a wiki. You can get excited and make something happen. It’s all about community. It’s all about the power of Open Source. It’s about sharing your experiences with the world.

The slide below from Colin’s presentation sums it all up nicely.

Giving back

So what are you waiting for? If there is no Meetup around OpenStack or other Open Source technology in your area, go ahead and start one. You’ll be surprised and encouraged by the response you will likely receive. And you will help to grow and strengthen an Open Source community in your area.

Ryu and OpenStack: Like Peanut Butter and Jelly

OpenStack

Increasingly, I’ve been spending more and more time playing around with and utilizing OpenStack. If you’re looking for a highly configurable and quickly maturing cloud operating system, you can’t go wrong with OpenStack. One of the more interesting parts of OpenStack to a networking guy like me is Quantum. Quantum allows you to create rich topologies of virtual networks, encompassing as much or as little as you want by utilizing different plugins. The plugin architecture is a nice design point, because it allows open source projects as well as vendors the chance to add value and differentiate themselves at this layer. Rather than boiling things down to a commodity, Quantum provides for extensions so each plugin can expose additional information above and beyond the core API.
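As a rough sketch of the plugin idea (the class and method names below are my own illustration, not Quantum’s actual plugin interface): a core API that every plugin implements, plus optional extensions each plugin can advertise above that core:

```python
# Toy plugin architecture: a core API all plugins share, plus
# per-plugin extensions layered above it. Names are illustrative only.
class CorePluginBase:
    supported_extensions = []  # extensions exposed beyond the core API

    def create_network(self, name):
        raise NotImplementedError

class ToyOVSPlugin(CorePluginBase):
    # A plugin differentiates itself by advertising extensions.
    supported_extensions = ["provider-networks"]

    def __init__(self):
        self.networks = {}

    def create_network(self, name):
        net = {"name": name}
        self.networks[name] = net
        return net

plugin = ToyOVSPlugin()
net = plugin.create_network("tenant-net-1")
print(net, plugin.supported_extensions)
```

The point is that vendors and open source projects all implement the same core operations, while extensions give each plugin room to expose extra capabilities without commoditizing the layer.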

Ryu from Street Fighter

Ryu and OpenStack

Ryu is a network operating system for Software Defined Networks. (Note: Don’t confuse Ryu the network operating system with the image above, which is of the character Ryu from Street Fighter fame.) Ryu aims to provide a logically centralized control platform for your network, with well defined APIs at the top which make it easy to manage and to build rich applications on top. If this sounds like something you’ve heard before, perhaps it’s because it’s very similar to what Big Switch Networks is doing with their Floodlight platform. One of the main differences between Ryu and Floodlight is that Ryu is written in Python, as opposed to Floodlight, which is written in Java. Also, Ryu is fully compliant with OpenFlow 1.0, 1.2, and the Nicira extensions in Open vSwitch. Ryu was started and is maintained by the NTT Laboratories OSRG group. Ryu is licensed under the Apache 2.0 license.

There is, of course, a Quantum plugin for Ryu. It’s upstream and supports both the recent Folsom release and the upcoming Grizzly release of OpenStack. Instructions for deploying the plugin are available on the Ryu webpage here. You can quickly download a Ryu image, load it on your favorite hypervisor, and be up and running in very little time. I’ve loaded and run these images on VMware ESX, VMware Fusion, and VirtualBox. The images are tested on Ubuntu with KVM, but they operate just fine on other systems as well.
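As a rough sketch of what deployment involves, enabling the plugin boils down to pointing Quantum’s core_plugin at the Ryu plugin class and telling it where the Ryu controller listens. The class path, option names, and ports below are assumptions based on the Folsom-era plugin layout; check the Ryu instructions for your release:

```ini
# /etc/quantum/quantum.conf -- select the Ryu plugin (class path assumed)
core_plugin = quantum.plugins.ryu.ryu_quantum_plugin.RyuQuantumPluginV2

# /etc/quantum/plugins/ryu/ryu.ini -- where the Ryu controller and its
# REST API listen (hostname and ports assumed)
[OVS]
openflow_controller = <ryu-host>:6633
openflow_rest_api = <ryu-host>:8080
```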

Running the Ryu images on your Mac

As I mentioned above, the Ryu images run just fine out of the box on Ubuntu with KVM, but I wanted to run them on my MacBook Pro. I initially used VirtualBox, and later wanted to switch them over to VMware Fusion. To get the images running with VirtualBox, I used qemu-img convert to convert them into a format VirtualBox understands. Something like this should work:

qemu-img convert -O vmdk ryuvm2.qcow2 ryuvm2.vmdk

With that conversion done, I was able to boot the VMs in VirtualBox easily. Running them in Fusion would have been as simple as copying over the converted image and importing it, but by then I had already configured the image and had it running on VirtualBox. I moved it to Fusion to take advantage of nested virtualization, which VirtualBox doesn’t support, so I ended up converting the image one more time before importing it into Fusion. I used this command:

VBoxManage clonehd ~/VirtualBox\ VMs/RyuVXLANController/Snapshots/\{e5aa0713-93d1-4a06-b367-c488f29a060e\}.vmdk RyuVXLAN-d1.vmdk --format VMDK

Once that was done, I had my configured Ryu image running under VMware Fusion, with full nested virtualization support to run nested VMs at (near) full speed.

Ryu: Segmentation

The real power of Ryu is its ability to segment traffic among tenants by using OpenFlow rules on the hosts. For VM-to-VM traffic across hosts, it uses GRE tunnels by default. Effectively, without burning VLANs, you can create rich network topologies that scale to very high tenant counts. This is very useful for something like OpenStack, as deployments typically have many tenants, and it allows tenant networks to scale on demand.
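To give a feel for the plumbing, the GRE mesh between hosts amounts to tunnel ports like the following. The bridge name, port names, and IP addresses here are illustrative, not what Ryu’s agent actually creates:

```shell
# On host A (192.168.1.10), create a GRE tunnel port toward host B.
ovs-vsctl add-port br-int gre-hostb -- \
    set interface gre-hostb type=gre options:remote_ip=192.168.1.11

# On host B (192.168.1.11), the mirror image pointing back at host A.
ovs-vsctl add-port br-int gre-hosta -- \
    set interface gre-hosta type=gre options:remote_ip=192.168.1.10
```

Tenant isolation is then a matter of the controller programming OpenFlow rules on each bridge, rather than of consuming a VLAN per tenant.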

Summary

In summary, Ryu is a great platform for virtual networks when deployed with OpenStack Quantum. Scaling tenant networks utilizing a combination of OpenFlow and GRE tunnels is not only very cool, but very practical. Plus, how cool is it to be able to say you’re running an Open Source IaaS Cloud Operating System and utilizing an Open Source SDN Operating System for your networking needs? I think that’s a pretty awesome scenario.

Peanut Butter and Jelly

Ryu and OpenStack: Like Peanut Butter and Jelly

 

Automate all the things!


(thanks to Hyperbole & A Half & http://memegenerator.net/X-All-The-Things)

The image above got me thinking a lot about DevOps and automation. Increasingly, as large-scale IaaS clouds are deployed, the first hurdle people hit is automation. When they arrive at this problem, and my guess is it’s pretty early in the cycle, they inevitably turn to an automation tool such as Puppet or Chef. These tools allow you to automate pretty much anything around deployment, configuration, and management. When dealing with hundreds, thousands, or even hundreds of thousands of hosts, automation becomes critical. In a lot of ways, scale and automation go hand in hand: scale begets automation, and more automation increases your ability to scale.

If you’re new to Puppet and Chef, there are some great presentations on SlideShare that will bring you up to speed fast.

The latest libvirt release is out!

If you read the libvirt development mailing list, you will have noticed that libvirt put out two releases this week, the latest of which is version 0.10.1. This version includes a bunch of bug fixes, but between it and the previous 0.10.0 release, there are some changes in how you work with Open vSwitch virtualport types. I thought I’d explain some of them here, as they will make deploying libvirt with Open vSwitch easier.

VLAN Changes

The most important change going into this release of libvirt was the handling of VLANs, for Open vSwitch networks as well as for 802.1Qbg and 802.1Qbh networks. The changes allow you to specify VLANs on networks, on portgroups, or as part of the interface definition in the domain XML. For this article, I want to focus specifically on how this affects the integration of Open vSwitch with libvirt.

For example, to setup a VLAN in a network definition, you would do something like this:

<network>
  <name>openvswitch-net</name>
  <uuid>81ff0d90-c92e-6742-64da-4a736edb9a8b</uuid>
  <forward mode='bridge'/>
  <virtualport type='openvswitch'/>
  <portgroup name='bob' default='yes'>
    <vlan trunk='yes'>
      <tag id='666'/>
    </vlan>
    <virtualport>
      <parameters profileid='bob-profile'/>
    </virtualport>
  </portgroup>
  <portgroup name='alice'>
    <vlan trunk='yes'>
      <tag id='777'/>
      <tag id='888'/>
      <tag id='999'/>
    </vlan>
    <virtualport>
      <parameters profileid='alice-profile'/>
    </virtualport>
  </portgroup>
</network>

As you can see from the above, we are creating a network (named “openvswitch-net”) with two portgroups: “bob” and “alice”. Each portgroup has a VLAN trunk defined, although “bob” only has a single VLAN in its trunk.
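If you save the XML above as, say, openvswitch-net.xml (a filename of my choosing), defining and starting the network with virsh looks like this:

```shell
# Define the persistent network from the XML and bring it up.
virsh net-define openvswitch-net.xml
virsh net-start openvswitch-net

# Optionally have it start on boot, and verify it's active.
virsh net-autostart openvswitch-net
virsh net-list --all
```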

Now, if we wanted to put this configuration directly on the interface itself, it would look like this:

<interface type='network'>
  <mac address='00:11:22:33:44:55'/>
  <source network='ovs-net'/>
  <virtualport type='openvswitch'>
    <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f' profileid='bob'/>
  </virtualport>
</interface>

Now, because we specified the profileid of “bob”, the VLAN trunk information for “bob” will be applied when this VM boots up and its VIF is added to the OVS bridge. But what if we wanted to override this information in the interface definition itself? We can do that too, and here’s an example of how:

<interface type='network'>
  <mac address='00:11:22:33:44:55'/>
  <source network='ovs-net'/>
  <vlan trunk='yes'>
    <tag id='42'/>
    <tag id='48'/>
    <tag id='456'/>
  </vlan>
  <virtualport type='openvswitch'>
    <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f' profileid='bob'/>
  </virtualport>
</interface>

Now when this virtual machine is booted, the configuration on the “interface” will take precedence, and the virtual machine will have a trunk port with VLANs 42, 48, and 456 passed to it.

Under the Covers

How does all of this work under the covers? libvirt simply passes additional parameters to “ovs-vsctl” to ensure the port is trunked (trunks=VLAN1,VLAN2) or set up as an access port (tag=VLAN1). This is added to the command line libvirt uses when adding these ports to the OVS bridge.
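In other words, the resulting ovs-vsctl invocations look roughly like the following. The bridge and port names are illustrative (libvirt generates its own), and the VLAN values match the interface example above; “tag” and “trunks” are columns of the OVS Port table:

```shell
# Trunk port carrying VLANs 42, 48, and 456 (sets the Port table's
# "trunks" column).
ovs-vsctl add-port br0 vnet0 -- set port vnet0 trunks=42,48,456

# Access port carrying a single tagged VLAN (sets the "tag" column).
ovs-vsctl add-port br0 vnet1 tag=666
```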

Conclusion

If you have read my previous article on configuring virtual machines with libvirt and Open vSwitch, you will recall a caveat there around VLAN configuration. I’m happy to say this latest version of libvirt addresses that issue. You can now set up VLAN configuration for virtual ports connecting to an OVS bridge in multiple places in libvirt, which makes deploying libvirt with Open vSwitch much more useful.