OpenStack Summit Portland Aftermath

Last week I attended the OpenStack Summit in Portland. This was my fifth OpenStack Summit, and a lot has changed since I attended my first one in Santa Clara in 2011. Everything about this spring’s event was bigger: the crowds, the demos, the design summits. It was pretty awesome to see how far OpenStack has come, and even more exciting to see how much is left to be done. So many new ideas around virtual machine scheduling, orchestration, and automation were discussed during the week. Now that everything has had a chance to sink in, I thought I’d share some thoughts on the Summit.

Is It Time to Separate the Conference and the Design Summit?

OpenStack Networking Design Summit Session

With the growth of the conference, and the increased attendance by folks new to OpenStack, many people asked whether the time has come to split the event into a separate Conference and Design Summit. Particularly on Monday, the Design Summit rooms were packed with people, almost to the point of overflowing. The photo above was taken in the OpenStack Networking (formerly known as Quantum) session, but it was fairly representative of most Design Summit sessions. For the most part, the design sessions withstood the influx of people and proceeded as they have at past conferences. And certainly having users participate in design sessions is a good thing. But at the scale the conference has now reached, the organizers will need to keep a close eye on this going forward to ensure the design sessions remain accessible to attendees interested in that portion of the event.

OpenStack Networking Is Still Hot

With regard to the design summit sessions and the conference in general, the interest in networking in OpenStack is at an all-time high. The Networking Design Summit sessions were packed with attendees, and the discussions were vibrant and exciting. For the most part, the discussions around Networking in OpenStack are moving beyond basic L2 networks and into higher-level items such as service insertion, VPNs, firewalls, and even L3 networks. There was a lot of good material discussed, and some great blueprints (see here and here, among others) are all set to land in Havana.

OpenStack Networking Design Summit Session

In addition to the design discussions around OpenStack Networking, there were panels, conference sessions, and plenty of hallway conversations on the topic. Almost all the networking vendors had a strong presence at the Summit including Cisco (disclosure: I work for Cisco), Brocade, Ericsson, VMware/Nicira, Big Switch, PLUMgrid, and others. The level of interest in networking around OpenStack was truly amazing.

Which leads me to my next observation.

How Many Panels on SDN Does a Single Conference Need?

It’s obvious Software Defined Networking is hot right now. And per my earlier observation, it’s obvious that OpenStack Networking is hot. So it would seem the two fit together nicely, and in fact they do. But how many panel discussions around SDN and OpenStack does one conference need? There were at least two of them, and it seemed like there was a large amount of “SDN washing” going on at this conference. To some extent, this was bound to happen eventually. As technologies mature and more people and money are thrown at them, the hype level goes crazy. Level-setting the conversation, especially in the Design Summit sessions, and ensuring an even discourse will become increasingly challenging going forward.

Customers, Customers, and More Customers

This conference had the real feel of actual customers deploying OpenStack. Take a look at the video of the Day 2 Keynote, which featured Bloomberg, Best Buy, and Comcast, for a taste of how some large enterprise customers are deploying and using OpenStack. But even beyond those big three, it was easy to walk around the conference floor and bump into many other people who are in the process of deploying OpenStack into their own data centers. Most of these people come to the OpenStack party for one of two reasons: price or scalability. But once they enter the ecosystem, they realize there is much more to OpenStack than simple economics and scalability. As I’ve written before, OpenStack is a community, and deploying OpenStack in your data center makes you an honorary member of that community. To some customers, the idea of open collaboration with vendors and solution providers is new. But this type of open collaboration is the way forward, and I think ultimately this is what will keep customers utilizing OpenStack to solve their real business needs.

Some Thoughts Before the OpenStack Summit in Portland

As we get closer to the OpenStack Summit next week in Portland, I wanted to reflect back on the last 6 months of my community involvement with OpenStack. It was almost 6 months ago that I created the Minnesota OpenStack Meetup in an attempt to drive discussion, education, collaboration, and community around OpenStack in the Twin Cities. Since that time, the Minnesota OpenStack Meetup group has grown to over 120 members (127 at the time of this writing). We have members from all over the United States, as well as around the world. I’ve been really happy to see people joining our discussions and sharing their interest and knowledge around OpenStack.

Minnesota OpenStack Meetup

We’ve had some really great discussions around topics like the following:

Our last Meetup was actually a combined Meetup with the local DevOps Meetup group, in which we spent some time mingling amongst group members and sharing ideas around different cloud platforms and how they relate to OpenStack as an on-premises IaaS cloud. This event in particular was eye opening, in that it broadened our group’s local reach by bringing in some new members from the DevOps Meetup group.

Kyle Presenting at the combined OpenStack and DevOps Meetup

In addition to the OpenStack Meetups locally, I’ve also had the pleasure to participate in some Meetup groups in other cities as well. In early March, I was fortunate enough to be invited to the first ever Triangle OpenStack Meetup to present on OpenStack Networking. It was great to be a part of another group of people driving discussions and collaboration around OpenStack. Thanks to my friends Mark Voelker and Amy Lewis for having me!

Mark Voelker opening the inaugural Triangle OpenStack Meetup in North Carolina.

Related to OpenStack, I was happy to be in the Bay Area in March to participate in the Bay Area Network Virtualization Meetup meeting on Open vSwitch. My friend Ben Pfaff gave a great talk on the history of Open vSwitch, as well as its future. In addition, he gave an eye-opening demo on programming Open vSwitch. His demo source is available here. Since Open vSwitch is typically the first plugin people use with OpenStack Networking, and since most of the open source plugins (in addition to some commercial ones) use it, it has increasing relevance here. I hope to present at this Meetup in the future on integrating LISP with Open vSwitch and OpenStack!

Bay Area Network Virtualization Meetup talk on Open vSwitch

Looking back on all of these community events, it’s great to think back on all of the great discussions that have come up, all of the knowledge that has been shared, and all of the new friends I’ve met. Building communities around OpenStack has been a great experience. By bringing people together to share ideas and learn from each other, I hope that I’ve been able to open people’s eyes to the power of OpenStack, both from a technology perspective, as well as from a community perspective.

Looking forward to seeing everyone at the Summit next week!

Multi-node OpenStack Folsom devstack

Recently, I had a need to create a multi-node OpenStack Folsom deployment with Quantum. I needed to test out some deployment scenarios for a customer. To make things even more interesting, I wanted to test it out with the recent VXLAN changes in Open vSwitch which went upstream. I thought others may be interested in this as well. I’m planning to document this for Grizzly as well, but the steps should be mostly the same. Also, I’ve opened a blueprint for the Grizzly release to enable the selection of either GRE or VXLAN tunnels when using the Open vSwitch plugin with Quantum.

First Steps

To get started, you’ll need to set up two machines for this. I chose Fedora 17, but Ubuntu 12.04 will work just as nicely. I also chose to install Fedora 17 into virtual machines. And just a quick plug for deployment options here: if you’re not using something like Cobbler in your lab to automate Linux installs, you really should. I’ve got Cobbler set up to automate installs of Ubuntu 12.04, CentOS 6.3, and Fedora 17 in my lab. I can PXE boot VM images or physical machines and, with a simple menu selection, walk away and come back 30 minutes later to a fully installed system. When you’re spinning up a large number of devstack installs, this turns out to be very handy. Colin McNamara has a great blog post to get you started with Cobbler.

Make sure to give each VM two virtual interfaces if you go that route, or ensure your physical hosts have two interfaces. The first will be used for management traffic; the second will be used for the external network that provides access to your tenant VMs. I’ll assume eth0 and eth1 here.

At this point you should have your 2 VMs or physical hosts up and running with Fedora 17 or Ubuntu 12.04.

Upgrading Open vSwitch on Your Hosts

To enable VXLAN tunnels in Open vSwitch, you need to pull the latest code from master, build it, and install it. I’ll show the instructions for Fedora 17 below, which include building RPMs; for Ubuntu the process should be similar apart from the RPM building. I did all of this as root, which seems to work best for building the kernel module.

yum install rpm-build
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
git clone git://openvswitch.org/openvswitch
cd openvswitch
./configure --with-linux=/lib/modules/$(uname -r)/build
make dist
cp openvswitch-1.9.90.tar.gz ~/rpmbuild/SOURCES
rpmbuild -bb rhel/openvswitch.spec && rpmbuild -bb -D "kversion $(uname -r)" -D "kflavors default" rhel/openvswitch-kmod-rhel6.spec
rpm -Uhv ~/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.9.90-1.el6.x86_64.rpm ~/rpmbuild/RPMS/x86_64/openvswitch-1.9.90-1.x86_64.rpm

At this point, reboot your host and you should have the latest Open vSwitch installed. Copy the RPMs from this build host over to your other host, install them the same way, and reboot that host. A rough sketch of that copy-and-install step, assuming a hypothetical second host named host2 with root SSH access:
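scp ~/rpmbuild/RPMS/x86_64/kmod-openvswitch-1.9.90-1.el6.x86_64.rpm \
    ~/rpmbuild/RPMS/x86_64/openvswitch-1.9.90-1.x86_64.rpm root@host2:
ssh root@host2 'rpm -Uhv kmod-openvswitch-1.9.90-1.el6.x86_64.rpm openvswitch-1.9.90-1.x86_64.rpm && reboot'

On each host, the output of “ovs-vsctl show” should indicate 1.9.90 as below: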

[root@km-dhcp-64-217 ~]# ovs-vsctl show
55bd458a-291b-4ee6-9ff1-1a3383779e02
    Bridge "br1"
        Port "eth1"
            Interface "eth1"
        Port "br1"
            Interface "br1"
                type: internal
    Bridge "br2"
        Port "vxlan3"
            Interface "vxlan3"
                type: vxlan
                options: {key=flow, remote_ip="192.168.1.13"}
        Port "br2"
            Interface "br2"
                type: internal
    ovs_version: "1.9.90"
[root@km-dhcp-64-217 ~]#

devstack

Getting devstack installed and running is pretty easy. Here’s how to do it. Make sure you do this as a non-root user, and make sure you add passwordless sudo access for this user as well (add “<username> ALL=(ALL) NOPASSWD: ALL” to /etc/sudoers before running devstack).
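If you’d rather script that sudoers change, something like the following works (this assumes your non-root user is named stack, which is just an example):

echo 'stack ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack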

git clone git://github.com/openstack-dev/devstack.git
cd devstack
git checkout stable/folsom

At this point, you should have a Folsom version of devstack setup. You now need to populate your localrc files for both your control node as well as your compute node. See examples below:

Control node localrc

#OFFLINE=True
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service quantum
#enable_service ryu

HOST_NAME=$(hostname)
SERVICE_HOST_NAME=${HOST_NAME}
SERVICE_HOST=192.168.64.188

FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=openvswitch

#LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

Q_HOST=$SERVICE_HOST
Q_USE_NAMESPACE=False
ENABLE_TENANT_TUNNELS=True
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

SCHEDULER=nova.scheduler.simple.SimpleScheduler

# compute service
NOVA_BRANCH=stable/folsom

# volume service
CINDER_BRANCH=stable/folsom

# image catalog service
GLANCE_BRANCH=stable/folsom

# unified auth system (manages accounts/tokens)
KEYSTONE_BRANCH=stable/folsom

# quantum service
QUANTUM_BRANCH=stable/folsom

# django powered web control panel for openstack
HORIZON_BRANCH=stable/folsom

compute node localrc:

#OFFLINE=true
disable_all_services
enable_service rabbit n-cpu quantum q-agt

HOST_NAME=$(hostname)
SERVICE_HOST_NAME=km-dhcp-64-188
SERVICE_HOST=192.168.64.188

FLOATING_RANGE=192.168.100.0/24
Q_PLUGIN=openvswitch

#LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

Q_HOST=$SERVICE_HOST
Q_USE_NAMESPACE=False
ENABLE_TENANT_TUNNELS=True
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin

# compute service
NOVA_BRANCH=stable/folsom

# volume service
CINDER_BRANCH=stable/folsom

# image catalog service
GLANCE_BRANCH=stable/folsom

# unified auth system (manages accounts/tokens)
KEYSTONE_BRANCH=stable/folsom

# quantum service
QUANTUM_BRANCH=stable/folsom

# django powered web control panel for openstack
HORIZON_BRANCH=stable/folsom

In both localrc files, make sure you change SERVICE_HOST to the IP address on your control node you want to use. Also, pick an appropriate floating IP range if you want to use floating IP addresses. On the compute node, make sure to change SERVICE_HOST and SERVICE_HOST_NAME appropriately as well. Finally, once you’ve run devstack on each host, you can uncomment “OFFLINE=True” to speed up subsequent runs.

Post devstack tasks

I had to do the following tasks on my setup to work around a few things. Fedora 17 does not come with nodejs installed by default, so Horizon will not work out of the box. To install nodejs, follow these instructions. I performed these as root as well, but sudo would work for the “make install” step.

yum install -y gcc-c++
git clone git://github.com/joyent/node.git
cd node
./configure
make && make install

Next, to work around a Nova metadata issue I was having, I added some IP configuration to eth1 with “sudo ifconfig eth1 up 169.254.169.254”. I also added eth1 to br-ext on the control node. This is the interface which will be used for external access to your tenant VMs via their floating IP addresses.
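Putting those two steps together on the control node looks roughly like this (br-ext is the external bridge name used in this setup; adjust it if your bridge is named differently):

sudo ifconfig eth1 up 169.254.169.254
sudo ovs-vsctl add-port br-ext eth1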

You will also need to apply a small patch to Quantum on the control node. This is to make Quantum create VXLAN tunnels instead of GRE tunnels. The patch is below and you should be able to apply it manually quite easily:

[kmestery@km-dhcp-64-188 quantum]$ git diff quantum/agent/linux/ovs_lib.py

diff --git a/quantum/agent/linux/ovs_lib.py b/quantum/agent/linux/ovs_lib.py
index ec4194d..a0f6bbf 100644
--- a/quantum/agent/linux/ovs_lib.py
+++ b/quantum/agent/linux/ovs_lib.py
@@ -159,7 +159,7 @@ class OVSBridge:

     def add_tunnel_port(self, port_name, remote_ip):
         self.run_vsctl(["add-port", self.br_name, port_name])
-        self.set_db_attribute("Interface", port_name, "type", "gre")
+        self.set_db_attribute("Interface", port_name, "type", "vxlan")
         self.set_db_attribute("Interface", port_name, "options:remote_ip",
                               remote_ip)
         self.set_db_attribute("Interface", port_name, "options:in_key", "flow")
[kmestery@km-dhcp-64-188 quantum]$

Running devstack

At this point, you should be ready to run devstack. Go ahead and run it on the control node first (cd devstack ; ./stack.sh). Next, run it on the compute host (cd devstack ; ./stack.sh).

To access the consoles of your devstack installs, execute “screen -r stack” on each host. This pops you into a screen session with each window handling the output of a particular OpenStack component. To move around in the screen session, you can use “ctrl-a p” and “ctrl-a n” to move to the previous and next windows. “ctrl-a ESC” will freeze the window and let you use vi commands to scroll back. “ESC” will unfreeze it.

Summary: You’re a Cloud Master Now!

If you’ve followed this guide, you should have an OpenStack Folsom Cloud running in your lab now with the Open vSwitch Quantum plugin running and VXLAN tunnels between hosts! A followup post will show you how to create multiple tenants and verify Quantum is segregating traffic by utilizing VXLAN tunnels between hosts with a different VNI for each tenant.

Welcome to the world of cloud computing on OpenStack!

Welcome to Cloud Computing!

OpenStack, Community, and You

Minnesota OpenStack Meetup

Yesterday I hosted the first Minnesota OpenStack Meetup at the local Cisco office in Bloomington. It was an event I had been planning for about two months, and I was very excited to meet with other Stackers in the Twin Cities. But the story starts well before that; I’m getting ahead of myself a bit here. Let me back up and tell you the full story of how the Minnesota OpenStack Meetup came to be.

The Minnesota Tech Scene

As my friends and some readers may know, I work remotely for Cisco. I live in Minnesota, not in Silicon Valley. What most people outside of Minnesota likely don’t know is that there is a pretty thriving tech scene here. A lot of Minnesota’s tech scene, certainly the one I’ve grown up with, traces its roots to Cray Inc. and Control Data Corporation. From these early tech giants, many companies have grown in Minnesota over the last 30 years. Like any area, Minnesota has some sweet spots with regard to specific areas of technology. One such area is storage, and in particular storage networking. Look no further than the companies that currently have offices in Minnesota with development happening in the storage area: Dell/Compellent, Symantec, EMC/Isilon, Quantum, Cray, SGI, and QLogic. All of these companies have been doing great work in various areas around storage, storage networking, data protection, highly scalable filesystems, and other infrastructure-layer projects and products.

Minnesota OpenStack

I recently changed roles at Cisco, and my new role allows me increasing involvement in Open Source technologies. Specifically, I am becoming more involved with OpenStack. One of the things I wanted to do was find other people interested in OpenStack in the Minnesota area. So I went to meetup.com to try to find an OpenStack Meetup group. None existed at the time. Minnesota had other groups, some with hundreds of members, so I knew there was interest in meetups around technology. At that point I set out to create the Minnesota OpenStack Meetup, hoping to find and grow interest in OpenStack in the Minnesota (and likely western Wisconsin) area.

Planning For the Initial Meetup

I had roughly two months to plan for the initial meeting. My initial focus was on securing a space to host the meeting. This was made slightly difficult by not having a rough idea of how many people would attend. I made the call early on to secure a room at the local Cisco office which would hold around 40 people. Part of me thought having 40 people would be unrealistic for an initial meetup, while another part of me thought getting more than 40 people would be a good problem to have. With the room secured, I turned my attention to an agenda. I’m good friends on Twitter with Colin McNamara, and I had seen the spectacular presentation he gave at the San Diego OpenStack Summit, “Surviving Your First Checkin”. The presentation was exactly what you would want to show to a new Meetup audience interested in participating in the OpenStack community. I reached out to Colin, and he was kind enough to fly out to Minnesota and give his presentation at our inaugural meeting. Colin and I talked about what to do after his presentation, and we decided the best thing would be to have everyone do a live devstack install (e.g., a devstack installfest).

Colin doing his thing as presenter

The Day of the Meetup

This way to the Minnesota OpenStack Meetup!

The day of the Meetup I was able to get to the Cisco office well in advance and make sure the room was ready. Colin arrived early and was able to set up before folks started arriving. We ended up having around 20 people show up for the initial meeting. I provided drinks and pizza, made initial introductions of everyone, and Colin gave his presentation. Afterwards, we helped everyone get devstack up and running (despite the oddly flaky wireless at the Cisco office, who would have guessed?).

The Result

I have to say the inaugural Minnesota OpenStack Meetup was a success. It turns out we have a broad diversity of interest in OpenStack in the Minnesota area. We currently have 36 members in our Meetup group. There are people interested in developing OpenStack, people who are interested in deploying it in production, and people who have deployed it in production. There were folks who had just heard of it and wanted to learn more. Other people had customers asking about it, so they wanted to sharpen their own understanding. It was great to meet everyone who attended and plant the seeds of an OpenStack community in Minnesota.

Community Is Critical In Open Source

And this brings me to something very important to me. Community. Read the definition from the Wikipedia article linked there, and let it sink in. Working on Open Source projects is about community. It’s about involvement. It’s about working for the greater good of something important to you. My experience in shepherding the Minnesota OpenStack Meetup has shown me that all it takes is one person to plant the seed. If one person does that, other people will help provide water and nourishment to help the flower grow. In Open Source, there are many ways to contribute and be a part of the community. You can write code. You can test code. You can write documentation. You can spread the word. You can start a Meetup. You can present at conferences. You can answer questions on mailing lists. You can edit a wiki. You can get excited and make something happen. It’s all about community. It’s all about the power of Open Source. It’s about sharing your experiences with the world.

The slide below from Colin’s presentation sums it all up nicely.

Giving back

So what are you waiting for? If there is no Meetup around OpenStack or other Open Source technology in your area, go ahead and start one. You’ll be surprised and encouraged by the response you will likely receive. And you will help to grow and strengthen an Open Source community in your area.

Ryu and OpenStack: Like Peanut Butter and Jelly


Increasingly, I’ve been spending more and more time playing around with and utilizing OpenStack. If you’re looking for a highly configurable and quickly maturing cloud operating system, you can’t go wrong with OpenStack. One of the more interesting parts of OpenStack to a networking guy like me is Quantum. Quantum allows you to create rich topologies of virtual networks, encompassing as much or as little as you want by utilizing different plugins. The plugin architecture is a nice design point, because it allows open source projects as well as vendors the chance to add value and differentiate themselves at this layer. Rather than boiling things down to a commodity, Quantum provides for extensions so each plugin can expose additional information above and beyond the core API.

Ryu from Street Fighter fame

Ryu and OpenStack

Ryu is a network operating system for Software Defined Networks. (Note: Don’t confuse Ryu the network operating system with the image above, which is of the character Ryu from Street Fighter fame.) Ryu aims to provide a logically centralized control platform for your network, with well-defined APIs at the top which make it easy to manage and to build rich applications on top of. If this sounds like something you’ve heard before, perhaps it’s because it’s very similar to what Big Switch Networks is doing with their Floodlight platform. One of the main differences between Ryu and Floodlight is that Ryu is written in Python, as opposed to Floodlight, which is written in Java. Also, Ryu is fully compliant with OpenFlow 1.0, 1.2, and the Nicira extensions in Open vSwitch. Ryu was started and is maintained by the NTT Laboratories OSRG group, and is licensed under the Apache 2.0 license.

There is of course a Quantum plugin for Ryu; it’s upstream and supports both the recent Folsom release and the upcoming Grizzly release of OpenStack. Instructions for deploying the plugin are available on the Ryu webpage here. You can quite quickly download a Ryu image, load it on your favorite hypervisor, and be up and running in no time. I’ve loaded and run these images on VMware ESX, VMware Fusion, and VirtualBox. The images are tested on Ubuntu with KVM, but they operate just fine on other systems as well.

Running the Ryu images on your Mac

As I mentioned above, the Ryu images run just fine out of the box on Ubuntu with KVM. But I wanted to run them on my MacBook Pro. I initially wanted to use VirtualBox, but later wanted to switch them over to VMware Fusion. To start with, I needed to get the images running on VirtualBox, so I used qemu-img convert to convert them into a format VirtualBox would understand. Something like this should work:

qemu-img convert -O vmdk ryuvm2.qcow2 ryuvm2.vmdk

With that conversion, I was able to easily boot the VMs in VirtualBox. Running them in Fusion would have been as simple as copying over the converted image and importing it, but I had already configured the image and had it running on VirtualBox. I moved the image to Fusion to take advantage of nested virtualization (which VirtualBox doesn’t support). So I ended up converting the image one more time, this time from the VirtualBox copy, before importing it into Fusion. I used this command:

VBoxManage clonehd ~/VirtualBox\ VMs/RyuVXLANController/Snapshots/\{e5aa0713-93d1-4a06-b367-c488f29a060e\}.vmdk RyuVXLAN-d1.vmdk --format VMDK

Once I did that, I had my configured Ryu images running under VMware Fusion, with full nested virtualization support to run nested VMs at (near) full speed.

Ryu: Segmentation

The real power of Ryu is its ability to segment traffic amongst tenants by using OpenFlow rules on the hosts. For VM-to-VM traffic across hosts, it uses GRE tunnels by default. So effectively, without burning VLANs, you are now able to create rich network topologies scaling to very high tenant limits. For something like OpenStack, this is very useful, as OpenStack deployments typically have many tenants, and this allows tenant networks to scale on demand.

Summary

In summary, Ryu is a great platform for virtual networks when deployed with OpenStack Quantum. Scaling tenant networks utilizing a combination of OpenFlow and GRE tunnels is not only very cool, but very practical. Plus, how cool is it to be able to say you’re running an Open Source IaaS Cloud Operating System and utilizing an Open Source SDN Operating System for your networking needs? I think that’s a pretty awesome scenario.

Peanut Butter and Jelly

Automate all the things!


(thanks to Hyperbole & A Half & http://memegenerator.net/X-All-The-Things)

The image above got me thinking a lot about DevOps and automation. Increasingly, as large-scale IaaS clouds are deployed, the first hurdle people hit is around automation. When they arrive at this problem, and my guess is it’s pretty early in the cycle, they inevitably turn to an automation tool such as Puppet or Chef. These tools allow you to automate pretty much anything you want around deployment, configuration, and management. When dealing with hundreds, thousands, and certainly hundreds of thousands of hosts, automation becomes critical. In a lot of ways, scale and automation go hand in hand; the relationship is symbiotic. Scale begets automation, and more automation increases how far you can scale.

If you’re new to Puppet and Chef, there are some great presentations on slideshare which will bring you up to speed fast:

The latest libvirt release is out!

If you read the libvirt development mailing list, you will have noticed that libvirt released two versions this week, the latest of which is version 0.10.1. This version includes a bunch of bug fixes, but between it and the previous 0.10.0 release there are some changes in how you work with Open vSwitch virtualport types. I thought I’d explain some of them here, as they are advantageous and will make deploying libvirt with Open vSwitch easier.

VLAN Changes

The most important change going into this release of libvirt was around the handling of VLANs, both for Open vSwitch networks as well as for 802.1Qbg and 802.1Qbh networks. The changes allow you to specify VLANs on networks, on portgroups, or as part of the interface definition in the domain XML. For this article, I wanted to focus specifically on how this affects the integration of Open vSwitch with libvirt.

For example, to setup a VLAN in a network definition, you would do something like this:

<network>
  <name>openvswitch-net</name>
  <uuid>81ff0d90-c92e-6742-64da-4a736edb9a8b</uuid>
  <forward mode='bridge'/>
  <virtualport type='openvswitch'/>
  <portgroup name='bob' default='yes'>
    <vlan trunk='yes'>
      <tag id='666'/>
    </vlan>
    <virtualport>
      <parameters profileid='bob-profile'/>
    </virtualport>
  </portgroup>
  <portgroup name='alice'>
    <vlan trunk='yes'>
      <tag id='777'/>
      <tag id='888'/>
      <tag id='999'/>
    </vlan>
    <virtualport>
      <parameters profileid='alice-profile'/>
    </virtualport>
  </portgroup>
</network>

As you can see from the above, we are creating a network (named “openvswitch-net”) and two portgroups: “bob” and “alice”. Each portgroup has a VLAN trunk defined, although “bob” only has a single VLAN in the trunk.

Now, if we wanted to put this configuration directly on the interface itself, it would look like this:

<interface type='network'>
  <mac address='00:11:22:33:44:55'/>
  <source network='ovs-net'/>
  <virtualport type='openvswitch'>
    <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f' profileid='bob'/>
  </virtualport>
</interface>

Now, because we specified the profileid of “bob”, the VLAN trunk information for “bob” will be applied when this VM is booted up and its VIF is added to the OVS bridge. But what if we wanted to override this information in the interface definition itself? We can do that too, and here’s an example of how to do it:

<interface type='network'>
  <mac address='00:11:22:33:44:55'/>
  <source network='ovs-net'/>
  <vlan trunk='yes'>
    <tag id='42'/>
    <tag id='48'/>
    <tag id='456'/>
  </vlan>
  <virtualport type='openvswitch'>
    <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f' profileid='bob'/>
  </virtualport>
</interface>

Now when this virtual machine is booted, the configuration on the “interface” will take precedence, and the virtual machine will have a trunk port with VLANs 42, 48, and 456 passed to it.

Under the Covers

How does all of this work under the covers? We simply pass additional parameters to “ovs-vsctl” to ensure the port is trunked (trunk=VLAN1,VLAN2) or set up as an access port (tag=VLAN1). These are added to the command line libvirt uses when adding the ports to the OVS bridge.
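If you want to see the rough equivalent by hand, here is a sketch using ovs-vsctl directly (vnet0 and the VLAN numbers are hypothetical; substitute the actual port name libvirt created):

# Make vnet0 a trunk port carrying VLANs 42, 48, and 456
ovs-vsctl set Port vnet0 trunks=42,48,456
# Or make vnet0 an access port on VLAN 666
ovs-vsctl set Port vnet0 tag=666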

Conclusion

If you have read my previous article on configuring virtual machines with libvirt and Open vSwitch, you will note a caveat there around VLAN configuration. I’m happy to say this latest version of libvirt addresses that issue. You can now set up VLAN configuration for virtual ports connecting to an OVS bridge in multiple places in libvirt. This makes deploying libvirt with Open vSwitch much more useful.

Virtual Machines on Fedora 17 with Open vSwitch

My previous blog post showed you how to set up Open vSwitch (including LACP port-channels) on your Fedora 17 host. Once you have this working, creating virtual machines and adding them to one of your Open vSwitch bridges is the next logical step. For this setup, we will make use of libvirt to manage our virtual machines. We’ll utilize virt-manager (a GUI) and virsh (a CLI) to manage the VMs on the host. But the VMs themselves, once running on Fedora with libvirt and KVM, can easily be migrated into an oVirt setup, for instance. Perhaps a later post will detail the process of importing them into oVirt.

The Setup

I’ll be using a single host running Fedora 17 for this example. The host is running Open vSwitch for virtual networking as well. To make things a bit more complicated, I’ll also convert a virtual machine from the VMware VMDK format into a format libvirt can use. This will at least show a partial migration path from VMware-hosted virtual machines to Fedora+KVM-hosted virtual machines.

Make sure you have the appropriate software installed on your Fedora host. For this, the below command should get you going:

yum install libvirt qemu virt-manager

Now, make sure libvirt is started, and also starts at system boot time:

systemctl start libvirtd.service
systemctl enable libvirtd.service

The Process

The first step is to migrate your disk format. For this, I created a directory for my virtual machines (/home/images), and a subdirectory under there for my test virtual machine (fedora17). I copied my existing VMware virtual machine into that directory, and it looks like this:

[root@ucs-3 images]# ls
fedora.17.x86-64.20120529.vmdk  fedora.17.x86-64.20120529.vmx
[root@ucs-3 images]#

The next step is to convert this image into a format which libvirt will understand. I will utilize qemu-img for this, executing the command as below:

qemu-img convert fedora.17.x86-64.20120529.vmdk fedora17.img

qemu-img has a lot of options, but the above will perform a basic conversion. The end result is we now have an image suitable for import into libvirt. At this point, we can bring up virt-manager and create our virtual machine. Once you start virt-manager, you should see a simple image like below:

virt-manager start screen

We can now start the process of adding our virtual machine. Below are screen shots which show the process.

Step 1

Step 2

Step 3

Step 4

At this point, your VM will show up in virt-manager:

Virtual Machine imported in virt-manager

Virtual Machine Networking

The next step is to configure the virtual machine networking. We want to utilize libvirt’s integration with Open vSwitch for this next part, so we’ll utilize virsh (the CLI) to do this. Fire up virsh now in another window:

[root@ucs-3 images]# virsh 
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #

We will want to list the virtual machines, and then edit the XML definition for the virtual machine we just created with virt-manager.

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 -     Fedora17                       shut off

virsh #

Now run the “edit” command, passing the VM name from above. We will want to scroll down to the networking section, which will look something like the below:

Network section pre-OVS

Change it to look like the below, substituting the name of your OVS bridge for “source bridge” below:

Network configuration with OVS

Once you complete this, if you do a “dumpxml <VM name>” you will notice that libvirt has automatically added an interfaceid to the parameters section of the virtualport section of the XML. See the picture below.

Final libvirt OVS network configuration

At this point, your VM should be all set to fire up and utilize Open vSwitch for virtual networking.
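Once the VM is running, a quick sanity check is to confirm its vnet interface shows up as a port on the OVS bridge (br0 here is a placeholder for whatever your bridge is named; the vnet name is assigned by libvirt):

sudo ovs-vsctl list-ports br0
sudo ovs-vsctl show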

Caveats

There are a few issues with the above. For one, libvirt only allows setting two parameters in the “virtualport” section of the XML: interfaceid and profileid. This means you cannot set a VLAN tag, for instance. However, what this does mean is that by utilizing a profileid, you can take advantage of having all your configuration tied to the profile. This is how advanced networking technologies such as 802.1Qbh work.

Conclusion

Hopefully you now have your virtual machine up and running with a combination of Fedora, libvirt, KVM, and Open vSwitch. Future posts will likely show you how to integrate the above with oVirt for a nice, datacenter virtualization management solution which is fully open source.

Fedora 17 with Open vSwitch

I’ve recently decided to move some of the virtual infrastructure in my lab onto Fedora 17. I’ll be running my VMs on KVM, utilizing libvirt to manage them. The great thing about this setup is that, in theory, by utilizing libvirt I can easily move my infrastructure to something like oVirt or OpenStack in the future. But for now, I plan to simply make use of a combination of virsh and virt-manager. Getting Fedora 17 onto my host was quite easy, so I won’t cover that here. The next thing I wanted to set up was the networking layer.

The Lay of the Land

Before diving into details of my virtual networking configuration, some background on what the setup in my lab looks like. I have two 3560 switches in my lab, connected via a 4-port port-channel using optical connections. I trunk all my VLANs between the switches and let the 3560s hash based on src-dst-ip. The server I am using for this setup has 6 NICs in it, all Intel Gigabit capable. All 6 NIC ports are connected to a single 3560.

Virtual Networking on Fedora

I made the choice early on to utilize Open vSwitch for my virtual networking. It has been a part of Fedora since Fedora 16, and the Beefy Miracle release (17) also includes this fine piece of software. I utilize a variety of VLANs in my lab, thus necessitating trunk ports for some of the configuration. The first thing I decided to do was trunk some of my management VLANs to a bond interface. The VLANs in question were 64, 66, and 67. I utilized 2 physical ports for this bond interface and set up the port-channel as LACP.

The configuration on the 3560 end looks like this:

Configuration on the 3560 end of the OVS LACP channel

On the OVS side of things, here is what the configuration looks like in the /etc/sysconfig/network-scripts/ifcfg-bond0 configuration file. Please note the BOND_IFACES section. This is where you list the physical interfaces you want to be a part of your bond.

bond0 configuration
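For reference, a minimal sketch of what such a file contains is below. The bridge and NIC names (ovsbr0, em1, em2) are placeholders for whatever your setup uses; the key pieces are the OVSBond type, the BOND_IFACES list, and the LACP option:

DEVICE=bond0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=ovsbr0
BOOTPROTO=none
BOND_IFACES="em1 em2"
OVS_OPTIONS="lacp=active"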

The configuration, as shown by ovs-vsctl, looks like this:

ovs-vsctl output

Once you have the above working, you should now have the physical side of the OVS bridge working. The next step is to configure your management interface. For this, I simply created a mgmt0 interface, added it to the bridge, and set up a configuration file to have it brought up during system boot. You can see in the previous screen shot what this looks like. Below you will find the actual /etc/sysconfig/network-scripts/ifcfg-mgmt0 file:

ifcfg-mgmt0 configuration
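A sketch of what the mgmt0 file typically contains follows (ovsbr0 is again a placeholder bridge name, and I’m assuming DHCP for the management address; use a static IPADDR if that fits your environment better):

DEVICE=mgmt0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=ovsbr0
BOOTPROTO=dhcp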

One Additional Change

There is one additional change which is needed here. I disabled NetworkManager, and went with the old way of configuring networking. To do this, follow these instructions:

systemctl stop NetworkManager
systemctl disable NetworkManager

Before enabling the old network configuration, make a single change to the openvswitch systemd unit file. Edit /usr/lib/systemd/system/openvswitch.service, remove “network.target” from the “After” line, and add a “Before” line with “network.target”. The end result is something like this:

After=syslog.target
Before=network.target

Once you are done with this, make sure to enable both network.service and openvswitch.service:

systemctl enable network.service
systemctl enable openvswitch.service
systemctl start network.service
systemctl start openvswitch.service

Conclusion

The end result of the above is that I now have a LACP port-channel between my 3560 and my OVS bridge on the host. I have trunked some VLANs across this, and setup my management interface on the host as a virtual port on the same OVS bridge. This all works really well, and provides robust networking on your Linux host. Future posts will show how to add virtual machines to this OVS bridge!

Teach Your Kids to Code!

On top of being a software engineer, I’m also the father of 3 kids. My daughter recently turned 8, and she is my oldest. I’ve had her try out different methods of learning to program, but nothing had really stuck. I think this is about to change, though, as I recently discovered the wonderful website KidsRuby. After spending some time with KidsRuby over the last few days, it’s become clear there is enough interest to keep my daughter entertained and learning. My main gauge has been the fact that my middle son (6) has also shown interest in what his sister is doing! Both of them have spent time playing with KidsRuby over the last few days; read on for our experience.

Getting Started

It’s pretty straightforward to utilize KidsRuby. Just go to their download page and download either the Windows or Mac package. Since the Mestery household is a Mac household, we went with the Mac version. After the download completed, I was able to install the package, but I could not get it to run. After doing some googling, I found this issue on github, which apparently is what I was hitting. After reading the thread, I simply removed KidsRuby, reinstalled, and then the install went through successfully.

So, just a note if you have issues after installing on Mac, simply remove the KidsRuby package and reinstall to get going.

Learning Ruby!

After bringing up KidsRuby, you are greeted by an easy-to-navigate menu. See the image below for what the “Help” section looks like:

Navigating the program is easy, and within minutes you’ll be writing your first graphical Ruby program! The program uses a Logo-like turtle interface to teach graphics programming. For those old enough to remember using Logo, the experience will be very nostalgic. My daughter and son both enjoyed this and were excited when the turtle was drawing on the screen, following their commands. In fact, my daughter spent 30 minutes changing how the turtle moved to draw different shapes, utilizing “for” loops. Little did she know she was writing her own algorithm!

My Take

Teaching your kids to program is almost a must in today’s world. Everything is moving to be “App” centric, so having the skills to understand this new world is important. KidsRuby provides a fun, easy-to-navigate experience which will keep your kids happy and entertained for hours. On top of that, they are learning a great language in Ruby. From there, they can move on to more advanced things, such as running KidsRuby on the Raspberry Pi (video here). How cool is that?