Recently, I’ve been spending an increasing amount of time building a virtual networking layer using both Open vSwitch and OVN. I’ve been lucky enough to be part of a few talks on OVN, so I won’t go into much detail on it here, but I will say we’ve found it to be a great solution for virtual networking at scale. What I’d like to discuss here are some of the things we’re testing with OVN at scale.
One of the most interesting new features in Neutron for the Kilo release is the addition of subnet pools to the API. This work was initially targeted to integrate nicely with pluggable IPAM. Unfortunately, pluggable IPAM did not make it into Kilo and is now targeted at Liberty. But even without pluggable IPAM as a consumer, the subnet pools addition is quite useful. Here’s an example of how you might use it:

    neutron net-create webapp
    neutron subnetpool-create --default-prefixlen 24 --pool-prefix 10.10.0.0/16 webpool
    neutron subnet-create --subnetpool webpool websubnet

What I’ve shown above is the creation of a network, a pool, and a subnet for an example web application. Note that no CIDR is specified for the subnet; it is carved out of the pool automatically using the pool’s default prefix length.
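The allocation behavior behind --default-prefixlen can be sketched in a few lines of Python. This is a toy model of the concept, not Neutron’s actual IPAM code: it hands out the next free /24 from a 10.10.0.0/16 pool, which is essentially what the API does for you when you create a subnet from a pool without specifying a CIDR.

```python
import ipaddress

# A toy model of a Neutron subnet pool: a /16 pool that hands out
# /24 subnets (the pool's default prefix length) on demand.
pool = ipaddress.ip_network("10.10.0.0/16")
default_prefixlen = 24

allocated = []


def allocate_subnet():
    """Return the next unused /24 from the pool (illustrative only)."""
    for candidate in pool.subnets(new_prefix=default_prefixlen):
        if candidate not in allocated:
            allocated.append(candidate)
            return candidate
    raise RuntimeError("subnet pool exhausted")


first = allocate_subnet()   # 10.10.0.0/24
second = allocate_subnet()  # 10.10.1.0/24
```

The point is that tenants never have to coordinate CIDR choices by hand; each subnet-create simply pulls the next available prefix from the pool.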
From this post by Ivan Pepelnjak on industry pundits who espouse the value of whitebox switching while reaching for brand-name hardware themselves:

    There’s a simple litmus test: look at the laptop they’re using. Is it a no-name x86 clone or a brand name laptop? Is it running Linux or Windows… or are they using a MacBook with OS X because it works better than any alternative (even Windows runs better on a MacBook than on a low-cost craptop).
I’ve recently decided to move some of the virtual infrastructure in my lab to Fedora 17. I’ll be running my VMs on KVM, using libvirt to manage them. The great thing about this setup is that, in theory, using libvirt means I can easily move my infrastructure to something like oVirt or OpenStack in the future. For now, though, I plan to simply use a combination of virsh and virt-manager.