It was a short trip, but action-packed with keynote sessions, breakout sessions, and private sessions set up for some of us bloggers. I also somehow ended up in two Tech Field Day sessions as well. A big thanks to Ivy Worldwide and HP for bringing us out here. It was definitely interesting being at Interop as a blogger because we (about six of us) had some great access to HP product management, technical marketing, and executive team members. The group I was in also had the opportunity to sit down for a Q&A with Bethany Mayer, SVP & GM of Networking at HP. Technology aside, they were a great group of people to talk with. Those I actually got to talk to for more than two minutes (about SDN, of course) listened and asked plenty of questions, as I did in return. I sincerely felt they wanted to solicit feedback on their solutions to further improve them. On that note, they did have some big announcements this week.
They announced their own virtual switch, joining the likes of Cisco, IBM, VMware, and NEC, which all have distributed virtual switches. Since I didn't get a deep dive, I'm hesitant to call the HP virtual switch a switch. Today (or on day one when it ships), the virtual construct they load in the hypervisor will only run in VEPA/EVB mode, communicating with an HP top-of-rack (ToR) switch. To make a comparison to the Cisco world, it is essentially a fabric extender. Also think VM-FEX. So in theory, it's not really a traditional virtual switch. I would expect all VM-to-VM traffic local to a single host to actually occur on the HP ToR, but if this is incorrect, definitely let me know. This means no overlays from hypervisor to hypervisor for HP. If they target the mid-market, this could work well for simplified management of the physical and virtual environment. I was also told that eventually it will, or could, operate as a virtual switch independent of the ToR switch. Another point I didn't get a chance to clarify was how their current model works with VM mobility.
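To make the VEPA/EVB point concrete, here is a toy sketch (my own illustration, not HP code, with hypothetical names) contrasting how a traditional vSwitch and a VEPA-mode edge handle traffic between two VMs on the same host: VEPA pushes everything up the uplink, and the ToR hairpins (reflective relay) frames destined back to the same host.

```python
def vswitch_forward(dst_vm, host_vms):
    """Traditional virtual switch: local VM-to-VM traffic stays in the host."""
    if dst_vm in host_vms:
        return "delivered locally in hypervisor"
    return "sent out uplink to ToR"

def vepa_forward(dst_vm, host_vms):
    """VEPA/EVB mode: ALL frames go out the uplink to the adjacent
    bridge, even when the destination VM is on the same host."""
    return "sent out uplink to ToR"

def tor_reflective_relay(frame_dst, ingress_port, mac_table):
    """ToR with reflective relay enabled may forward a frame back out
    the port it arrived on -- normally forbidden in classic bridging."""
    egress = mac_table.get(frame_dst)
    if egress == ingress_port:
        return f"hairpinned back out port {ingress_port}"
    return f"forwarded out port {egress}"
```

This is why all policy, visibility, and (I would expect) local VM-to-VM switching lands on the ToR rather than in the hypervisor.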
They talked publicly quite a bit about their integration with MSFT Lync and also the TippingPoint IPS to deliver SDN apps to the campus. These SDN applications integrate with and manipulate flows in real time to enhance QoS for Lync traffic on an HP network, and also redirect DNS queries to be inspected/analyzed, providing a fully distributed botnet filter. In both cases, they are using what I think Ivan (@ioshints) would call "unprotected hybrid port mode." The nice thing here is that HP is keeping it simple. They are effectively doing PBR in hardware. If there isn't a match against the OpenFlow FIB, the packets are forwarded using the normal pipeline (the MAC table in an L2 switch). This means there aren't any reactive flows or changes to how L2/L3 forwarding is done today when deploying these applications. During the demo of the Lync solution, after "QoS" was said to be modified with OpenFlow, I did clarify that HP-specific OpenFlow extensions are being used today, and the goal is to get these extensions into the next release of OpenFlow – maybe 1.4+?
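The hybrid-mode behavior described above can be sketched as a simple match-then-fall-through lookup. This is my own toy model, not HP's implementation, and the match fields and actions (a priority queue for Lync media, a redirect port for DNS inspection) are illustrative assumptions:

```python
# Proactively installed flows, PBR-style. On a miss, the packet falls
# through to the switch's NORMAL L2/L3 pipeline -- nothing is punted
# reactively to the controller.
FLOW_TABLE = [
    # Hypothetical rule: mark Lync/UC media (UDP) for a priority queue,
    # then hand the packet to the normal pipeline for forwarding.
    ({"ip_proto": "udp", "dst_port": 3478}, "set_queue:voice,output:NORMAL"),
    # Hypothetical rule: steer DNS queries to an inspection device
    # for the distributed botnet filter.
    ({"ip_proto": "udp", "dst_port": 53}, "output:ips_port"),
]

def forward(packet):
    """Return the action for the first matching flow, else NORMAL."""
    for match, action in FLOW_TABLE:
        if all(packet.get(k) == v for k, v in match.items()):
            return action   # OpenFlow hit: PBR-style special handling
    return "NORMAL"         # miss: regular L2/L3 forwarding, unchanged
```

The key property is that untouched traffic never sees the controller; only the specifically matched flows get special treatment.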
HP is also a member of the newly formed OpenDaylight project. In numerous discussions with multiple vendors, not just HP, it seems (speculation on my part) the focus is to continue down their own paths when it comes to controller development. Vendors don't seem to expect southbound APIs to be the same across their own offerings and OpenDaylight, but the majority does want a standard northbound API. This way, the application works on both controllers and the southbound APIs become an implementation detail (assuming they are open, standards-based protocols :-)).
OpenStack contribution – this was news to me, but HP was a top-five contributor to the latest Grizzly release of OpenStack. I kind of wish they had adopted OVS for their virtual switch.
TRILL – HP also announced their next-gen DC Core/Agg/ToR platforms. While they are beasts like most core switches, the point I want to make is that they support TRILL today. Someone from HP told me they are one of two companies now supporting standards-based TRILL. For me, though, they are the first company I'm personally hearing of that supports it; not sure of the other. I wonder what a full-stack deployment would look like with VEPA/EVB from ToR to hypervisor and TRILL on the same ToR northbound to the spine/core?
I hope to share much more about my experience at Interop over the coming days! If you are curious about anything in particular, feel free to write in and ask. Comment below or contact me privately.
Thanks,
Jason
Twitter: @jedelman8
Disclosure: Ivy Worldwide and HP covered my expenses on the trip, but they in no way asked for anything in return.