After reading Ivan Pepelnjak’s (ipspace) and Martin Casado’s (networkheresy) blogs recently, I noticed they were both making general comparisons of network tunneling protocols. These protocols are nothing new; think UDP, GRE, EoMPLS, and VPLS, plus a newer one that has been getting attention over the past several months, VXLAN. What caught my attention, though, was that CAPWAP was also a protocol each of them compared to GRE, UDP, and VXLAN. As you’ll recall from my recent OpenFlow post, I spent quite a bit of time comparing OpenFlow to CAPWAP, in the sense that OpenFlow is being used to separate the control and data planes on switches, while CAPWAP is being used to separate the control and data planes on Access Points.
I figured, why not, let’s google CAPWAP and OpenFlow together and see what comes up. No surprise: you get the post from Matt Davy at IU, who was drawing a similar comparison of OF/SDN to CAPWAP/WLAN, my recent blog, Ivan’s blog, Martin’s blog, and then, finally, the reason I’m writing this: a link to openvswitch.org that talked about building support for CAPWAP into Open vSwitch. Interesting, right? Well, to me it is! After digging further, it looks like Jesse Gross (from Nicira, the company that does much of the development work on Open vSwitch) had some comments in the commit log for this feature.
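For those who want to see what that could look like in practice, here is a minimal sketch of bringing up a CAPWAP tunnel port on a bridge, driven from Python. The ovs-vsctl invocation follows the same pattern Open vSwitch uses for its GRE tunnel ports; the "capwap" interface type and its options are my assumption based on the commit discussion, so check them against your actual Open vSwitch version.

import subprocess

# Assumed sketch: add a CAPWAP tunnel port "capwap0" to an existing bridge
# "br0", mirroring the ovs-vsctl pattern used for GRE tunnel ports. The
# "capwap" type name and "remote_ip" option are assumptions here, not
# confirmed against the patch discussed in the commit log.
subprocess.run(
    ["ovs-vsctl", "add-port", "br0", "capwap0", "--",
     "set", "interface", "capwap0",
     "type=capwap", "options:remote_ip=192.0.2.10"],
    check=True,
)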
There have been numerous articles, blogs, and columns written over the past few months on OpenFlow and Software Defined Networking, but I still feel like many of them aren’t breaking it down for the typical Enterprise Network Engineer. Having followed OpenFlow since mid-2010, I do not claim to be an expert in this space, but I will give my take on what could be the game changer for the networking industry.
Let me also apologize up front because this post is longer than I originally intended it to be!

What is OpenFlow? OpenFlow is simply a protocol used to communicate between a server (i.e., a controller) and network switches, allowing the switch’s control plane to be separated from its data plane. Now what does that mean, and what is a controller? This is where many analogies are being made about what this actually looks like at a low level. The most common comparison is that it’s “the x86 instruction set for networking.” For the academics or developers out there, this may mean a lot, but for me it means absolutely nothing, and I consider myself pretty in tune with the mindset of someone designing Enterprise Data Center and Campus networks with the latest technology out there. For me, the analogy that works when explaining OpenFlow relates it directly to the past and present Enterprise WLAN market.
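To make the control plane/data plane split concrete, here is a minimal sketch of an OpenFlow application, written against the open-source Ryu controller framework purely for illustration (any OpenFlow controller would do; nothing in this post is tied to Ryu). The app runs on the controller and pushes a single “table-miss” flow entry to each switch that connects, so any packet the switch’s flow table cannot match is punted up to the controller. That is the separation in a nutshell: the decision-making lives in this Python app, while the switch just forwards according to the flow table it is handed.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Table-miss entry: anything the flow table does not match gets
        # sent to the controller, which decides what to do with it.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))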
There are a few different warranty programs available from Cisco that are not widely advertised or discussed. However, given that every customer has different technical and business requirements, it is also true that each customer has different SLAs that have to be met internal to their organization. So let’s examine some of the options available.

There are standard SMARTnet (SNT) offerings from Cisco, including the 24x7x4 and 8x5xNBD support offerings. These are probably two of the most popular, but there are many others out there as well. It is important to note that every SMARTnet offering from Cisco includes 24x7x365 access to the Cisco Technical Assistance Center (TAC). I am not here to promote TAC by any means, even though it is a world-class organization, but that around-the-clock access is often doubted because of the “8x5xNBD” usually found in the description of the SNT SKU being sold. The “24x7x4” and “8x5xNBD” designations actually refer to the turnaround time for hardware replacement. So what if you’re an organization that has 200 user access layer switches and already keeps a number of spare switches on hand? Good question, and for me personally, I would move away from 24x7x4 SNT in that case. Since the concern is no longer hardware replacement, the most important piece of the Day 2 support puzzle becomes having a number to call in case of a problem. Again, this is where TAC comes in, and you can leverage 8x5xNBD on the switches. It would be ideal to run an ROI analysis of sparing vs. the different levels of SNT to see which makes the best financial sense for the organization; a rough sketch of that math follows below. Cisco or your local partner can help you with this.
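As a back-of-the-napkin illustration of that ROI exercise, here is a short sketch. Every number in it (switch cost, spare count, contract pricing) is made up for the example; substitute real quotes from Cisco or your partner.

# All figures below are hypothetical placeholders, not real Cisco pricing.
NUM_SWITCHES = 200      # user access layer switches, per the example above
SWITCH_COST = 3000      # assumed cost of one cold spare switch
SPARES = 6              # assumed number of spares kept on hand
SNT_8X5XNBD = 150       # assumed annual 8x5xNBD contract cost per switch
SNT_24X7X4 = 400        # assumed annual 24x7x4 contract cost per switch
YEARS = 3               # evaluation window

# Option 1: buy spares yourself and keep the cheaper contract for TAC access.
self_sparing = SPARES * SWITCH_COST + NUM_SWITCHES * SNT_8X5XNBD * YEARS
# Option 2: put the premium 4-hour hardware replacement on every switch.
premium = NUM_SWITCHES * SNT_24X7X4 * YEARS

print(f"Self-sparing + 8x5xNBD over {YEARS} years: ${self_sparing:,}")
print(f"24x7x4 on all switches over {YEARS} years: ${premium:,}")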
The Nexus 7000 is constantly evolving, and there seem to be more and more design parameters that have to be taken into consideration when designing Data Center networks with these switches. I’m not going to go into each of the different areas from a technical standpoint, but rather try to point out as many as possible of those so-called “gotchas” that need to be known up front when purchasing, designing, and deploying Nexus 7000 series switches.

Before we get started, here is a quick summary of current hardware on the market for the Nexus 7000.
Instead of writing about all of these design considerations, I thought I’d break it down into a Q&A format, as that’s typically how I end up getting these questions anyway. I’ve run into all of these questions over the past few weeks (many more than once), so hopefully this will be a good starting point, for myself (as I tend to forget) and for many others out there, to check compatibility issues between the hardware, software, features, and licenses of the Nexus 7000. The goal is to keep the answers short and to the point.

Q: What are the throughput capabilities and differences of the two fabric modules (FAB1 & FAB2)?

A: It is important to note that each chassis supports up to five (5) fabric modules. Each FAB1 has a maximum throughput of 46Gbps/slot, meaning the total per-slot bandwidth available with five (5) FAB1s in a single chassis is 230Gbps. Each FAB2 has a maximum throughput of 110Gbps/slot, meaning the total per-slot bandwidth available with five (5) FAB2s in a single chassis is 550Gbps. The next question digs into this a bit deeper, including how the maximum theoretical per-slot bandwidth comes down based on which particular linecards are being used. In other words, the max bandwidth per slot really depends on the fabric connection of the linecard being used.
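The arithmetic behind those figures is just the per-module slot bandwidth multiplied by the number of installed fabric modules; here is a small sketch if you want to check partially populated chassis as well.

# Per-slot fabric bandwidth for the Nexus 7000, using the figures above.
FAB_GBPS_PER_SLOT = {"FAB1": 46, "FAB2": 110}   # Gbps per slot, per module

def per_slot_bandwidth(fab_type: str, modules: int) -> int:
    """Total per-slot bandwidth (Gbps) with `modules` fabric modules installed."""
    if not 1 <= modules <= 5:
        raise ValueError("a Nexus 7000 chassis holds one to five fabric modules")
    return FAB_GBPS_PER_SLOT[fab_type] * modules

print(per_slot_bandwidth("FAB1", 5))   # 230 Gbps, as above
print(per_slot_bandwidth("FAB2", 5))   # 550 Gbps, as above
print(per_slot_bandwidth("FAB2", 3))   # 330 Gbps with a partially populated fabric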