And this was a shallow dive. The goal was not to spend countless hours on each solution; I simply wanted a high-level overview. The focus was just to get the following questions answered.
Who has fixed configuration switches with 40GbE interfaces? Do they support “standard” L2/L3 protocols? Do they support some type of Layer 2 multipathing? Is there support for a type of MLAG? What is the port-to-port latency? What is the power consumption? Is there anything “special” about the switch, or is it unique in any way?
After a few seconds of thinking about it, I decided to focus on Arista, Dell Force10, Extreme, IBM, Juniper, Brocade, HP, and of course, Cisco.
7050S-64 – 48 x 1G/10G SFP+ and 4 x 40GbE interfaces
7050T-64 – 48 x 10GBASE-T and 4 x 40GbE interfaces
7050Q-16 – 16 x 40GbE interfaces
In order, the 7050S-64, 7050T-64, and 7050Q-16 switches have a typical/maximum power draw of 125W/220W, 372W/430W, and 192W/303W. Each has a low-end latency of 800 nanoseconds, while the high-end latency for each switch (same order) is 1.3 microseconds, 3.3 microseconds, and 1.15 microseconds. They all support Arista MLAG and run standard L2/L3 protocols. No TRILL support today, but the datasheets state there will be support in a future release. The 7050T-64 does stand out, as it seems to be the only 10GBASE-T switch on the market with 40GbE interfaces. Cisco announced a Nexus 2232 that supports 10GBASE-T, but it requires a parent switch to function.
S4810 – 48 x 1G/10G SFP+ and 4 x 40GbE interfaces
Z9000 – 32 x 40GbE interfaces
The S4810 has a maximum power consumption of 220W, latency of sub-700 nanoseconds, and supports the standard L2/L3 protocols. The Z9000 has a maximum draw of 800W, approximate latency of 3 microseconds, and also supports standard L2/L3 protocols. The data sheets state TRILL will be supported in the future; however, Dell Force10’s latest white paper, “Distributed Core Architecture Using the Z9000 Core Switching System,” gives TRILL as an option for control plane flexibility without any asterisk, so it *could* be supported at this point. Talk to your account team to find out for sure. I'll ask around as well. As far as MLAG goes, I couldn’t find anything on the Z9000, but the S4810 is also a stackable switch (unique for a 10G/40G switch), so you can do a cross-stack port-channel to form a multi-switch LAG. Assuming Dell Force10 does end up supporting TRILL, MLAG on the Z9000 becomes less important anyway.
X670V – 48 x 1G/10G SFP+ and 4 x 40GbE interfaces**
**The 4 x 40GbE interfaces are actually added via a network module. This is unique in that you can grow into the 40GbE interfaces – no need for the expense upfront if all you need is 1G/10G on the front-facing ports. The X670V is also a stackable switch.
The latency for the X670V is between 900 nanoseconds and 1.37 microseconds depending on L2/L3 traffic. After not seeing this on Extreme’s website, I was able to find it in a Lippis report after a quick search on the inter-web. Assuming the switches are stacked, they also support M-LAG. It has a maximum draw of about 450W and average use of 205W based on what I’m seeing in the Lippis report. Standard L2/L3 protocols are supported, but I could find no mention of Extreme supporting TRILL now or planning to in the future. One interesting point to make is that Extreme seems to be pretty vocal around OpenFlow and OpenStack. Their DC architecture, the “Open Fabric Data Center,” already supports OpenFlow. Very cool.
IBM has two (2) fixed configuration switches that support 40GbE.
RackSwitch G8264 – 48 x 1/10G SFP+ and 4 x 40GbE interfaces
RackSwitch G8316 – 16 x 40GbE interfaces
Interestingly enough, the G8264 is one of only two switches I’ve come across, other than the Cisco Nexus 5K series (10GbE only), that seem to support FCoE. Then again, I wasn’t really looking for FCoE support, so maybe I just didn’t see it.
The typical power consumption on the G8264 is 330W, so to be fair I’m guessing the maximum is between 400W and 500W. It supports standard L2/L3 protocols, does not seem to have support for TRILL, and has sub-microsecond port-to-port latency. No mention of M-LAG either.
OpenFlow support aside, the G8316 seems to match all of the specs of the G8264 for power, latency, protocol support, etc., based on the data sheets.
QFX3500 (node) Switch – 48 x SFP+ and 4 x 40GbE interfaces
It also supports FCoE and can be an FCoE-to-FC gateway. The switch itself doesn’t offer “ultimate” flexibility in that not every port can be 1G copper, 1G fiber, or FC, so there are limitations for 1G and FC, but any port can run 10GbE. Nominal power is at 295W while maximum draw is at 350W. As expected, like all the others, it supports standard L2/L3 protocols, with sub-microsecond latency. From what I can tell, it does not support M-LAG (if you were to use these as aggregation devices). No direct mention of L2MP/TRILL, but I do think it comes down to understanding the overall QFabric architecture and how it competes in that market. As far as a check in the box goes, though, no TRILL here unless you tell me otherwise.
Cisco, like the majority of the others, has two (2) fixed configuration switches that support 40GbE interfaces. Both fall into the Nexus 3000 product family. One interesting point to note is that the Nexus 3000 is Cisco’s only switch family based on merchant silicon.
Nexus 3064 – 48 x 1/10G SFP+ and 4 x 40GbE interfaces
Nexus 3016 – 16 x 40GbE interfaces
Both of these switches are rated at ~800 nanoseconds of latency. The 3064 has a typical operating power of 205W-239W and a maximum of 273W, while the 3016 has a typical operating power of 172W and a maximum of 227W. The Nexus 3000 switches all support Virtual Port Channel (vPC) technology, offering M-LAG functionality whether they are used as spine or leaf switches, which does offer some nice flexibility. Each of these switches also supports the standard L2/L3 protocols, as every other manufacturer’s did. Cisco recently announced that the N3K line of switches, along with NX-OS, will support OpenFlow, though they did not offer a definitive time frame. There is currently no support for Cisco FabricPath, and there does not seem to be support for TRILL planned for the future – this is what I was referring to earlier when I said Cisco is targeting financials with these switches, not necessarily data centers that require a lot of east-west traffic with L2 requirements. Design documents for the N3K describe L3 fat tree designs, but nothing at L2, unfortunately. It could just be because I’m “doing” and “designing” Cisco networks in my day job, but I was able to find all of this information faster on Cisco’s website than on the other manufacturers’ respective sites.
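Since we’re now juggling datasheet numbers from six different vendors, here’s a minimal Python sketch that pulls the figures quoted above into one place and ranks the switches by quoted latency and then maximum power draw. The numbers are simply the vendor figures cited in this post (low-end latency where a range was given, and a 1-microsecond placeholder where the datasheet just says “sub-microsecond”), so treat them as ballpark values for comparison, not benchmarks.

```python
# Vendor datasheet figures as quoted in this post; ranged latencies use the
# low end, "sub-microsecond" is approximated as 1000 ns, and unknown max
# power is left as None. Ballpark comparison only, not benchmark data.

switches = [
    # (vendor, model, 40GbE ports, max power W, approx. latency ns)
    ("Arista",  "7050S-64",    4,  220,  800),
    ("Arista",  "7050T-64",    4,  430,  800),
    ("Arista",  "7050Q-16",   16,  303,  800),
    ("Force10", "S4810",       4,  220,  700),   # "sub-700 ns"
    ("Force10", "Z9000",      32,  800, 3000),
    ("Extreme", "X670V",       4,  450,  900),   # per Lippis report
    ("IBM",     "G8264",       4, None, 1000),   # "sub-microsecond"; max draw not published
    ("IBM",     "G8316",      16, None, 1000),
    ("Juniper", "QFX3500",     4,  350, 1000),   # "sub-microsecond"
    ("Cisco",   "Nexus 3064",  4,  273,  800),
    ("Cisco",   "Nexus 3016", 16,  227,  800),
]

# Sort by quoted latency, then by max power; unknown power sorts last.
for vendor, model, ports, watts, ns in sorted(
    switches, key=lambda s: (s[4], s[3] if s[3] is not None else float("inf"))
):
    watts_str = f"{watts}W" if watts is not None else "n/a"
    print(f"{vendor:8} {model:11} {ports:>2} x 40GbE  max {watts_str:>5}  ~{ns} ns")
```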
Arista – looks solid overall. Their website is easy to navigate, and I like the fact that they have a 10GBASE-T solution with 40GbE interfaces. Not mentioned here, but Arista’s EOS seems solid and ready to be programmed for SDNs. I also look forward to seeing if Arista will ever make a move outside the Data Center. Time will tell.
Dell Force10 – I was probably the most surprised here, maybe because I never paid much attention to Force10. I’m really liking their 32-port 40GbE switch and the fact that it supports TRILL (still verifying this), enabling their customers to build L2 or L3 fat trees, i.e., distributed cores.
Juniper – QFabric is an interesting technology/architecture, and they have a nice converged I/O (FCoE), low-latency, 40G solution. I need to understand it better to really comment on the overall architecture, though.
HP, IBM, and Brocade – they didn’t do it for me. From navigating the websites to finding the latest documentation, as a customer, I would get fed up. I do realize this has nothing to do with their products, though. My advice would be just to call your account team to learn more. The highlight here is IBM’s OpenFlow-enabled switch.
Extreme – I was pretty surprised here as well. I wasn’t expecting them to have an offering (not sure why), but the optional module to enable 40GbE seems to make it more cost-effective for end users. OpenFlow support is also a nice bonus for forward-thinking network architects.
Cisco – these guys own the general switching market right now. Last I checked, it’s about 76% market share, which is pretty unbelievable. Fortunately for Cisco, the Nexus 3000 is shipping now with sub-microsecond latency, but they were in fact late to the game when it came to the low-latency networks that Arista has been going after for several years now. But hey, you don’t always have to be first, right? If a customer demands it, Cisco will develop it or get it integrated into their solution set somehow. That being said, I do not like the lack of FabricPath on the Nexus 3000 series :). Whether it is technically possible to get FP/TRILL on the 3K in the future is something I’m curious about right now and will surely find out.
We’re in the midst of a lot of change when it comes to data center networking. We are seeing smaller, more compact, more power-efficient, higher-density 10GbE, and now 40GbE, fixed configuration switches, quite possibly taking over as the new core in the data center. Overall, I was pleasantly surprised by the number of offerings for 40GbE switching out there.
I do realize I only focused on a few key data points from every manufacturer, but the goal here was only to give a high-level overview of the 40GbE data center switching market. The requirements/characteristics of networking devices I focused on are not necessarily the most important ones, just the ones my customers ask me about the most. Some of the manufacturers did support technologies for VM awareness and hardware-based network virtualization that do seem attractive – just not enough time to cover them here! If you want to dive deeper into any of these areas, even those that weren’t covered in the post, feel free to contact me, and we can dive in together.