
40GbE Data Center Switching

12/10/2011

Let me be fair by saying I work on designs on a regular basis that are 99% Cisco.  Of course, there are integrations with other equipment for every technology, but from an R/S standpoint, it's mainly Cisco.  Occasionally, I'll come across competitive (to CSCO) information, but that's about it.  For this post, I wanted to make it a point to see what was out there in the new 40GbE LAN switching market.  It's a topic that is becoming more popular (due to various trends in the Data Center), and I'm quite surprised by it all, so I figured let's dive in and see who's got what.

And this was a shallow dive.  The goal was not to spend countless hours on each solution; I simply wanted a high-level overview.  The focus was just to get the following questions answered.

Who has fixed configuration switches with 40GbE interfaces?  Do they support "standard" L2/L3 protocols?  Do they support some type of Layer 2 multipathing?  Is there support for a type of MLAG?  What is the port-to-port latency?  What is the power consumption?  Is there anything "special" about the switch, or is it unique in any way?

After a few seconds of thinking about it, I decided to focus on Arista, Dell Force10, Extreme, IBM, Juniper, Brocade, HP, and of course, Cisco.

Arista - Just after going to their website, I was directed to a product matrix, so they made this research quite easy compared to some of the other vendors.  Arista has three (3) fixed configuration switches that support 40GbE.  These are the 7050S-64, 7050T-64, and 7050Q-16. 

7050S-64 – 48 x 1G/10G SFP+ and 4 x 40GbE interfaces

7050T-64 – 48 x 10GBASE-T and 4 x 40GbE interfaces

7050Q-16 – 16 x 40GbE interfaces

In order, the 7050S-64, 7050T-64, and 7050Q-16 have a typical/maximum power draw of 125W/220W, 372W/430W, and 192W/303W.  Each has a low-end latency of 800 nanoseconds, while the high-end latency (same order) is 1.3 microseconds, 3.3 microseconds, and 1.15 microseconds.  They all support Arista MLAG and run standard L2/L3 protocols.  No TRILL support today, but the data sheets state there will be support in a future release.  The 7050T-64 does stand out, as it seems to be the only 10GBASE-T switch on the market with 40GbE interfaces.  Cisco announced a Nexus 2232 that supports 10GBASE-T, but it requires a parent switch to function.

Dell Force10 - now these guys are pushing their distributed Core design hard.  At least that's the way it seems after seeing them tweet it 10 times in a day :).  I kid, but it was a lot, and they have good reason to market their solution.  They seem to have the ONLY 32-port 40GbE switch on the market right now.  DF10 has two (2) fixed configuration switches that support 40GbE.

S4810 – 48 x 1G/10G SFP+ and 4 x 40GbE interfaces

Z9000 – 32 x 40GbE interfaces

The S4810 has a maximum power consumption of 220W, latency of sub-700 nanoseconds, and supports the standard L2/L3 protocols.  The Z9000 has a maximum draw of 800W, approximate latency of 3 microseconds, and also supports standard L2/L3 protocols.  The data sheets state that TRILL will be supported in the future; however, DF10's latest white paper, "Distributed Core Architecture Using the Z9000 Core Switching System," gives TRILL as an option for control plane flexibility without any asterisk, so it *could* be supported at this point.  Talk to your account team to find out for sure.  I'll ask around as well.  As far as MLAG goes, I couldn't find anything on the Z9000, but the S4810 is also a stackable switch (unique for a 10G/40G switch), so you can do a cross-stack port-channel to form a multi-switch LAG.  Assuming Dell F10 does support TRILL, MLAG support on the Z9000 becomes less important anyway.
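
To get a rough sense of what a "distributed Core" built from a 32-port 40GbE box could look like, below is a quick Python sketch of the generic non-blocking two-tier leaf/spine (fat tree) arithmetic.  To be clear, this is not DF10's published design; the only number taken from their data sheet is the 32-port count, and the half-down/half-up split is my own assumption for a non-blocking fabric.

# Generic two-tier leaf/spine (fat tree) arithmetic for a fixed configuration
# switch with a given port count.  This is NOT Dell Force10's published design,
# just the standard non-blocking Clos math applied to a 32 x 40GbE box.

def two_tier_fabric(ports_per_switch: int, uplinks_per_leaf: int):
    """Return (spines, max_leaves, edge_ports) for a non-blocking two-tier fabric."""
    # One uplink from every leaf to every spine, so the spine count equals
    # the number of uplinks on each leaf.
    spines = uplinks_per_leaf
    # Each spine terminates one link per leaf, so the leaf count is capped
    # by the spine's port count.
    max_leaves = ports_per_switch
    # Whatever is not an uplink on a leaf is an edge-facing port.
    edge_ports = max_leaves * (ports_per_switch - uplinks_per_leaf)
    return spines, max_leaves, edge_ports

# Z9000-style box: 32 x 40GbE, split half down / half up for a non-blocking fabric.
spines, leaves, edge = two_tier_fabric(ports_per_switch=32, uplinks_per_leaf=16)
print(f"{spines} spines, up to {leaves} leaves, {edge} x 40GbE non-blocking edge ports")
# -> 16 spines, up to 32 leaves, 512 x 40GbE non-blocking edge ports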

Extreme – I haven’t really heard a peep from these guys in a long time, but they seem to have a single fixed configuration switch that supports 40GbE interfaces.  The switch is the Summit X670V.  It would be comparable to the previously mentioned Arista 7050S-64 and Dell Force10 S4810.

X670V – 48 x 1G/10G SFP+ and 4 x 40GbE interfaces**

**The 4 x 40GbE interfaces are actually added via a network module.  This is unique in that you can grow into the 40GbE interfaces – no need for the expense up front if all you need is 1/10GbE on the front-facing ports.  The X670V is also a stackable switch.

The latency for the X670V is between 900 nanoseconds and 1.37 microseconds depending on L2/L3 traffic.  After not seeing this on Extreme's website, I was able to find it in a Lippis report after a quick search on the inter-web.  Assuming the switches are stacked, they also support MLAG.  The X670V has a maximum draw of about 450W and average use of 205W based on what I'm seeing in the Lippis report.  Standard L2/L3 protocols are supported, but I could find no mention of plans for Extreme to support TRILL in the future.  One interesting point is that Extreme seems to be pretty vocal around OpenFlow and OpenStack.  Their DC architecture, the "Open Fabric Data Center," already supports OpenFlow.  Very cool.

IBM – giving them well-deserved credit, they do have an OpenFlow-enabled switch, although the data sheet doesn't even state it!  It's been in the news for a few weeks, but apparently they didn't get a chance to update the website and data sheet.  Oh well.

IBM has two (2) fixed configuration switches that support 40GbE.

RackSwitch G8264 – 48 x 1/10G SFP+ and 4 x 40GbE interfaces

RackSwitch G8316 – 16 x 40GbE

Interestingly enough, the G8264 is one of only two switches I came across that seem to support FCoE, other than the Cisco Nexus 5K series (10GbE only).  Then again, I wasn't really looking for FCoE support, so maybe I just didn't see it.

The typical power consumption on the G8264 is 330W, so to be fair, I'm guessing the maximum is somewhere between 400W and 500W.  It supports standard L2/L3 protocols, does not seem to have support for TRILL, and has sub-microsecond port-to-port latency.  No mention of MLAG either.

OpenFlow support aside, the G8316 seems to match the G8264's specs for power, latency, protocol support, etc., based on the data sheets.

Juniper – over the past few months, Juniper has launched the "QFabric Switch Architecture" for the Data Center.  This is really the first time I went to Juniper's site trying to find information (for 40GbE) and spent a few minutes looking at the QFabric documents.  Interesting stuff, but we'll save that for a rainy day.  Bottom line: Juniper has one (1) fixed configuration switch that acts as a QFabric node, i.e., a top-of-rack switch, which can operate as an independent switch or as a "node" in the QFabric architecture.

QFX3500 (node) Switch – 48 x SFP+ and 4 x 40GbE interfaces

It also supports FCoE and can be an FCoE-to-FC gateway.  The switch itself doesn't offer "ultimate" flexibility, in that not every port can be 1G copper, 1G fiber, or FC, so there are limitations for 1G and FC, but any port can run 10GbE.  Nominal power is 295W, while maximum draw is 350W.  As expected, like all the others, it supports standard L2/L3 protocols with sub-microsecond latency.  From what I can tell, it does not support MLAG (if you were to use these as aggregation devices).  No direct mention of L2MP/TRILL, but I do think it comes down to understanding the overall QFabric architecture and how it competes in that market.  As far as a check in the box goes, though, no TRILL here unless you tell me otherwise.

Brocade – The plan was to look at Brocade, but after a quick search on their website, there was no mention of any 40GbE fixed configuration switches.  After looking a little more, I found that the Brocade ICX6610, which comes in a few different flavors, does have 4 x 40GbE interfaces based on the QSFP standard, but they are strictly used for stacking.  Maybe this will change in the future and those ports will be permitted to connect back into a Core.

HP – it requires way too many clicks to get to their networking home page.  HP and IBM both badly need to re-architect their websites.  I am not currently an HP customer, nor am I trying to be one.  Anyway, when I finally got to their Data Center networking page, there was nothing mentioned about a 40GbE-enabled switch.  I was about to throw in the towel, figured I'd google it, and sure enough, I got redirected to the "switches" main page (not data center), only to find a broken link to what looks like a new HP 5900 switch that supports 40GbE.  The data center page didn't have the 5900…it only had the 5800 series.  Oh well, maybe next time, HP.

Cisco – seems to be focusing on the low-latency financial markets with their 40GbE-enabled switches.  As of late, they are also building documents on how these switches meet the requirements of large Hadoop (Big Data) clusters.

Cisco, like the majority of the others, has two (2) fixed configuration switches that support 40GbE interfaces.  These both fall into the Nexus 3000 product family.  One interesting point to note is that the Nexus 3000 is Cisco's only switch family based on merchant silicon.

Nexus 3064 – 48 x 1/10G SFP+ and 4 x 40GbE interfaces

Nexus 3016 – 16 x 40GbE interfaces

Both of these switches are rated at ~800 nanoseconds of latency.  The 3064 has a typical operating power of 205W-239W and a maximum of 273W, while the 3016 has a typical operating power of 172W and a maximum of 227W.  The Nexus 3000 switches all support Virtual Port Channel (vPC) technology, offering MLAG functionality whether they are used as spine or leaf switches, which does offer some nice flexibility.  Each of these switches also supports the standard L2/L3 protocols, as every other manufacturer's did.  Recently, Cisco did announce that the N3K line of switches, along with NX-OS, will support OpenFlow – they did not offer up a definitive time frame.  There is currently no support for Cisco FabricPath, and there does not seem to be support for TRILL in the future – this is what I was referring to earlier when saying Cisco is targeting financials with these switches, not necessarily data centers that require a lot of east-west traffic with L2 requirements.  Design documents for the N3K describe designs with L3 fat trees, but nothing at L2, unfortunately.  It could just be because I'm "doing" and "designing" Cisco networks in my day job, but I was able to find all of this information on Cisco's website faster than on the other manufacturers' sites.
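
To pull the 48 x 10G + 4 x 40G class of boxes together in one place, here is a quick Python sketch that compares them using only the power and latency figures quoted above.  Keep in mind the vendors publish different kinds of figures (typical vs. average vs. nominal vs. maximum), and the "watts per 10GbE-equivalent port" normalization (counting each 40GbE port as four 10GbE ports) is my own rough metric, so treat this as back-of-the-napkin math, not a benchmark.

# Compare the 48 x 10G + 4 x 40G class of switches using the data sheet
# figures quoted in the post.  The power numbers are not apples-to-apples:
# each vendor publishes a different kind of figure, noted per entry.

SWITCHES = [
    # (model, power in watts, which power figure the data sheet quotes,
    #  low-end latency in ns, high-end latency in ns)
    ("Arista 7050S-64",    125, "typical", 800,  1300),
    ("Dell Force10 S4810", 220, "maximum", None, 700),   # "sub 700 ns"
    ("Extreme X670V",      205, "average", 900,  1370),
    ("IBM G8264",          330, "typical", None, 1000),  # "sub microsecond"
    ("Juniper QFX3500",    295, "nominal", None, 1000),  # "sub microsecond"
    ("Cisco Nexus 3064",   205, "typical", 800,  800),   # low end of the 205W-239W range
]

# 48 x 10GbE plus 4 x 40GbE works out to 64 "10GbE-equivalent" front-panel ports.
TEN_GIG_EQUIV_PORTS = 48 + 4 * 4

print(f"{'Model':<22} {'Power (W)':>10} {'Figure':>8} {'W per 10G port':>15} {'Latency (ns)':>13}")
for model, watts, figure, lat_lo, lat_hi in SWITCHES:
    per_port = watts / TEN_GIG_EQUIV_PORTS
    latency = f"{lat_lo}-{lat_hi}" if lat_lo is not None else f"<{lat_hi}"
    print(f"{model:<22} {watts:>10} {figure:>8} {per_port:>15.1f} {latency:>13}")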

Final thoughts for each vendor:

Arista – looks solid overall.  Their website is easy to look things up on and navigate, and I like the fact that they have a 10GBASE-T solution with 40GbE interfaces.  Not mentioned here, but Arista's EOS seems solid and ready to be programmed for SDN.  I also look forward to seeing whether Arista will ever make a move outside the Data Center.  Time will tell.

Dell Force10 – I was probably the most surprised here, maybe because I never paid much attention to Force10.  I really like their 32-port 40GbE switch and that it supports TRILL (still verifying this), enabling their customers to build L2 or L3 fat trees, i.e., distributed Cores.

Juniper – QFabric is an interesting technology/architecture, and they have a nice converged I/O (FCoE), low-latency, 40G solution.  I need to understand it better to really comment on the overall architecture, though.

HP, IBM, and Brocade – they didn't do it for me.  From navigating the websites to finding the latest documentation, as a customer, I would get fed up.  I do realize this has nothing to do with their products, though.  My advice would be just to call your Account Team to learn more.  The highlight here is IBM's OpenFlow-enabled switch.

Extreme – I was pretty surprised here as well.  I wasn't expecting them to have an offering (not sure why), but the optional module to enable 40GbE seems to make it more cost-effective for end users.  OpenFlow support is also a nice bonus for forward-thinking network architects.

Cisco – these guys own the general switching market right now.  Last I checked, it's about 76% market share, which is pretty unbelievable.  Fortunately for Cisco, the Nexus 3000 is shipping now with sub-microsecond latency, but they were in fact late to the game when it came to the low-latency networks that Arista has been going after for several years now.  But hey, you don't always have to be first, right?  If a customer demands it, Cisco will develop it or get it integrated into their solution set somehow.  With that being said, however, I do not like the lack of FabricPath on the Nexus 3000 series :).  Whether it is technically possible to get FP/TRILL on the 3K in the future is something I'm curious about right now and will surely find out.

Summary

We're in the midst of a lot of change when it comes to data center networking.  We are seeing smaller, more compact, more power-efficient, higher-density 10GbE and now 40GbE fixed configuration switches, quite possibly taking over as the new Core in the data center.  Overall, I was pleasantly surprised by the number of offerings for 40GbE switching out there.

I do realize I only focused on a few key data points from each manufacturer, but the goal here was only to give a high-level overview of the 40GbE Data Center Switching market.  The requirements/characteristics of networking devices that I focused on are not necessarily the most important, but they are the ones my customers are asking me about the most.  Some of the manufacturers do support technologies for VM awareness and hardware-based network virtualization that seem attractive – just not enough time to cover them here!  If you want to dive deeper into any of these areas, even those that weren't covered in the post, feel free to contact me, and we can dive in together.

12 Comments
Ryan Malayter link
1/18/2012 11:19:04 am

Interesting that all of these switches seem to be based on the Broadcom Trident/Trident+ chip - even the Ciscos. I believe the 32x40GbE from Force10 is based on three of these chips in an internal Clos configuration.

Is Broadcom the only vendor actually shipping 40GbE silicon?

Jason Edelman link
1/19/2012 06:46:52 am

Ryan,

It is indeed interesting. Your speculation on the Force10 using an internal Clos configuration sounds logical to me, although I don't know that for sure. I also haven't heard of any other vendors besides Broadcom shipping 40GbE silicon, but that's not to say there aren't some small shops out there.

I even checked newer Pronto switches that fully support OpenFlow and same thing there - they are using the Trident chipset.

Here is also a good write up by Greg Ferro if you haven't seen it yet. Talks much about Trident being used by EVERY switch and how networks will become software driven in the future:

http://etherealmind.com/merchant-silicon-vendor-software-rise-lost-opportunity/

-Jason

tony roth
1/23/2012 01:54:24 am

Mellanox is the best 40GbE choice; nobody comes close. 36 ports of 40GbE for $16k, dual-port 40GbE NICs for $500.

Jason Edelman link
1/23/2012 02:27:23 am

What manufacturers are using Mellanox chips for their 40GbE switches?

Frank Castillo
3/22/2012 05:29:15 am

Mellanox Technologies introduced the SX1035 and SX1036:

http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=114&menu_section=71

http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=115&menu_section=71


Michael Hoehl
1/24/2012 09:22:05 am

Great summary!

Any opinions or findings regarding secure network engineering with these high speed products?

Jason Edelman link
1/26/2012 02:41:44 am

Thanks!

Can you be a little more specific with what you are asking?

IrishJan link
5/17/2012 10:12:06 am

The Force10 switches: the S4810 is stackable with a current max of 3 units, which will be extended to 6 units in a future release. The Z switch will become stackable in the near future with a max of 3 units per stack.

Rajiv Barsamanian
7/8/2012 11:58:37 am

Our architecture team recently had a discussion with the Force10 folks. The Z9000 does NOT support TRILL at all, and it doesn't seem to be on the horizon anytime soon. TRILL would require a whole new chipset and therefore a whole new switch architecture to support it.

The Z9000 also does NOT support MLAG, but I heard that it might by early next year.

Jason Edelman link
7/10/2012 12:34:45 am

Thanks for the info, Rajiv. That is surprising to hear.

Chris
11/12/2012 03:37:47 am

The Z9000 supports VLT/MLAG as of the 9.0 release that's available now.

Mike Walden link
10/13/2013 08:24:22 pm

I just want to say your article is striking. Well with your permission allow me to grab feed to keep up to date with forthcoming post. Thanks.
