Cisco - Are they Genius or Crazy?

10/17/2012

I’ve been out of the Cisco world for a few months, but for the month of October I’ve been trying to get refocused as I watch the Yankees lose. It’s been a month of several announcements, two of which I’ll focus on in this post: the Nexus 1000V pricing update and the Cisco Edition of OpenStack.

Cisco Nexus 1000V Update

Early this month, Cisco made a major change to its pricing strategy for the Nexus 1000V virtual switch. Prior to the announcement, the virtual switch had a list price of $695 per CPU. It wasn’t a significant cost, but it was still a cost when compared to using Open vSwitch (OVS) at no charge. Thanks to Nicira, well I guess VMware now, for the extreme focus on the development and success of OVS in the cloud and open source community. Because of their work, OVS is now the standard offering in Citrix XenServer.
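To put that list price in perspective, here’s a quick back-of-the-envelope sketch in Python. Only the $695-per-CPU figure comes from the announcement; the cluster size below is hypothetical.

    # Back-of-the-envelope list-price math for the old Nexus 1000V licensing.
    # Only the $695-per-CPU list price is from the post; the cluster size is
    # hypothetical, and Open vSwitch is shown at $0 for comparison.

    LIST_PRICE_PER_CPU = 695  # USD, pre-announcement Nexus 1000V list price

    def n1kv_list_cost(hosts: int, cpus_per_host: int) -> int:
        """Total list price to license every CPU socket in a cluster."""
        return hosts * cpus_per_host * LIST_PRICE_PER_CPU

    print(f"Nexus 1000V, 50 hosts x 2 CPUs: ${n1kv_list_cost(50, 2):,}")  # $69,500
    print("Open vSwitch:                    $0")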


QSFP+ Fiber Connectivity

3/30/2012

There is a trend toward high-density 10GbE connectivity in the data center that is increasing the need for some to use 40GbE interfaces for uplink connectivity. Because 40GbE requires a new type of optic, called a QSFP+, I’ve had many questions from customers, and of my own, regarding the connectivity and cabling options. Oddly enough, it took talking to at least five Cisco engineers spanning San Jose to NYC to compile this data, so if you’d like to correct or add anything here, please feel free to comment below.

The specific questions I was researching relate to the Cisco Nexus 3000 series switches, namely the 3064 and 3016. I state that because there is a chance the QSFP+ could operate differently on different switch types, and that’s per the optics TME. For those new to the Nexus 3000 series, the 3064 has 48 front-facing SFP+ (1G/10G) ports and 4 x 40GbE QSFP+ ports that can be used as uplinks. The 3016 has 16 x 40GbE QSFP+ ports.
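As a quick aside, here’s a rough sketch of what those port counts imply for oversubscription on a fully loaded 3064. The ratio is my own illustration; only the port counts above come from the spec.

    # Downlink vs. uplink bandwidth on a fully loaded Nexus 3064, using the
    # port counts above: 48 x 10GbE SFP+ down, 4 x 40GbE QSFP+ up. The
    # oversubscription ratio is my own illustration, not a Cisco figure.

    def oversubscription(down_ports: int, down_gbps: int,
                         up_ports: int, up_gbps: int) -> float:
        """Ratio of total downlink bandwidth to total uplink bandwidth."""
        return (down_ports * down_gbps) / (up_ports * up_gbps)

    ratio = oversubscription(down_ports=48, down_gbps=10, up_ports=4, up_gbps=40)
    print(f"3064: 480G down / 160G up = {ratio:.0f}:1 oversubscription")  # 3:1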



Nexus 7000 FAQ

11/18/2011

The Nexus 7000 is constantly evolving, and there seem to be more and more design parameters that have to be taken into consideration when designing data center networks with these switches. I’m not going to go into each of the different areas from a technical standpoint, but rather try to point out as many as possible of those so-called “gotchas” that need to be known upfront when purchasing, designing, and deploying Nexus 7000 series switches.

Before we get started, here is a quick summary of current hardware on the market for the Nexus 7000.
  1. Supervisor 1
  2. Fabric Modules (FAB1, FAB2)
  3. M1 Linecards (48 Port 10/100/1000, 48 Port 1G SFP, 32 Port 10G, 8 Port 10G)
  4. F1 Linecards (32 Port 1G/10G)
  5. F2 Linecards (48 Port 1G/10G)
  6. Fabric Extenders (2148, 2224, 2248, 2232)
  7. Chassis (7009, 7010, 7018)

Instead of writing about all of these design considerations at length, I thought I’d break them down into a Q&A format, as that’s typically how I end up getting these questions anyway. I’ve run into all of these questions over the past few weeks (many more than once), so hopefully this will be a good starting point, for myself (as I tend to forget) and for many others out there, to check compatibility between the hardware, software, features, and licenses of the Nexus 7000. The goal is to keep the answers short and to the point.

Q: What are the throughput capabilities and differences of the two fabric modules (FAB1 & FAB2)?

A: It is important to note that each chassis supports up to five (5) fabric modules. Each FAB1 has a maximum throughput of 46Gbps/slot, meaning the total per-slot bandwidth available with five (5) FAB1s in a single chassis would be 230Gbps. Each FAB2 has a maximum throughput of 110Gbps/slot, meaning the total per-slot bandwidth available with five (5) FAB2s in a single chassis would be 550Gbps. The next question goes into this a bit deeper, showing how the MAXIMUM theoretical per-slot bandwidth comes down based on which particular linecards are being used. In other words, the max bandwidth per slot really depends on the fabric connection of the linecard being used.
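To make the math concrete, here’s a small sketch of that per-slot calculation. The 46Gbps and 110Gbps figures and the five-module maximum come straight from the answer above; the linecard fabric-connection cap is a hypothetical parameter to illustrate the last point.

    # Usable per-slot bandwidth on a Nexus 7000, per the answer above: each
    # FAB1 contributes 46Gbps/slot and each FAB2 110Gbps/slot, with at most
    # five fabric modules per chassis. A linecard's own fabric connection
    # caps what its slot can actually use (the 80G cap below is hypothetical).

    FAB_GBPS_PER_SLOT = {"FAB1": 46, "FAB2": 110}
    MAX_FABRIC_MODULES = 5

    def slot_bandwidth_gbps(fab_type: str, num_fabs: int,
                            linecard_fabric_gbps: float = float("inf")) -> float:
        """Per-slot bandwidth: fabric capacity, capped by the linecard."""
        fabric = FAB_GBPS_PER_SLOT[fab_type] * min(num_fabs, MAX_FABRIC_MODULES)
        return min(fabric, linecard_fabric_gbps)

    print(slot_bandwidth_gbps("FAB1", 5))      # 230 -- five FAB1s
    print(slot_bandwidth_gbps("FAB2", 5))      # 550 -- five FAB2s
    print(slot_bandwidth_gbps("FAB2", 5, 80))  # 80  -- hypothetical 80G linecard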


