In a 3-tier software defined network (SDN) that has control and data plane separation leveraging a protocol such as OpenFlow, there are generally data plane devices, controllers, and applications/control programs. Pretty straightforward.
If a packet enters the network switch (data plane device) and doesn’t have a match in the flow table, it’s punted to the controller to determine how to handle that packet and the subsequent packets in that flow. This is classic reactive forwarding. Due to latency and possible scaling issues, it’s recommended to deploy proactive flow forwarding whenever possible.
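The reactive-versus-proactive trade-off can be sketched with a toy flow table. This is purely illustrative Python, not any real switch or controller API: a table miss punts to the controller (reactive), while pre-installing the flow avoids punts entirely (proactive).

```python
# Toy model of a flow table. A reactive miss punts to the controller;
# proactive installation pre-populates entries so no punt ever occurs.
# All names here are invented for illustration.

class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # match (dst) -> action (out port)
        self.controller = controller
        self.punts = 0                # packets sent up to the controller

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        if dst in self.flow_table:    # fast path: match in the table
            return self.flow_table[dst]
        self.punts += 1               # table miss: punt to controller
        out_port = self.controller(dst)
        self.install_flow(dst, out_port)   # controller programs the flow
        return out_port

controller = lambda dst: hash(dst) % 4     # stand-in path computation

# Reactive: only the first packet of the flow is punted.
sw = Switch(controller)
for _ in range(100):
    sw.forward("10.0.0.5")
print(sw.punts)   # 1 punt for 100 packets

# Proactive: the flow is pushed ahead of time, so nothing is punted.
sw2 = Switch(controller)
sw2.install_flow("10.0.0.5", 2)
for _ in range(100):
    sw2.forward("10.0.0.5")
print(sw2.punts)  # 0
```

The latency and scale concerns follow directly: in the reactive case every new flow pays a controller round trip, and the controller must keep up with the aggregate new-flow rate of every switch it manages.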
I recently participated in two podcasts where the focus was all about Software Defined Networking and the changing network landscape.
If you are interested in listening, check them out:
On a side note, Brian and Theo rock. If you haven't been listening to Providing Cloudy Service or The Cloudcast, I suggest you start! -Jason

Training and use cases are still emerging in the world of Software Defined Networking (SDN). Luckily, there is an event, local (for me) in New York City, that has two full days dedicated to SDN (some call it open networking nowadays since it’s never been more cool to be open) on October 29 & 30. The event is ONUG Fall 2013. On day one there will be solid hands-on training on building your own SDN applications, understanding white box networking, and getting started with OpenFlow deployments. Day two is structured more like a traditional conference.
I was driving home tonight and saw a tweet from Ethan Banks (@ecbanks) that stated, “After all these years of IPSEC (a standard, after all), bringing up a tunnel between disparate vendors is one of the hardest tasks I do.” When I see these kinds of statements and have these thoughts myself, I think: there is a clear problem; do others have the same problem; is this a problem looking for a solution; and can there be a better way? In this particular case, it’s definitely a problem, but can there be a better way? Can we view this as an example where the network and security industry has been okay with mediocrity? Maybe.
A few weeks ago, I wrote about where I was in the world of programming. As I said then, I am still focused on building a onePK application. This onePK application, now dubbed Network Control Manager, is a central interface to the network. It can be used to gather real-time data as well as make changes to the network in a more centralized, automated, and real-time fashion. Following the SDN model, this application can be seen as an SDN controller if you wish to call it that. The southbound API used is Cisco’s onePK and the northbound API is self-defined as “je-nb-API” :). The application/controller exposes northbound RESTful interfaces to be consumed by 3rd party applications and control programs, the first of which is a CLI application that interacts with the network via Network Control Manager.
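To make the northbound idea concrete, here is a minimal sketch of what a CLI-style consumer of such a REST API might look like. The endpoint path, JSON shape, and class name are all invented for illustration; the post does not document the actual “je-nb-API”, and a stub transport is injected so the sketch runs without a live controller.

```python
# Hypothetical client for a "Network Control Manager" style northbound
# REST API. Paths and payloads are assumptions, not the real je-nb-API.
import json
from urllib.parse import urljoin

class NCMClient:
    def __init__(self, base_url, fetch):
        self.base_url = base_url
        self.fetch = fetch            # injected HTTP GET function

    def get_interfaces(self, device):
        # e.g. GET /api/devices/<device>/interfaces on the controller
        url = urljoin(self.base_url, f"/api/devices/{device}/interfaces")
        return json.loads(self.fetch(url))

# Stub transport returning a canned response, so no controller is needed.
canned = json.dumps([{"name": "Gig0/1", "state": "up"}])
client = NCMClient("http://ncm.local:8080", lambda url: canned)

for intf in client.get_interfaces("edge-rtr-1"):
    print(intf["name"], intf["state"])   # Gig0/1 up
```

The design point is the same one the post makes: third-party applications and control programs never touch onePK directly; they only see whatever REST surface the controller chooses to expose northbound.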
In the new world of networking, you can program your network. You can make it do whatever you want. Even your business applications can program your network. Have you heard this before? If so, you aren’t alone. Well, before you let business applications program the network, how about starting somewhere a little less frightening? Here is a good use case for network programmability that I thought about during the ThousandEyes presentation while at Network Field Day 6. It combines ThousandEyes Private Agents and Cisco’s onePK.
In Part 1, I talked about how OpenFlow could commoditize hardware in the network visibility fabric market. In this post, I’ll focus on intelligent network load balancing.
Long overdue, but here are some slides from the Open Networking Summit that happened back in April 2013. These were presented by an architect on the Azure team. Fully relevant given some discussion happening at the SDDC today.
Absolutely, but I’m not going to say what you think. I’m going to shift from talking about the traditional network or network virtualization solutions that have been getting all of the attention lately. There are still companies out there building new products that leverage black box vertically integrated hardware and software. The two markets that could lose out to commodity hardware are network visibility fabrics and intelligent network load balancing. In this post, I’ll focus on visibility fabrics and save the latter for my next post.
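The visibility-fabric case reduces to a simple idea: a commodity switch programmed (for example, via OpenFlow) with match rules that steer tapped traffic to the right monitoring tools. The rules and port names below are invented for illustration; this is a toy model of the matching logic, not a real OpenFlow pipeline.

```python
# Toy model of a visibility fabric: match rules steer copies of tapped
# traffic to tool ports. Rule contents are illustrative only.

rules = [
    # (match field, match value) -> tool port
    (("proto", "http"), "tool-port-1"),   # web traffic to the NPM tool
    (("vlan", 100),     "tool-port-2"),   # VLAN 100 to the IDS
]

def steer(packet):
    """Return every tool port whose rule matches the tapped packet."""
    return [port for (field, value), port in rules
            if packet.get(field) == value]

print(steer({"proto": "http", "vlan": 100}))  # both tools get a copy
print(steer({"proto": "ssh", "vlan": 200}))   # no rule matches: dropped
```

Since this is just match-and-forward, it is exactly the kind of function that commodity hardware plus a controller can take over from a vertically integrated black box.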
VMware’s NSX officially launched just two weeks ago. Since the launch, the media has focused on the VMware and Cisco relationship and where that may end up in the future. That also includes me. I wrote my own take that was recently published by TechTarget on the impact NSX will have on the Cisco/VMware relationship, but when you look at the industry as a whole, it’s more than that. If we take a step back, it’s not about just VMware and Cisco. If we use stereotypes (good or bad) in the networking space, Cisco falls into the traditional physical network or incumbent category and VMware falls into the emerging network virtualization category.
Harry Quakenboss made an interesting comment on a previous post of mine a few days ago. He noted that Big Switch has been pretty quiet in terms of their outbound marketing. He is absolutely right, and as I commented back to him, I actually remember thinking a few times over the past 18 months that when startups would launch or get acquired, they usually went through a quiet period when it came to social media and outbound marketing. That makes perfect sense. So what is going on with Big Switch?
Many focus on the lack of visibility in network virtualization environments. It's time for more concrete conversations in this area; the statements around visibility are usually too broad. After a quick search, I found that Riverbed’s Cascade network performance management (NPM) solution already supports VXLAN, and I’m sure what they offer will only get better. That means they can let you know what applications are being used within the overlay tunnels. A demo of the solution is below.
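Getting visibility inside the overlay really just means parsing past the encapsulation. As a concrete sketch, here is how a tool could peel the 8-byte VXLAN header (RFC 7348) off the UDP payload (destination port 4789) to reach the tenant's inner Ethernet frame; the sample values are made up for the example.

```python
# Parse the 8-byte VXLAN header (RFC 7348) that sits in the UDP payload.
# Layout: flags (1 byte, I bit = 0x08), 3 reserved bytes,
#         24-bit VNI, 1 reserved byte. The inner L2 frame follows.
import struct

def parse_vxlan(udp_payload: bytes):
    flags, vni_and_rsvd = struct.unpack("!B3xI", udp_payload[:8])
    if not flags & 0x08:              # I bit set means the VNI is valid
        raise ValueError("not a valid VXLAN header")
    vni = vni_and_rsvd >> 8           # VNI is the top 24 bits
    inner_frame = udp_payload[8:]     # the tenant's original frame
    return vni, inner_frame

# Build a sample header for VNI 5000 and parse it back.
hdr = struct.pack("!B3xI", 0x08, 5000 << 8)
vni, inner = parse_vxlan(hdr + b"\x00" * 14)
print(vni)   # 5000
```

An NPM product then classifies the inner frame exactly as it would native traffic, which is what makes per-application reporting inside the tunnels possible.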
It’s been something I’ve wanted to tackle for a while, but choosing one is damn hard, especially as a network guy who hasn’t programmed in a while. I had two years of Advanced Programming (AP) in high school, but that was it and that was a long time ago. With network agility, automation, APIs, and SDN on the horizon, how should you pick a language and what do you even want to program? For everyone, it will be different, but I’ll let you know the path I’m taking and how I ended up where I’m at today.
In Plexxi & Affinities Part 1, I gave a very high level overview of Affinities and how algorithms are used to ultimately figure out which network path certain traffic should use in a Plexxi network. In this post, I want to explore and speculate where else in the network Affinities make sense. Don’t forget, this is fully speculative and just my opinion.
The first Tech Field Day (@TechFieldDay) event I ever participated in was Wireless Field Day 2 in January 2012. I happened to be on Twitter and saw some people I followed talking about it, so I clicked a link, and there I was watching a LIVE feed on Meraki while sitting on my couch having dinner. I even asked a question on Twitter directed at the speaker. If my memory serves me correctly, Tom Hollingsworth (@NetworkingNerd) was nice enough to relay the message from Twitter to the Meraki presenter as I waited and listened for an answer. My voice was heard from across the United States without my even being in the room. This was pretty sweet.
It Starts with Affinities
Plexxi is a startup in the network industry re-thinking networking from the ground up. A Plexxi network is different. It is cabled differently. It is thought about differently. It is integrated with other systems differently. With Plexxi, it all starts with the conversations that are occurring on the network. These conversations, or relationships, occur between different systems on the network. These relationships are what Plexxi calls Affinities.

I’ve written in the past about how the virtual switch is an SDN war zone. Well, it still seems to be the early days for Software Defined Networking (SDN) no matter how much time goes by, and I realized there isn't a whole lot of documentation out there on Open vSwitch, especially for the new guy or gal on the block, compared to the vendor offerings from Cisco and VMware. Over the coming weeks, I hope to write more about Open vSwitch, Linux networking, and OpenStack Networking (Neutron, formerly Quantum). On that note, this post is meant to be an easy-to-read, longer-than-expected introduction to Open vSwitch (OVS).
What is going on with Big Switch? It is really hard to tell on the surface, but there is a general sentiment out there (particularly in Twitter land) that they have a bumpy road ahead of them, especially after the OpenDaylight Project (ODP) launched. Okay, but what start-up has it easy? The only ones who know how Big Switch is really doing are the users of their technology and Big Switch themselves.
Two weeks ago I had the pleasure of taking a 2-day OpenStack training by Mirantis in New York City. It was well worth it because up until this point I had never been hands-on with OpenStack and more relevant for me, never had a deep dive on the underlying architecture. Plus it’s always good to get time out of the office and do a deep dive into a new technology.
While many are still figuring out private cloud and public cloud along with the networking impact of each, Cisco went into a bit more detail today on its Hybrid Cloud Networking strategy leveraging the Cisco Nexus 1000V InterCloud product that was initially announced a few months ago. I had the opportunity to attend a session on this just a few hours ago and here are some of the highlights along with some general thoughts.
After a few days immersed in Cisco land down in Orlando, what’s trending?
While there are definitely many trends, many sessions, and many perspectives, I can only speak to what I am seeing. Here are three (of many) things that I’ve seen a good amount of focus on in the breakouts I’ve attended. I would also say all three are highly strategic for Cisco in the Data Center and Cloud markets.

I wrote this a few days ago but didn’t have time to post; it’s still relevant given all the discussion around SDN and Network Virtualization.
Getting into a long thread on Twitter is entertaining. You have to keep your thoughts short and concise, and sometimes it’s hard to list every descriptive phrase known to man to articulate what you mean. But…that also makes it fun! One example is the thread that happened last Saturday that I jumped into a little late.

In my previous post, I closed by asking, “if you require certain hardware configurations and ASICs for your virtual network solutions, have you truly deployed network virtualization?” I didn’t touch upon where hardware does and does not make sense, though. I will expand on that here.
Brad Hedlund recently wrote an overview of Network Virtualization. I’d recommend it to anyone exploring network virtualization technologies over the coming months. In particular, I want to focus on the comments in the blog coming from both Brad and David Klebanov. The comments sparked a flurry of thoughts that I’ll attempt to get out in this post.
As a reminder, this is pure opinion and speculation on my part, just as it is theirs. Mine, however, I’ll say is a bit more neutral since I don’t work for a manufacturer :). Who will be the first to promote it? Will it be via hardware or simply an application of network virtualization? Because it will happen.
While some wholeheartedly believe in not connecting sites with ANY type of Layer 2, and I am actually a bigger believer in that now than I used to be, customers still ask for and “require” this occasionally, namely for workload mobility. Any answer I get or anything I read does not actively promote using an overlay such as VXLAN between data centers. The responses usually center on:

1. BUM traffic control
2. ARP localization
3. Traffic tromboning (since there is only one active default gateway)
4. STP isolation

If you want to know all of the typical responses, look at the benefits of OTV. But again, in a world that will soon be eaten by software, why can’t a viable solution be developed for L2 DCI with overlays?
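The BUM traffic concern is easy to quantify. Without a multicast underlay, a VTEP typically does head-end replication: every broadcast, unknown-unicast, or multicast frame is copied once per remote VTEP, and for DCI each copy crosses the expensive inter-DC links. The sketch below uses invented VTEP names purely to illustrate the replication cost.

```python
# Head-end replication of a BUM frame: one encapsulated copy per remote
# VTEP. Names and the frame are illustrative only.

remote_vteps = ["dc2-vtep1", "dc2-vtep2", "dc3-vtep1"]  # other data centers

def head_end_replicate(frame, vteps):
    """Return one (destination VTEP, frame) copy per remote VTEP."""
    return [(vtep, frame) for vtep in vteps]

copies = head_end_replicate("ARP who-has 10.0.0.9", remote_vteps)
print(len(copies))   # 3 copies cross the DCI links for a single broadcast
```

This is exactly why ARP localization and BUM suppression come up in every one of those responses: the replication load scales with the number of remote sites times the broadcast rate.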
Author: Jason Edelman, Founder & CTO of Network to Code.