- The Nexus 1000V InterCloud creates a secure Layer 2 connection between an Enterprise Data Center and a Public Cloud. Today, AWS is the only Public Cloud where this service is offered.
- This will work just fine in a virtualized, non-automated, non-Cloud Enterprise Data Center!
- The Nexus 1000V InterCloud does NOT require the Nexus 1000V – it is vSwitch agnostic.
- However, if you are already using the 1000V along with vPath-enabled services such as VSG, ASA 1000V, etc., it is fully transparent and supported in the Cloud network. vPath is supported on the 1000V InterCloud solution to seamlessly offer and maintain dynamic service insertion for Enterprise and Public Cloud workloads. This is pretty cool.
- InterCloud requires a few components: (1) Virtual Supervisor Module (VSM), (2) InterCloud Extender, (3) Prime Network Services Controller (formerly VNMC), and (4) InterCloud Switch. Of the four components, only the InterCloud Switch is located in the Public Cloud (see the placement sketch below).
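To keep the component placement straight, here's a trivial Python mnemonic. The dictionary is just my shorthand for the list above, not any Cisco API:

```python
# Where each InterCloud component lives, per the bullet above.
# A mnemonic data structure for this post only, not a Cisco API.
INTERCLOUD_COMPONENTS = {
    "Virtual Supervisor Module (VSM)": "Enterprise",
    "InterCloud Extender (ICX)": "Enterprise",
    "Prime Network Services Controller (PNSC)": "Enterprise",
    "InterCloud Switch (ICS)": "Public Cloud",
}

for component, location in INTERCLOUD_COMPONENTS.items():
    print(f"{component:42} -> {location}")
```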
- Even if a 1000V is already deployed, a new and dedicated VSM is required for the InterCloud solution.
- The InterCloud Extender (ICX), in a VM form factor, serves as the gateway between the Enterprise and the Public Cloud. Any VLAN(s) can be extended to the Cloud. If you're using an overlay like VXLAN in the Enterprise, it can first be bridged using an L2 VXLAN-to-VLAN gateway before being extended by the InterCloud Extender (a quick decap sketch follows). That sounds like fun.
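Since the VXLAN header format is public (RFC 7348), here's a minimal Python sketch of what that bridging step boils down to: strip the 8-byte VXLAN header and map the VNI to a VLAN. The VNI-to-VLAN table here is hypothetical, and a real gateway does this in the data plane, not in Python:

```python
import struct

# Hypothetical VNI -> VLAN mapping; a real gateway's table is config-driven.
VNI_TO_VLAN = {5001: 100, 5002: 200}

def vxlan_to_vlan(vxlan_payload: bytes) -> tuple[int, bytes]:
    """Strip the 8-byte VXLAN header (RFC 7348) and return the VLAN ID
    the inner Ethernet frame should be bridged onto, plus that frame."""
    flags, vni_raw = struct.unpack("!B3x3sx", vxlan_payload[:8])
    if not flags & 0x08:                  # the 'I' bit: VNI field is valid
        raise ValueError("invalid VXLAN header: I flag not set")
    vni = int.from_bytes(vni_raw, "big")
    return VNI_TO_VLAN[vni], vxlan_payload[8:]

# Example: a frame arriving on VNI 5001 gets bridged onto VLAN 100.
header = struct.pack("!B3x3sx", 0x08, (5001).to_bytes(3, "big"))
vlan, inner_frame = vxlan_to_vlan(header + b"\x00" * 14)  # dummy inner frame
print(vlan)  # 100
```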
- The Prime Network Services Controller (formerly VNMC) is used to manage the InterCloud solution and move workloads between the Enterprise and AWS. The migration must be a cold migration today; you cannot do a vMotion between the Enterprise and AWS. During the cold migration, the Prime Network Services Controller adds an InterCloud agent to the VM and converts it to an Amazon AMI image (a rough outline of the flow is sketched below). These workloads can't be moved via vCenter, so operationally, it will be interesting to see how something like this is adopted and deployed. This is another example of how compute and network are converging in the world of cloud. Just as the network industry is doing what it can to remain in control of physical and virtual networks, I'd expect the workload mobility (cold migration) functions of the Prime Network Services Controller to be pulled into hypervisor management solutions (as they should?). The Prime Network Services Controller is the same platform that manages Cisco's virtual network security appliances and, by the way, is said to support physical appliances as well in the future. If you envision the Network Services Controller managing the placement of virtual network appliances (VMs) and moving workloads between private and public clouds, you start to see the beginnings of a lightweight hypervisor if you ask me, or at least the possibility is there.
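To make that migration flow concrete, here's a rough Python outline of the steps as I understand them. Every function is a hypothetical placeholder for what the Prime Network Services Controller does behind the scenes, not a PNSC or AWS API:

```python
# A conceptual outline of the cold migration described above; all of
# these functions are made-up stand-ins, not PNSC or AWS API calls.

def power_off(vm: dict) -> None:
    vm["state"] = "stopped"            # cold migration: no live vMotion today

def install_intercloud_agent(vm: dict) -> None:
    vm["agent"] = "intercloud"         # the agent later builds the encrypted
                                       # overlay tunnel from the VM to the ICS

def convert_to_ami(vm: dict) -> str:
    return f"ami-for-{vm['name']}"     # PNSC converts the image to an Amazon AMI

def launch_in_aws(ami: str) -> dict:
    return {"image": ami, "state": "running"}   # workload now runs in AWS

def cold_migrate_to_aws(vm: dict) -> dict:
    power_off(vm)
    install_intercloud_agent(vm)
    return launch_in_aws(convert_to_ami(vm))

print(cold_migrate_to_aws({"name": "app01", "state": "running"}))
```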
- The InterCloud Switch (ICS), also in a VM form factor, essentially becomes a remote linecard to the VSM for the InterCloud solution, located in the Public Cloud. A secure connection using DTLS is created between the InterCloud Switch and the InterCloud Extender. Should the design have a multi-AZ requirement, a dedicated ICS/ICX pair is required for each AWS availability zone. The agent mentioned above, loaded onto each VM as it is converted to an AMI, also creates an encrypted overlay connection between the VM and the InterCloud Switch. This is how the VM communicates with the ICS and is why only one InterCloud Switch is needed per AWS AZ. All VM-to-VM traffic in the Public Cloud is essentially switched in user space (as opposed to kernel space) by the InterCloud Switch VM, along the lines of the sketch below. To be clear, the secure DTLS connection exists both between the ICS and ICX and between the VM and ICS.
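To picture what "switched in user space" means, here's a toy MAC-learning switch in Python, roughly the job the ICS VM does for cloud-side VM-to-VM traffic. Tunnel names and MACs are made up; this is illustration, not Cisco's implementation:

```python
# Toy user-space L2 switch: learn source MACs, forward known unicast
# out the right per-VM overlay tunnel, flood unknowns. Illustrative only.
mac_table: dict[str, str] = {}   # learned MAC -> overlay tunnel to that VM

def switch_frame(src_mac: str, dst_mac: str, in_tunnel: str) -> str:
    mac_table[src_mac] = in_tunnel            # learn where the sender lives
    return mac_table.get(dst_mac, "flood")    # known unicast, else flood

# VM A talks to VM B: the ICS learns A, floods, then learns B on the reply.
print(switch_frame("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", "tun-vmA"))  # flood
print(switch_frame("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", "tun-vmB"))  # tun-vmA
```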
- As far as what traffic is permitted or denied over the overlay, I still have to verify, but I thought I heard something like the following: unicast is sent, ARPs are sent, broadcasts are sent, and all STP and multicast are dropped.
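If that's accurate, the policy is simple to express. Here's a Python sketch of it based purely on my unverified notes above; the MAC constants are standard well-known addresses, nothing Cisco-specific:

```python
# Sketch of the (unverified) overlay forwarding policy: send unicast,
# ARP, and broadcast; drop STP BPDUs and all other multicast.
BROADCAST = "ff:ff:ff:ff:ff:ff"
STP_DEST = "01:80:c2:00:00:00"   # IEEE 802.1D bridge group address
ETHERTYPE_ARP = 0x0806

def permit_over_overlay(dst_mac: str, ethertype: int) -> bool:
    if ethertype == ETHERTYPE_ARP:
        return True                                  # ARPs are sent
    if dst_mac == BROADCAST:
        return True                                  # broadcasts are sent
    if dst_mac == STP_DEST:
        return False                                 # STP is dropped
    if int(dst_mac.split(":")[0], 16) & 0x01:
        return False                                 # other multicast is dropped
    return True                                      # unicast is sent

assert permit_over_overlay("00:50:56:aa:bb:01", 0x0800)      # plain unicast: sent
assert not permit_over_overlay("01:00:5e:00:00:05", 0x0800)  # multicast: dropped
```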
- Routing will happen in the Enterprise unless an L3 network appliance is placed in AWS. If that is done, it opens up some more flexible hybrid cloud topologies.
- Good examples of use cases include bursting into the cloud for peak workloads, DR, and dev/test.
- The overlay being used is not OTV or VXLAN. At one point a few weeks ago, I did hear it was VPLS, but no one could confirm that today.
An overlay like this is interesting because I've heard repeatedly that OTV should always be used for L2 DCI (of course, only when L2 is absolutely required) because of ARP localization, STP isolation, default gateway/FHRP and outbound traffic optimization, etc., yet a solution like this only offers some of those features. For that reason, I wouldn't be surprised if certain customers who DO adopt an overlay like VXLAN in the Enterprise want to try extending it between data centers for small workloads.
Overall, I'd say the InterCloud solution is pretty impressive.