By Marten Terpstra
February 10, 2014 04:30 PM EST
We get quite caught up in high-level architectures at times. It is good to read posts that focus on design, implementation, and the practicality of turning higher-level architectures into reality. Two of Ivan's posts caught my eye this week. In the first, he discusses the difference in how application and network folks look at the deployment of tiered applications and what that means for the security between them. In the second, he asks a question on which our entire industry has underdelivered for more than a decade: why can't we have plug-n-play networking? They may appear to be wildly different topics, but in my mind they are closely related. Applications and operations must drive network design and implementation.
When creating a data center design, it is important to carefully plan how L2 and L3 are layered on top of the physical network. L2 and L3 provide different levels of separation and different security domains, and understanding what can (or should) go where can significantly change how efficiently an application runs on the network. As Ivan points out, in many cases the tiers of an application require additional network services between them. The obvious ones are firewalls and load balancers; less obvious ones include IPS/IDS systems, mirroring, and compliance monitoring, and I am sure you can come up with a few more.
Traffic from applications (or between tiers of an application, the often-mentioned east-west traffic) needs to be passed through one or more of these network services (or none at all). With the distributed nature of the VM components of a tiered application, getting the traffic to these services is not always easy. There is a movement toward virtualizing these services and having them distributed and co-located with the actual VMs, but some services simply need to be applied in a more central place because of the context they need to do their work.
Getting traffic to centralized or semi-distributed services can be accomplished in several ways. By far the easiest is to have the application send the traffic explicitly to the service. Many firewalls also act as the router for a segment, so telling the application where its default router is ensures its traffic always ends up at the firewall. Most load balancers terminate an HTTP or other connection-oriented session on the "outside" and attach it to a new session on the "inside", so that traffic also naturally flows through the service.
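To make the firewall-as-default-gateway idea concrete, here is a minimal sketch in Python. The tier names, address blocks, and the convention that the firewall holds the first host address in each subnet are all assumptions invented for illustration, not anyone's actual design:

```python
import ipaddress

# Hypothetical two-tier layout: web and app tiers sit in separate subnets,
# and the firewall owns the gateway address in each one. Any cross-subnet
# (east-west) flow therefore has to traverse the firewall.
tiers = {
    "web": ipaddress.ip_network("10.0.1.0/24"),
    "app": ipaddress.ip_network("10.0.2.0/24"),
}
# Assumed convention: the firewall takes the first host address per subnet.
gateways = {name: next(net.hosts()) for name, net in tiers.items()}

def next_hop(src_ip, dst_ip):
    """Return 'direct' for intra-subnet traffic, else the firewall gateway."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for name, net in tiers.items():
        if src in net:
            # Intra-subnet traffic is switched directly; anything leaving
            # the subnet goes to the default gateway, i.e. the firewall.
            return "direct" if dst in net else str(gateways[name])
    raise ValueError("source not in any known tier")
```

With this layout, `next_hop("10.0.1.10", "10.0.2.5")` lands on the web tier's firewall address, while two hosts in the same tier talk directly, which is exactly the behavior the gateway trick buys you without any explicit steering in the application.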
Carefully crafting the boundaries between subnets, what belongs on each subnet, and which services are applied on and between subnets is not at all trivial. There are those who believe every server or even every VM should be in its own /31 subnet. And while just about every application (and I include storage in that, for the most part) only really needs L2 connectivity to its router, keeping traffic from having to be routed at all can reduce the need for network services. Multicast-based applications within a subnet just work without complexity; IGMP snooping on the switch ports is about all you need. Worrying about intrusion becomes easier when VMs or portions of applications cannot be reached from outside the subnet. There is no one-size-fits-all, no magic design or template.
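For a feel of what the per-VM /31 approach means in practice, a quick sketch with Python's `ipaddress` module shows how far a single /24 stretches when carved into point-to-point /31s (the address block here is an arbitrary example):

```python
import ipaddress

# Carve one /24 into /31s, one per server or VM, as in the
# "every VM in its own /31 subnet" design mentioned above.
block = ipaddress.ip_network("10.1.0.0/24")
per_host = list(block.subnets(new_prefix=31))

# A /24 yields 128 point-to-point /31 subnets.
count = len(per_host)

# Per RFC 3021, both addresses in a /31 are usable: one for the
# router, one for the attached host.
first_pair = [str(addr) for addr in per_host[0]]
```

Here `count` comes out to 128 and `first_pair` is `10.1.0.0` and `10.1.0.1`, so every host burns exactly two addresses and gets a router as its only L2 neighbor, which is precisely the isolation trade-off being debated.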
The question of plug-and-play networking should be an embarrassing one for all of us in the industry. We have not done anything to significantly improve the automatic provisioning of networks. Sure, we have glued together some DHCP, LLDP, CDP, or 802.1X-based VLAN memberships (mostly pushed by VoIP phone enablement), but we honestly have not moved on significantly from those most basic steps. There is certainly progress in creating fabrics, and we are doing our part to significantly reduce the number of provisioning touches. The bulk of provisioning and configuration, however, is on the access side of a network, where we plug in our servers, appliances, storage, and everything else. And Ivan is totally correct: some of the fundamental tools exist to exchange useful information between newly connected devices, but we have not taken that to the next level and taken a good chunk of provisioning out of the hands of the operator (and their scripts).
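The kind of step we never generalized beyond VoIP phones can be sketched in a few lines: use what a neighbor advertises over LLDP to pick the port provisioning automatically. The device profiles, VLAN numbers, and matching keywords below are invented purely for illustration:

```python
# Toy auto-provisioning sketch: map the LLDP system description a newly
# connected device advertises to a port profile. All profiles, VLANs,
# and keywords here are hypothetical.
PROFILES = {
    "voip":       {"vlan": 40,  "qos": "voice"},
    "hypervisor": {"vlan": 100, "qos": "default"},
    "storage":    {"vlan": 200, "qos": "lossless"},
}

def provision_port(port, lldp_sysdesc):
    """Pick a port profile from the neighbor's LLDP system description."""
    desc = lldp_sysdesc.lower()
    for keyword, profile in PROFILES.items():
        if keyword in desc:
            return {"port": port, **profile}
    # Unknown neighbor: park it on a quarantine VLAN for the operator.
    return {"port": port, "vlan": 999, "qos": "default"}
```

Nothing here is hard; the point is that beyond the phone-on-the-voice-VLAN case, this style of automation simply never became the norm on the access side.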
The reality of network design and implementation is in the details. Understanding the applications that use the network, and how they are tiered and separated into VMs, is critical to deciding how L2 and L3 are layered on top of it. Virtualization may make it easier to dynamically attach VMs to network segments (L2 or L3), but the resulting traffic flow still needs to make sense, especially if network services need to be applied.
When we talk to our customers, the discussion in most cases moves on very quickly from spine-and-leaf versus a mesh fabric. The bulk of the discussion is focused on flexibility, automation, placement of boundaries, and adjustment of topologies. The design process is driven by the application, which is why it is nice to see Ivan's video article start with an application and derive a network design from it, even if the application was a generic one.
[Today's fun fact: Half of all Americans live within 50 miles of their birthplace. This is called propinquity.]
The post Network Design in a Virtual World: Applications and Operations must Rule appeared first on Plexxi.