relocating an HQ network

I wanted to document a few of the hurdles we had to overcome when moving our HQ from one collection of buildings to a shiny new campus. We pretty much gutted the new campus, pulling back all of the existing Ethernet cabling so we could re-use it where possible. We designed a routed access model for the new building access layer and installed a lot of new Ethernet infrastructure.

Routed access notes:
Routed access allows us to re-use configuration templates and know for sure which VLANs are on every switch stack, since the model generally dictates one VLAN per stack (per network type). We settled on a Data VLAN, a Voice VLAN, a Building Management / Security infrastructure VLAN, and an Audio / Video VLAN (for conference room infrastructure). The conference rooms are all fairly complex, with up to 30 network-connected devices per room, so we gave A/V its own VLAN to segment it from the rest of the infrastructure. The BMS / Security VLAN likewise keeps that gear segmented from the rest of the network.
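As a rough sketch, a per-stack template in Cisco IOS style might look like the following. All VLAN IDs, names, and addressing are made up for illustration; they are not our production values.

```
! Hypothetical per-stack template -- one VLAN per network type
vlan 10
 name DATA
vlan 20
 name VOICE
vlan 30
 name BMS-SECURITY
vlan 40
 name AV
!
! The stack routes for its own VLANs; only the subnets change per stack
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
!
! User-facing ports get data + voice; BMS and A/V ports get their VLAN
interface range GigabitEthernet1/0/1 - 48
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
```

Because every stack follows the same pattern, only the VLAN subnets and management addressing change from template to template.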

A few things we would change about the structured cabling include:
Re-using cabling is a great idea, and having two ports per cube is good for expansion, but the reality is that most people only ever use a single port, and running additional cables to the cube areas is not that big of a deal. We like to reduce the number of patch cables in our wiring closets, so we typically deploy a single 48-port switch per patch panel and use 1-foot patch cables. If we want to continue this model, a lot of unused ports are present, which could be considered wasteful. The big issue we had is that since we deployed a significant increase in structured cabling, we couldn't justify the additional switching infrastructure that would be required to terminate every port. The end result is a bit of a mess in the wiring closet.

Routed access allows us to take spanning tree out of the equation (we don't disable STP, but we could). It also allows both uplinks to be active, with equal-cost load balancing across them, potentially reducing the need for lots of uplinks.
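For illustration, the uplinks from a stack might be configured as routed point-to-point links, with the IGP installing both paths as equal-cost routes. Interface names, addressing, and the choice of OSPF here are assumptions for the sketch, not a record of our exact configuration.

```
! Hypothetical routed uplinks -- two point-to-point /31s toward the core.
! With equal metrics, OSPF installs both as equal-cost routes, so both
! uplinks carry traffic and neither is blocked by spanning tree.
interface TenGigabitEthernet1/0/1
 no switchport
 ip address 10.255.0.1 255.255.255.254
!
interface TenGigabitEthernet2/0/1
 no switchport
 ip address 10.255.0.3 255.255.255.254
!
router ospf 1
 passive-interface default
 no passive-interface TenGigabitEthernet1/0/1
 no passive-interface TenGigabitEthernet2/0/1
 network 10.0.0.0 0.255.255.255 area 0
```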

Routed access could be considered wasteful from an IP addressing perspective, but since we use RFC1918 space internally, we didn’t consider this much of an issue.
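To put some rough numbers on that, here is a back-of-the-envelope sketch using Python's `ipaddress` module. The stack count and per-VLAN subnet size are illustrative assumptions, not our actual figures.

```python
import ipaddress

# Routed access burns one subnet per VLAN per stack. Assume a /24 for
# each VLAN and see how much of 10/8 a hypothetical campus consumes.
campus = ipaddress.ip_network("10.0.0.0/8")   # RFC 1918 space
stacks = 40                                   # hypothetical access stacks
vlans_per_stack = 4                           # data, voice, BMS/security, A/V

subnets_needed = stacks * vlans_per_stack     # one /24 per VLAN per stack
total_24s = campus.num_addresses // 256       # /24s available in 10/8

print(subnets_needed, total_24s)
```

Even at that rate, the campus consumes 160 of the 65,536 available /24s, which is why the "waste" never worried us.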

Server infrastructure move planning:

We procured a circuit from a major provider with the intention of bridging our server segments at L2 in order to avoid re-IPing anything. This was a great success, although we had some issues at first. If you do this, be sure to procure the right kind of circuit: the one we purchased was not a true L2 circuit and had a limit on the number of MAC addresses permitted across the link. This took a while to troubleshoot and was extremely frustrating. The solution we came up with was to deploy two 6509 switches we had 'lying around' and run L2TPv3 over the link. This proved to be a bit frustrating as well, since we had to ensure full-size frames could pass. Thankfully, the circuit had an MTU of 1518, so we did have headroom, but the tunnel was not working until we did a shut / no shut on the interfaces (also a bit unnerving).
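The L2TPv3 setup on the 6509s looked roughly like the sketch below. Peer addresses, the VC ID, and interface names are placeholders, not our real values; treat it as the general shape of an IOS L2TPv3 pseudowire rather than our exact configuration.

```
! Hypothetical L2TPv3 pseudowire between the two 6509s
pseudowire-class DC-MOVE
 encapsulation l2tpv3
 ip local interface Loopback0
!
! The attachment circuit: everything arriving on this port is tunneled
! to the far-end 6509 (192.0.2.2 is a placeholder peer address)
interface GigabitEthernet1/1
 no ip address
 xconnect 192.0.2.2 100 pw-class DC-MOVE
```

Note that L2TPv3 adds encapsulation overhead on top of the tunneled Ethernet frame, which is why verifying that full-size frames actually pass end to end is worth doing before you trust the link.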

Multiple L2 circuits would have been great, with the obvious caveat that we would need port aggregation or STP in order to break the loop.

This is all I can think of right now; I'm sure there will be more, and I'll update this post.