While in Las Vegas recently I was invited by Mark Thiele to tour the Switch SuperNAP data center – an invitation I jumped at. Like many boys, I grew up reading stories about soldiers and spies and dreamed of running with a SWAT team. Entering a facility where ex-Marines patrol the perimeter in Hummers 24/7, where the security staff wear full combat gear, and where a rack of automatic weapons features prominently in the control room brought those memories flooding back – but that aside, the visit to the SuperNAP was a fascinating glimpse of how vendors in a relatively boring space can stand out from the crowd.
SuperNAP houses some high-profile workloads – eBay's big data initiatives, HP's public cloud, VMware's Cloud Foundry and a number of highly classified federal workloads all reside within the facility.
Switch applies a significant amount of new thinking to data center design – used to raised-floor data centers? Switch doesn't have them. Used to seeing HVAC plant within the data center? The entire SuperNAP is dedicated to racks – all the HVAC equipment is outside. Used to limitations on server density because of power constraints? Switch challenges you to fit enough kit into a rack to tax their power budget. Essentially, Switch founder and CEO Rob Roy has decided to reinvent every part of the traditional data center, and he has a huge number of patents to show for it. Switch has developed two particular cooling technologies to support power loads of an amazing 1,500 watts per square foot:
- A heat containment system known as T-SCIF (Thermal Separate Compartment in Facility). Overhead cooling ducts drop chilled air into the cold aisle, which sits on a slab rather than a raised floor. T-SCIF systems encapsulate each rack, leaving the front open to the cold aisle. The enclosure uses a chimney system to deliver waste heat back into the ceiling plenum, where it can be returned to the cooling units.
- Cooling is powered by custom units known as WDMD (Wattage Density Modular Design) that sit outside the building and can switch between four different cooling methods to provide the most efficient mode for changing weather conditions. (See our video overview of the WDMD)
For comparison, many other facilities boast 250 watts per square foot, but that rating often includes the cooling demand as well, meaning only around half of the capacity is actually available for kit. With Switch, the full 1,500 watts per square foot is available to power servers.
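As a rough back-of-the-envelope sketch of what that difference means per rack, here is the arithmetic in code – the per-square-foot ratings come from the paragraph above, while the rack footprint and the 50% cooling share are my own illustrative assumptions, not Switch's figures:

```python
# Back-of-the-envelope comparison of usable IT power per rack footprint.
# The 250 W/sq ft and 1,500 W/sq ft ratings are from the article; the rack
# footprint and the 50% cooling overhead are illustrative assumptions only.

RACK_FOOTPRINT_SQFT = 10  # assumed floor area attributed to one rack, aisles included

def usable_it_watts(rating_w_per_sqft: float, cooling_share: float) -> float:
    """Watts actually available for servers after subtracting cooling's share."""
    return rating_w_per_sqft * RACK_FOOTPRINT_SQFT * (1 - cooling_share)

typical = usable_it_watts(250, cooling_share=0.5)    # rating includes cooling demand
switch = usable_it_watts(1500, cooling_share=0.0)    # all capacity goes to servers

print(f"Typical facility: ~{typical / 1000:.1f} kW per rack footprint")
print(f"SuperNAP:         ~{switch / 1000:.1f} kW per rack footprint")
```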
Even more interesting than the physical way Switch builds data centers (and I can't stress just how impressive their construction approaches are) is the innovative work Switch is doing around business models. Switch takes advantage of the fact that it now owns a facility Enron built back in the '90s, when Enron was looking to create a broadband arbitrage center – the plan was to arbitrage bandwidth just as it was arbitraging power. About a week before the facility was ready to open, Enron declared bankruptcy, but by then it had paid millions of dollars to have a raft of fibre providers string connectivity into that one facility in Las Vegas.
Switch founder Roy spent the next nine months pulling that facility out of Enron's bankruptcy. In an absolute coup, no one else bid for the facility, and it became Switch's asset. It is a fibre hub unparalleled worldwide – in fact, Thiele, who is an EVP at Switch, told us that insurance companies are unable to value the facility because it is unprecedented, both countrywide and worldwide. What Switch does that is interesting is to use the buying power that so many connections bring to lower the cost of connectivity for its customers.
Switch aggregates the carrier buying power of its large customer base, together with that amazing connectivity, through SwitchCORE (Combined Order Retail Ecosystem), a carrier-neutral purchasing consortium. Bandwidth providers then compete for the business of Switch customers: the carriers benefit from volume deals, while Switch customers enjoy favorable rates for their connectivity needs. Anecdotally, Switch tells us that, given the savings customers can achieve through SwitchCORE, the increased cost of colocation at such an impressive facility can in fact become cost neutral compared with lower-tier data center hosting plus regular carrier bills. Case in point: Disney, whose Las Vegas-to-Seattle carrier link price dropped from $35.5K per month to $6.7K per month by using SwitchCORE.
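To see how that "cost neutral" claim can work out, here is a quick sketch of the economics using the Disney figures quoted above – the colocation premium in this example is a hypothetical placeholder, not a number Switch provided:

```python
# Rough break-even sketch using the carrier pricing quoted in the article.
# The monthly colocation premium below is a hypothetical placeholder.

old_carrier_cost = 35_500  # $/month, Vegas-to-Seattle link before SwitchCORE
new_carrier_cost = 6_700   # $/month, same link purchased through SwitchCORE

carrier_savings = old_carrier_cost - new_carrier_cost
print(f"Monthly carrier savings: ${carrier_savings:,}")

# If the premium for colocating at SuperNAP versus a lower-tier facility is
# less than the carrier savings, the move is cost neutral or better.
colo_premium = 20_000  # hypothetical $/month premium for the SuperNAP space
net = carrier_savings - colo_premium
print(f"Net monthly saving after the colocation premium: ${net:,}")
```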
Switch already has a handful of facilities in Nevada and is in the process of building another on the East Coast – its operating methodology means it continues to look for locations with available land close to high-capacity electricity supplies and multiple carrier connections. Globally, Switch is looking to partner with companies that have specific local knowledge, which can be combined with Switch's unique approach to data center construction to expand the boundaries of what is possible locally.
The SuperNAP tour was fascinating, if for no other reason than that it provided a very strong counterpoint to the current thinking around modular data centers – while Microsoft and others advocate prebuilt, container-based data center modules, Switch is reinventing the concept of best-of-breed mass data center construction. Alongside that, they fulfill every little boy's dream of SWAT teams and spy games – what a winning combination!