Whether it’s a building filled with racks of managed cloud servers, a small colocation facility for local enterprises, or an Internet giant’s showcase IT installation, a data center is a complex and hungry beast. The Natural Resources Defense Council (NRDC) reports in its Data Center Energy Efficiency Assessment that in 2013, data centers in the US consumed about 91 billion kWh of electricity, equivalent to the annual output of 34 coal-fired power plants of 500 MW each. Data centers currently cost American businesses $13 billion in power bills and pump nearly 100 million metric tons of carbon dioxide into the air every year.
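As a rough sanity check on that coal-plant equivalence, here is a back-of-envelope sketch; the 61% capacity factor is an assumption for illustration, not a figure from the NRDC report:

```python
# Rough check of the coal-plant equivalence cited above. The ~61% capacity
# factor is an assumed, illustrative value; it is not from the NRDC report.
plants = 34
plant_capacity_kw = 500 * 1000      # 500 MW per plant
capacity_factor = 0.61              # assumed; real plants vary widely
hours_per_year = 8760

annual_output_kwh = plants * plant_capacity_kw * capacity_factor * hours_per_year
print(f"{annual_output_kwh / 1e9:.0f} billion kWh per year")  # ~91 billion kWh
```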
Even as component-level refinements make computing more energy efficient, our appetite for processing, storage, and applications keeps growing. Data center buildouts have shown little sign of slowing, and according to the NRDC report, industry power consumption could reach 140 billion kWh per year by 2020.
At the same time, increasing energy costs and the desire to shrink carbon footprints are driving the same cultural shift toward energy efficiency here as in nearly every other industry. Whether they’re managing internal cloud infrastructure, renting colocation space, outsourcing IT services to managed third-party data centers while treating networking as a utility, or doing all of the above at different times to manage growing big data needs, companies and the IT facilities they employ are focusing ever more on the energy efficiency, among other efficiencies, of their IT operations.
But IT folks also face a unique set of challenges that require looking at loads through surprising lenses. Accelerating change nips at everyone’s heels in the 21st century, but the tensions are especially shearing for an IT world that measures time in 18-month iterations of Moore’s Law, roughly the time it takes the cost of a given amount of computing power to halve. Exponential evolution is transforming human and business behavior and generating unpredictable feedback loops that continually reshape the look, feel, backbone, and operations of technology products and services.
Even the computing media of the near future, what we use to store, process, and visualize our bits and bytes, are ever up for grabs. Witness the shift from floppy disks not so long ago to redundant solid-state storage arrays in the cloud today. If tomorrow looks anything like the past, it’s virtually unpredictable. And with one foot in the future, successful data centers must meet operation, construction, and performance standards in the here and now that make them closer cousins to passenger jets than to buildings made for people.
A part of the 451 Group, a sort of industry think tank, the Uptime Institute brings together a global network of data center owners and operators to collaboratively develop benchmarking and continuous improvement strategies across the industry. Uptime has become the de facto global data center authority, publishing the Tier rating system that has become the gold standard for ranking data center performance. It holds the exclusive right to certify data centers to Tier standards, provides professional development programs for data center designers and operators, and has developed the FORCSS methodology for optimizing IT efficiency, named for the metrics it encourages clients to weigh: Financial, Opportunity, Risk, Compliance, Sustainability, and Service Quality.
Note that Uptime’s favored phrase is IT efficiency, not energy efficiency. “Energy efficiency is very important,” says Scott Killian, Uptime’s Vice President of Efficient IT Programs, “but Uptime doesn’t speak on energy efficiency on purpose, because we don’t want people to just focus on electricity. Most of the industry may be overly focused on energy, and rightly so, since it can be a third or half the operational cost for some data centers. But we focus on the holistic conservation and efficiency of all information technology resources, including whether a company’s culture encourages continuous improvement. Energy is just one part of that ecosystem.”
The logic makes sense when Killian explains that companies fixated on energy efficiency often face increasing energy bills because they’re missing the big picture and not talking to their IT departments. “Companies may plan energy-efficient facilities having no idea that hundreds of servers idling at 20% utilization is a huge waste,” he says.
In contrast, designing and operating from the holism of IT efficiency often nets greater energy efficiency. People may realize, for example, that consolidating workloads as virtual machines onto a smaller pool of fully utilized physical servers drastically cuts the number of machines they need, and hence the redundant cooling, building space, maintenance, capital investment, and energy.
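A minimal sketch of that consolidation math, using assumed and purely illustrative power figures for a commodity server (roughly 150 W idle, 300 W at full load):

```python
import math

# Illustrative consolidation arithmetic; the idle/full-load wattages and the
# linear power model are assumptions, not measurements from any vendor.
IDLE_W, FULL_W = 150.0, 300.0

def server_power_w(utilization):
    """Rough linear power model between idle and full load."""
    return IDLE_W + (FULL_W - IDLE_W) * utilization

# 100 physical servers idling at 20% utilization...
before_kw = 100 * server_power_w(0.20) / 1000

# ...versus the same total work consolidated onto virtualized hosts run at 80%.
total_work = 100 * 0.20                  # work, in fully loaded server units
hosts = math.ceil(total_work / 0.80)     # 25 hosts
after_kw = hosts * server_power_w(0.80) / 1000

print(f"before: {before_kw:.1f} kW on 100 servers; "
      f"after: {after_kw:.1f} kW on {hosts} hosts")
```

And every server removed also removes its share of cooling, floor space, and maintenance, which is where the compounding savings come from.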
The case for IT efficiency makes even more sense considering how much each physical server must be pampered. Keeping servers running, regardless of holidays or earthquakes, has long been a condition of survival for data centers, and so Uptime posited the Tier system 20 years ago, reflecting what goes into 100% availability. A Tier I data center has dedicated infrastructure supporting IT beyond an office setting: a dedicated IT space, an uninterruptible power supply (UPS) to stabilize incoming power, dedicated cooling equipment that stays on beyond normal office hours, and an engine generator to protect IT functions from extended power outages. Tier II adds redundant critical power and cooling components, such as UPS modules, chillers or pumps, and engine generators, to provide select maintenance opportunities and an increased margin of safety against IT process disruptions resulting from site infrastructure failures. A Tier III data center requires no shutdowns for equipment replacement and maintenance: a redundant delivery path for power and cooling is added to the redundant critical components of Tier II, so that every component needed to support the IT processing environment can be shut down and maintained without impact on IT operations. Tier IV adds Fault Tolerance, so that when individual equipment failures or delivery-path interruptions occur, their effects are stopped before they impact IT operations.
The concern for 100% availability isn’t an idle one; in recent decades, our modern world, our pocketbooks, our food delivery systems, and even our health have come to rely on it as data centers host banking, credit card, social security, payroll, federal, transportation, hospital, and other mission-critical records and operations. Because we take it all for granted until something goes down, resiliency through redundancy is so central to the data center industry that it carries its own lingo, N+1: all the capacity you need, plus a backup.
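The arithmetic behind that faith in redundancy is simple, assuming independent failures; the 99.9% availability figure below is illustrative, not drawn from any Tier standard:

```python
# Independent redundant paths: the chance that everything is down at once is
# the product of the individual downtime probabilities. Figures are illustrative.
single_path_availability = 0.999
hours_per_year = 8760

for paths in (1, 2, 3):
    downtime_prob = (1 - single_path_availability) ** paths
    minutes_down = downtime_prob * hours_per_year * 60
    print(f"{paths} path(s): {1 - downtime_prob:.6f} available, "
          f"~{minutes_down:g} minutes down per year")
```

Each added path multiplies the odds of total failure downward, which is why a second delivery path buys hours of avoided downtime per year.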
Emerson’s Global Data Center (St. Louis) has a 7,800-square-foot rooftop solar array containing more than 550 solar panels. It is the largest in Missouri, capable of generating 100 kW of power.
Consumer-facing search engines and applications are perhaps the one exception to this “no fail rule,” and the Internet giants behind them are the ones most notably pursuing cutting-edge design and efficiency, according to Killian. Data infrastructure is central to their business models and comprises much of their overhead, so the pursuit of efficiency yields massive advantage, hundreds of millions in savings, and demonstrable environmental stewardship. But with huge efficiencies of scale, these companies are also learning to eschew “cooler” data centers in favor of customized, heat-tolerant servers stacked in distributed networks of smaller, lower Tier data centers made resilient by software. Eight Tier II data centers do the same work as two Tier IVs without the over-the-top electric bills. And with Google Search, if a server or even a whole data center fails, the end user’s search takes 0.4 seconds instead of 0.2 and yields 80,000 results instead of 2 million. In other words, nobody really cares. But for the rest of us, and for the small, medium, and corporate data centers that consume the vast majority of the industry’s energy pie, the “no fail rule” strictly applies.
A layman could be forgiven for thinking going high Tier and high efficiency is just out of the question, and he might have been right until very recently. But designers and operators are learning to achieve the best of both worlds in surprising ways. Once you’ve maximized server utilization and minimized their ranks, you can increasingly go for the gold, too . . . LEED Gold, that is.
In 2011, Bend Broadband, a local ISP and cable company serving Bend, OR, built the Vault, a 30,000-square-foot Tier III, LEED Gold certified data center. Since then, the Vault and eight other data centers between Wisconsin and the Pacific have been acquired by TDS Telecom in a bid to offer a spectrum of data services to nearby mid-sized companies through OneNeck, a wholly owned subsidiary. The new owners are clear that the Vault is a prototype for data centers to come.
Several factors conspired to make the Vault an EPA Energy Star, carbon-neutral data facility, note Hank Koch, Vice President of Mission Critical Facilities for OneNeck IT Solutions LLC, and Steve Hall, Data Center Director for the OneNeck Data Center in Bend. The Vault’s first 152 kW of power come from 624 south-facing photovoltaic (PV) roof panels. While the panels don’t cover the entire energy load, they do support OneNeck’s Blue Sky Partnership with Pacific Power, ensuring that all purchased power comes from wind and hydroelectric sources, while surplus power from the array feeds the grid.
Siemens Apogee Building Automation software posts details from sub-meters throughout the facility to a digital dashboard so that operators know how and where power and IT resources are being used. And while colocation customers bring whatever equipment they will, OneNeck runs its own servers as a pooled virtual infrastructure that does the job of 15 server cabinets with five.
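A minimal sketch of the kind of roll-up such a dashboard makes possible; the sub-meter names and kilowatt readings below are hypothetical, not drawn from the Vault’s actual Apogee configuration:

```python
# Hypothetical sub-meter snapshot (kW); names and values are invented for
# illustration and do not reflect the Vault's real metering layout.
submeters = {
    "it_load_room_a":    210.0,
    "it_load_room_b":    185.0,
    "kyoto_cooling":      55.0,
    "ups_losses":         25.0,
    "lighting_and_misc":  15.0,
}

it_kw = sum(kw for name, kw in submeters.items() if name.startswith("it_load"))
total_kw = sum(submeters.values())

print(f"IT load: {it_kw:.0f} kW of {total_kw:.0f} kW total "
      f"({it_kw / total_kw:.0%} of facility power)")
for name, kw in sorted(submeters.items(), key=lambda item: -item[1]):
    print(f"  {name:<18} {kw:6.1f} kW ({kw / total_kw:.0%})")
```

Seeing the breakdown this way is what lets operators spot, say, a cooling zone or UPS string drawing far more than its neighbors.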
The high-altitude, cool, dry climate of Bend is friendly to cooling with outside air, and the Vault offsets traditional precision refrigeration for much of the year with the largest installation of KyotoCooling in North America. The system, named for the 1997 climate treaty, uses outside air to remove heat generated within the data center. Its heart is a rotating thermal wheel that keeps the inside and outside air streams separate while allowing controlled, pressurized heat transfer from one to the other: the wheel carries heat from the hot internal air to the cooler outside air, exhausts it, and returns cooled air to the server rooms. The system uses between 75% and 92% less power than traditional refrigeration, eliminates the need for water, reduces emissions, gives the Vault free cooling for 70% of the year, and, with the help of humidifiers and air-tight seals throughout the building, helps keep internal moisture at perfect equilibrium even as winter levels swing wildly outside. Chimneyed racks add another layer of cooling efficiency; when hot exhaust air is chimneyed above the ceiling, it doesn’t mix with the colder air supplying the servers.
“Our server rooms don’t have hot and cold aisles, but only cold and slightly less cold aisles,” says Koch. The supply air can be warmer going in.
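To get a feel for what free cooling for 70% of the year at 75% to 92% less power can mean annually, here is a rough estimate under assumed conditions; the 500-kW IT load and cooling power draws are illustrative, not OneNeck’s figures:

```python
# Rough annual cooling-energy estimate under assumed conditions: a 500-kW IT
# load, conventional refrigeration drawing ~0.5 kW per kW of IT load, and free
# cooling available 70% of the year at ~85% less power. All figures illustrative.
it_load_kw = 500.0
refrigeration_kw = 0.5 * it_load_kw
free_cooling_kw = refrigeration_kw * (1 - 0.85)
hours = 8760

baseline_kwh = refrigeration_kw * hours
mixed_kwh = free_cooling_kw * 0.70 * hours + refrigeration_kw * 0.30 * hours

print(f"refrigeration only: {baseline_kwh / 1e6:.2f} GWh/yr; "
      f"with 70% free cooling: {mixed_kwh / 1e6:.2f} GWh/yr "
      f"({1 - mixed_kwh / baseline_kwh:.0%} saved)")
```

Under these assumptions, the cooling bill drops by more than half before counting the water and emissions the system also avoids.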
The Vault tops it all off with Lutron LED lighting that brightens in front of people and dims behind, low-flow toilets, and sustainable materials throughout the building. “The insulation in the walls is made of shredded jeans; it’s the perfect material for controlling temperatures,” says Hall.
While there were a few challenges in building the Vault to such exacting and seemingly contradictory standards, including the need to fix initially undersized humidifiers and a potential single point of failure for multiple redundant cooling lines discovered by Uptime inspectors, “overall, the project managers were great, the building was commissioned on time and budget, and the process of designing to both LEED Gold and Tier III standards went flawlessly,” says Hall.
Discoveries industry-wide point to surprising efficiency opportunities as well. Hall notes that recent tests have determined that even at 80°F, component failures are rare and hardly impact today’s large server configurations, challenging the time-honored belief that computer rooms need frigid temperatures. Whether the technology is more robust or the iciness was overthought from the beginning, updates to ASHRAE computer-room thermal standards reflect a warming attitude that’s lowering the costs and emissions of data centers worldwide. Even the increasing energy density that comes with miniaturization, packing the same energy use and heat output into less space, could conceivably become an opportunity in the hands of future designers.
While the accompanying concentration of cooling sometimes draws more energy, it doesn’t always, and if the heat can be concentrated and directed enough, it might someday be useful, say for heating water or a nearby neighborhood. The Vault’s designers toyed with heating the building’s office spaces with server-generated heat but had to let go of the idea upon learning that Tier III data security meant no sharing of HVAC across rooms.
Interior shot of Global Relay’s green data center, Vancouver, BC
Global Relay’s data center in Vancouver at dusk
Data Ecosystems
And Jack Pouchet, Director of Energy Initiatives for Emerson Network Power, seems to finish Hall’s sentences when it comes to the cube rule of fans, rooted in fluid dynamics: an idealized fan’s power draw scales with the cube of its speed, while its airflow scales only linearly. “Resiliency-driven redundancy can be a designer’s best friend in running highly efficient data centers,” he says.
You can run several fans at lower speed and use less energy to generate the same airflow than a single fan at full speed would. If one fan fails, the others speed up to pick up the slack while still drawing less power than that single full-speed fan. In fact, when consulting on the design of a 1-MW facility, Pouchet will advise 1.2, 1.4, even 1.6 MW of cooling with variable-speed fans, and show surprised clients models demonstrating a net reduction in energy use. For existing US facility upgrades, he’ll also mention that utilities often pay to convert single-speed fans to variable speed.
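A sketch of the cube rule in action, for an idealized fan in which airflow scales linearly with speed and power with its cube; the 5-kW rated power is an arbitrary illustrative figure:

```python
# Idealized fan affinity laws: airflow scales with speed, power with speed cubed.
# The 5-kW rated power is an arbitrary illustrative figure.
RATED_POWER_KW = 5.0

def fan_power_kw(speed_fraction):
    return RATED_POWER_KW * speed_fraction ** 3

one_full_speed = fan_power_kw(1.0)            # one fan delivering all the airflow
three_at_third = 3 * fan_power_kw(1.0 / 3.0)  # three fans sharing the same airflow
two_at_half = 2 * fan_power_kw(0.5)           # one of the three has failed

print(f"1 fan @ 100%: {one_full_speed:.2f} kW")
print(f"3 fans @ 33%: {three_at_third:.2f} kW "
      f"({three_at_third / one_full_speed:.0%} of the single-fan draw)")
print(f"2 fans @ 50%: {two_at_half:.2f} kW (after one failure, airflow unchanged)")
```

Even after a failure, the surviving fans deliver the same airflow for a fraction of the single-fan power, which is why oversizing the cooling plant with variable-speed fans can cut energy use rather than add to it.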
Another way that designers and operators are learning to achieve the best of all worlds is by thinking of data centers and the businesses that use them as ecosystems. Growing its product and service offerings in homage to this strategy, Emerson Network Power, a pioneer of precision cooling, airflow, and power for data centers, has in recent years gone beyond a full-spectrum product portfolio to offer consulting, design-build, and full life cycle maintenance for projects, including Facebook’s new Sweden data center and a 10-data-center rollout for Australia’s open-access National Broadband Network.
The ecosystemic approach treats design, build, operations, and life cycle, as well as vicinity, site, and construction details, as a whole. With product offerings ranging from customizable racks and continuous power solutions to optimization software and switches for controlling entire data centers from one terminal, Emerson is well positioned to help meet increasing demand for sustainable regional data centers from the ecosystemic point of view, whether designing from the ground up, “refitting” obsolete data centers, or repurposing buildings never meant for the hum of computers.
The approach starts from the widest angle, with the consultants first asking clients what the building needs to be now and five years from now. To plot an integration of products, software, and services that will serve faithfully through a dynamic future, electrical, mechanical, site, and civil engineers on the consultant side and IT people, facilities people, and executives on the customer side have to step out of their accustomed silos.
Once everyone’s talking, the next step is to look at potential site opportunities, considering conditions such as climate; reliable, cheap, and hopefully renewable energy; and utility, tax, and equipment purchase incentives. One thing that sold Facebook on its Sweden site, for example, was a surrounding high-voltage ring supplied by three hydro plants that hadn’t seen an outage in 30 years.
Noting the complexity of the design-build that follows, Pouchet describes a Tier III “refit” project currently underway in a former 50-MW paper mill in the rural US Southeast: the good existing power lines are just the beginning. Fiber must be routed in from five or more providers, with access routes pre-considered, laid, and paid for, and the site and walls inspected for hazardous materials. The site must be served by two power utilities and two water utilities, their lines coming in from north and south. Existing high ceilings are great for ventilation, yet the otherwise opportune remoteness raises questions about hauling racks of servers, generators, and cooling components over rutted rural roads. Walls, insulation, ceilings, roofs, and seals must be modified; the list goes on, and, as always, Emerson is using the knowledge gained afield to refine its product offerings.
Working over the last three to four years with large real estate investment trusts and colocation providers running extraordinarily complex data centers, Emerson developed a logistical process called “fitting” that involves trucking in modular steel superstructures fully wired up with servers and complete infrastructures, and positioning them with cranes and forklifts. Fitting is especially helpful in “refreshing” existing, aging data centers. It turns out that just replacing equipment every four years can net huge leaps in IT and energy efficiency.
“Just a few years ago, UPSs often ran at 80 to 85% energy efficiency,” says Pouchet. “Our gear now runs at 93 to 94% in exactly the same situations, and sometimes 98%.” And in North America, utility and government incentives often sweeten the deal even further, as some larger companies save $100,000 to $500,000 per year in energy bills.
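A sketch of what those UPS efficiency points are worth, under assumed conditions; the 1-MW IT load and $0.08/kWh electricity price are illustrative and do not come from Emerson:

```python
# Value of a few points of UPS efficiency, under assumed conditions: a 1-MW IT
# load served year-round and electricity at $0.08/kWh (illustrative figures).
it_load_kw = 1000.0
hours = 8760
price_per_kwh = 0.08

def utility_kwh(ups_efficiency):
    """Energy drawn from the utility to push the IT load through the UPS."""
    return it_load_kw * hours / ups_efficiency

saved_kwh = utility_kwh(0.85) - utility_kwh(0.94)
print(f"saved: {saved_kwh:,.0f} kWh/yr, roughly ${saved_kwh * price_per_kwh:,.0f}/yr")
```

At multi-megawatt scale, the same arithmetic lands in the six-figure range cited above, before any utility incentives.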
“We’ve begun to turn a corner as an industry,” says Pouchet, noting that Emerson’s only regret regarding its own LEED Gold data center in St. Louis, MO, is that rooftop PV now costs a quarter of what it did four years ago. “We can build a fortress of a data center that’s very efficient, have our cake, and eat it, too,” he says.
Looking to the long-term future, Pouchet envisions a level of digital interconnectivity that looks like a “kinder, gentler SkyNet from Terminator, kept in check by Asimov’s Three Laws of Robotics.” Autonomous data centers will likely fall into a natural hub-and-spoke pattern, with large regional centers, conveyances, and last-mile community-scale data centers each doing their part. Pouchet is not alone in his speculation. According to Killian of the Uptime Institute, the 451 Group, too, speculates that trends could lead to a world of hubs and nodes.
“Whether public, private, or third-party, clouds will drive how data infrastructure gets deployed, and it’s just a matter of time before everybody is in some form of cloud,” says Killian, adding that though there will be unforeseen issues, operations will naturally become more IT and energy efficient. Wherever they operate, standalone servers will give way to virtual machines that pool resources and push every physical machine toward full utilization. Infrastructures will compress, standardize, and become easier to maintain; technology will keep shrinking; and regionalization will drive efficiencies of scale that make design approaches like those of the “big boys” more tenable for all.
The Vault, now part of the OneNeck IT Solutions family, was built with overhead power and cooling lines for easy upgrades and repairs.
But the industry is also finding that the distance between here and there may depend far more on human factors than on silicon, bio, quantum, or whatever comes after. As data centers become more complex and third-party services come online, many companies respond with apathy to the rude awakening of having no idea what it all costs. Killian has seen companies neglect backend IT efficiency because it won’t reap the biggest savings and is never the sexiest project on a starry-eyed IT executive’s plate.
Until costs spike or major legislation hits the fan, “it’s like taking out the trash,” he says. Data center operators and company facilities staff do care, because it sets their budgets, but they pay bills that IT departments never see, and so the waste goes unnoticed.
Perhaps the antidote is, once again, a resounding breaking of silos. When executives become accountable for bills that can run to hundreds of millions of dollars a year for some enterprises, and when they see those bills and hear the stories behind them, they’ll likely take an active interest, hold staff responsible, encourage collaboration between IT and facilities people, and foster cultures of continuous improvement built on shared goals. Bringing that about is more straightforward for large enterprises that own their facilities, Killian says, but companies that outsource can incentivize their providers as well.
Uptime will soon begin beta-testing a program to help clients create the bonds of communication, collaboration, and accountability essential to ever-improving IT efficiency. Still in the “talking phase,” the institute is getting a positive response to proposed approaches and protocols, and willing guinea pigs. But it’s still uncertain whether participants will be able and willing to change. On that, “there’s just no data yet,” concedes Killian, though he adds, “I think it will be positive; we’ll have to see over time.”
On the industry’s energy future, he strikes a similar pose. “In the IT world, anyone able to forecast a year out is doing a great job. Five years is an eternity, and the hardest thing of all to predict is human behavior. But I am eternally optimistic that the industry will become carbon-neutral.”
How and when is still uncertain, but with whole systems design solutions blazing trails in an industry built on ingenuity, innovation, and rapid change, hopeful conclusions are not to be dismissed lightly.