Data Centers Become More Environmentally Aware

10 March 2015

Sarah McElroy of IHS recently spoke with ViaWest's Chief Data Center Officer, Dave Leonard, about his upcoming session at AFCOM's Data Center World conference and how data centers are becoming more environmentally considerate while also being built for quality, cost-effectiveness, and flexibility.

Sarah McElroy: What are you speaking about at Data Center World this year, Dave?

Dave Leonard: The session is about sharing our experience with building data centers because ViaWest has been building them for a long time. We have 27 data centers online right now, and we are building two more that will go online later this year. We've built data centers for a long time, and we've learned a lot by doing some things right and some things wrong. This session is fundamentally about sharing the best practices that we've learned while building these data centers.

SM: For whom will this session be most useful? Who do you envision the audience to be?

DL: I think there are three major audiences. The one that would benefit most directly is other builders of multi-tenant data centers (MTDCs). Because we are building MTDCs, ViaWest does things differently than we would otherwise. So that's one audience, but it's a very small audience.

More broadly, the second audience is anybody building a data center, whether it's an enterprise or a service provider. These attendees are able to benefit from the session because it's about the best practices of building data centers, some of which will be applicable to the non-MTDC world.

And then the third audience, which is the biggest audience, is people who want to know why we do what we do at ViaWest. It's people who are interested in potentially consuming MTDC services and want to understand why we build in such a way and how it benefits customers. Essentially these are people who are interested in being a customer of ViaWest or another colocation provider. Those three audiences are the main targets for the session.

SM: How has the data center market evolved to the point where quality, energy efficiency, and cost are no longer at odds with each other?

DL: There are three major dimensions in building a data center that, historically, have been opposed to each other; they worked against each other. In the past, if you were going to be very energy efficient, usually your reliability wasn't great. There are technical reasons why that was true (extra electrical paths increase energy losses, for example) and philosophical reasons (treating reliability as paramount lessens focus on energy efficiency, for example). The third trade-off against both of those two was cost. If you were going to be hugely energy efficient, that means you probably had different cooling modes and maybe even different cooling equipment when you were in free cooling mode, and the capital costs of deploying multiple cooling methods might be higher than if you weren't interested in energy efficiency. The capital cost of higher-quality data centers has historically been very significantly higher than that of lesser-quality data centers.

Think of a rubber band that you are stretching around three points in a triangle. If you try to pull one of those points out, like if you try to increase your quality for example, the more pressure it's going to place on your costs and energy efficiency. Those points will get pulled inward. You can envision outward pull as good and inward pull as bad. Then say you try to pull the cost point out, meaning you lower the cost, making it better. Well, that is going to naturally pull your quality and energy efficiency in, make them worse in other words. So the visual that I'm trying to give is that you've got this triangle with contending dimensions. Historically, people have made compromises in at least one and maybe more of those dimensions. What we think we've done at ViaWest is optimized all three dimensions.

There is one more overarching dimension that I frequently talk about, which is the resulting flexibility of the data center build. It's flexible to a range of different customer needs, and this is where the multi-tenant aspect becomes kind of unique. We don't know the customer set that is going to be in our data centers, but we do know that they are going to have very different needs, for example, from their power density to their desired delivery topology. Maybe they want to use whips; maybe they want to use a busway electrical delivery system. They may even want different voltages because some customers are now wanting a European higher-voltage standard of delivery. We also don't know what kind of equipment they're going to have in there or what depth, width, and height of racks are required. Our product, the data center that we build, has a lot of flexibility that can adapt to those differing customer needs, some of which we don't even know about yet. Overall, you can think of those three major dimensions we discussed earlier and then a fourth resulting dimension of flexibility.

SM: Taking into consideration that model and those four dimensions, what does a "green" data center mean to you?

DL: A green data center is a bunch of different things to a bunch of different people. For us, a green data center is one that uses the least amount of energy and water possible. It's the most energy efficient and the most water efficient because those are the two major commodities that get consumed day in and day out in a data center. Optimizing to those is the most important thing a data center can do because data centers use a tremendous amount of those resources, especially electricity.

There are also things we don't focus on. We actually started going down a path of focusing on these things, and we realized, "Wow, that's just really not as important." A Leadership in Energy and Environmental Design (LEED) certification is one of those things. There are a couple of issues with LEED. It has been evolving recently but, as it stood up until a couple of years ago, LEED was really focused on measuring how environmentally considerate you were at the initial build-out of a building, and mostly for people-space buildings. They didn't have a version that was focused on computer space or white space. Now they are changing that to try to adapt more for computer space, but even so, the whole LEED certification is all about how you construct a building. The construction cost of a typical high-rise or office building is a pretty substantial portion of its lifetime cost compared to its lifetime use of energy and other natural resources. In contrast, the build cost of a data center is a much smaller portion of its total costs than its operating cost, predominantly because it's going to be consuming a tremendous amount of electricity throughout its lifetime (unlike an office building). If water is used to cool, it may be consuming a tremendous amount of water over its lifetime as well. So those aspects of the ongoing cost or resource use, those aspects that aren't weighted strongly by LEED, are way more important than the one-time build which LEED does measure. I have nothing against LEED; we have data centers in LEED-certified buildings. I just don't believe it's as important an aspect as energy and water efficiency.

SM: Because green initiatives for ViaWest focus on energy and water efficiency, what are the most common features or pieces of equipment that are being used to create green data centers?

DL: Energy efficiency in a data center is made up of a thousand small decisions and one large decision. The one large decision is how the data center will be cooled and, more specifically, whether it will use free cooling, which means not using a compressor, not doing mechanical cooling. Whether you do free cooling using direct or indirect air, direct or indirect evaporation, or any of the commonly accepted methods of doing free cooling, that decision impacts your energy efficiency far more than any other decision you can make in the data center.

Think of power usage effectiveness (PUE), the ratio of total energy entering the data center divided by total energy entering the servers, where a lower number is better. In one respect, that measure is kind of flawed because it assumes you can't change the usage of energy in the servers themselves. There is actually a huge gain to be had by using servers that are efficient in processing applications, but ViaWest, as a data center provider, really can't materially affect that. The PUE is the measure of all the overhead, all the rest of the infrastructure and equipment that is needed to support the servers. That includes security systems, lighting, electrical losses, and maybe some office overhead, but by far, the biggest chunk of that overhead is cooling. Making a cooling decision that really is energy efficient and tuned to the environment where that data center is built, because environment matters in free cooling, that's the biggest decision you can make when building an energy efficient data center.
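The PUE ratio described above can be sketched in a few lines; the kilowatt figures below are hypothetical, purely for illustration, and are not ViaWest numbers.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total power entering the facility
    divided by power reaching the IT equipment. 1.0 would mean zero
    overhead; lower is better."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical site: 1,200 kW enters the facility, 1,000 kW reaches servers,
# so 200 kW goes to cooling, lighting, electrical losses, and other overhead.
print(round(pue(1200.0, 1000.0), 2))  # 1.2
```

This also makes the flaw Leonard mentions concrete: replacing servers with more efficient ones shrinks the denominator and can make PUE look worse even as total energy use falls.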

Then there are the thousand little decisions. For example, we use very energy efficient electrical gear, including unity power factor uninterruptible power supplies (UPS), low-loss transformers, etc., so our losses throughout the electrical system are really low. We also design our electrical topologies to drive utilization up because electrical losses are relatively lower at higher utilizations. We use LED lighting in the data center halls, and we do that for a couple of reasons. One is that they are very energy efficient in and of themselves (compared to fluorescent or incandescent), and the second reason is that they can go instant-on and instant-off, cycling many times without being damaged. We put proximity sensors up everywhere so that when part of the data center is not being used, the lights do not have to be on; they simply turn on if somebody walks over to that area. All these little decisions cumulatively matter a tremendous amount, but they are still overwhelmed by the one big decision, which is cooling.
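The point about losses shrinking at higher utilization can be illustrated with a toy model (all numbers here are hypothetical, not ViaWest's gear): electrical equipment such as a UPS or transformer has a fixed no-load loss plus a loss roughly proportional to the load, so the fixed portion is amortized over more delivered power as utilization rises.

```python
def loss_fraction(utilization: float,
                  fixed_loss_kw: float = 5.0,
                  proportional_loss: float = 0.02,
                  capacity_kw: float = 1000.0) -> float:
    """Losses as a fraction of delivered load for a toy electrical-gear
    model: a fixed no-load loss plus a load-proportional loss."""
    load_kw = utilization * capacity_kw
    losses_kw = fixed_loss_kw + proportional_loss * load_kw
    return losses_kw / load_kw

# At 25% utilization the fixed loss looms large; at 90% it is amortized away.
print(f"{loss_fraction(0.25):.3f}")  # 0.040 -> 4.0% of delivered power lost
print(f"{loss_fraction(0.90):.3f}")  # 0.026 -> 2.6% of delivered power lost
```

Under this toy model, running the same gear at 90% instead of 25% utilization cuts the relative losses by more than a third, which is why topologies that keep equipment well loaded help efficiency.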

SM: What are some of the incentives, or motivations, for becoming a green data center, and are those incentives different for colocation providers compared to enterprise data centers? Is the main incentive just lowering operating costs, or is there more to it than that?

DL: The main incentive is cost. As a citizen of the planet, you always feel better when you're using resources wisely, so there is some psychological advantage of using resources efficiently and building a data center that is as good to the earth as it can be. Also, there are some companies that have requirements on environmental initiatives and sustainability that they have to report, so that's another motivation. ViaWest has some customers that have to report on their carbon footprint, their energy efficiency, and that type of stuff. But dominantly what people care about is the cost impact.

In most enterprises, it is difficult to get great alignment on energy efficiency because often the data center builder, the operator, and the IT team are different groups, without a concerted focus on energy efficiency. The big difference between an MTDC and an enterprise is that being energy efficient gives an MTDC, like ViaWest, an advantage against competitors that aren't energy efficient because we can charge less for our services. That's the biggest incentive because customers care how much they are paying for energy, and we can charge them less because we use less overhead energy.

SM: Who is leading the way in green data center design? Is it the colocation providers or perhaps the hyperscale data centers and web giants who are normally on the leading edge of data center innovation?

DL: The leaders are both of those. Hyperscale data centers and web giants are companies that typically control more aspects of the equation. Whether it's a hyperscale search engine company or a hyperscale social media company, those are companies that are building a relatively homogenous set of services and, therefore, have a relatively homogenous set of servers in their data center. They can control all aspects of those servers and can push the envelope a lot more in certain directions. For example, those kinds of companies, because they control the entire IT stack, can decide that they are going to run their data center really hot because they've defined the specifications for their servers; they know the servers will be able to handle that with no problem at all. Running a data center hot is, in a lot of cases, going to make it more energy efficient because you cool less and can use free cooling more.

Another example would be the decision to not have UPS support for the servers because in some companies' business models, they may not care if those servers go down. They have a bunch of other servers in another data center that will pick up the load. Or they may decide they don't need generators at their data center because the whole data center can go down, and there are other data centers that can pick up the load. All of these examples are real life decisions that these very large search and social media companies have made because they can, because of their business model.

Now contrast that to a multi-tenant company or most enterprises where they have a mixed bag of computing – some of it old, some of it new, some of it mainframe, some of it AS/400, some of it blade centers, some of it individual pizza box servers, all kinds of different switches, etc. They can't optimize to the high end of any one of those pieces of equipment because they have to run all of them. For instance, at ViaWest we run our data centers at the very low end of what most people do in terms of temperature. The cold aisle of our data centers runs about 71-72°F (around 22°C). In a company that really understands all the equipment that's going to go into the data center, they might run at the high end of the ASHRAE (American Society of Heating, Refrigerating, and Air-Conditioning Engineers) recommended or even allowable range, and they might be more energy efficient because of that. We can't do that because we don't know the equipment that's going to be in the data center.

We also have to have bulletproof UPS, bulletproof generators, and bulletproof everything because our customers may not have that inherent application reliability or data center resilience that sometimes these big guys do. I like to say that MTDCs innovate tremendously within a very strict operating parameter. The big guys, the big search engines and web giants, innovate more broadly across the entire spectrum because they control more of it.

SM: Is there anything else you'd like people to know before attending your session in April?

DL: We talked a lot about the energy efficiency aspect and touched on the quality aspect as it relates to energy efficiency. What we didn't touch on is the resulting flexibility for the customers, and that's something that I will delve into more in the session – the flexibility and adaptability of these data centers to meet a wide variety of customer needs. That's important to me because that didn't used to be a capability in our portfolio; it's much more recent. What we are currently building is what we call Generation 4 data centers which have so much more customer flexibility built into them. We are really excited about sharing that and having other people learn from that.

SM: Sounds like people will need to check out the session to hear more about that very important part of data center innovation! Thank you for sharing some of your insights, Dave. I'll look forward to hearing more during your session. See you in Las Vegas!

Dave Leonard is Chief Data Center Officer at ViaWest.
