27 March 2016
Despite the data centre industry’s best efforts to remind the eco-geeks that there is more to data centre (DC) power usage than just the networked computing, debates around the greening of facilities continue to be biased toward the servers and networking technology.
The IT crowd has borne the brunt of adverse sentiment over data centres’ reputation for unfeasibly swollen carbon footprints, even though a DC’s energy consumption is shared among a range of utility systems, from computing, cooling and CRAC units to lighting, alarms and premises security.
To their credit, deep-pocketed, big-name online brands such as Google and Amazon have pointedly sought to design, locate and build new facilities with the eco-efficiency of all data centre systems, both IT and non-IT, as a paramount goal. This drive has also highlighted the potential for applying new science to scaling electricity consumption (and bills) responsibly with expansion.
2016 is already proving to be something of a vintage year for ultra-green data centres, with Amazon, Apple, and Facebook each announcing new such facilities over the last few weeks.
Of course, these wealthy top-tier players have the financial wherewithal to prioritise green ambitions (and R&D may surely be entitled to borrow from marketing budgets here), while other independent new-build facilities have also had their green credentials commended.
But there remain many DCs out there which have not had the efficiency of their power usage scrutinised recently, nor have proper plans in place for calibrating operations to minimise environmental impact. It is possible that some of these ‘ungreen’ centres are running critical applications for some of the world’s most esteemed businesses or prestigious government agencies.
Recent research commissioned by The Green Grid found that while most organisations face mounting pressures to improve the efficiency of their data centres, 43 per cent of those surveyed admit to having ‘no energy efficiency objectives in place’.
Furthermore, these respondents said that while better opex management might be within scope of their DC goals, it was prompted by financial planning objectives rather than a commitment to making IT estates shades greener.
Green Grid EMEA spokesperson Roel Castelein said: “The research found that the top challenges and opportunities at the board level were all to do with reducing and predicting costs, rather than on ‘green’ or resource-efficient objectives.”
He went on to note that this is likely driven by customer requirements in an acutely competitive marketplace requiring enhanced flexibility and ‘always on’ functionality – all despite common declarations toward corporate social responsibility strategies.
The US Natural Resources Defense Council has identified other reasons why ongoing greening efforts may have slipped down data centre agendas. It highlights the fact that growth in the multi-tenanted DC market segment is creating facilities whose server estates are owned and managed by diverse parties, making common and/or holistic energy efficiency programmes difficult to implement.
So even after some 10 years of asking, the question of whether established data centres can be made greener has to restart from the recognition that significant numbers of established facilities may be operating in the US and UK (no-one knows precisely how many) that were not designed to be resource efficient, and cannot easily be brought into line with the latest energy efficiency models.
What’s more, for new DCs, the pace of growth is such that energy management (and other green upkeep factors) is often a secondary consideration to maintaining operations and meeting escalating customer demands. After all the debates around government legislation, voluntary codes of conduct, EU guidelines and self-regulation, it seems the sector still has some way to go before baseline energy control targets become standard industry practice.
Yet shades of green may be set to reappear on the multicoloured spectrum of data centre issues, prompted in part by headline-grabbing events. Last December’s United Nations Climate Change Conference (COP 21), for example, aimed to encourage energy consumers to switch to renewable supply sources, away from fossil-fuel-based electricity generation; and in the UK, government plans to reform carbon reduction reporting and taxation regimes for businesses, announced last September, also promise to bring the green data centre question back into legislative focus.
So at a more local and immediate level, what are some of the challenges data centre practitioners face when seeking to create more energy-efficient facilities?
One area where green aspirations rub up against the realpolitik of data centre operations is in server provisioning. Pursuing a balance between ensuring that the required processing and memory resources are fully available, and not having servers under-loaded while they idle through the kilowatts, continues to challenge DC planners and managers.
Predictive analysis might help provide foreknowledge of resource shortfalls, but when push comes to shove, commercially-oriented data centre specifiers are bound to err on the side of surplus capacity. Although some benchmarks have been set and met with regard to the greening of DCs, vast amounts of IT resources at such facilities are under-utilised due to over-provisioning.
This means that rather than running two under-utilised servers, the workloads could be combined to run on one server, according to Stuart Higgins, technical evangelist at Sumerian. He says: “McKinsey & Company analysed energy use by data centres and found that on average they were using only six-to-twelve per cent of the electricity consumed powering their servers to perform computations. The rest was used essentially to keep servers idling and ready in case of a surge in activity that could slow or crash operations.”
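The consolidation argument can be sketched with some rough arithmetic. The linear power model and the wattages below are illustrative assumptions, not measured vendor figures, but they show why idle draw makes lightly loaded servers so wasteful:

```python
# Illustrative sketch of the consolidation argument: two lightly loaded
# servers versus one server carrying the combined workload. The power
# model (idle draw plus a load-proportional term) and the wattages are
# assumptions for illustration only.

IDLE_W = 200.0   # assumed power draw of an idle server (watts)
MAX_W = 400.0    # assumed power draw at 100% utilisation (watts)

def server_power(utilisation):
    """Linear power model: idle draw plus load-proportional draw."""
    return IDLE_W + (MAX_W - IDLE_W) * utilisation

# Two servers each running at 10% utilisation...
two_servers = 2 * server_power(0.10)
# ...versus one server running the combined 20% workload.
one_server = server_power(0.20)

print(f"Two servers at 10%: {two_servers:.0f} W")   # 440 W
print(f"One server at 20%:  {one_server:.0f} W")    # 240 W
print(f"Saving:             {two_servers - one_server:.0f} W")
```

Because the idle draw is paid once per machine rather than per unit of work, the consolidated configuration does the same computation for roughly half the power in this sketch.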
This phenomenon is commonly excused as a matter of redundancy – it’s better to over-provision in the first install rather than add new servers incrementally later on, and so forth.
There’s also the issue of covering for server failure rates which, according to anecdotal evidence, remains an abiding problem for some DCs. But managers also have to contend with other factors at play around this issue.
One is where server over-provisioning (i.e. over-consumption) has come about because customer demand has not met levels that were anticipated back when a data centre was specified and kitted-out. According to Higgins, the problem partly lies in the fact that a ‘standard’ server configuration is used, rather than one that is optimised for the workload type.
Past it or power dressing?
For years, debates around data centre greenness in terms of energy usage efficiency have been couched in the context of the PUE (power usage effectiveness) metric. ‘Redundant redundancy’ is yet another factor that complicates PUE calculations.
Sometimes referred to as a ‘standard measure’, PUE remains something of a de facto benchmark for some quarters of the industry, and although there are other power usage/power efficiency models in the market, its status continues to divide opinion in the profession.
PUE can be gauged and interpreted from multiple directions. In terms of applying it to a data centre’s green quotient, it’s sometimes overlooked that PUE is a measure of how effectively power is used – not how much power is consumed.
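As a ratio, the metric itself is simple: total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch of the calculation (the kilowatt figures are illustrative assumptions):

```python
# Minimal sketch of the PUE calculation:
# PUE = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt entering the facility
# reaches the IT equipment. The kW figures are illustrative.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

it_load = 800.0    # servers, storage, networking (kW) - assumed
overhead = 400.0   # cooling, lighting, UPS losses, etc. (kW) - assumed

print(f"PUE = {pue(it_load + overhead, it_load):.2f}")  # PUE = 1.50
```

Note that the ratio says nothing about how much work the IT load performs per watt, which is precisely the limitation the critics quoted below point to.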
Some industry insiders have argued that the intricacies of holistic data centre greening will not be resolved fully until the thorny issues around PUE are fundamentally reconsidered – or even abandoned altogether. For others, PUE has validity because it is a moving target that can be pursued but not attained.
Many data centre operators market PUE as a badge of honour which at least flags up the fact that, as responsible service providers, they are cognisant of the need to practise energy management awareness.
Kevin Read, GIO senior delivery centre manager at Capgemini, believes PUE is a simple calculation and can provide a good metric – if used correctly. He points out that you need to understand its limits: “The key rule is never mix IT power load with non-IT power load.”
Other data centre experts, such as MigSolv CEO Alex Rabbetts, remain implacably sceptical about PUE’s actual worth. He suggests that the problem with PUE is that it was never intended to be used as a data centre ‘comparison tool’.
He agrees that there are many factors that affect PUE, not least of which is where it is actually measured: “As one part of the metric is ‘Total Facility Power’, then it really should include total facility power and not exclude the building management system, monitoring tools, or security systems.”
Rabbetts adds that other factors, such as location, building construction, time of measurement, outside temperature and many others, will also affect PUE.
But above all, he reckons the metric’s worth is compromised by the fact that it’s been “stolen” by the marketing department as a selling tool. “It is so widely abused [by the industry] that it no longer has any value. Throwing away PUE would be good for the industry – and good for the customer.”
Andrew Donoghue, European research manager at 451 Research, is likely to agree here. “[PUE] only measures the efficiency of a data centre facility – it says nothing about improvements in IT energy efficiency. In fact, improvements in IT energy efficiency can sometimes result in a worse PUE number.”
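Donoghue’s point follows directly from the arithmetic of the ratio: if a server refresh cuts the IT load while facility overhead stays roughly fixed, PUE worsens even as total consumption falls. A sketch with assumed figures:

```python
# Why a genuine efficiency gain can worsen PUE (illustrative numbers):
# replacing servers cuts the IT load, but fixed overhead (cooling,
# lighting, UPS losses) barely changes, so the ratio deteriorates.

overhead_kw = 400.0  # assumed roughly fixed facility overhead (kW)

before_it = 800.0    # IT load before a server refresh (kW) - assumed
after_it = 600.0     # IT load after more efficient servers (kW) - assumed

pue_before = (before_it + overhead_kw) / before_it   # 1.50
pue_after = (after_it + overhead_kw) / after_it      # ~1.67

print(f"Total power falls: {before_it + overhead_kw:.0f} kW -> "
      f"{after_it + overhead_kw:.0f} kW")
print(f"PUE worsens:       {pue_before:.2f} -> {pue_after:.2f}")
```

In this sketch the facility consumes 200 kW less overall, yet its headline PUE looks worse, which is exactly why a facility-only ratio cannot capture IT energy efficiency.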
He continues by saying that the industry has been waiting a long time for a single “useful work” metric to emerge, and cites initiatives from the BCS (DC-FVER) and Future Facilities (ACE metric) as contenders of interest.
Yet Sumerian’s Higgins believes that there’s life in PUE yet: “To track and measure any sort of improvement, an effective measure is needed. PUE, in this regard, allows comparison of progress, and supports informed decision-making.”
While for now PUE may be an influential metric, Keysource MD Mike West believes there is still much to be done in terms of maximising utilisation of available capacity and reducing absolute power demand.
He points out that much of the improvement in data centre heat management efficiency in recent years has been achieved using evaporative cooling systems. This, says West, shifts the eco-onus from power consumption to water consumption, and future technological advances may find it easier to reduce the usage of water rather than power.
“No need” for DCIM
Data Centre Infrastructure Management (DCIM) platforms from vendors such as Emerson Network Power, Nlyte, Schneider Electric, and others, give data centre engineers a range of controls over both the computing and network components, as well as over service delivery and quality assurance, including green gauging elements. These platforms can also contain visualisation tools which enable environmental readings to be depicted graphically in real-time, so that operations teams can adjust and recalibrate accordingly.
However, the use of these tools is elective, and there could be many large data centres chugging away with only minimal levels of tool-based management in effect.
The positive news here, according to John Curran, VP at Avocent/Emerson Network Power, is that as DCIM solutions are more widely utilised, more data centres will be managed toward greener performance targets – even if those controls should have been deployed years ago. “Those tasked with the design and operations of a data centre are increasingly turning to DCIM tools to better understand the goals, measure performance against those goals, and make changes to improve the performance over time,” he says.
However, others see DCIM as an interim approach, and that more profoundly transformative infrastructure management technology has yet to arrive. “DCIM has been fantastic for a few software developers and other firms to create a new market for them to sell into, but it isn’t actually very clever,” claims Rabbetts.
He predicts that artificial intelligence is the technology that will change the way in which data centre infrastructures are managed and perform: “Data centres using AI for management will automatically balance cooling against load and alert if a piece of plant is about to fail or needs maintenance. It will also report on its own efficiency and configure itself without the need for someone to add each element of the infrastructure to a database.”
As a result, Rabbetts says there will be “absolutely no need” for DCIM because the AI will do everything, as opposed to existing software solutions which still require data centre managers to intervene.
As noted above, when assessing a data centre’s eco-credibility, distinctions should be made between how effectively its power is used and how much is used. And again, power usage levels are too often measured at the point of highest-density consumption (usually servers), whereas opportunities for green gains might be identified in many other systems around the facility, including networking infrastructure and devices such as routers, switches, and the cabling that connects them.
According to Rockley Photonics CEO Dr Andrew Rickman, future demands made by an ever-more interconnected world will cause data centre designers to reconsider how they tackle issues of scaling infrastructure. He believes this will enable them to better align the limitations of complementary metal-oxide-semiconductor (CMOS) IC processing with fibre optic technology’s potential performance gains, energy efficiencies, and ecological advantages (glass rather than copper cable). For Rickman, “re-inventing” the data centre network is necessary for future green gain. “Bandwidth is scaling at such a fast rate that data centre operators are actively deploying single-mode fibre infrastructure to future-proof the networking infrastructure for future network equipment upgrades.
“This contributes to the ‘greenness’ of the data centre by saving the ripping and stripping of generational infrastructure links associated with other transmission media, and enables higher bandwidth capabilities over fewer cables by means of wavelength division multiplexing.”
Rickman continues by saying that it has been suggested that all-optical switching solutions could reduce the energy requirement per bit by multiple orders of magnitude. While he admits all-optical packet switching is still some way from becoming a commercial reality, he says the right combination of CMOS and photonics in switching has the potential to reduce networking power and cost by a factor of ten.
Meanwhile, for data centre managers seeking less technologically-demanding solutions for making their facilities greener, MigSolv’s Rabbetts has the following simple advice: don’t allow cardboard and packaging into the data centre.
“Nearly every data centre I have visited in 30 years has had cardboard and other rubbish in it. Cardboard and packaging creates dust. Dust is a fire hazard and it blocks fans. Fans have to work harder, meaning they draw on more power, which creates more heat, which means the air-con works harder, which requires more power and therefore more CO2. Remove the cardboard and you improve the data centre energy efficiency immediately.”