Developing greener data centres

By Phil Andrews, operations director, Data Centre, Cisco for European Markets

The ever-increasing power consumption of data centres is rapidly putting energy efficiency at the top of the data centre manager's agenda. Even if energy costs and the threat of a power shortage were not driving this efficiency agenda, the prospect of carbon capping and legislation soon would. Reducing the demands that businesses place on the data centre is not an option, so what are the alternatives?

This article looks at a combination of subtle trends driving the impending power crisis in the data centre, strategies and technologies to reduce power consumption whilst meeting the evolving needs of the business, and approaches for the short-term and long-term future.

From barely being an issue a few years ago, the environmental impact of data centres has risen to the top of many IT managers' agendas. The very real concerns over power consumption call for a convergence of technological and non-technological solutions, says Phil Andrews of Cisco Systems.

How green is your data centre? Until recently, such a question would have raised eyebrows among IT managers. But with rising storage requirements and growing levels of data centre infrastructure, the increase in power consumption of such facilities has become harder to ignore, even though accurate measures of data centre power use are difficult to come by.

Historically, power consumption has not been an issue for data centre managers for a number of reasons. First and foremost, data centres have often sat at the heart of strategic moves to expand or improve the business, and as such have not usually had to contend with cost-containment measures.

A second reason is that IT divisions have not traditionally had responsibility for the environmental impact of their data centres. Facilities departments usually foot the power bill and are often in charge of implementing environmentally-friendly practices.

Thirdly, there has never been much of a green alternative to data centres. Unlike, say, corporate air travel, you cannot just stop using IT storage systems and expect the enterprise to carry on as before.

As a consequence, some data centres have been allowed to turn into the gas guzzlers of the IT world. It takes about 830 pounds of coal to run a computer for a year. And in the case of servers, research by Intel shows less than 20 percent of power actually goes to the CPU.

This carefree attitude to power use is changing now, though, as companies face spiralling bills to maintain their sprawling data centre operations.

Data storage requirements are currently expanding at a compound annual growth rate of between 40 percent and 70 percent. Server use grew by 12 percent in 2005 and is expected to increase.

As a result, energy costs are expected to mushroom from 10 percent to as much as 30 percent of average IT budgets, overtaking all other forms of data centre expenditure and meaning IT managers will effectively lose a fifth of their budgets to power consumption.

Exacerbating the problem is the fact that cooling tends to become less efficient as power consumption rises. The simplest way to increase cooling to a given rack of equipment is to simply open up more floor tiles.

While this is a simple fix in the short run, it does not scale much beyond two or three tiles, because the cooling air directed at one rack is 'stolen' from adjacent racks, reducing the cooling they receive.

Another reason is that as more floor tiles are opened up for a particular rack, the distance from tile to rack increases. The cooling system becomes less efficient because it ends up cooling the general atmosphere of the data centre as well as the equipment.

Both of these effects result in higher cooling bills, a reduced ability to cool equipment in the data centre on a per-rack basis and a less efficient cooling system.

Since cooling and heat removal are typically growth constraints in the data centre, this wasted cooling capability will act as a drag or a cap on growth.

Over the next three years, says Gartner, 50 percent of large organisations will face an annual energy bill that is higher than their yearly server budget. Google has famously reached this point already. And it gets worse.

In 2005, the University at Buffalo paid US$2.3 million for a new supercomputer, only to find there was not enough power to switch it all on.

An increasing number of data centre managers are similarly finding that there simply is not enough power available to expand their operations any further.

Gartner says most data centres are now operating at 100 percent capacity in terms of power and cooling, versus 70 percent capacity for data storage, meaning that energy, not storage capacity, is now the main limiting factor on growth. (Availability of suitable space is also an issue.)

This puts data centre managers in a difficult position, since demand for IT storage is not going to go away.

If anything, compliance requirements such as the banking sector’s Basel II or Sarbanes-Oxley regulations, combined with the need to roll out ever faster and more complex IT applications, are increasing the demand for data centre services.

As a result, the only way to go is to cut power consumption and thereby reduce the environmental impact of data centre operations. Doing this is not easy. The actual amount of power required by data centre devices is only part of the equation.

Each watt consumed by IT infrastructure carries an additional ‘burden factor’ of between 1.8 and 2.5 for power consumption associated with cooling, lighting, conversion and distribution, all essential energy-consuming services that have to be taken into account in efficiency plans.
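To make that arithmetic concrete, here is a minimal sketch; the burden factors are the range quoted above, while the 10kW IT load is an assumed example rather than a measured figure.

```python
# Illustrative sketch: facility-level load implied by the 'burden factor'.
# The 10 kW IT load is an assumption for illustration, not a measurement.

def total_facility_load_kw(it_load_kw: float, burden_factor: float) -> float:
    """Each watt of IT load carries burden_factor additional watts for
    cooling, lighting, conversion and distribution."""
    return it_load_kw * (1 + burden_factor)

it_load_kw = 10.0
for burden in (1.8, 2.5):
    total = total_facility_load_kw(it_load_kw, burden)
    print(f"burden factor {burden}: {total:.0f} kW at the facility level")
# burden factor 1.8: 28 kW at the facility level
# burden factor 2.5: 35 kW at the facility level
```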

In addition, simply checking the power rating on the back of a device will not necessarily give you an accurate picture of how wasteful it is; its processing power and utilisation are also critical factors in determining its overall efficiency.

Because of all this, it is not easy to measure and track data centre power consumption accurately. Even now, few IT managers build operating efficiency considerations into their purchasing criteria, although it is likely many will need to soon.

The good news is that recent developments by equipment vendors have led to a number of innovations that can help data centres run more efficiently. Server manufacturers, for example, are looking at introducing variable power consumption based on CPU activity.

The beneficial effects of this will be tempered, however, by the fact that server virtualisation strives to increase CPU utilisation to upwards of 80 percent.

Another option is the creation of blade centres and multi-core CPUs. This will raise the percentage of power going to the CPUs on a per-server basis, improving the overall power efficiency.

It will not necessarily reduce the power per rack, though, without other measures such as IO consolidation.

Where there is perhaps more scope for improvement is in the data centre's network components, which can be used to create efficiencies in three ways (a short sketch of the first follows the list):

  • By switching to devices that offer more processing power per watt.
  • By incorporating more services into each device, so that redundant devices can be removed from the infrastructure.
  • By using virtualisation to ensure that the remaining devices are used as efficiently as possible.
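On the first point, 'processing power per watt' is a simple ratio to compare across candidate devices. A minimal sketch, with all device figures invented purely for illustration:

```python
# Hypothetical comparison of switches on throughput per watt rather than
# nameplate power alone. All figures below are invented for illustration.

devices = [
    {"name": "switch_a", "throughput_gbps": 720, "power_w": 4000},
    {"name": "switch_b", "throughput_gbps": 1440, "power_w": 6000},
]

for d in devices:
    d["gbps_per_watt"] = d["throughput_gbps"] / d["power_w"]

best = max(devices, key=lambda d: d["gbps_per_watt"])
print(f"most efficient: {best['name']} at {best['gbps_per_watt']:.2f} Gbps/W")
# most efficient: switch_b at 0.24 Gbps/W
```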
Looking at perhaps the most obvious measure for reducing power consumption, the efficiency of the devices themselves, it is fair to say that virtually all equipment manufacturers are working hard to bring leaner machines to market.

As an example, the efficiency of power supplies for the Cisco Catalyst 6500, the most widely used switch on the market, has improved from 70 percent to 80 percent since it was introduced in 1999.

Forthcoming Cisco power supplies are expected to be 90 percent efficient. At the same time, Cisco is continuing to reduce the power per port required by its data centre platforms, with a 30 percent to 50 percent reduction goal.

What is also significant about many of these new, more efficient platforms is that they can support a greater range of services. This can have a major impact on power consumption.

A typical application server may have multiple appliances associated with it, such as firewalls, secure sockets layer termination devices and load balancers, each with its own power and cooling requirements.

A rough and ready calculation shows these could add up to an additional 2700W of power and cooling load per server, a considerable drain across the entire data centre.
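One hedged way to arrive at a figure of that order is sketched below; the per-appliance draws are assumptions, not vendor specifications.

```python
# Illustrative reconstruction of the 'rough and ready' 2700 W figure.
# Per-appliance draws are assumed values, not vendor specifications;
# the burden factor sits within the 1.8-2.5 range quoted earlier.

appliance_draw_w = {"firewall": 300, "ssl_termination": 300, "load_balancer": 300}
burden_factor = 2.0

it_load_w = sum(appliance_draw_w.values())   # 900 W at the plug
total_w = it_load_w * (1 + burden_factor)    # plus cooling and distribution
print(f"appliance load per server: {total_w:.0f} W")  # 2700 W
```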

Nowadays, however, functions such as security and load balancing can be incorporated into the network fabric, making it possible to eliminate the appliances and their associated power loads.

Doing this has several added bonuses. It lowers the complexity of the overall infrastructure, making it more manageable, and it also reduces latency and eliminates single points of failure.

Finally, virtualisation can further increase disk utilisation by around 70 percent simply by incorporating all a data centre’s disparate storage devices into a single fabric that is then compartmentalised logically rather than physically.

In a virtual storage area network, each device can be ‘filled up’ to full capacity with data from various sources and applications, so fewer devices need to be used at any point in time.

In addition, the network can give priority to more efficient devices, so that those that represent the greatest drain on resources are only used when absolutely necessary.
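A minimal sketch of the placement policy just described, filling the most power-efficient devices first so the least efficient can stay idle; the device names, capacities and efficiency figures are invented for illustration:

```python
# Sketch: assign data to the most efficient devices (lowest W/TB) first,
# so the biggest power drains are touched only when capacity demands it.
# All device figures below are invented for illustration.

devices = [
    {"name": "array_a", "capacity_tb": 20, "watts_per_tb": 8, "used_tb": 0.0},
    {"name": "array_b", "capacity_tb": 10, "watts_per_tb": 15, "used_tb": 0.0},
    {"name": "tape_sub", "capacity_tb": 40, "watts_per_tb": 25, "used_tb": 0.0},
]

def place(data_tb: float) -> None:
    """Fill devices in order of efficiency until the data is placed."""
    for dev in sorted(devices, key=lambda d: d["watts_per_tb"]):
        take = min(dev["capacity_tb"] - dev["used_tb"], data_tb)
        dev["used_tb"] += take
        data_tb -= take
        if data_tb <= 0:
            return
    raise RuntimeError("fabric out of capacity")

place(25)  # 20 TB lands on array_a, 5 TB on array_b; tape_sub stays idle
for dev in devices:
    print(dev["name"], dev["used_tb"], "TB used")
```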

The benefits of virtualisation can be significant. Taking a tape subsystem offline can save nearly €3,000 in power and cooling per year.

Taken together, these measures could reduce data centre power requirements by up to 85 percent, certainly enough to allow significant further expansion in storage area network use at current energy levels.

Storage area networking technologies can also help reduce server power requirements in a number of other ways.

Aside from power conversion losses, peripheral component interconnect cards and hard drives are the two biggest non-CPU power loads on a typical dual-core server, so moving to diskless servers will potentially remove a 72W load.

This translates into approximately 1.2kW per rack (at roughly 16 such servers per rack), in addition to reducing costs and improving the availability of servers. Another big area of opportunity is multifabric input/output and server I/O consolidation.

Consolidating storage and Ethernet connections on a single link reduces the number of network interface card ports required on the server (as well as switch ports), reducing the amount of cabling needed and thus improving airflow around the rack.

There are other areas of technical innovation that could create further savings.

As an example, Cisco has an Automated Power Management System (AMPS) to control energy consumption in laboratories where it develops and tests new equipment.

These labs represent approximately 20 percent of Cisco's real estate, although the testing equipment is rarely used continuously. The system identifies equipment not in use and automatically switches it off.
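Cisco has not published the internals of AMPS, but the general idea of such a policy can be sketched as follows; the device names and the idle threshold are assumptions for illustration:

```python
# Hedged sketch of an idle-equipment power-down policy in the spirit of
# AMPS as described above. Thresholds and device names are assumptions.

import time

IDLE_CUTOFF_SECONDS = 4 * 3600  # assumed: power off after 4 idle hours

lab_gear = [
    {"name": "test_rig_1", "last_active": time.time() - 5 * 3600, "on": True},
    {"name": "test_rig_2", "last_active": time.time() - 600, "on": True},
]

def sweep(now: float) -> None:
    """Switch off any powered device idle for longer than the cutoff."""
    for dev in lab_gear:
        if dev["on"] and now - dev["last_active"] > IDLE_CUTOFF_SECONDS:
            dev["on"] = False  # in practice: command a managed PDU outlet
            print(f"powered down {dev['name']}")

sweep(time.time())  # powers down test_rig_1 only
```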

Separately, Cisco is also partnering with the U.S. Department of Energy's Lawrence Berkeley National Laboratory to research technologies that could significantly reduce energy demands, as well as improve reliability and lengthen equipment life in data centres.

The technology eliminates power conversion losses by using DC (direct current) rather than AC (alternating current) power to provide electricity throughout the data centre.

According to Intel, AC to DC power conversion losses account for around 36 percent of the total server power budget in a typical data centre.
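To put that figure in context, a trivial calculation under an assumed server budget (the 500W draw is an assumed example; the 36 percent share is the Intel figure quoted above):

```python
# Illustrative arithmetic for the quoted Intel figure. The 500 W server
# budget is an assumed example; the 36% loss share is quoted above.

server_budget_w = 500
conversion_loss_share = 0.36

wasted_w = server_budget_w * conversion_loss_share
print(f"per server: {wasted_w:.0f} W lost to AC-to-DC conversion")  # 180 W
```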

On a more general level, using IP networks to monitor and control energy use can help reduce power consumption across the business as a whole, a concept which Cisco has dubbed ‘Connected Real Estate’.

For all this, technology is clearly only part of the answer to the issue of data centre power consumption. As indicated above, it can be a challenge even to identify whose responsibility energy supply is in the first place.

Organisations need to take a holistic view of the problem. That said, technology can now have a significant impact on power consumption, and it makes sense to start assessing developments in this field now.

Currently the power consumption of data centres is not regulated, but with climate change moving inexorably up the political agenda worldwide this is unlikely to remain the case for long.

And there are other pressing reasons to evolve to more environmentally-friendly operations as soon as possible, including the growing likelihood of outages as power and cooling systems come under increased stress.

Specifically regarding the network components of the data centre, there are a number of steps you can take now to reduce power consumption. They are:
  • Consolidate networks – fewer networks mean lower cost and a reduced storage power draw.
  • Avoid gateways and consolidate functions – specialised appliances are not power efficient because of redundant internal cooling, switching and power conversion elements.
  • Bring in virtualisation – one network or network element per customer is inefficient in terms of power and space, so consider technologies such as Multiprotocol Label Switching to enable future virtualisation.
  • View power requirements holistically and prioritise efforts by overall power saved (a simple ranking sketch follows this list).
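The last point lends itself to a simple model: weight each candidate measure by the facility-level watts it saves, burden factor included. A minimal sketch, with all per-measure figures assumed for illustration:

```python
# Sketch of holistic prioritisation: rank candidate measures by facility-
# level savings, i.e. device watts saved times (1 + burden factor).
# All per-measure figures below are assumptions for illustration.

BURDEN_FACTOR = 2.0  # within the 1.8-2.5 range quoted earlier

device_watts_saved = {
    "consolidate networks": 1200,
    "remove gateway appliances": 900,
    "virtualise storage devices": 2000,
}

ranked = sorted(device_watts_saved.items(), key=lambda kv: kv[1], reverse=True)
for measure, device_w in ranked:
    facility_w = device_w * (1 + BURDEN_FACTOR)
    print(f"{measure}: ~{facility_w:.0f} W saved at the facility level")
```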
The need to save energy for the sake of the planet is now well established. Within data centres, the need to save energy is no less critical, not just for the sake of the environment but in order to ensure the enterprise’s viability, too. Now is the time to go green.

Cisco is exhibiting at Storage Expo 2008, the UK's definitive event for data storage, information and content management. Now in its eighth year, the show features a comprehensive free education programme and over 100 exhibitors at the National Hall, Olympia, London, from 15-16 October 2008. www.storage-expo.com

Source: StoragePR