The Big Switch Off: Decarbonising the Data Centre
The global data centre industry revolves around data that is available 24 hours a day, 7 days a week: 24×7, in other words. The world is seeing unprecedented growth in data, and consequently in data retention. Data is doubling in size every two years, and this trend is set to accelerate with the explosion of Big Data and the Internet of Things (IoT). The Internet of Things will subsume the Information and Communication industry (IDC 2014).
What the industry has yet to adapt to is data that is not available on a 24×7 basis. It is assumed that everything we need to know will always be available at the touch of a button or the click of a mouse. But what would happen if we ran out of space to store all our data, if there were not enough data centres to hold it, or, more realistically, if there wasn’t enough power to keep all the data centres running all the time?
Data centres in the UK currently account for about 8% of all electricity consumed, and this is forecast to rise to about 20% in the next few years if current trends continue. The reality is that the National Grid simply does not generate enough power to support this demand. So what is the answer?
The majority of the data we generate is used only a few times, sometimes only once, and is then stored indefinitely until we need it again. A good example is a person’s bank account or a company’s trading accounts.
An average bank customer rarely needs more than the last three months of account data. Banks are archiving older material and making it increasingly difficult for consumers to look back more than two to three years.
By law in the UK, a company is required to keep seven years’ worth of accounts in case of an inspection by HM Revenue & Customs. In reality, companies rarely need access to data outside the current year’s figures; the other six years can be filed safely, and kept readily accessible, until required.
Consumers create data, but enterprises are responsible for it. Two-thirds of the bits in the digital universe are created or captured by consumers and workers, yet enterprises have liability or responsibility for 85% of it (IDC 2014).
A new way to manage data and its exponential growth is required. The effect would be not only a reduction in costs for enterprises but an overall reduction in energy consumption and CO2 emissions at a global level.
The problem faced by the data centre industry is not isolated to the UK; it is shared by every country in which data centres operate. There are numerous accounts of power outages in countries with limited capacity at peak times, and the industry’s global CO2 emissions can be compared to those of the airline industry.
The next wave of digital growth is coming with the advent of the Internet of Things (IoT) and Edge Computing, together with 5G mobile networks. Mobile operators are already recognising that IoT will be more than 4 or 5 times the size of the current cellular telephone market. Many have already launched their initial offerings, although the market is still relatively immature.
If we take the example of a limited company in the UK, it stands to reason that the current year’s accounts should be accessible all the time, but the remaining six years need not be active all the time. Those six years represent more than 85% of the company’s accounting information.
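The arithmetic behind the 85% figure is straightforward; a minimal sketch, assuming seven retained years of which only the current year must stay active:

```python
# Of the 7 years of accounts a UK company must retain,
# only the current year needs to be "hot" at any time.
retained_years = 7
active_years = 1

inactive_fraction = (retained_years - active_years) / retained_years
print(f"{inactive_fraction:.1%}")  # → 85.7%
```

Six-sevenths of the retained records (about 85.7%) could therefore sit in low-power storage.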
If we were to scale this up to a whole data centre, we could say that 85% of the data could be switched off when not required, just kept ‘warm’ for when it is needed. Taking as an example a data centre drawing a constant 1MW, it could be argued that only 150kW of that load is required to support the critical data.
- 1MW of electricity in the UK consumed on a 24×7 basis would cost an enterprise somewhere in excess of £700,000 per year at a rate of £80/MWh.
- 150kW of electricity would cost around £105,000 per year.
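The annual figures above can be checked directly; a sketch assuming a flat £80/MWh tariff and a continuous load over all 8,760 hours of the year:

```python
RATE_PER_MWH = 80        # £/MWh, flat tariff assumed for illustration
HOURS_PER_YEAR = 8760    # 365 days × 24 hours

def annual_cost(load_kw: float) -> float:
    """Annual electricity cost in £ for a constant load in kW."""
    return load_kw * HOURS_PER_YEAR * RATE_PER_MWH / 1000

print(annual_cost(1000))  # 1 MW continuous → £700,800 per year
print(annual_cost(150))   # 150 kW continuous → £105,120 per year
```

These match the rounded £700,000 and £105,000 quoted above; a real tariff would of course vary by time of day and contract.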
So what would happen to the other data that only needs to be accessed now and then? Suppose this data was made available for only a certain number of hours a day: 20, 10, 4 or even 2. Then suppose it was available only Monday to Friday, excluding bank holidays.
Let’s use the example of 2 hours a day. First, if we count only business working days (365 days minus 104 weekend days and around 8 UK bank holidays), the required active days number 253, not 365. This alone is a saving of over 30%.
Then, if we factor in the 2 hours a day, the total number of hours in a year that the data needs to be available is 506 (253 × 2), compared with the current global standard of 8,760: less than 6%.
We still need to remember that 15% of the data must be available 24×7. That 15% accounts for 1,314 hours (15% of 8,760); adding it to the 506 hours still gives only 1,820 versus 8,760 hours. This represents nearly an 80% saving on total consumption and CO2 emissions.
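Putting the working-day and duty-hour figures together, the whole calculation can be sketched in a few lines (assuming 104 weekend days and 8 UK bank holidays per year):

```python
HOURS_PER_YEAR = 365 * 24              # 8,760 hours in a year
WORKING_DAYS = 365 - 104 - 8           # minus weekends and ~8 UK bank holidays = 253

warm_hours = WORKING_DAYS * 2          # non-critical data live 2 h per working day → 506
critical_hours = 0.15 * HOURS_PER_YEAR # 15% of data stays 24×7 → 1,314 hour-equivalents
total_hours = warm_hours + critical_hours

saving = 1 - total_hours / HOURS_PER_YEAR
print(total_hours, f"{saving:.1%}")    # → 1820.0 79.2%
```

The result, roughly a 79% reduction in powered hours, is where the "nearly 80%" claim comes from.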
However, the savings do not end there. Since we no longer require electricity on a 24×7 basis at our ‘warm’ data centre site, we can negotiate the price we pay with the supplier.
If we stay out of the red-band (peak-time) pricing between 16:00 and 20:00, we can negotiate a much lower price per MWh.
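Because the warm site controls when its data comes online, the daily 2-hour window can be scheduled to miss the peak band entirely; a minimal sketch, with the 16:00–20:00 red band assumed from the text:

```python
RED_BAND = set(range(16, 20))  # 16:00–20:00 peak (red band) hours, per the example above

def avoids_red_band(start_hour: int, duration: int = 2) -> bool:
    """True if a daily window of `duration` hours misses the peak band."""
    window = set(range(start_hour, start_hour + duration))
    return not (window & RED_BAND)

print(avoids_red_band(10))  # 10:00–12:00 → True, clear of the peak
print(avoids_red_band(15))  # 15:00–17:00 → False, overlaps the red band
```

Any scheduling policy that keeps the window out of the red band gives the operator leverage to negotiate a cheaper off-peak rate.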
Technology is changing rapidly. Historically, data was stored on floppy disks and tape drives; in fact, much archived data is still held on tape. The floppy disk was superseded by what most people are familiar with today, the hard disk drive (HDD): a spinning magnetic platter read and written by a moving head.
However, newer technology, the solid state drive (SSD), enables data to be stored without having to power up a disk or tape drive. This is the same technology used in pen drives (or USB sticks, as they are sometimes called). It allows users simply to plug in the device and get instant access to their data; when finished, the user simply disconnects it from their computer or network.
The solution SmartEdge DC have developed is an enterprise level version of the portable hard drive, using common data centre components. The key is the separation of data, enabling users to differentiate between time-critical and non-time-critical data.
In essence, a new data centre has been designed without the resiliency of the 24×7 data centre. Why? One big reason is that the data stored here is not time-critical. If the data is unavailable on a particular day, the business will not stop; the problem can be rectified and business resumes. This does not mean that the data is not valuable, only that it does not require high availability.
What happens if the data becomes corrupt? The same as in a standard data centre. Enterprises never hold only one copy of data: there is always a backup, and usually a disaster recovery centre as well. If data centre A fails, data centre B is powered up and takes the load.
New data centres require a great deal of up-front capital expenditure, and old data centres carry huge operating costs due to their inefficiencies. We anticipate that the ‘warm’ data centres will be available at around 50% or less of the cost of a traditional data centre, owing to the reduction in plant equipment and the absence of backup generators.
Enterprises could choose to free up space in their existing data centres for critical-data expansion by moving non-critical data out to new, purpose-built facilities at a fraction of the cost of a typical new data centre build. There are several ways to accomplish these savings, and the good news is that the principle can be applied to all existing data centres.
This architecture could change the face of the data centre industry.
A traditional 24×7 ‘always on’ data centre stores all types of data, so downtime is not an option. The financial losses associated with data centre failures are mainly attributed to human error and loss of power, both of which can lead to the loss of critical data. These data centres are mainly Tier 3 or, sometimes, Tier 4, a measure of their resilience according to Uptime Institute guidelines.
These resilient data centres require a large amount of plant equipment, as well as backup diesel generators, to prevent downtime. The industry uses the term ‘N’ for the minimum equipment needed to operate, and resilience is measured in additions to, or multiples of, ‘N’. For example, a UPS system may be specified as N+1 or N+2: where N is 4 UPS systems, the solution requires 5 or 6 systems for resiliency. 2N means the standard requirement for operation is doubled for protection; this is obviously more expensive and is found more often in Tier 4 data centres.
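The redundancy notation maps directly to equipment counts, and hence to capital cost; a minimal sketch (the function name is illustrative, not an industry tool):

```python
def units_required(n: int, scheme: str) -> int:
    """Equipment units needed under a redundancy scheme.

    'N' is the bare minimum to operate, 'N+1'/'N+2' add spares,
    and '2N' fully duplicates the minimum.
    """
    if scheme == "N":
        return n
    if scheme == "2N":
        return 2 * n
    if scheme.startswith("N+"):
        return n + int(scheme[2:])
    raise ValueError(f"unknown redundancy scheme: {scheme}")

# With N = 4 UPS systems, as in the example above:
for scheme in ("N", "N+1", "N+2", "2N"):
    print(scheme, units_required(4, scheme))  # → 4, 5, 6, 8 units
```

The jump from 4 to 8 units under 2N illustrates why Tier 4 resilience roughly doubles plant cost, and why dropping resilience altogether cuts the warm data centre's build cost so sharply.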
The SmartEdge demand-side response (DSR) data centre is different: by design it does not offer resiliency, and so it is specified only with the parts necessary to make it work, just like a home computer or laptop. If it fails, an engineer will either resolve the issue remotely or fix it on site.