IT production in a modern data centre involves many different components. Particular attention is given to the network, but also to compute and storage.
While the priority in the past was raw computing speed, today the big challenge is to organise data centre operations as efficiently as possible.
The number of virtual servers, the volume of data and, with them, the number of systems to be managed are growing constantly in almost every data centre. The number of IT staff, by contrast, usually stays the same.
One way to get on top of this growth is large-scale homogenisation: the fewer different systems you run, the fewer administration tools you need to master. You probably already have a preferred manufacturer for your data centre, for example for servers.
For mega-data centres such as those at Google, Amazon and Facebook, however, this kind of homogenisation is in no way adequate. These companies have for years pursued wide-scale standardisation combined with a great deal of automation. In principle, these massive data centres now consist of grids of innumerable blocks of the same type, each containing all the necessary components (compute, storage, virtualisation, network). As a result, little additional work is involved in upgrades or in replacing modules when they fail.
This basic idea, known as HyperConverged Systems (HCS) or web-scale IT infrastructure, can also be used in your data centre: data centre modules of a uniform type
- which can be managed as if they were a single device,
- which organise themselves to provide collaboration and redundancy,
- which compensate for failures by repairing themselves.
Wizards ensure that software updates are rolled out automatically. With just a few mouse clicks you trigger your hypervisor update and then move on to more important tasks. The system itself ensures that each host is “emptied out” of its workloads, updated and then reintegrated into the grid, host by host, until the entire grid is up to date, without you ever having to intervene.
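The rolling-update loop described above can be sketched as follows. This is a minimal illustration, not the vendor's actual implementation; the helper operations (evacuate, apply_update, reintegrate) are hypothetical placeholders for what the HCS software performs internally.

```python
def rolling_update(hosts, evacuate, apply_update, reintegrate):
    """Update hosts one at a time so the grid stays in service throughout.

    evacuate      -- live-migrate the host's workloads to the remaining hosts
    apply_update  -- install the new hypervisor version on the emptied host
    reintegrate   -- return the updated host to the grid
    (All three are hypothetical callbacks standing in for internal HCS steps.)
    """
    updated = []
    for host in hosts:
        evacuate(host)        # host is now "emptied out"
        apply_update(host)    # safe to update: nothing is running here
        reintegrate(host)     # host rejoins the grid before the next one starts
        updated.append(host)
    return updated
```

Because only one host is ever out of the grid at a time, the remaining hosts carry its workloads and the update never interrupts service.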
Likewise, separate management of hard disk devices is no longer required. The overarching HCS software ensures that a workload's data is always located on the same physical medium as the application, so reads and writes stay local instead of crossing a storage network; this makes the server/storage connection faster than even a Fibre Channel link. The system also takes care of redundancy itself: additional copies and replications can be created and configured.
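The placement idea behind this can be sketched as locality-aware replication: the primary copy of a workload's data lives on the node that runs the workload, and further copies go to other nodes for redundancy. The function and parameter names below are hypothetical, chosen only to illustrate the principle.

```python
def place_replicas(workload_node, nodes, rf=2):
    """Choose which nodes hold copies of a workload's data.

    workload_node -- the node where the application runs (primary copy is local)
    nodes         -- all nodes in the grid
    rf            -- replication factor: total number of copies to keep
    """
    replicas = [workload_node]          # primary copy stays with the workload
    for node in nodes:
        if len(replicas) == rf:
            break
        if node != workload_node:
            replicas.append(node)       # remote copies provide redundancy
    return replicas
```

The application always reads from its local copy; the remote copies exist so the data survives a node failure, after which the system can re-replicate on its own.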
The HCS modules are available in different versions, varying in computing power, storage capacity and storage speed, and optionally with additional graphics processors. Despite the standardisation, they can therefore be customised to suit the needs of your business processes.
Talk to us about how, in the future, you can be as efficient as Google & Co.