Pay-by-use computing is coming
Why own computers, when you can just buy the number-crunching muscle you need? Anand Parthasarathy explores the emerging options of utility computing.
Heterogeneous specialist providers of hardware, software, networking and storage must come together on one platform, combining the functions of storage, site and network management, to offer a unified utility computing service.
ANALOGIES ARE important guides to understanding; but terminology can be a tricky thing. Take the recent excitement in the enterprise computing arena about the emerging option called `utility computing'.
The theory is that large (and not so large) corporates should be able to pay for the precise computing power that they need: neither more, nor less. This is beginning to sound interesting to many of them, because they know that at any given moment more than half their servers and storage devices are idle. That can add up to a lot of wasted terabytes and tera-rupees.
What if one could get number-crunching power as easily as turning on a water tap or a light switch, two utility services that one is accustomed to paying for by use? Hence the term `utility computing', and no confusion so far. Then some linguistically challenged experts stepped in and started calling it by another name: grid computing. Why `grid'? Because in the U.S., they say `grid' for the electricity supply network, dummy! This is when confusion became worse confounded, because grid computing is an altogether different ball game.
It is a term generally understood to mean a vast, and possibly informal, network of computers, across countries and continents, combining their computational muscle power to solve a huge problem like DNA code breaking or finding a cure for cancer.
The world's largest working computer grid was demonstrated in early September this year, when over 6,000 computers at 78 sites worldwide joined forces to crack problems in particle physics.
It was called the Large Hadron Collider Computing Grid. The Large Hadron Collider is a mammoth machine under construction at the CERN particle physics centre in Geneva, and the grid will deal with the 15 petabytes (that is, 15 million billion bytes) of data that will be produced by this work.
Single shared pool
In a narrower sense, Grid Computing is also understood to mean a network within your own organisation: uniting all your servers and storage into a single shared pool that acts as a single computer. In other words, all your applications tap into all your computing power.
This type of grid computing is being promoted by different companies, only they give it different names: IBM calls it `on-demand computing'. Hewlett Packard uses the term `utility data centre'. Sun Microsystems has given it yet another name, `N1', and describes it as a virtual data centre.
In fact, virtualisation is the name of the game: these companies offer software that helps enterprises manage hundreds of servers and storage units spread across continents, and operate them as if they were part of a giant grid, swapping and reassigning tasks `on the fly' as demand peaks in one application area or wanes in another.
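The idea of a shared pool that reassigns capacity on the fly can be illustrated with a minimal sketch. This is a hypothetical toy model, not any vendor's actual software; the class and application names are invented for illustration.

```python
# Toy model of a virtualised server pool: applications borrow servers
# when demand peaks and return them when it wanes.

class ComputePool:
    def __init__(self, total_servers):
        self.free = total_servers
        self.assigned = {}          # application name -> servers held

    def request(self, app, servers):
        """Grant servers to an application, up to the pool's spare capacity."""
        granted = min(servers, self.free)
        self.free -= granted
        self.assigned[app] = self.assigned.get(app, 0) + granted
        return granted

    def release(self, app, servers):
        """Return servers to the pool when an application's demand wanes."""
        returned = min(servers, self.assigned.get(app, 0))
        self.assigned[app] -= returned
        self.free += returned
        return returned

pool = ComputePool(total_servers=100)
pool.request("payroll", 60)     # payroll peaks at month-end
pool.release("payroll", 40)     # demand wanes; capacity returns to the pool
pool.request("web-store", 70)   # freed capacity is reassigned elsewhere
```

The point of the sketch is simply that no server belongs permanently to any one application: capacity flows to wherever demand is at that moment.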
It is not just the big-name computer makers who are into grid computing; storage management specialists like EMC, NetApp and Veritas have created solutions that allow storage networks to work in tandem with server grids, releasing surplus space in one storage area network when there is a storage crunch elsewhere.
New, unusual concept
A few weeks ago, a U.S.-based Indian company mooted a new and somewhat unusual concept of `compute pools': flexible network-attached processing engines that would do for the server side of computing deployments what network-attached storage has done for storage. Azul Systems' imminent offering was reported in The Hindu of November 7, 2004 ("Indian startup crafting radical server technology").
Pure utility computing, on the other hand, assumes you have very thin resources at your own end, maybe some of those `thin client' computing platforms. "It is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges for specific usage rather than a flat rate.
"Like other services, such as electrical power, it seeks to meet fluctuating customer needs, and charge for the resources based on usage rather than on a flat-rate basis. This approach is sometimes known as `pay-per-use' or `metered services'." (Quoted from www.whatis.techtarget.com.)
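Metered charging of this kind is easy to picture in a few lines. The sketch below is purely illustrative: the tariff, units and function name are invented, not taken from any actual provider's billing scheme.

```python
# Hypothetical pay-per-use billing: the customer pays only for the
# CPU-hours actually consumed, instead of a flat rate.

RATE_PER_CPU_HOUR = 2.50   # assumed tariff, in rupees

def metered_bill(usage_log):
    """Sum the CPU-hours actually used and price them at the metered rate."""
    total_cpu_hours = sum(cpus * hours for cpus, hours in usage_log)
    return total_cpu_hours * RATE_PER_CPU_HOUR

# A month in which demand fluctuates: idle periods cost nothing.
usage = [(8, 10), (2, 100), (0, 500)]   # (CPUs used, hours) per period
print(metered_bill(usage))              # 280 CPU-hours -> 700.0
```

Contrast this with a flat rate, where the 500 idle hours in the example would be billed just the same as the busy ones.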
Internet Service Providers and Internet Data Centres already provide web services to multiple customers in a manner that makes the end user feel he is dealing directly with one web resource.
Application Service Providers provide clients with the ability to access pricey software application tools as and when they require them, without having to license them directly.
So why not take the trend to its logical conclusion and provide the full gamut of computing and storage services on a pay-per-use basis?
That is the compelling logic behind utility computing, and this may well be the next wave of enterprise computing, driven by countries like India and China where the scenario is made up of a large number of small and medium players rather than a small number of very big corporate customers. To them, utility computing will seem like an idea whose time has come.
For this to happen in a big way, heterogeneous specialist providers of hardware, software, networking and storage must come together on one platform, combining the functions of storage, site and network manager, and offer a `single window' of opportunity that will appear as one unified utility computing service to the thousands of user agencies out there. For them it should be as simple as opening a water tap and paying by the litre.