Future of High Availability
Introduction
High-Availability solutions have evolved significantly over the past few years. Many of today’s solutions combine mirroring or replication technologies with intelligent management software, enabling geographic separation of cluster nodes and storage so that disaster recovery is delivered within the High-Availability framework. But what will tomorrow’s High-Availability solutions look like?
Solution Architectures
There are arguably three realistic architectures for high availability that will remain once consolidation in the marketplace is complete:
Distributed Computing
Data is distributed among many machines. Changes are normally made on one machine and rolled out to all machines over time. This model is in use today, in a simplistic form, by software vendors who use ‘mirrors’ for downloads, such as Tucows, and in distributed computing projects such as the Search for Extraterrestrial Intelligence (SETI).
This model allows fairly low-powered machines to provide high computing capability and, more importantly, a system resilient to large-scale failure. However, conflicts can occur if the same ‘object’ is changed in two different places at the same time. This is normally overcome by running a single master for writes and multiple slaves for read operations. Problems can occur if the master is unavailable for any length of time.
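As a rough illustration of the master/slave pattern described above, here is a minimal sketch in Python. It assumes a purely in-memory store; the ReplicatedStore class and its method names are hypothetical, and a real system would replicate asynchronously over a network, so slaves would lag the master.

    import random

    class ReplicatedStore:
        """Toy master/slave replication: all writes go to the master and
        are pushed out to the slaves; reads are served by any slave."""

        def __init__(self, num_slaves=3):
            self.master = {}                        # authoritative copy
            self.slaves = [{} for _ in range(num_slaves)]

        def write(self, key, value):
            # Funnelling all writes through the master prevents the same
            # 'object' being changed in two places at the same time.
            self.master[key] = value
            for slave in self.slaves:               # a real system would do
                slave[key] = value                  # this asynchronously

        def read(self, key):
            # Reads are spread across the slaves for scalability.
            return random.choice(self.slaves).get(key)

    store = ReplicatedStore()
    store.write("status", "ok")
    print(store.read("status"))                     # -> ok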
Parallel Computing
Transactions are completed simultaneously on different servers. However, in contrast to the distributed computing model, the servers operate a co-ordinated locking arrangement, so that no conflict can occur. Today only a handful of vendors, such as Oracle with their RAC solution, have begun to provide some of the components needed for true parallel operations.
The theoretical benefits of parallel computing include improved scalability and performance. Depending on the specific solution, the servers need not be co-located or share the same disks, but if performance is also important then the links between the machines must have very low latency, which is normally only available with co-location and specialised hardware.
There are a number of ways to achieve parallel computing operations: at the application level, at the operating system level, and through middleware.
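To make the co-ordinated locking idea concrete, below is a minimal single-process sketch in Python. The LockManager class and the node names are illustrative assumptions; a real cluster would run this as a distributed lock manager over the network rather than behind a local mutex.

    import threading

    class LockManager:
        """Toy cluster-wide lock manager: a node must hold the lock on an
        object before changing it, so conflicting writes cannot occur."""

        def __init__(self):
            self._holders = {}             # object id -> owning node
            self._guard = threading.Lock()

        def acquire(self, obj_id, node):
            with self._guard:
                holder = self._holders.setdefault(obj_id, node)
                return holder == node      # refused while another node holds it

        def release(self, obj_id, node):
            with self._guard:
                if self._holders.get(obj_id) == node:
                    del self._holders[obj_id]

    manager = LockManager()
    print(manager.acquire("row:42", "node-a"))   # True  - lock granted
    print(manager.acquire("row:42", "node-b"))   # False - node-a still holds it
    manager.release("row:42", "node-a")
    print(manager.acquire("row:42", "node-b"))   # True  - lock is free again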
On-Demand Computing
Only one instance of an application processes all transactions, but other instances are available to take over if a fault occurs.
This can be achieved using co-located hot-standby servers or disparate mirrored data. The mirrored solution is only practicable where performance is not a priority.
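The take-over in such a pair is typically heartbeat-driven. The following is a minimal sketch in Python under simplified assumptions: a single shared clock, and one in-process FailoverPair object standing in for two real servers. All names are illustrative.

    import time

    class FailoverPair:
        """Toy on-demand model: one active instance processes all
        transactions; a hot standby promotes itself when heartbeats stop."""

        def __init__(self, timeout=3.0):
            self.active = "primary"
            self.timeout = timeout
            self.last_heartbeat = time.monotonic()

        def heartbeat(self):
            # Called periodically by the primary while it is healthy.
            self.last_heartbeat = time.monotonic()

        def check(self):
            # Run on the standby: take over if the primary has gone quiet.
            if time.monotonic() - self.last_heartbeat > self.timeout:
                self.active = "standby"
            return self.active

    pair = FailoverPair(timeout=0.1)
    print(pair.check())      # primary - heartbeats are still fresh
    time.sleep(0.2)          # simulate the primary failing silently
    print(pair.check())      # standby - the hot standby has taken over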
So What is Grid Computing?
The constant marketing push for new buzzwords and news items has perpetuated a view that true grid computing for the enterprise is available, or will shortly be available. Distributed applications like SETI are already running in a form of grid-like architecture, but to be useful to the enterprise, databases and other applications must provide synchronised data. Even the director of SETI rejects the grid label, preferring the term “public resource computing”.
The term “grid computing” was originally coined by Carl Kesselman and Dr Ian Foster of America’s Argonne National Laboratory in 1998. They drew an analogy between the supply of computing power and the supply of electricity, which is delivered when and where you need it without your needing to worry about where it came from.
In 2000 Dr Foster refined his definition with Steve Tuecke to describe the essence of grid computing. They came up with three essential components:
1) coordinates resources that are not subject to centralised control …
(A Grid integrates and coordinates resources and users that live within different control domains: for example, the user’s desktop vs. central computing; different administrative units of the same company; or different companies; and addresses the issues of security, policy, payment, membership, and so forth that arise in these settings. Otherwise, we are dealing with a local management system.)
2) … using standard, open, general-purpose protocols and interfaces …
(A Grid is built from multi-purpose protocols and interfaces that address such fundamental issues as authentication, authorisation, resource discovery, and resource access. It is important that these protocols and interfaces be standard and open. Otherwise, we are dealing with an application specific system.)
3) … to deliver non-trivial qualities of service.
(A Grid allows its constituent resources to be used in a coordinated fashion to deliver various qualities of service, relating for example to response time, throughput, availability, and security, and/or co-allocation of multiple resource types to meet complex user demands, so that the utility of the combined system is significantly greater than that of the sum of its parts.)
Drawing from Dr Foster’s work, if and when grid computing is delivered it will be a collection of self-healing, ultimately scalable and self-learning systems that can accept new servers or applications without downtime or manual re-configuration. Servers will re-configure on the fly to provide extra resources to applications when and where they are needed, as sketched below. However, it is unrealistic to expect this to deliver enterprise computing facilities for all applications or users in the coming decade. Security and predictable performance, as well as cultural changes, are all challenges that must be met before the enterprise will embrace true grid computing.
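As a loose sketch of that on-the-fly reallocation, the following Python fragment shows one naive policy. The rebalance function, the 0.8 utilisation target and the data shapes are all assumptions made for illustration, not a description of any real grid scheduler.

    def rebalance(applications, spare_servers, target=0.8):
        """Toy grid-style policy: hand spare servers to whichever
        application is hottest, with no downtime or manual steps."""
        while spare_servers:
            hottest = max(applications.values(),
                          key=lambda app: app["load"] / app["servers"])
            if hottest["load"] / hottest["servers"] <= target:
                break                    # nothing is over target; keep spares
            hottest["servers"] += 1      # 're-configure on the fly'
            spare_servers.pop()

    apps = {
        "billing":  {"load": 5.0, "servers": 2},   # overloaded
        "intranet": {"load": 0.5, "servers": 1},
    }
    rebalance(apps, spare_servers=["s1", "s2"])
    print(apps["billing"]["servers"])               # -> 4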
The Needs of Corporate Enterprise
Distributed computing provides excellent availability and potential performance, but for reasons of security, predictability and the need for coherent, synchronised data, it is not a model normally accepted by enterprises outside the field of number-crunching research processing.
Enterprises need a balance between cost and performance, while maintaining control, availability
...
...