Friday, February 22, 2008

Future-Proofing the Data Center: Will you be seen as a Visionary or a Fool?

Written by Ken Salchow, Jr
Wednesday, 13 February 2008

Innovation is rife with tales of visionaries and fools; the difference between the two is often simply the application of time. Some innovations are touted as visionary breakthroughs but quickly become dim memories, simple fads, or punch lines. Others are dismissed as pure foolishness, and only after many years and societal changes are they recognized for the vision they embody.

For example, take the new Airbus A380 “superjumbo” jet. As the largest commercial passenger jet ever built, it has been heralded as a model of modern technology and achievement. Despite the hundreds of technological advances it includes, it is its sheer size and capacity that have captured most people’s imagination. And yet, this modern marvel is still, in some ways, surpassed by another plane built more than 60 years ago and seen as a madman’s delusion—the Hughes H-4 Hercules, better known as the “Spruce Goose.” What a difference 60 years can make. Who knows how history will treat the A380 when another 60 years go by?
The technology industry is often very similar.
Who knew at the turn of the 21st century that dedicated electronic books would exist, that the market would nearly completely fail less than three years later, or that they would be resurrected less than five years after that? This vacillation and uncertainty make it hard to predict what the technological landscape will look like in the future, something many technology pundits have learned the hard way (Bill Gates, Ken Olson, etc.). IPv6 has been around for nearly 15 years, and service-oriented architecture (SOA)—with all the associated protocols and standards it encompasses—remains an often discussed, but rarely implemented, ideal. So what is the modern technologist, IT manager, or CIO to rely on when trying to build the best, visionary solutions for the future without looking like a fool down the road?

Scalability

Certainly, any solution deployed must be capable of handling today’s needs as well as the expected load of the future. The fact that the future is, by nature, unknown is the essence of the problem. A smart engineer will apply the simple yet well-proven principle of “buy the biggest, most powerful widget the budget will allow.” When purchasing a server platform, this might mean buying the system with the greatest number of processor sockets and the most addressable RAM—even though the current budget may not allow you to fully populate them. At least in this case—if the current system becomes overloaded—you always have the option of adding more processors and memory down the road, without having to replace the entire system or spend all your money upfront on performance you may never need.
This may also mean architecting your solution so as to virtualize as many components within your architecture as possible, specifically the “one-to-many” virtualization that enables you to grow beyond the bounds of a single hardware device or instance. For example, instead of buying a single, extremely powerful server platform, you might consider buying several less powerful (and less costly) systems and using the remaining budget to buy an Application Delivery Controller (ADC) to virtualize the application servers. This enables you to incrementally add low-cost servers to the architecture as performance demands increase, addressing future concerns without re-architecting or replacing existing design elements.
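This one-to-many pattern can be sketched in a few lines. The example below is a minimal, illustrative round-robin virtual server, not any vendor's API; all class and server names are assumptions made for the sketch:

```python
import itertools

class VirtualServer:
    """Minimal sketch of an ADC-style virtual server: one public
    endpoint fronting a pool of real servers. Illustrative only;
    real ADCs also consider health, load, and session persistence."""

    def __init__(self, pool):
        self.pool = list(pool)
        self._cycle = itertools.cycle(self.pool)

    def add_member(self, server):
        # New capacity joins the pool without clients noticing and
        # without re-architecting the design.
        self.pool.append(server)
        self._cycle = itertools.cycle(self.pool)

    def route(self, request):
        # Simple round-robin distribution across the pool.
        server = next(self._cycle)
        return f"{server} handles {request}"

vip = VirtualServer(["app1", "app2"])
print(vip.route("GET /"))   # app1 handles GET /
vip.add_member("app3")      # scale out incrementally as demand grows
```

The point of the sketch is the shape of the design: clients talk only to the virtual endpoint, so the pool behind it can grow one inexpensive server at a time.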

Extensibility

Another critical element in architecture design today is the need for extensibility. Scalability helps address the unknown performance demands of the future, while extensibility helps address the unknown features and functions of the future. Ideally, whenever possible, today’s architects need to find solutions that are not simply point products. Obviously, products that solve the business need of the project are all viable candidates, but preference should be given to products that, in addition to solving the problem at hand, are also capable of solving other problems through the addition of hardware or software components. Whether or not one can foresee the need for these additional solutions is often irrelevant. The extensible capability of the solution is what will count when you are planning for an unknown future.

Current extensibility should not be the only criterion, however. A visionary technologist will also examine the design and track record of products to evaluate their future extensibility. Software applications built to SOA principles are a perfect example: not only does their virtualized, object-oriented nature provide robust extensibility in the present, but—depending on the history of the vendor and third-party support—it can provide built-in protection against the unforeseen needs of tomorrow. Products inherently designed to have new, unique technology “plugged in” down the road at least have the potential to adapt to your changing needs; with those that lack that capability, what you see is what you get.
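The “plugged in” idea reduces to a stable extension point defined today that modules written tomorrow can attach to. Here is a minimal sketch of that registry pattern; the module names and the `register` helper are illustrative assumptions, not any particular product’s mechanism:

```python
# A hypothetical host application exposes one extension point: a
# registry of named plug-in classes that all share a run() interface.
PLUGINS = {}

def register(name):
    """Decorator that adds a plug-in class to the registry."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

@register("reporting")
class ReportingModule:
    def run(self):
        return "monthly report"

# Years later, an unforeseen need appears; it plugs into the same
# interface without modifying the original application.
@register("auditing")
class AuditingModule:
    def run(self):
        return "audit trail"

for name, cls in PLUGINS.items():
    print(name, "->", cls().run())
```

The host never needed to anticipate the auditing module; it only needed to commit to the extension point.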

Adaptability

Just as important as being able to add more capacity and functionality is the ability to adapt to changes in the business need or to the unique characteristics of an organization’s implementation. There are two significant reasons for this. The first is simply a reflection of the de facto 80/20 rule: it is usually more efficient and cost-effective to buy shrink-wrapped solutions that solve 80% of the problem, as long as they are adaptable, or customizable, enough to be modified to solve the remaining 20%. Again, many commercially available SOA-based software solutions show this trend, providing development platforms on which to build custom applications that solve unique business problems, rather than the final solutions themselves.

The second reason adaptability is so important is that technology standards today are also built with the ideals of scalability and extensibility in mind. As such, many standards allow for custom modification on a per-implementation basis without violating the “standard.” This leads to situations where two similar applications, both built to the same “standard,” may not be able to work with each other. The Session Initiation Protocol (SIP) and Hypertext Transfer Protocol (HTTP) are two perfect examples where individual implementations (like custom headers) can make integration difficult.
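The custom-header problem is easy to demonstrate. In the sketch below, two hypothetical HTTP services are each standards-compliant (HTTP permits arbitrary extension headers), yet they cannot interoperate because each invented a different name for the same correlation header; both header names are made up for illustration:

```python
# A request emitted by "System A", which uses its own custom
# correlation header (a legal HTTP extension header).
request_from_system_a = {
    "Content-Type": "application/json",
    "X-Txn-Id": "abc123",
}

def system_b_handler(headers):
    """"System B" was built expecting a *different* custom header,
    so a perfectly valid request from System A is rejected."""
    txn = headers.get("X-Transaction-ID")
    if txn is None:
        return "400 missing X-Transaction-ID"
    return f"200 processed {txn}"

print(system_b_handler(request_from_system_a))  # 400 missing X-Transaction-ID
```

Neither side violated the standard; the standard simply left room for both of them to be right and still incompatible.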

Extensibility can often help here, as it inherently enables components to be “upgraded” or modified, as well as wholesale replaced or supplemented with new features. However, the effectiveness of this is contingent on the granularity of the components. A modularized accounting system, with separate components for general ledger, accounts payable/receivable, and so on, can adapt to change through the replacement of modules or even pieces of modules—such as the withholding rules in the payroll module. This gives the system some degree of adaptability through extensibility, but only on a per-module basis. A more powerful and direct way to achieve architectural adaptability is the inclusion of intelligence through application integration: the ability to modify the way a module itself behaves in response to changing needs or unique business requirements, especially in a dynamic and programmatic way. This not only allows the solution to adapt to necessary changes, such as accounting for custom headers used by an ancillary solution, but enables it to do so intelligently while continuing to work the way it always has.


Manageability

While the architecture you implement today may have limited components with finite capabilities, a design built with scalability and extensibility in mind promises a much more complex tomorrow. Manageability is the final critical component of modern architecture design. The simplification and consolidation of managing the diverse components within the system—keeping in mind the components and features not yet implemented (or even imagined)—is an absolute requirement. No one will remember how scalable, extensible, or adaptable the architecture was intended to be if the resultant solution is too costly or complex to manage.

It is important to note that the management system itself is also a business solution. This means that the same characteristics of scalability, extensibility, and adaptability of management need to be considered. In addition to providing a virtual view of the entire architecture, management systems need to provide the scalability to grow with the architecture, the extensibility to integrate with new features/solutions down the road, and the adaptability to meet the unknown issues of tomorrow.
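In code, such a management plane is itself just another extensible component: one consolidated view into which components registered tomorrow appear alongside the ones deployed today. The sketch below is a bare illustration of that idea; the class, component names, and statuses are all hypothetical:

```python
class ManagementConsole:
    """Hedged sketch of a single management plane; class and method
    names are illustrative, not any product's API."""

    def __init__(self):
        self.components = []

    def register(self, component):
        # A component added tomorrow shows up in the same view as the
        # ones deployed today -- the management layer itself exhibits
        # the scalability and extensibility it is meant to oversee.
        self.components.append(component)

    def overview(self):
        # One consolidated status view, regardless of component type.
        return {c["name"]: c["status"]() for c in self.components}

console = ManagementConsole()
console.register({"name": "adc-1", "status": lambda: "ok"})
console.register({"name": "app-pool", "status": lambda: "degraded"})
print(console.overview())
# {'adc-1': 'ok', 'app-pool': 'degraded'}
```

The design choice worth noting is that the console depends only on a tiny contract (a name and a status callable), which is what lets it absorb component types that did not exist when it was built.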
Visionary or Fool?

The real difference between the visionary and the foolhardy is that the visionary anticipates the unknown and incorporates “wiggle room”; the fool makes finite decisions based only on what can be known at a given moment. The problem with systems analysis and design, as taught throughout the world, is that it remains focused on analyzing and solving the specific requirements of a single system based on concrete information. This has been the bane of information security for years: users rarely mention security as a requirement, no one can anticipate the security threats of the future, and therefore, without an organizational mandate, it never becomes a priority. With the ever-increasing rate of technological change and complexity, it is essential that the concepts of scalability, extensibility, adaptability, and manageability also become primary stepping stones in solution design and data center architecture. No matter how technologically advanced an architecture is the day it goes live, the real test is the test of time: how will that solution look when tomorrow comes? Will your architecture prove you to be visionary or foolish?

About the Author: KJ (Ken) Salchow, Jr. is the Manager of Technical Marketing at F5 Networks