Thursday, June 17, 2010

Are You Over Virtualizing Your Technology Infrastructure?

In a recent keynote address at Gartner's 2010 Infrastructure and Operations Summit, Raymond Paquet, Managing VP at Gartner, made the point that, consistently over the last eight years, 64% of the typical IT budget has been dedicated to tactical operations such as running existing infrastructure. A driving factor behind this trend is the tendency of technology managers to over-deliver on infrastructure services to the business. Paquet argues that technology managers should always focus on striking the right balance between the cost of a technology solution and the quality of service it provides to the business. As the quality of a technology service increases, the cost of that service naturally increases. Yet there is often a gap between the quality of service a business actually requires and what technology professionals architect and build. The key is to understand the business's requirements in terms of parameters such as uptime, scalability and recoverability, and to architect an infrastructure aligned with those requirements. Sounds simple, right? Often, however, technology professionals find themselves (usually inadvertently) swept up in implementing technology solutions that add value to the business at certain levels of adoption but that, at unnecessary levels of adoption, pull budget and resources away from more value-added activities. Virtualization is a case in point.

The gap between the quality of technology services the business requires and the cost to deliver them can be observed today in the tenacious drive by many technology managers to virtualize 100% of their IT infrastructure. Gartner research VP Phillip Dawson describes what he calls the "Rule of Thirds" with respect to virtualizing IT workloads: roughly one third of your IT workload will be "easy" to virtualize, another third "moderately difficult", and the remaining third "difficult". Most technology organizations today have conquered the easy third and have begun working on the moderately difficult workloads. The cost, in effort and dollars, of virtualizing that final third increases exponentially. Are you really adding value to your business by driving 100% of your environment into your virtualization strategy? Dawson's recommendation is to first simplify IT workloads as much as possible by driving towards well-run x86 architectures. Once simplified, IT workloads with low business value should be evaluated for outsourcing or retirement; retirement may consist of moving the service the application provides into an existing ERP system, thus leveraging existing infrastructure. Workloads with high business value will likely, after simplification, be cost effective to bring into your virtualization strategy.
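
As a rough illustration of that triage (the workload names, effort labels and decision rules below are my own assumptions for the sketch, not Gartner's), the decision flow might look something like this:

```python
# Hypothetical triage of IT workloads along the lines of Dawson's "Rule of Thirds".
# The effort/value labels and the decision rules are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    virtualization_effort: str  # "easy", "moderate", or "difficult"
    business_value: str         # "low" or "high"


def recommend(w: Workload) -> str:
    """Return a rough next step before adding a workload to the virtualization plan."""
    if w.virtualization_effort in ("easy", "moderate"):
        return "virtualize"
    # Difficult workloads: decide based on business value before spending the effort.
    if w.business_value == "low":
        return "outsource or retire (e.g. fold the function into an existing ERP system)"
    return "simplify onto a well-run x86 architecture, then re-evaluate for virtualization"


if __name__ == "__main__":
    portfolio = [
        Workload("file/print services", "easy", "low"),
        Workload("legacy payroll add-on", "difficult", "low"),
        Workload("core order management", "difficult", "high"),
    ]
    for w in portfolio:
        print(f"{w.name}: {recommend(w)}")
```

In practice the inputs would come from an assessment of each application's architecture, but the decision flow mirrors the simplify-then-virtualize recommendation above.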

So be selective in the applications you target for virtualization. Be sure that all steps have been taken to simplify each application's technology infrastructure so that it does not sit in the most expensive third of virtualization effort. Outsource or retire complex applications where applicable rather than force-fitting them into your own virtualization strategy. Look for ways to retire low-business-value applications with complex technology architectures (such as moving them into an existing ERP package). The bottom line is that all technology investments and initiatives must be aligned with your organization's needs and risk tolerance. Virtualization is no different. Unnecessary efforts to virtualize certain applications may do more harm than good.

Saturday, June 5, 2010

Software Licensing Models Must Evolve to Match Innovation in Computing Resource Delivery

Software licensing has never been one of my favorite topics. It has grabbed my attention lately, however, as a result of what I perceive as a gap between the evolution of infrastructure provisioning models and software vendor licensing models. As infrastructure virtualization and server consolidation continue to dominate as the main trends in computing delivery, software vendors seem bent on clinging to historical licensing models based on CPU. In today's world of dynamically allocated processing power, designed to serve SaaS delivery models and flexible business user capabilities, software providers still insist that we, as providers of computing services, be able to calculate exactly how many "processors" we need to license or exactly how many unique or simultaneous users will access our systems. In addition, many software salespeople use inconsistent language that confuses business people and even some CIOs.

Is it a CPU, a Core or a Slot?

This is where I see the most confusion and inconsistency in language, even among software sales reps. Here is a little history of how things have gotten more complex. It used to be (in the bad old days) that one slot equaled one CPU, which had one core. So when software was licensed by the CPU it was simple to work out how many licenses you needed: you just counted the number of CPUs in the server that was going to run your software, and there you had it. As chip makers continued to innovate, this model started getting more complex. When multi-core technology was introduced, for example, it became possible for the CPU in a single slot to contain multiple cores, or execution units. This innovation continues today and is evident in the latest six- and even eight-core Nehalem and Westmere architectures. So now software sales reps had to be asked, "What do you mean by CPU?"
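
To see why the answer matters, consider a purely hypothetical quote (the prices and server configuration below are invented for illustration):

```python
# Illustrative only: made-up prices showing how "per CPU" changes meaning
# depending on whether the vendor counts slots (sockets) or cores.

slots_per_server = 2         # physical sockets in the box
cores_per_slot = 6           # e.g. a six-core Nehalem-class part
price_per_license = 10_000   # hypothetical list price per "CPU" license

licenses_if_cpu_means_slot = slots_per_server
licenses_if_cpu_means_core = slots_per_server * cores_per_slot

print(f"Per-slot interpretation: {licenses_if_cpu_means_slot} licenses, "
      f"${licenses_if_cpu_means_slot * price_per_license:,}")
print(f"Per-core interpretation: {licenses_if_cpu_means_core} licenses, "
      f"${licenses_if_cpu_means_core * price_per_license:,}")
# Same server, same software: 2 licenses ($20,000) versus 12 licenses ($120,000),
# depending entirely on what the rep means by "CPU".
```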

Processing Power Gets Distributed

At nearly the same time that the chips inside our servers were getting more powerful and more complex, our server landscapes were getting more complex as well. Corporate enterprises began migrating away from centralized mainframe computing to client/server delivery models. New backend servers utilizing multi-core processors began to be clustered using technologies such as Microsoft Cluster Server in order to provide high availability for critical business applications. Now the question of "how many CPUs do you have?" became even more complex.

ERP Systems Drove Complexity in Server Landscapes

In the late 1990s, many businesses introduced Enterprise Resource Planning (ERP) systems designed to consolidate sprawling application landscapes consisting of many different applications for functions such as finance, inventory and payroll. The initial server architecture for ERP systems generally consisted of a development system, a quality system and a production system, each running on distinct computing resources. Most business users accessed only the production system, while users from IT could potentially access all three. At this point, software sales reps began speaking of licensing your "landscape", and many ERP providers offered named user licensing: a named user could access the ERP system on development, quality or production, regardless of the fact that these were distinct systems.
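
A simple, entirely hypothetical illustration of why named user licensing fit this landscape (the user lists below are made up) is that a person is counted once across the landscape rather than once per system:

```python
# Hypothetical user lists for a three-system ERP landscape (dev / quality / prod).
# Under named-user licensing a person is typically counted once across the
# landscape, not once per system.

dev_users = {"alice", "bob"}                     # IT staff
qa_users = {"alice", "bob", "carol"}             # IT staff plus a key user
prod_users = {"carol", "dave", "erin", "frank"}  # business users

per_system_total = len(dev_users) + len(qa_users) + len(prod_users)
named_user_total = len(dev_users | qa_users | prod_users)

print(f"Counting per system:       {per_system_total} users")  # 9
print(f"Counting named users once: {named_user_total} users")  # 6
```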

ERP Technical Architecture Options Drive Confusion

ERP systems still run on databases and require appropriate database licensing. ERP packages such as SAP offer technical design options that allow the databases for the ERP development, quality and production systems to be run separately in a one-to-one configuration or in a clustered environment, with one or two large servers providing the processing power for all of the system databases. For the database licensing needed to run the ERP system, there was still a breakeven point where licensing the database by "CPU" could come out cheaper than licensing it by named user. And so as ERP implementations continued, software salespeople now spoke to technology and business managers about "licensing their landscape by named users with options for CPU-based pricing based on cores". The fog was starting to set in. In fact, these options became so confusing that when asked what the letters SAP stood for, many business managers would reply "Shut up And Pay". The situation only got worse in the early 2000s, when ERP providers like SAP saw the adoption rate of new core ERP systems begin to slow. By that point, the majority of enterprise organizations already had some sort of ERP system in place, so ERP providers branched out into supporting systems such as CRM, Supply Chain Management and Business Analytics. Now the "landscape" of applications became even more complex, with named users crossing development, quality and production systems across multiple sets of business systems.
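
The breakeven itself is simple arithmetic; here is an entirely hypothetical example (all prices and counts are invented for illustration):

```python
# Illustrative breakeven between named-user and per-CPU database licensing.
# None of these figures come from any real vendor price list.

price_per_named_user = 800   # hypothetical per-user database license
price_per_cpu = 40_000       # hypothetical per-CPU (per-core) database license
cpus_needed = 4              # cores required by the clustered database servers

cpu_based_cost = price_per_cpu * cpus_needed
breakeven_users = cpu_based_cost / price_per_named_user

print(f"CPU-based cost: ${cpu_based_cost:,}")
print(f"Breakeven at {breakeven_users:.0f} named users")
# Below ~200 users, named-user licensing is cheaper in this example;
# above that, per-CPU pricing wins.
```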

Virtualization - The Final Straw

In the 2007 timeframe, a major shift in the way computing power was allocated to business systems began to appear in enterprise data centers. Virtualization technology from companies such as VMware began to gain mainstream adoption in enterprise-class IT operations. Virtualization gave rise to terms such as Cloud Computing, meaning that the processing power for any given business application was provided from a pooled set of computing resources rather than being tied to any one server or set of servers. An individual virtual server now depended on virtual CPUs, which themselves had no necessary direct relationship to any particular CPU or core in the computing "Cloud".
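
A small back-of-the-envelope calculation shows why "how many CPUs?" stops having a single answer (the host and VM sizes below are hypothetical):

```python
# Sketch of why "how many CPUs?" becomes ambiguous on a virtualized cluster.
# Host and VM sizes are made up for illustration.

hosts = 4
cores_per_host = 12      # physical cores in the pool
vcpus_per_vm = 4
vms = 30

physical_cores = hosts * cores_per_host   # 48 cores in the "cloud"
allocated_vcpus = vms * vcpus_per_vm      # 120 vCPUs handed out
overcommit_ratio = allocated_vcpus / physical_cores

print(f"Physical cores in pool: {physical_cores}")
print(f"vCPUs allocated:        {allocated_vcpus}")
print(f"Overcommit ratio:       {overcommit_ratio:.1f}x")
# A single VM's 4 vCPUs may be scheduled on any of the 48 physical cores over
# time, so there is no fixed answer to "which CPUs run this software?"
```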

Despite this colossal shift in the provisioning of IT computing power, many software vendors clung to their licensing models, insisting on knowing how many "CPUs" you would use to run their software. Even today in 2010, when virtualization has been well vetted and proven to be a viable computing delivery model at all tiers of an enterprise's technical architecture, some software vendors either insist on sticking to outdated license models or simply issue vague statements of support for virtualization. These vague statements often seem to be based more on resistance to changing traditional licensing models than on any clearly stated technical facts.
After a long and storied evolution of computing power delivery, from single-core, single-CPU machines to Cloud Computing, software providers ranging from database vendors to providers of business analytics have been slow to innovate in terms of licensing models. Enterprise-class providers of business software such as SAP or Oracle, who claim to have embraced virtualization yet continue either to license certain product sets by CPU or to issue only vague statements of virtualization support, seem to be struggling to provide innovative pricing structures aligned with the new realities of computing power delivery.

I am sure they will get there, but for now I still get emails from software sales representatives quoting prices for products “by the CPU”. Sigh…..