Saturday, June 5, 2010

Software Licensing Models Must Evolve to Match Innovation in Computing Resource Delivery

Software licensing has never been one of my favorite topics. It has grabbed my attention lately, however, as a result of what I perceive as a gap between the evolution of infrastructure provisioning models and software vendor licensing models. As infrastructure virtualization and server consolidation continue to dominate as the main trends in computing delivery, software vendors seem bent on clinging to historical licensing models based on CPU counts. In today’s world of dynamically allocated processing power, designed to serve SaaS software delivery models and flexible business user capabilities, software providers still insist that we as providers of computing services calculate exactly how many “processors” we need to license or exactly how many unique or simultaneous users will access our systems. To make matters worse, many software salespeople use inconsistent language that confuses business people and even some CIOs.

Is it a CPU, a Core or a Slot?

This is where I see the most confusion and inconsistency in language, even among software sales reps. Here is a little history of how things have gotten more complex. It used to be (in the bad old days) that one slot held one CPU which had one core. So, when software was licensed by the CPU it was simple to understand how many licenses you needed. You just counted the number of CPUs in the server that was going to run your software and there you had it. As chip makers continued to innovate, this model started getting slightly more complex. For example, when multi-core technology was introduced it became possible for a single CPU in a slot to contain multiple cores, or execution units. This innovation continues today and is evident in the latest six and even eight core Nehalem and Westmere architectures. So now the question had to be asked of software sales reps: “what do you mean by CPU?”
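The ambiguity above is easy to see with a little arithmetic. The sketch below is purely illustrative (the function name and the notion of a single "licensing unit" are my own simplification, not any vendor's actual rules), but it shows how the same physical server yields very different license counts depending on what the sales rep means by "CPU":

```python
# Hypothetical license-count sketch -- not any real vendor's pricing rules.
def licenses_needed(sockets, cores_per_socket, unit):
    """Count licenses for one server. 'unit' is 'socket' or 'core',
    because vendors have used 'CPU' to mean either one."""
    if unit == "socket":
        return sockets
    if unit == "core":
        return sockets * cores_per_socket
    raise ValueError("unknown licensing unit: " + unit)

# A two-socket server with six-core Westmere chips:
print(licenses_needed(2, 6, "socket"))  # 2 licenses if "CPU" means socket
print(licenses_needed(2, 6, "core"))    # 12 licenses if "CPU" means core
```

A sixfold difference in license count for the identical piece of hardware, hinging entirely on one word.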

Processing Power Gets Distributed

At nearly the same time that the chips inside our servers were getting more powerful and more complex, our server landscapes were getting more complex as well. Corporate enterprises began migrating away from centralized mainframe computing to client/server delivery models. New backend servers utilizing multi-core processors began to be clustered using technologies such as Microsoft Cluster Server in order to provide high availability to critical business applications. Now the question of “how many CPUs do you have?” became even more complex.

ERP Systems Drove Complexity in Server Landscapes

In the late 1990’s, many businesses introduced Enterprise Resource Planning (ERP) systems designed to consolidate sprawling application landscapes consisting of many different applications for functions such as finance, inventory and payroll. The initial server architecture for ERP systems generally consisted of a development system, a quality system and a production system, each running on distinct computing resources. Most business users only accessed the production system, while users from IT could potentially access all three systems. At this point, software sales reps began speaking of licensing your “landscape” and many ERP providers offered named user licensing. A named user could access the ERP system running on development, quality or production, even though these were distinct systems.

ERP Technical Architecture Options Drive Confusion

ERP systems still run on databases and require appropriate database licensing. ERP packages such as SAP offer technical design options allowing the databases for the ERP development, quality and production systems to be run separately in a one-for-one configuration, or in a clustered environment with one or two large servers providing the processing power for all the system databases. With the database licensing needed to run the ERP system there was still a breakeven point where licensing the database by “CPU” could come out cheaper than licensing the database by named user. And so as ERP implementations continued, software salespeople now spoke to technology and business managers about “licensing their landscape by named users with options for CPU based pricing based on cores”. The fog was starting to set in. In fact these options became so confusing that when asked what the letters SAP stood for, many business managers would reply “Shut up And Pay”.

The situation only got worse in the early 2000’s when ERP providers like SAP saw the adoption rate of new core ERP systems begin to slow. At this point, the majority of enterprise organizations already had some sort of ERP system in place. ERP providers branched out into other supporting systems such as CRM, Supply Chain Management or Business Analytics. Now the “landscape” of applications became even more complex, with named users crossing development, quality and production systems across multiple sets of business systems.
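That breakeven point is simple arithmetic once you pin the numbers down. The sketch below uses made-up list prices (the figures are assumptions for illustration only, not real SAP or Oracle pricing) to show how the crossover between named-user and per-CPU licensing falls out:

```python
# Hypothetical breakeven sketch -- the prices are assumed, not real vendor figures.
PRICE_PER_CPU = 40000.0   # assumed per-CPU database license price
PRICE_PER_USER = 800.0    # assumed per-named-user license price

def breakeven_users(cpus):
    """Named-user count at which per-CPU licensing becomes the cheaper option."""
    return (PRICE_PER_CPU * cpus) / PRICE_PER_USER

# On a four-CPU database server under these assumed prices:
print(breakeven_users(4))  # prints 200.0 -- beyond 200 named users, per-CPU wins
```

Of course, the catch described throughout this post is that you first have to agree on what a "CPU" is before you can even plug it into the formula.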

Virtualization - The Final Straw

In the 2007 timeframe, a major shift in the way computing power was allocated to business systems began to appear in enterprise data centers. Virtualization technology from companies such as VMware began to gain mainstream adoption in enterprise class IT operations. Virtualization gave rise to terms such as Cloud Computing, meaning that the processing power for any given business application was provided from a pooled set of computing resources, not tied specifically to any one server or set of servers. An individual virtual server now depended on virtual CPUs, which themselves had no fixed relationship to any particular CPU or core in the computing “Cloud”.

Despite this colossal shift in the provisioning of IT computing power, many software vendors clung to their licensing models, insisting on knowing how many “CPUs” you would use to run their software. Even today in 2010, when virtualization has been well vetted and proven to be a viable computing delivery model at all tiers of an enterprise’s technical architecture, some software vendors either insist on sticking to outdated license models or simply issue vague statements of support for virtualization. These vague statements often seem to be based more on resistance to changing traditional licensing models than on any clearly stated technical facts.
After a long and storied evolution of computing power delivery from single core, single CPU machines to Cloud Computing, it seems that software providers ranging from database vendors to providers of business analytics have been slow to innovate in terms of licensing models. Enterprise class providers of business software such as SAP or Oracle, who claim to have embraced virtualization yet continue to either license certain product sets by CPU or issue only vague statements of virtualization support, seem to be struggling to provide innovative pricing structures aligned with the new realities of computing power delivery.

I am sure they will get there, but for now I still get emails from software sales representatives quoting prices for products “by the CPU”. Sigh…..
