Tuesday, October 5, 2010

Are You Global or International?

Is your organization global or international? Do you know the difference? Does it matter in terms of how you plan for and carry out technology operations such as support, procurement, hosting or messaging? You should know, and yes, it makes a huge difference to the structure of your technology operations. Trying to operate a global technology strategy at a company that is truly an international entity will surely pit business needs against your technology goals, setting the stage for failure. So are you global or international? I am sure there are volumes of business school literature on this topic but here is a simplified distinction that is immediately relevant to your technology planning efforts.

Global companies will produce standard products or services that are not tailored to the specific tastes or needs of specific country markets. Usually, such products or services will not be differentiated by brand recognition and may represent either commodities that serve as inputs to other products or services, or standard outputs that don’t vary across markets. An example could be an organization that produces chemicals that go into other industrial products to produce a final result. While certain aspects such as specific packaging or regulatory considerations may vary from market to market, there is more commonality across markets than difference. This has a huge impact on how you are able to design and structure your technology systems and associated services. There will likely be a single set of business systems that have been slightly customized to fit the needs of specific markets. A centralized infrastructure and associated support model may function well in a global organization. Such centralized services could potentially go beyond commoditized IT services such as email hosting to include more advanced services such as centralized Service Desks. Can all technology services be centralized in a global organization? Of course not; local market needs will always dictate some degree of local flexibility. In a global organization, however, the technology synergies across operations in different regions around the world will outweigh the need for local differentiation.

International companies may operate in markets around the globe, but tend to offer distinct products or services in each market that are customized to local tastes or market requirements. Products or services such as consumer goods that rely heavily on brand recognition tend to be heavily tailored to local tastes, customs and trends. For example, a packaged meat company operating in Asia, Europe, Latin America and the US will likely have highly differentiated SKUs for each specific market. In addition to product differentiation, consumer goods often have very distinct and varying methods of distribution across global markets. For example, in the US, consumer goods may be delivered to central warehouses of large big box retailers who then ship products through their own logistics chains to grocery shelves. In some Asian markets, the dominance of small, local shops may dictate a Direct Store Delivery (DSD) model where product is delivered straight from the manufacturer to the store shelf. In international organizations, the need for local market flexibility will dictate a need for market-specific technology services. This does not mean there are no technology synergies to be had at international organizations. Again, commodity services such as email hosting or centralized infrastructure support, along with leveraging buying power through centralized procurement, may represent true opportunities for working globally while acting locally.

I have obviously painted this topic with a wide brush but hopefully the point is clear. Understand the needs of your organization in terms of its operations around the world. Don’t try to centralize services where local flexibility is needed in order to win in the local market. Don’t leave services local that represent cost savings or opportunities for scale when managed centrally. Understand that there are times when being global and being international are not the same, and that a misaligned technology strategy can hamper your organization’s success in either case.

Sunday, September 12, 2010

Evaluating Staff with High Level Questions

As a Technology Director, I want all my staff to be successful. I would like to see all my managers go on to become Directors, Vice Presidents or CTOs. How can you tell if you are coaching your managers to obtain the skills they will need? What is a simple test for understanding where a particular manager may need more work? Here is a simple test: ask them a question. Before you roll your eyes and click out of here, allow me to explain.

As you move up the technology profession career ladder, both the types of questions you get asked and the types of answers you are expected to give change significantly. A Service Desk representative gets asked a myriad of specific technology questions on a daily basis. At the level of Service Desk representative, success is defined by how well the individual is able to articulate instructions clearly and provide direct input on very tangible and immediate problems. A Network Engineer may be approached by his or her manager and asked to provide a design for a specific business case. The engineer may be given a loose set of guidelines along with some specific constraints. The Network Engineer produces a solution that he or she formulated using a combination of technical expertise, past experiences and critical thinking. The questions asked of the Network Engineer are less tangible and immediate than what typically confronts the Service Desk representative. While answers given by the Service Desk representative to an end user may generally be right or wrong (they fix the issue or not), answers given by the Network Engineer may not so easily be classified as “correct” or “incorrect”. A technology manager responsible for more than one discipline, such as Networking and Server Operations, may get asked even broader questions by his or her Director. The Director may not be (and usually is not) looking for the specifics that went into the answer but is simply interested in the input he or she needs to help guide their thinking on broad topics that affect strategic technology directions (e.g. hosting options, outsourcing options, budgeting etc.). The Director may not get asked anything at all! The CTO or VP to whom the Director reports may have broad responsibility for business applications, development, project management and more.
At the VP level, the Technology Director should be a trusted advisor who stays abreast of technology market trends and opportunities. It is expected that the Director has a strategy and direction for the organization’s critical IT infrastructure that aligns with business goals. The Director will often simply be asked to present his or her plan in terms of budgeting needs, resource commitments, business disruption and ROI/TCO models. The Director is rarely instructed to look into specific technologies like desktop virtualization; however, if an opportunity exists for an organization in the area of desktop virtualization, it is expected that the Director has examined it and has a logical opinion. At a certain level, people stop asking you specific technical questions and start looking to you to be an intelligent, informed and articulate advisor. A successful Technology Director never finds himself or herself waiting to be told what to do!

So, is your Server Operations Manager learning the skills he or she will need to sit in the Director’s chair? Is he or she prepared, and do they understand the types of deliverables they will need to provide? It is your job as a Technology Director to make sure you are giving your team opportunities for growth. While there may not be any immediate promotion opportunities in your organization, people will feel good about working for you if you can provide them with skills that they feel and understand will benefit them at some point in their career. So ask your managers the same types of questions you will get asked: "Please provide me a projected capital spending budget for your area", "Please prepare your thoughts on the best practices for our company with respect to server platforms". Some managers will find these questions strange; they will see you as being "too high level" and start to grump about how "you don’t understand the specifics". These people aren’t ready. If they push back on you and give answers like "I don’t know the details" or if you hear "some of it is up to you", you need to coach this person. You may be surprised that some managers will provide you with very good market analysis, strategic plans and proposed budgets. These are your managers who are on the right track, and you should foster their energy among the larger management team. Hold their work up as examples and reward them accordingly. Of course, better yet are the managers who proactively provide you with insight without you specifically asking for it!

Monday, August 23, 2010

You Don't Understand Your IT Costs

Do you know what your IT costs are? I can hear many technology Directors now saying, "Absolutely!" They will rattle off their IT CAPEX budget for the current year and perhaps the last two or three years. They probably have a line item view of the IT P&L with a deeper view into some areas such as maintenance contracts, consulting expense and supplies. For the majority of technology directors, however, who came up through the ranks of network engineering, database administration or something equivalent, that is where the understanding ends. Many do not understand how CAPEX ultimately ends up on their P&L (through depreciation) and will struggle when asked to classify their total IT spending into categories such as business application expense versus collaboration application expense. Ultimately, being able to classify your total IT spending into categories that you can then easily use to benchmark your spending levels against industry standards is fundamentally what is meant by “understanding” your IT spend. Below are three practical tips that can be used by Directors just trying to get a handle on overall IT spend or by newcomers trying to truly understand how their new IT operation functions.

Know Your Asset Classes

Every company will categorize capital spending into classes. These classes are used by accounting for grouping specific assets together for depreciation purposes. For example, IT may have the asset classes “PCs”, “Servers” and “Software”. The depreciation on your P&L is the sum of the depreciation from each of these asset classes. The more detailed your asset classes, the better; this will enable you to answer more specific questions about how your IT OPEX breaks down. For example, the total amount of your IT OPEX dedicated to PCs for 2009 may be equal to the amount of your total maintenance contracts dedicated to PC hardware & software PLUS the amount of depreciation carried on your P&L that is directly attributable to PC purchases. To know the amount of depreciation directly attributable to PC purchases, you must of course have an asset class specific to PCs. So, go to your accounting department and ask for an IT depreciation report by asset class. See if you get the level of detail you need to make sense of the depreciation on your P&L; if not, talk to your accounting manager and ask about creating IT asset classes that match your desired categorization strategy.
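To make the arithmetic concrete, here is a minimal Python sketch of rolling maintenance spend and depreciation into per-class OPEX totals. All class names and dollar figures are hypothetical, purely for illustration:

```python
# Hypothetical annual depreciation by asset class (from accounting's
# depreciation report by asset class).
depreciation = {"PCs": 120_000, "Servers": 250_000, "Software": 90_000}

# Hypothetical annual maintenance contract spend, categorized into
# the same classes on your tracking sheet.
maintenance = {"PCs": 45_000, "Servers": 180_000, "Software": 210_000}

# OPEX attributable to each class = maintenance PLUS depreciation.
opex_by_class = {
    asset_class: maintenance.get(asset_class, 0) + depreciation.get(asset_class, 0)
    for asset_class in set(depreciation) | set(maintenance)
}

for asset_class, total in sorted(opex_by_class.items()):
    print(f"{asset_class}: ${total:,}")
```

The point of the sketch is simply that the rollup is trivial once accounting's asset classes match your tracking categories; without that match, the join is manual guesswork.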

Know Your Maintenance Costs

Anyone who has had responsibility for managing maintenance contracts within a medium to large IT operation knows what a challenge this can be. You may literally have hundreds of hardware & software contracts, all with different renewal dates. Very large IT operations may have a dedicated person and a sophisticated IT management software package dedicated to keeping track of maintenance contracts. In the typical medium-sized IT operation, this duty is loaded onto the technology director, who is generally armed with a self-created spreadsheet for tracking. Here are some tips. First, get a feel for which contracts are “pre-paid” expenses and which ones get expensed in the period they are due. Usually larger contracts (read Microsoft or SAP) get amortized over the course of a year, so from an accounting perspective they affect net income evenly across the year. The cash is of course paid when the invoice comes, but the “expense” for these contracts is spread evenly across your P&L throughout the year. This is important from a budgeting standpoint: you should create a list of your “pre-paid” maintenance contracts and have accounting start the amortization of these amounts on the first day of your fiscal year. You should classify these expenses on your tracking sheet into categories that match your asset classes. Once you have a depreciation report by asset class along with a detailed view of your large maintenance contracts, both categorized into your desired OPEX classifications, you are well on your way to being able to categorize your total yearly IT OPEX. You can’t forget your smaller maintenance contracts that are simply expensed as they are received. In aggregate, these can add up to be significant, so you need to capture them in detail on your maintenance contract tracking sheet. Ask your accounting manager for a report detailing the line items that made up the maintenance contracts on your IT P&L.
Generally, maintenance line items listed as “accruals” will be the “chunks” of the larger maintenance contracts being spread evenly throughout the year. The descriptions on all other line items should be detailed enough for you to ascertain what the expense was for (e.g. web filtering software). Classify these contracts into your desired categories as well, and you should have a firm grasp on your yearly IT maintenance contract commitment.
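As a small illustration of how a pre-paid contract lands on the P&L, here is a Python sketch of even amortization across a fiscal year. The contract names and amounts are invented:

```python
# Hypothetical pre-paid contracts: name -> annual cost in dollars.
prepaid_contracts = {
    "ERP maintenance": 240_000,
    "OS enterprise agreement": 60_000,
}

def monthly_amortization(annual_cost, periods=12):
    """Even per-period expense hitting the P&L across the fiscal year."""
    return round(annual_cost / periods, 2)

# The cash leaves when the invoice is paid, but the expense lands
# evenly: a 240,000 contract shows up as 20,000 per period.
for name, cost in prepaid_contracts.items():
    print(f"{name}: {monthly_amortization(cost):,.2f} per period")
```

These even per-period "chunks" are what show up as the accrual line items described above.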

Understand your IT GL Codes

General Ledger (GL) codes are used by your accounting department to classify expenses into line items that will be shown on your IT P&L. For example, if you look at your P&L and see a line item called Wide Area Network, chances are your company has an IT GL code labeled Wide Area Network. As bills come in from your WAN carrier, they are “coded” to this GL account. It is important that you have IT GL codes at an appropriate level of detail that will allow you to easily classify your P&L costs into your desired IT OPEX categories. For example, if one of your desired categories for tracking IT OPEX is “Customer Support Operations”, it would be ideal to have a GL code labeled exactly that and have all expenses related to that activity coded to that GL account. Avoid GL codes like “miscellaneous”; these create black holes into which IT expenses disappear, never to be understood again. Certain GL codes may be dictated by the need for your organization to report expenses in certain ways for regulatory or tax purposes. This means you will likely never see your IT P&L arranged entirely into the categories you desire. For example, you will likely always have GL codes for “Salaries”, “Benefits”, etc. So you will always have some legwork to do to completely organize your P&L into your desired OPEX tracking categories, but with a set of IT GL accounts that match your goals as closely as possible, your job will be much easier.
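That legwork of re-mapping P&L line items into your desired tracking categories amounts to a simple lookup. Here is a hedged Python sketch in which all GL codes, categories and amounts are hypothetical:

```python
# Hypothetical mapping from GL codes (as they appear on the IT P&L)
# to the OPEX tracking categories you actually want to report on.
gl_to_category = {
    "Wide Area Network": "Network Services",
    "Customer Support Operations": "Customer Support Operations",
    "Salaries": "Personnel",
    "Benefits": "Personnel",
}

# Hypothetical P&L line items: (GL code, amount in dollars).
pl_lines = [
    ("Wide Area Network", 30_000),
    ("Salaries", 400_000),
    ("Benefits", 80_000),
]

# Roll the line items up into tracking categories; anything without
# a mapping lands in "Unclassified" for follow-up with accounting.
totals = {}
for gl_code, amount in pl_lines:
    category = gl_to_category.get(gl_code, "Unclassified")
    totals[category] = totals.get(category, 0) + amount
```

The closer the GL codes already sit to your categories, the smaller (and less error-prone) this mapping table becomes.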

With these three steps you will be well on your way to having a full understanding of your IT OPEX. Having this understanding, and being able to classify your IT expenses into categories that can then be used to benchmark your IT operations against industry norms, will gain you huge credibility among your management team and will give you a clear set of goals to manage towards.

Friday, August 6, 2010

Being a True Business Partner as an Infrastructure & Operations Manager

Several things I have been working on lately seem to have a common theme: defining “what is enough”. Enough redundancy, process, response time and so on. I have spent some time studying several of the popular Infrastructure and Operations maturity models as defined by industry leaders such as Gartner, Forrester or IDC. All of these models make the not-so-implicit assumption that you “should” be driving your IT department “up” the model. In Gartner's model, for example, your IT operation is not considered a “business partner” until you have ascended through all the various layers, each requiring higher levels of process discipline, scalability and business service uptime. On the surface it seems hard to argue against continuous improvement as it relates to your IT operations. I certainly believe you should always search for ways to run your IT operation or to build your IT infrastructure in better ways; however, it seems an overgeneralization to imply that you ALWAYS need more process, more redundancy and so on. I would make the argument that to be a true partner to the business, you as an IT leader should focus on defining exactly what your business is trying to accomplish in the marketplace and target your level of IT infrastructure and operations rigor accordingly. It may be tough for the die-hard technology professional to accept that a certain degree of risk is acceptable or that it is OK to not be on the latest hardware or software. What really enables you to drive value for the business is understanding the level at which your business “needs” you to operate and ensuring that you do not operate below or ABOVE that level. Operating below an acceptable level in terms of service levels, infrastructure scalability or redundancy certainly puts your business and its go-to-market strategy at risk.
Operating above where your business needs you to be may divert capital and resources away from more value-added activities whose value to the business outweighs the risks you may be taking in certain areas of your IT operations. So before you go to your CFO and start showing him or her charts and models outlining where you SHOULD be as an IT organization, make sure you have a clear understanding of where your business NEEDS you to be.

Saturday, July 10, 2010

Picking an SAP Business Warehouse Accelerator (BWA) Hardware Provider

My team recently completed an evaluation of the SAP Business Warehouse Accelerator (BWA) product from three different vendors, and I felt the insights gleaned during this process were worth sharing. First, if you are not an SAP customer, and more specifically an SAP Business Warehouse customer, you have probably never heard of the SAP BWA offering. The BWA is SAP’s “in memory database” solution designed to speed up complex analytical queries submitted by information workers. The way this works is that data is moved from the SAP BW OLAP database into the BWA columnar database located in the BWA RAM during a load and fill phase. End user requests are directed to the BWA, which provides much faster response times than the traditional OLAP database located on regular spinning disks. SAP markets the BWA as an “appliance”, which is a complete misnomer. Here is what the BWA truly consists of:


A dedicated infrastructure committed solely to the BWA workload. The infrastructure consists of dedicated disks (in most cases in the form of a dedicated SAN chassis) and a blade chassis and associated blades. The BWA blades are delivered as a SUSE Linux cluster with one hot spare server blade. The term dedicated is very important here. SAP is very strict on what they support from a hardware perspective. They have certified specific solutions from various hardware vendors such as HP, Dell, IBM and Cisco, with all of these solutions required to be solely dedicated to the BWA environment. Already have a SAN with space? Too bad, you can’t use it. Already have a data center full of IBM or HP blade chassis with room to handle the additional capacity of BWA? So sorry, you can’t use it. The entire technology solution is required by SAP to be totally segmented from your other data center workloads. A dedicated subnet is also required between the BWA and the message service associated with your BW application server logon group. The dedicated networking can take the form of a VLAN riding across your existing data center network. Not only can you not share the BWA infrastructure among other data center workloads, you can’t share the infrastructure between your BWA test and BWA production environments. So you bought a BWA solution with a dedicated SAN and blade server chassis, and you want to put a couple of BWA test blades in the same chassis as your production blades and leverage the same SAN as production? Sorry, that is not supported. Simply take the hardware quote you got for all that dedicated infrastructure for BWA production and multiply it by two, and that’s your cost for a test and production SAP BWA environment. SAP does not support any intermingling of your test and production BWA infrastructure. The notion here that SAP does not “support” a shared infrastructure between BWA test and BWA production is important.
This does not mean a shared infrastructure won’t work; it will work just fine. The gotcha, however, is that if you do encounter performance problems on the BWA, contact SAP for support, and they discover you have shared the test and production infrastructures, you will politely be asked to separate those environments before you receive any assistance. This of course goes directly against the movement towards efficient and virtualized data centers. SAP has been slow to accept alternative solutions to the BWA product delivery model, primarily because of what it is selling with the BWA, which is performance. The BWA software is not cheap, but it really does deliver amazing performance results. Unlike other SAP products like CRM, where what SAP is selling is streamlined and efficient business processes, with the BWA SAP is selling a promise of enhanced query performance and thus faster decision support. Regardless, it feels like this promise could be fulfilled in a more elegant fashion that does not require IT managers to invest in infrastructure when they may already have computing capacity.


SAP licenses the BWA software “by the blade”. This is an unfortunate choice of terminology on SAP’s part because it adds to the confusion that exists in the market about the BWA product. By the term blade here, SAP is NOT referring to the physical server blades you have purchased for your blade chassis. SAP defines a BWA “blade” as a logical unit consisting of 4 GB of RAM. When you size your specific BWA environment, you will be asked to run some reports in your existing BW system and provide those to the hardware providers you have chosen to work with. Each hardware provider who offers an SAP certified BWA “architecture” has a group who can use the BW report information to help you size your BWA environment in terms of the amount of RAM you will need to “accelerate” your specific quantity of data. The minimum BWA size is four blades, or 16 GB of RAM. So to determine your software costs, you can divide your overall BWA RAM size in GB by four and multiply the result by the SAP price per blade. Currently, SAP does not charge for the BWA test environment software, only production.
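The blade licensing math above can be sketched as follows. Only the 4 GB-per-blade unit and the four-blade minimum come from the sizing rules discussed here; the price per blade is a placeholder, not an actual SAP quote:

```python
def bwa_license_cost(sized_ram_gb, price_per_blade):
    """License cost for a sized BWA RAM footprint.

    One logical "blade" = 4 GB of RAM, with a four-blade (16 GB)
    minimum; price_per_blade is a made-up placeholder figure.
    """
    blades = max(4, -(-sized_ram_gb // 4))  # ceiling division, then floor at 4
    return blades, blades * price_per_blade

# A hypothetical 96 GB sizing at an invented $50,000 per blade:
blades, cost = bwa_license_cost(96, 50_000)
print(f"{blades} blades -> ${cost:,}")
```

Note that a sizing below 16 GB still prices out at the four-blade minimum, which is why very small BWA deployments rarely make economic sense.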

What we found when looking at different vendors for the SAP BWA hardware “appliance” is that even though SAP has a strict certification process for various combinations of blade server chassis and storage arrays, you will still get inconsistent answers from vendors on what is and is not supported by SAP. The big area we saw this in was whether you can share test and production infrastructure. Some vendors said yes, others said no. The final word came from SAP, who clearly states a shared infrastructure is not supported (again, not that it won’t work). Another dimension to consider when talking to hardware vendors is how well defined you feel their respective SAP BWA product is. For example, some of the vendors we met with seemed to be trying to put together an offering “on the fly”, stating they would simply need to check with SAP on certain configurations. Keep in mind, SAP has a stringent certification process around the BWA “appliance” offering. It should be very straightforward: a given combination of storage and servers is either certified or not, period. You should also get a comfort level around the particular hardware provider’s service offering as it relates to the SAP BWA implementation. The “appliance” is sold as turnkey, meaning that you as a customer cannot install it yourself. In addition to certifying a hardware combination, a vendor also certifies a service offering for installing and configuring the BWA solution. Because you have to use this install service, it is important to ask and understand how a vendor will deliver the service. Do they have a certified team or will the work be contracted to a third party? Again, the definition of the vendor’s install service should be clear and straightforward without requiring a lot of back and forth and “checking”.

Thursday, June 17, 2010

Are You Over Virtualizing Your Technology Infrastructure?

In a recent keynote address at Gartner’s 2010 Infrastructure and Operations Summit, Raymond Paquet, Managing VP at Gartner, made the point that consistently over the last eight years, 64% of the typical IT budget has been dedicated to tactical operations such as running existing infrastructure. A driving factor behind this consistent trend is the tendency of technology managers to over deliver on infrastructure services to the business. Paquet makes the point that technology managers should always be focused on providing the right balance between technology solution cost and the quality of service the solution provides to the business. As the quality of a technology service increases, the cost of that service will naturally increase. There is often still a gap between the quality of service a business requires and what is architected and built by technology professionals. The key is to understand the business’s requirements in terms of parameters such as uptime, scalability and recoverability and to architect an infrastructure aligned with these requirements. Sounds simple, right? Often, however, technology professionals find themselves (usually inadvertently) swept up in implementing technology solutions that at specific levels of adoption add value to a business but at unnecessary levels of adoption serve to pull budgets and resources away from more value-added activities. Virtualization is a case in point.

The gap between quality of technology services required by the business and the cost to deliver them can be observed today in the tenacious drive by many technology managers to virtualize 100% of their IT infrastructure. Gartner research VP Phillip Dawson points out what he calls the “The Rule of Thirds” with respect to virtualizing IT workloads. The theory is that one third of your IT workload will be “easy” to virtualize, another one third will be “moderately difficult” with the remaining third being “difficult” to virtualize. Most technology organizations today have conquered the easy one third and have began working on the moderately difficult workloads. The cost in terms of effort and dollars to virtualize the final one third of your IT workload increases exponentially. Are you really adding value to your business by driving 100% of your environment into your virtualization strategy? The recommendation from Dawson is to first simplify IT workloads as much as possible by driving towards well run x86 architectures. Once simplified, IT workloads with low business value should be evaluated for outsourcing or retirement. Retirement may consist of moving the service the application provides into an existing ERP system thus leveraging existing infrastructure. Workloads with high business value will likely now (after simplification) be cost effective to bring into your virtualization strategy.

So be selective in the applications you target for virtualization. Be sure that all steps have been taken to simplify the respective application’s technology infrastructure so that it is not in the upper third of virtualization cost and effort. Outsource or retire complex applications where applicable rather than force-fitting them into your own virtualization strategy. Look for ways to retire low business value applications with complex technology architectures (such as moving them into an existing ERP package). The bottom line is that all technology investments and initiatives must be aligned with your organization’s needs and risk tolerance. Virtualization is no different. Making unnecessary efforts to virtualize certain applications may be doing more harm than good.

Saturday, June 5, 2010

Software Licensing Models Must Evolve to Match Innovation in Computing Resource Delivery

Software licensing has never been one of my favorite topics. It has grabbed my attention lately, however, as a result of what I perceive as a gap between the evolution of infrastructure provisioning models and software vendor licensing models. As infrastructure virtualization and server consolidation continue to dominate as the main trends in computing delivery, software vendors seem bent on clinging to historical licensing models based on CPU. In today’s dynamically allocated world of processing power, designed to serve SaaS models for software delivery and flexible business user capabilities, software providers still insist that we as providers of computing services be able to calculate exactly how many “processors” we need to license or exactly how many unique or simultaneous users will access our systems. In addition, many software salespeople use inconsistent language that sows confusion among business people and even among some CIOs.

Is it a CPU, a Core or a Slot?

This is where I see the most confusion and inconsistency in language, even among software sales reps. Here is a little history of how things have gotten more complex. It used to be (in the bad old days) that one slot equaled one CPU, which had one core. So, when software was licensed by the CPU it was simple to understand how many licenses you needed. You just counted the number of CPUs in the server that was going to run your software and there you had it. As chip makers continued to innovate, this model started getting slightly more complex. For example, when multi-core technology was introduced it became possible to have a CPU in a slot that actually constituted multiple cores or execution units. This innovation continues today and is evident in the latest six and even eight core Nehalem and Westmere architectures. So now the question had to be asked of software sales reps: “what do you mean by CPU?”
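A quick Python sketch shows why the answer matters. For a hypothetical two-slot server fitted with six-core chips, the license count varies six-fold depending on what the vendor means by “CPU”:

```python
# Hypothetical server: 2 physical slots (sockets), each holding a
# six-core chip, as in the Nehalem/Westmere era discussed above.
slots = 2
cores_per_slot = 6

# Old model: one license per socket.
licenses_by_slot = slots

# Per-core model: one license per execution unit.
licenses_by_core = slots * cores_per_slot

print(licenses_by_slot, "vs", licenses_by_core)
```

Same physical box, two very different invoices, which is exactly why the "what do you mean by CPU?" question has to be asked.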

Processing Power Gets Distributed

At nearly the same time that the chips inside our servers were getting more powerful and more complex, our server landscapes were getting more complex as well. Corporate enterprises began migrating away from centralized mainframe computing to client/server delivery models. New backend servers utilizing multi-core processors began to be clustered using technologies such as Microsoft Cluster Server in order to provide high availability to critical business applications. Now the question of “how many CPUs do you have?” became even more complex.

ERP Systems Drove Complexity in Server Landscapes

In the late 1990’s, many businesses introduced Enterprise Resource Planning (ERP) systems designed to consolidate sprawling application landscapes consisting of many different applications for functions such as finance, inventory and payroll. The initial server architecture for ERP systems generally consisted of a development system, a quality system and a production system, each running on distinct computing resources. Most business users only accessed the production system, while users from IT could potentially access all three systems. At this point, software sales reps began speaking of licensing your “landscape”, and many ERP providers offered named user licensing. A named user could access the ERP system running on development, quality or production, regardless of the fact that these were distinct systems.

ERP Technical Architecture Options Drive Confusion

ERP systems still run on databases and require appropriate database licensing. ERP packages such as SAP offer technical design options allowing the databases used for the ERP development, quality and production systems to be run separately in a one-for-one configuration or in a clustered environment with one or two large servers providing the processing power for all the system databases. For the database licensing needed to run the ERP system, there was still a breakeven point where licensing the database by "CPU" could come out cheaper than licensing the database by named user. And so as ERP implementations continued, software salespeople now spoke to technology and business managers about "licensing their landscape by named users with options for CPU based pricing based on cores". The fog was starting to set in. In fact these options became so confusing that when asked what the letters SAP stood for, many business managers would reply "Shut Up and Pay". The situation only got worse in the early 2000s when ERP providers like SAP saw the adoption rate of new core ERP systems begin to slow. At this point, the majority of enterprise organizations already had some sort of ERP system in place. ERP providers branched out into other supporting systems such as CRM, Supply Chain Management or Business Analytics. Now the "landscape" of applications became even more complex, with named users crossing development, quality and production systems across multiple sets of business systems.
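The breakeven point mentioned above is easy to sketch. The prices below are purely illustrative assumptions, not actual vendor list prices; the point is only that the cheaper model flips once the user count passes a threshold determined by your hardware footprint.

```python
# Hypothetical break-even between named-user and per-CPU database licensing.
# All prices here are illustrative assumptions, not real vendor pricing.

NAMED_USER_PRICE = 800       # assumed cost per named user
PER_CPU_PRICE = 40_000       # assumed cost per licensed "CPU" (however defined)

def cheaper_model(users: int, cpus: int) -> str:
    """Return whichever licensing model costs less for this landscape."""
    user_cost = users * NAMED_USER_PRICE
    cpu_cost = cpus * PER_CPU_PRICE
    return "named-user" if user_cost <= cpu_cost else "per-CPU"

# With these prices, a 2-CPU server breaks even at 100 users.
print(cheaper_model(users=60, cpus=2))    # → named-user
print(cheaper_model(users=500, cpus=2))   # → per-CPU
```

Of course, the real-world question "what counts as a CPU?" (slot, die or core) moves the breakeven point around, which is exactly where the fog sets in.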

Virtualization - The Final Straw

In the 2007 timeframe, a major shift in the way computing power was allocated to business systems began to appear in enterprise data centers. Virtualization technology from companies such as VMware began to gain mainstream adoption in enterprise-class IT operations. Virtualization gave rise to terms such as Cloud Computing, meaning that the processing power for any given business application was provided from a pooled set of computing resources, not tied specifically to any one server or set of servers. An individual virtual server now depended on virtual CPUs, which themselves did not have any necessary direct relationship to a CPU or core in the computing "Cloud".

Despite this colossal shift in the provisioning of IT computing power, many software vendors clung to their licensing models, insisting on knowing how many "CPUs" you would use to run their software. Even today in 2010, when virtualization has been well vetted and proven to be a viable computing delivery model at all tiers of an enterprise's technical architecture, some software vendors either insist on sticking to outdated license models or simply issue vague statements of support for virtualization. These vague statements of support often seem to be based more on resistance to changing traditional licensing models than on any clearly stated technical facts.
After a long and storied evolution of computing power delivery from single core, single CPU machines to Cloud Computing, it seems like software providers ranging from database providers to providers of business analytics have been slow to innovate in terms of licensing models. Enterprise-class providers of business software such as SAP or Oracle who claim to have embraced virtualization, yet continue to either license certain product sets by CPU or issue only vague statements of virtualization support, seem to be struggling to provide innovative pricing structures aligned with the new realities of computing power delivery.

I am sure they will get there, but for now I still get emails from software sales representatives quoting prices for products “by the CPU”. Sigh…..

Friday, May 28, 2010

Your Data Center Hosting Provider is Stealing Your Money

Well, stealing is a strong word. What is happening however is that traditional data center hosting providers are getting in the way of thousands of small to medium sized businesses when it comes to realizing the true energy based cost savings associated with virtualization. Virtualization has changed many aspects of the traditional IT infrastructure and fostered innovations in all areas of traditional infrastructure service provision. What has not kept pace is innovation and investment on the part of data center hosting providers in facilities infrastructure geared towards delivering services such as power and cooling in a fashion that is aligned with the new realities of virtualized IT computing loads. This is directly limiting the ability of small to midsized companies who host their IT environment in these data centers to fully realize the total energy savings that virtualization can provide. In the following paragraphs I will provide a summary of APC White Paper 118, which does an excellent job of explaining how the total energy savings from virtualization depend on a realignment of data center infrastructure to meet the needs of a reduced and consolidated IT load. I will point out exactly where small to midsized organizations hosting their IT load in traditional hosting providers' facilities are leaving money on the table as a result of their hosting provider's inaction.

The following diagram demonstrates the primary sources of energy consumption in a data center. The support power represents energy that is lost due to the inefficiency of data center physical infrastructure such as power and cooling systems. This is energy that is consumed in the operation of the equipment itself rather than being transferred to the IT load and being used for useful computing work.

Data Center Power Sources

After you virtualize your server environment, your IT load will decrease. This decrease will actually make the PUE of the data center worse, because the physical infrastructure continues to operate at what is now excess capacity relative to the new, smaller virtualized IT load.

PUE Worsens after Virtualization

So while a decrease in energy cost due to IT load consolidation and virtualization is certainly positive, it is only a fraction of the overall savings possible. The total energy savings made possible by virtualization can only be achieved if the data center physical infrastructure is re-architected to be more aligned with the new realities of virtualized IT loads.

Additional Gains from PUE Optimization

Specific recommendations for changes to data center physical infrastructure to achieve a closer alignment of physical infrastructure services with virtual IT loads can be found in APC white paper 126.
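The PUE effect described above is simple arithmetic. The numbers below are illustrative assumptions (not figures from the APC papers): if facility overhead stays fixed while the IT load shrinks, the PUE ratio gets worse even though your total bill went down.

```python
# Illustrative sketch: why PUE worsens when IT load drops but the physical
# infrastructure (cooling, UPS losses, etc.) keeps running at full capacity.
# All kW figures are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load power."""
    return total_facility_kw / it_load_kw

support_kw = 400.0   # assumed fixed overhead: cooling, power distribution losses

it_before = 500.0    # assumed IT load before virtualization
it_after = 200.0     # assumed consolidated IT load after virtualization

print(pue(support_kw + it_before, it_before))  # 1.8 before virtualization
print(pue(support_kw + it_after, it_after))    # 3.0 after -- worse, until the
                                               # infrastructure is right-sized
```

The money left on the table is that fixed 400 kW of overhead: until the hosting provider re-architects (or re-prices), you keep paying for infrastructure sized for a load you no longer have.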

In all fairness to hosting providers, realizing some of the efficiency gains of re-architecting physical infrastructure is a true challenge in an environment inherently designed to provide shared services across many organizations with unique IT loads, peak demand periods and degrees of virtualization and consolidation. From the point of view of a mid-sized IT organization that has diligently virtualized and reduced its IT load requirements, only to find itself "trapped" by existing power circuit contracts or by an inflexible hosting provider who has not invested in physical infrastructure innovation, a degree of frustration is entirely logical. In any market such as data center hosting where the barrier to entry is high due to large capital expenditure requirements, innovation by market leaders tends to be slow. What is needed is a new type of hosting provider built from the ground up to provide modern, flexible solutions such as "pay by the drink" power services and individualized cooling solutions, through innovations like row based cooling that maintain independence from the overall environment of the data center.

Data Center Zones

Should traditional hosting providers fail to make these innovations then no doubt a new breed of more agile competitors, unburdened by large historical capital investments in dated infrastructure, will emerge and force fundamental change in the hosting industry. Mid-sized customers may also find the additional value hosting services provide, such as physical security, no longer enough to prevent them from investing in their own facilities where they can innovate themselves and keep all the gains. Virtualization has changed almost everything with respect to IT infrastructure service delivery; it is time for hosting providers to catch up.

Sunday, May 23, 2010

A Simple Offshoring Model

I recently had the opportunity to travel to India to meet with a large provider of IT services. The purpose of the trip was to understand how best to work with this organization to deliver additional resources, and thus business results, to my organization. Some of the questions that needed answering were: When does it make sense to utilize offshore IT resources? What is the best way to run projects using offshore service providers? What are the criteria for deciding if a particular project is a fit for the utilization of offshore resources? What follows is a model I created based on my observations and peer discussions. This model has in it some implicit lessons learned from this particular trip as well as previous project experiences. The model is fairly self-describing. The underlying principles are that projects with a high dependence on institutional knowledge require greater in-house resource involvement, whereas projects that involve more standard business processes or technology are better fits for the utilization of offshore resources. Also, larger projects lend themselves better to the utilization of offshore resources due to the inherent overhead involved in managing offshore resources. Most IT managers will likely find this a common-sense line of reasoning; the model simply provides a simple and clear representation of this logic.
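The two principles above can be sketched as a toy scoring function. The 1-10 scales, weights and thresholds here are my own illustrative assumptions, not part of the original model; they just make the logic concrete: low institutional knowledge plus large size pushes a project offshore.

```python
# A minimal sketch of the offshoring-fit logic described above.
# Scales, weights and cutoffs are illustrative assumptions only.

def offshore_fit(institutional_knowledge: int, project_size: int) -> str:
    """Both inputs on a 1-10 scale: dependence on tribal knowledge, and
    project size. Returns a rough sourcing recommendation."""
    score = (10 - institutional_knowledge) + project_size  # higher = better offshore fit
    if score >= 14:
        return "strong offshore candidate"
    elif score >= 8:
        return "mixed onshore/offshore team"
    return "keep in-house"

print(offshore_fit(institutional_knowledge=2, project_size=9))  # standard process, large project
print(offshore_fit(institutional_knowledge=9, project_size=3))  # tribal knowledge, small project
```

The size term reflects the overhead point: managing offshore resources carries fixed coordination cost, so small projects rarely amortize it.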

offshoring model

Sunday, April 18, 2010

You Don't Have to be a Giant to be Green

Last week I was fortunate enough to attend SAP Virtualization Week at the SAP Co-Innovation Labs in Palo Alto, California. The conference consisted of three tracks: Virtualization, Cloud Computing and Green IT. Lots of thought provoking material was presented by SAP, consulting partners and customers. When I attend these types of events I look hard to find a few practical takeaways that could be put in place immediately. Of course these sessions help frame my thinking on long-term strategies around SAP virtualization and Data Center design, but what can I go home with and ask my team about next week? One of those takeaway points for me last week was this: You don’t have to be a giant to be green.

At surface level, Green IT seems like a concept relevant only to the largest of IT organizations. Organizations such as Intel and Colgate-Palmolive who can tell stories of collapsing 50 or 60 global data centers down to one facility certainly have a green story to tell. Organizations such as NetApp who have pioneered efficient data center designs can go to conferences to show how they received a million dollars in rebates from their energy provider. These are great stories, but as I sat through the first couple of these last week sipping Starbucks coffee I was thinking to myself "This is awesome, but what can I do, how is this relevant to me?" How can a < $1 billion enterprise that leverages a colocation service for data center space devise a green IT strategy? What about very small organizations who have all of their servers hosted at their own facility, all in one rack that sits in a locked (or sometimes not) closet that is doubling as storage and data center space? As the week went on I managed to pick up several practical action items that can be leveraged by smaller IT shops to build a green story. While in smaller IT shops the savings may not be as dramatic and jaw dropping as in global IT operations, keep in mind all things are relative. If you can show where you started and demonstrate a thoughtful effort that created a tangible reduction in energy consumption, and thus cost, then you have executed a successful green IT strategy. It may not be enough to save the planet, but it is your part and shows that you are doing all you can to be a good steward of your organization's IT operations.

Know Where You Are

Your tangible achievements with Green IT thinking will be measured in percentage points. For example, a 10% reduction in power consumption, a 2% reduction in utility costs, etc. Obviously, to show these numbers you have to know what you are consuming today. Even if you have a single rack of servers sitting in a closet, do you know how much power they are consuming? Do you know how much you are paying monthly in energy costs to run the equipment you have? The answer to these types of questions becomes slightly more challenging for mid-sized IT shops leveraging colocation providers for data center space. Do you know exactly how many power circuits, and of what type (120/208V, 20A/30A), are provisioned to your space? Do you know the current draw on those circuits? In a colocation environment, a very practical first step is to install intelligent Rack Distribution Units (RDUs). There are lots of vendors who offer these; see this one from APC as an example. Intelligent RDUs will give you the data you need to establish your baseline. Most of these devices provide an HTTP interface that gives you basic statistics and reporting. You don’t need anything fancy, just a browser and a spreadsheet. Get a total average draw for all of your equipment over a length of time long enough to cover any major fluctuations that may occur in your computing environment.
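The spreadsheet math described above is trivial, which is the point: the baseline is just the average of your RDU readings over a representative window. The readings and the utility rate below are hypothetical examples.

```python
# Establishing a power baseline from intelligent RDU readings.
# Readings and the $/kWh rate are hypothetical illustrations.

readings_kw = [4.2, 4.5, 4.1, 4.8, 5.0, 4.3, 4.4]  # e.g. daily average draw in kW

baseline_kw = sum(readings_kw) / len(readings_kw)   # your baseline draw
monthly_kwh = baseline_kw * 24 * 30                 # rough monthly consumption
rate_per_kwh = 0.10                                 # assumed utility rate, $/kWh
monthly_cost = monthly_kwh * rate_per_kwh

print(f"baseline draw: {baseline_kw:.2f} kW")
print(f"estimated monthly energy cost: ${monthly_cost:.2f}")
```

Run this once to set the baseline, then again after each change (consolidation, unplugging redundant supplies) and your percentage-point improvements fall straight out.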

How Redundant Do You Need To Be?

In a typical server deployment, redundancy is built into the design. We expect a certain degree of equipment failure so we account for that in our capacity planning and design efforts. So ask yourself this: how much redundancy is enough? This has a direct impact on green IT thinking. Most servers have redundant dual power supplies. Think of the environments you have with redundancy built in so that if you lose a single server you will have no downtime. Now ask, why does each of those servers have TWO power supplies plugged in to prevent it from failing in the event a power supply goes bad or you have a problem with an electrical circuit? Doesn’t your design accommodate losing a server anyway? The real answer to this sort of question is that you plug in both power supplies on each and every server, even in a redundant server arrangement, because that is how you have always done it. Challenge the notion that this is necessary. You may find that with this simple thought process you cut out 50% of your power consumption in certain environments. Also, categorize your systems into different classes of criticality. This is a natural exercise for disaster recovery planning but is not often thought of when planning for power design. If an application running on a server can be down for a defined period of time with no serious impact to your business, maybe you don’t need to plug in both power supplies and provide redundancy at the power level. Again, challenge the assumption that just because a piece of gear has two power supplies, they MUST both be in use.

Buy Green

Part of knowing where you are in terms of your level of green IT operations is knowing where your vendors and partners stand. Make energy efficiency a part of your purchasing decisions when it comes to IT equipment and services. As you are gathering quotes for hardware, ask your vendor to provide you with energy rating information for the equipment along with the price. With respect to services such as consulting, ask your provider what they are doing to reduce their carbon footprint and provide services in a green way. For example, how much of the work can be done remotely versus onsite? This not only has a practical implication from a green mindset, it also reduces T&E expense. If you leverage a colocation provider for data center space, insist that your provider be able to tell you the PUE of their facility. Remember, you are paying your provider their cost plus margin. If their operations are not run efficiently then their costs will be higher and you will pay the price. Leverage the information you are gathering from your intelligent RDUs to renegotiate the way you are buying power in your colocation space. For example, many colocation providers charge you "by the circuit" for power. Their price often includes their cost for delivering you the energy on that circuit plus their cost for removing the heat generated by the consumption of that energy. If you can tangibly show that you are only consuming a fraction of the energy provided over a circuit, and thus generating less than a 1:1 ratio of heat to circuit capacity, then you have a strong case for moving to a "pay by the drink" model for power. Your case here is that you should be paying in a manner that covers your provider's OPEX, not their CAPEX. Your provider's true operational cost to remove heat from the data center is a function of how much heat you actually generate plus how efficient they are at removing it.
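The per-circuit vs. "pay by the drink" argument above is easy to put numbers to. Every rate below is a hypothetical illustration of a colocation billing model, not a real quote; the shape of the comparison is what matters.

```python
# Sketch of per-circuit vs. metered ("pay by the drink") colocation power billing.
# Circuit capacity, flat fee and metered rate are all hypothetical assumptions.

CIRCUIT_CAPACITY_KW = 5.0        # a 208V/30A circuit at 80% derating is roughly 5 kW
PER_CIRCUIT_MONTHLY = 450.0      # assumed flat monthly charge per provisioned circuit
METERED_RATE = 0.18              # assumed $/kWh metered rate, cooling included

def monthly_cost(actual_draw_kw: float, circuits: int) -> dict:
    """Compare the two billing models for a given real draw and circuit count."""
    hours = 24 * 30
    return {
        "per-circuit": circuits * PER_CIRCUIT_MONTHLY,
        "metered": actual_draw_kw * hours * METERED_RATE,
    }

# After virtualization you draw only 2 kW across two provisioned circuits:
print(monthly_cost(actual_draw_kw=2.0, circuits=2))
# per-circuit stays flat no matter how little you consume; metered tracks reality
```

With numbers like these in hand from your RDUs, the renegotiation conversation stops being philosophical and becomes arithmetic.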

These are just a few practical suggestions. All of these things can be done on a modest IT budget. The key is to understand where you are starting from, take tangible steps based on that knowledge and measure your change. Again, success is measured in percentage points, not raw numbers. As the manager of a small or mid-sized IT shop, you won’t put up numbers the likes of what you will see from global, Fortune 500 IT shops. Success is demonstrating that you are aware of the need to be a good steward of your organization's IT operations relative to their environmental impact, measuring the results of your efforts and arming your organization with your numbers to add to its overall sustainability efforts. Oh yeah, you will likely save money too.

Saturday, April 3, 2010

The Value of an Outside IT Resource: Pointing at Elephants

There are times in the evolution of an IT organization where bringing in a new team member from the outside has significant value. While at key evolutionary points it might make sense for this new injection to come in the form of a new, permanent hire, there are times when simply bringing in a consultant can have the same value. The "value" in these cases is not fully supplied through additional technical expertise. Sometimes, an outside resource can prevent proposed solutions from being constrained by groupthink. IT teams who work together for a while come to know and understand the implied constraints that often surround proposed technical and process architectures. These implied constraints may come in the form of known manager biases, past group experiences, perceived realities of the organization (correct or not) and the simple desire to "fit in" and be perceived as a team player. Many organizations do purposefully cultivate a specific corporate culture aimed at helping all members of the company understand the guard rails within which the organization wishes to operate. New hires to IT teams such as data center operations, network engineering, database administration or software engineering certainly need to learn and understand the corporate culture, which may help to define the big picture of the overall IT strategy. At a tactical level however, new hires can bring in new experiences, challenge team assumptions and ask fresh questions. In short, a new hire can point at the elephant in the room that current team members understand is there but also understand that they should not point out.

While new hires will likely cause the existing team to cycle through the typical stages of group development (forming, storming, norming, performing), it may be a move that pays dividends at key evolutionary milestones in an IT department's growth. There may be other times however where an outside resource can add value to a particular decision. Consultants are often sought out for their technical expertise or their real world experiences with other clients. Sometimes, the value of a consultant is also his or her "unbiased" opinion on a particular technical design or process improvement effort.

Whether it is a new hire at a strategic evolutionary milestone in the IT department's growth or a consultant brought in to evaluate a specific thought process, part of the value an outside resource provides is to point at your elephants.

Sunday, March 28, 2010

From Heroes to Process: The IT Growth Challenge

I recently finished reading How to Castrate a Bull by Dave Hitz, one of the founders of NetApp. I found one concept in particular to stick in my head: as companies get larger they naturally gravitate away from needing heroes to needing more process. Dave tells the story of how, during the early years of NetApp, one support engineer went above and beyond to satisfy the needs of a customer. The support engineer took a call late in the evening and determined the customer's NetApp unit needed to be replaced. The engineer went into manufacturing, took a new unit off the line, hopped a flight to the customer's location and worked all night to install the new unit. Once word got back to Dave about the engineer's heroics, the engineer was nowhere to be found. Turns out, the engineer was asleep in the customer's parking lot inside the van he had rented. Dave points out that at the time this engineer was hailed as a hero, but that now he hopes this kind of thing never happens again. Why? When NetApp was a small company its customer base was generally small organizations with very little gear. They themselves were very nimble and acted on failures in their enterprise very swiftly. The catch is, given their small size and the amount of IT gear in their enterprise, they usually did not encounter a large number of failures. When failures are rare, organizations can afford to rely on heroes to step up in those rare occasions where duty calls. As enterprises get larger, the amount of IT gear and the complexity of the environment that gear supports also grow. Failures become more commonplace and the degree to which you can rely on heroes is diminished. For example, for simplicity say a piece of IT gear has an average failure rate of once every 365 days. A small organization with only one of these devices can expect a failure once a year. A larger organization with 365 of these devices can expect a failure every day!
Larger IT enterprises need process to deal with these repeated occurrences, not heroes to step up on rare occasions. Heroes come and go; process is permanent. Dave's point is that the same behavior that won the engineer and NetApp accolades from the small customer years ago would likely lose business in a large enterprise.
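The failure-rate arithmetic above scales linearly, which is exactly why hero culture stops working. A quick sketch:

```python
# The fleet-scaling arithmetic from the example above: a device that fails
# on average once a year turns into a daily failure at fleet scale.

def expected_failures_per_day(devices: int, mtbf_days: float = 365) -> float:
    """Expected daily failures for a fleet where each device fails,
    on average, once every mtbf_days."""
    return devices / mtbf_days

print(expected_failures_per_day(1))    # small shop: ~0.003 failures/day, one a year
print(expected_failures_per_day(365))  # large shop: 1.0 -- a failure every single day
```

At one failure a year you can afford a hero; at one a day you need a process, a queue and an on-call rotation.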

This evolution from needing heroes to needing process is one of the most subtle yet important changes an IT leader must make as his or her enterprise grows larger and more complex. Tearing down technical fiefdoms, redefining reward systems, purposefully slowing down to gain control and potentially even making staffing adjustments is a daunting set of goals. This evolution inside of IT is a natural part of organizational growth and if handled correctly can be a very exciting managerial challenge.

Saturday, March 20, 2010

Considerations for Implementing SIP

In a recent Computer World article I discussed how the real cost savings associated with VoIP in the corporate enterprise come from the implementation of Session Initiation Protocol (SIP). With most organizations paying very low long distance rates, justifying VoIP investments on long distance savings becomes a real challenge. SIP on the other hand allows you to get rid of local loop charges through a reduction in the number of PRIs you have dedicated to voice at remote and field locations. There are big savings to be had in SIP, but in order to understand what your net savings will be you need to think through some of the following basic factors.

SIP Provider Pricing

Unlike traditional TDM services, pricing for SIP is far less standardized. Different carriers will charge for the service in different ways. For example, PAETEC charges a flat fee for the bandwidth of the MPLS connection used to provide the SIP service plus $1.50/"virtual DID". On the other hand, Time Warner Telecom charges more on an actual call volume basis. There really is no "best" way for a company to price its SIP service; what works best for you will depend on the dynamics of your particular organization. Make sure you understand the pricing scheme of the SIP providers you are evaluating and match that model to your organization's typical call dynamics.
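Matching a pricing model to your call dynamics is just a break-even calculation. The $1.50/DID figure comes from the text; the MPLS fee and per-minute rate below are hypothetical assumptions, not actual carrier quotes.

```python
# Comparing the two SIP pricing schemes described above.
# The MPLS fee and per-minute rate are hypothetical; only the
# $1.50/virtual-DID figure comes from the article.

def flat_plus_did_cost(dids: int, mpls_monthly: float = 600.0,
                       per_did: float = 1.50) -> float:
    """PAETEC-style: flat MPLS bandwidth fee plus a charge per virtual DID."""
    return mpls_monthly + dids * per_did

def usage_cost(minutes: int, rate_per_min: float = 0.02) -> float:
    """Usage-based: billed on actual call volume."""
    return minutes * rate_per_min

# A 200-DID shop: light call volume favors usage pricing; heavy volume flips it.
print(flat_plus_did_cost(dids=200))   # 900.0 per month, regardless of volume
print(usage_cost(minutes=20_000))     # 400.0 -- light usage wins
print(usage_cost(minutes=60_000))     # 1200.0 -- heavy usage loses
```

Plug in your own DID counts and minute totals from a few months of bills and the right model usually becomes obvious.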

Understand Where You Can Truly Remove Services

The implementation of SIP in your network is not the end of your PRI-based TDM service. Several factors will likely limit your ability to remove PRIs in certain areas. For example, make sure you understand for which of your remote locations your SIP provider can provide the virtual DIDs. Not every SIP provider can provide local numbers in all markets; for those markets not covered by your SIP provider you will need to maintain your PRI service to preserve that location's local DIDs. Also, consider services like faxing that may be riding on PRIs through FXO ports today. What will you do about this? How many local lines will you need to maintain for systems such as building security, fire alarms, etc.? It may be that the need to maintain support for an antiquated technology such as faxing (my opinion) may hamper your ability to leverage modern network designs for financial gains. You may find your SIP implementation to be a good time to rationalize the continued support for older communications mediums.

How Much Bandwidth Will You Need At Remote Sites?

Your current data link at each of your remote sites is running at some percent of utilization today. What will that be once you add voice traffic over the circuit? You need to make some estimates about a location's call volume, combined with the type of voice compression you plan to use, in order to determine if you will need to increase the bandwidth on your location's data pipe. This is significant because you could find yourself simply shifting costs around, from PRI to data circuit. In order for SIP to represent a true cost savings at your field site, make sure the overall net spend decreases after PRI removal, additional services for local lines and potentially increased data bandwidth.
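A first-order estimate of the added voice traffic is concurrent calls times per-call bandwidth. The per-call figures below are common rule-of-thumb values including IP/UDP/RTP overhead; treat them as assumptions and verify against your actual codec and packetization settings.

```python
# Rough voice bandwidth estimate for a remote site. Per-call kbps figures
# are approximate rules of thumb (including packet overhead), not exact.

PER_CALL_KBPS = {"G.711": 87, "G.729": 31}

def voice_bandwidth_kbps(concurrent_calls: int, codec: str) -> int:
    """Added WAN bandwidth needed for the site's peak simultaneous calls."""
    return concurrent_calls * PER_CALL_KBPS[codec]

# A site that peaks at 10 simultaneous calls:
print(voice_bandwidth_kbps(10, "G.711"))  # 870 kbps -- may force a circuit upgrade
print(voice_bandwidth_kbps(10, "G.729"))  # 310 kbps -- compression may avoid one
```

Compare the result against the headroom on the site's existing circuit; if the delta pushes you into a bandwidth upgrade, fold that cost into the net-savings math above.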

Understand the Best Technical Architecture for your Organization

The technical details of a network design for SIP can be daunting. At the end of the day, there are really two major types of designs you will likely choose from. In a centralized model, all remote locations receive SIP trunks, and thus call capacity, through a centralized connection to the SIP provider. The SIP provider will deliver some type of connection to your corporate HQ or data center where you will connect to them through a CUBE. You can see an image of this here.
A distributed model requires each location to peer with the SIP provider. The distributed model provides a slightly more robust design in terms of backup and redundancy but at a higher CAPEX and OPEX cost. You can see the distributed model diagram here.

This is not an exhaustive list of considerations, but thinking through these things will help you to solidify your thinking on both the technical and financial benefits of SIP in your organization. I recommend that for evaluating SIP providers and doing your cost/benefit analysis, you utilize a third party who specializes in Telco analysis. For you to make good decisions about SIP, it is critical that you understand your current state. The SIP market is still young and services are still often tailored to your specific needs. Make sure you truly understand the net of it all before jumping in.

Sunday, March 14, 2010

Zen and the Art of Converged and Efficient Data Centers

What a few weeks it has been. Over the last month I have been fortunate enough to meet with the CTO from Frito-Lay, the CTO from NetApp, attend a joint SAP and NetApp executive briefing at SAP's North American HQ in Philadelphia and tour two world class IT support centers (PepsiCo in Dallas and Dimension Data in Boston). I have spent days poring over technical documentation geared towards architecting my organization's next generation data center centered around 10Gb Ethernet, virtualization, blade systems and efficient energy practices. So this post is probably as much for myself as anyone else, meant simply to document a few of the key lessons I have taken away from the flurry of activity over the last few weeks. Hey, maybe someone else will find it interesting too?

PUE & The Green Grid

In a one on one conversation with Dave Robbins, the CTO of NetApp Information Technology, he asked what my data center space provider's PUE is. My response was an inquisitive, what? PUE stands for Power Usage Effectiveness and is a measure of how effectively a data center is using its energy resources. Essentially, PUE is the total amount of electricity entering the data center, including cooling and mechanical systems, divided by the electricity actually delivered to the IT load. Efficient data centers run at around 1.6. The concept of PUE and its measurement was created by an organization known as The Green Grid and you can find all kinds of great resources at their web site. This is an excellent tool for you to use when negotiating power costs with a hosting provider. You should know their PUE and insist that you will not pay for their inefficiency. You can also find a cool tool for PUE calculation at 42U.com.
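The negotiating leverage in the PUE number is easy to quantify: every watt of your IT load drags (PUE - 1) watts of facility overhead behind it, and cost-plus billing passes that through to you. The load and rate figures below are hypothetical.

```python
# Sketch of why a hosting provider's PUE shows up on your bill.
# IT load and $/kWh rate are hypothetical illustrations.

def annual_energy_cost(it_load_kw: float, pue: float,
                       rate_per_kwh: float = 0.10) -> float:
    """Total facility energy cost attributable to your IT load,
    assuming cost-plus pass-through billing."""
    return it_load_kw * pue * 24 * 365 * rate_per_kwh

it_kw = 50.0
efficient = annual_energy_cost(it_kw, pue=1.6)
inefficient = annual_energy_cost(it_kw, pue=2.5)

print(f"at PUE 1.6: ${efficient:,.0f}/yr")
print(f"at PUE 2.5: ${inefficient:,.0f}/yr")
print(f"premium you pay for inefficiency: ${inefficient - efficient:,.0f}/yr")
```

That premium line is the number to bring to your next hosting contract negotiation.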

It is Time to Converge

The introduction of 10Gb Ethernet in data centers (and perhaps even more important, lossless Ethernet) has truly created an opportunity to collapse Ethernet and Fibre Channel networks in the data center backbone, cutting huge costs in Fibre Channel infrastructure. 10Gb Ethernet and lossless Ethernet serve as enablers for protocols such as FCoE and FIP which allow Fibre Channel frames to be encapsulated and carried across Ethernet backbones. There are a few watch-outs when adopting FCoE that you need to be aware of. First, make sure your storage vendor has a CNA (Converged Network Adapter) that supports BOTH FCoE and other IP based traffic. Some of the early "converged" adapters only support FCoE; not much real convergence there. Put some effort into understanding Cisco's current support of FCoE and the FCoE Initialization Protocol (FIP) in their Nexus line of switches. You will find some good resources here. The details of this are too complex for me to go into here but suffice it to say, you need to think long and hard about your data center switch layout in order to get full FCoE support across your 10Gb backbone. Also, remember that lossless Ethernet and data center bridging are keys to FCoE success but are fairly new. So, when you hear people tell you they knew someone who tried FCoE a couple of years ago but found it lacking, take it with a grain of salt.

The FUD around Cisco UCS

Let me get one thing out of the way upfront: the Cisco Unified Computing System (UCS) is sexy. Cisco's tight relationship with VMware, stateless computing and a seemingly end-to-end vision for the data center combine for a powerful allure. Competitors such as IBM and HP are quick to point out that their blade center products perform the same functions as Cisco's UCS but with a proven track record. In general, these claims are true. I have been exposed to some competitive claims against the UCS that were simply meant to plant the seed of Fear, Uncertainty and Doubt (FUD) in the minds of technology managers. What if Cisco changes their chassis design, is your blade investment covered? UCS is meant for VMware only (not true). The list goes on. I have been heavily comparing the Cisco UCS to IBM's H series BladeCenter. I had originally convinced myself that the difference between these two offerings was all about the network. Cisco's UCS does offer some interesting ways to scale across chassis and provides some great management tools. For a mid-sized organization, however, the ability to scale across chassis becomes less important when you can get a concentrated amount of compute power inside one or maybe two chassis. Some new technology coming from IBM in the form of their MAX5 blades is going to allow for some massive compute power inside a two socket blade. If you are a large organization planning on adding many UCS chassis, the networking innovations in the UCS will likely fit your needs well. For a mid-sized company, consider getting more compute power inside fewer chassis by using some hefty blades. This not only reduces your need to scale across many chassis, it also helps lower your VMware costs. VMware is licensed by the socket, so fewer sockets with more cores on blades with higher memory capabilities ultimately drives down your VMware licensing needs.
Also, before you completely convince yourself that the Cisco UCS has a strong hold on the networking space in the data center, spend some time understanding IBM's Virtual Fabric technology. It offers features similar to the VIC cards in the Cisco UCS. The point is this: don't be immediately sucked in by the sexy UCS. Cisco has come to the blade market with some cool innovation, and in some circumstances it will be exactly what you need. Make the investment in time to really understand competing products. Avoid FUD in all directions.

Saturday, February 13, 2010

Be Consistent & Flexible

I remember an experiment covered in an undergraduate psychology class meant to demonstrate that people prefer consistency in thought over randomness, even if they disagree with the principle behind the consistent thought pattern. Here is the scenario:

Person X states that he dislikes all people from place Y. He then comes to know that his new friend to whom he has grown very close is from place Y. Person X has three options:

1. Change his views about people from place Y.

2. Maintain his view about people from place Y but rationalize an exception.

3. Denounce his friendship given his new knowledge of his friend's place of origin.

Most people see option 1 as the "correct" choice, not surprising given the positive connotation this option holds. What is more interesting is that, given only a choice between option 2 and option 3, most people choose option 3 as the "correct" choice despite the negative connotation that option holds. My point here is not to delve into the deep-rooted psychological reasoning behind this but rather to point out the following: given a choice between consistency and inconsistency, people prefer consistency even if they do not necessarily agree with the principle being consistently adhered to. This is an extremely important point for IT leaders to keep in mind across a wide range of IT operations.

Consider IT policy related to issues such as who in the organization has local administrator rights to their workstation. Users will generally prefer "option 1", giving them free rein and full flexibility when it comes to installing software and making configuration changes on their own workstations. However, the very practical considerations of security and stability force us as IT leaders to take "option 1" off the table. So an IT workstation policy can really be seen as serving two purposes. First, it clearly outlines to users, in a practical and indisputable fashion, why "option 1" is not a possibility. Second, the policy tells users how the rules will be applied consistently across the organization. Armed with this information, users will prefer a consistent application of IT policies even if it means they are not given their "option 1". Remember, users will always prefer "option 1" but will accept "option 3" over "option 2" if it is the only alternative. Users will always be on the diligent lookout for the existence of "option 2" in the organization. The moment users perceive policies being applied inconsistently, the howling will begin.

Consider also the yearly ritual of performance appraisals. An employee who is rated low in a category will accept that rating if he or she feels the standards for that category are being applied consistently across all staff members. IT team members may not necessarily agree that "number of support cases closed in a 24-hour period" is a relevant measure; however, if all team members are graded consistently on this metric, it will become accepted as a goal. The hidden gotcha here, of course, is to be careful what you consistently ask for, because that is exactly what you will get!

These are only two examples; many more abound. So as an IT leader, ask yourself if you are being consistent in your actions. Do your users understand the rules and see them as being applied the same way to everyone? Do your staff members feel they are being measured consistently against the goals you have set, whether or not they necessarily agree with those goals? Be consistent in your actions, but do not lose sight of the need to listen and be flexible. As things change, the rules and goals that govern your IT organization may also need to change. Be willing to adjust, inform everyone clearly of how the game has changed and adhere to the new standards in a consistent fashion. This last point reminds me of an adage told to me by a colleague who spent many years in the Navy:

On a foggy evening, a U.S. Navy destroyer is leaving harbor, heading out to sea. The Admiral is sitting in his quarters working on paperwork when he notices a light dead ahead. He radios down to the bridge to order the light to turn 20 degrees to port. The bridge calls back up to the Admiral and tells him the light has refused the order. Hearing this, the Admiral gets livid and asks to be connected directly. Once connected, the Admiral roars: "I am an Admiral in the U.S. Navy and this is a U.S. Navy destroyer". The voice on the other end responds: "Understood, Sir. I am a third-class Navy seaman and this is a lighthouse"...

Saturday, January 30, 2010

Don't Spend 80% Of Your Time On 20% Of Your Users

Some people just don't like change. This is especially true when it comes to changes in the technology they use every day to accomplish work tasks that bring their own set of stressors. Folks learn something, fall into a routine and then just want it to stay the same. This is understandable; technology is an enabler to accomplish a goal, not a goal in and of itself. Unfortunately, the only constant with technology is change. New versions of operating systems are released, new mobile platforms, new portal technologies and on and on. Vendors drop support for older technologies over time, forcing us as technology leaders to impose change on our user base. Not all of your users will react to change in the same way, and you should therefore not adopt a one-size-fits-all approach to your change management strategy.

Leverage Your Champions

Not all of your users will be resistant to change. Just as consumer technologies have a predictable early-adopter portion of their adoption curve, so too will your corporate technologies. Some users will be excited about the new features of a technology; some will be excited to be "first" when it comes to something new. Whatever their motives, identify your champions, get the technology into their hands early and, most of all, make sure they are happy. Let these folks serve as your sounding board across the organization. Let them go to meetings, present to a group of their peers and show off "cool" new features of your new operating system. Let them sit with their peers in airports, bring up your mobile app and gain access to information their peers don't have. These are your evangelists; treat them well and let them spread the word.

Take Care of the Masses

The majority of your user base will adopt new technology after only a short period needed to get over the proverbial "hump". Most users won't be vocal in either direction, positive or negative. It is important to actively solicit feedback from the majority to ensure true issues are separated from expected transition grumbles. Pay close attention to related tickets in your service desk ticketing system, talk to as many people as you can looking for common themes and clearly document your findings to identify trends. Don't confuse standard grumbling with true widespread issues. Nearly everyone will be slightly more vocal about their dislikes than their likes. The key is to separate true, constructive feedback from simple "I don't like this because it is not what I had before" feedback. This leads to the last group of users.
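One lightweight way to separate real themes from one-off grumbles is to tag tickets with a category during the rollout window and count occurrences. A minimal sketch, where the ticket records and category names are hypothetical stand-ins for an export from your ticketing system:

```python
from collections import Counter

# Hypothetical tickets logged during a rollout; in practice these would
# come from an export out of your service desk ticketing system.
tickets = [
    {"id": 101, "category": "printing broken"},
    {"id": 102, "category": "dislike new menu layout"},
    {"id": 103, "category": "printing broken"},
    {"id": 104, "category": "printing broken"},
    {"id": 105, "category": "dislike new menu layout"},
    {"id": 106, "category": "vpn drops"},
]

# Count tickets per category to surface widespread issues vs. isolated noise.
trends = Counter(t["category"] for t in tickets)

for category, count in trends.most_common():
    print(f"{category}: {count}")
```

A sorted tally like this makes it obvious which complaints are genuinely widespread (and probably technical) versus the long tail of "it's not what I had before" feedback.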

Marginalize the Hold Outs

Some people will not be pleased no matter what. Once you have listened carefully to the majority of your users, addressed the true issues related to your new technology and started getting widespread, positive feedback, move on. Don't let your team spend 80% of its time struggling to please 20% of the people. If you have pleased your early adopters, won the acceptance of the masses and received positive feedback from all levels of your organization, you have succeeded. Again, make sure the few holdouts do not truly have legitimate complaints; have you overlooked something specific to their jobs? Be sure to share your positive feedback in a very public way so as not to let the few remaining complaints become the only remaining voice heard following a technology change.

Saturday, January 23, 2010

Keeping Your Technology Infrastructure "Modern Enough"

How modern does your infrastructure need to be? The high-level, seemingly safe answer to this question is that "enough" of any given technology or process has been put in place when that particular solution matches the business need. Finding this utopian point of solution fit is often trickier than it seems. It is critical, however, that as a technology leader you think hard about the solutions you propose to the business. You must ensure you are not being whipped around by the latest technology trends being hyped to you by salesmen in blue blazers, while at the same time ensuring that under your stewardship your organization is not missing opportunities engendered by emerging technologies. You must balance the opposing pressures of technology staff members who always see the benefit of the latest and greatest tools with the need to meet capital and operational budgets. The balance must be reached in a way that brings tangible benefit to the business. How do you deliver? I have found that staying focused on a few basics with respect to technology evaluation, business acumen and people management goes a long way.

First, separate trends from true paradigm shifts. For example, no one would argue that the advent of virtualization technologies has brought about a true paradigm shift in the creation and management of corporate infrastructures. As a technology leader, you have to identify that shift and understand where your particular organization can benefit. With something as technical as virtualization, it will likely be up to you to help educate the business about the benefits of making the virtual transition. Make sure you understand the basics of your organization's business model and strategies in order to guide your proposals and thinking around such paradigm-changing technologies. Be aware, however, that as vendors see these fundamental paradigm shifts happening, they will rush in with products to grab market share, not all of which will have staying power. Be cognizant of a vendor's long-term plans for any given technology: how it fits into their strategic portfolio, and how likely they are to continue to dedicate R&D dollars to the product 5 to 10 years from now.

Second, understand the real ROI potential of any new technology and take the time to analyze how your particular operation will benefit. See my post here for a discussion on calculating ROI and to download a spreadsheet model. Every technology operation has its own set of characteristics that drive its cost structure. Before you recommend upgrading or changing any of them, know where you are and what your benefit will be. Don't just use high-level concepts in presentations to CFOs such as "it will lower capital expense" or "it will make us more flexible". Dig deep and be specific; you may find there is no real benefit at all!
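The core of any such spreadsheet is a simple ROI and payback calculation. A minimal sketch, using illustrative figures rather than a real business case:

```python
# Basic ROI and payback math behind a technology business case.
# The figures used below are illustrative assumptions only.

def simple_roi(initial_cost, annual_savings, years):
    """Return (roi_percent, payback_years) for a flat annual savings stream."""
    total_savings = annual_savings * years
    roi_percent = (total_savings - initial_cost) / initial_cost * 100
    payback_years = initial_cost / annual_savings
    return roi_percent, payback_years

# Hypothetical virtualization project: $200k upfront, $80k/year saved, 5-year horizon.
roi, payback = simple_roi(initial_cost=200_000, annual_savings=80_000, years=5)
print(f"ROI: {roi:.0f}%  Payback: {payback:.1f} years")  # ROI: 100%  Payback: 2.5 years
```

This is deliberately naive (no discounting, flat savings); the point is that plugging in your operation's actual numbers forces the specificity that vague claims like "it will lower capital expense" lack.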

Third, understand your team's ability to support a new technology architecture on an ongoing basis. Think hard about what the true people cost will be when evaluating your need to move to a more modern infrastructure paradigm such as virtualization. Do you have the skills in house now? Do you have folks you can train? Will you need support from vendors on an ongoing basis? Make sure these "soft" considerations don't get lost in the ROI discussion around a new technology. Training, retention and outside support all add to the operational budget squeeze most technology managers feel. Make sure you are not putting your organization at risk with respect to available support resources just in order to introduce a newer technology or process.

Finally, make sure you are not pushing for a technology or process because you or your team "wants" to learn it. For example, many IT leaders see the value in, and "want" to implement, an ITIL-based management process for their IT operations. This desire can easily turn into conversations with CFOs that start out as "we must implement ITIL". Make sure any given technology or technology management practice fits the needs of your organization and truly has the potential to add value. Top-notch technology folks crave learning and using the latest technologies and processes; harness that drive and focus it in the right places.

The above points are certainly not exhaustive, but taken together they can help you think critically about technology architecture shifts or upgrades. Make sure you have truly thought out your decisions. Be willing to make hard decisions even if they are not the most popular with your team. Most importantly, be prepared to present a well-thought-out business case with respect to your current technology architecture and your strategic plans for moving forward.

Saturday, January 16, 2010

Digital Distractions

In a recent meeting, I looked around the table and noticed many of the attendees typing on laptops or thumbing through emails on a smartphone. The thought crossed my mind whether these people were diligent multi-taskers dedicated to productivity or whether they simply had nothing to contribute to the topic at hand. Perhaps the organizer of the meeting had invited them and they had simply come out of either politeness or a sense of obligation due to the organizer's rank on the company org chart. Perhaps they really did need to be there and were totally missing critical information about the current topic as they gazed into glowing LCD screens. Regardless, conducting a meaningful meeting with a room full of folks armed with digital distractions can be a daunting task. The easiest way to overcome digital distractions is to simply ban them from the meeting room: no laptops allowed, no checking email on smartphones, all phone ringers set to vibrate. If this is not possible, however, here are a few tricks of the trade that may help minimize digital distraction.

Make sure follow-up items don’t become immediate tasks

When access is immediate through laptops, it becomes easy to let items tagged for follow-up become immediate action items. For example, someone may be assigned the task of sending someone else a copy of a system configuration as a follow-up item. The assigned team member immediately jumps on and starts trying to grab the configuration. Problems occur when a small issue arises getting the configuration; the team member whispers to a colleague sitting next to him and they both start working on the issue. Before you know it, you have two separate streams of work and thought going on. Explicitly state which items are follow-up items and inform attendees not to work on them while the meeting is still focused on the task at hand.

Let someone else "drive"

It is common for work to get done during meetings, with one person projecting a spreadsheet, Word document or Visio diagram and filling in content with the input of meeting attendees. It is also common for the organizer or the most senior person at the meeting to "drive" the work by being the one projecting the document and typing in the content. I have found it useful to let someone else drive during working sessions. You will often know who is most likely to be distracted by side items during meetings; let them drive. By having your most easily distracted team members project their desktops on the screen while work is getting done, you can help them, and the meeting, stay on task.

Call people out

I often get to the end of a meeting, or even a section of the meeting, turn to the person who I have noticed working on other tasks and ask them to summarize for the group what we have just covered. This is not meant to embarrass anyone, and I never push it if the person obviously does not have an answer. It is an effective tool if used consistently, as team members will come to expect it and will be more likely to tune in enough to the current discussion to articulate a short summary.

Have an agenda and a role for everyone

This point is more a general meeting principle than a way to overcome digital distractions. You will find, though, that if you follow it consistently, people will likely start to see your meetings (and even the fact that you are calling a meeting) as more relevant. Know who really needs to be there and only invite people who will play an active role in the task the meeting is meant to accomplish. Be clear at the start of the meeting why everyone is there, what you want from everyone during the meeting and what the end goal is. Foster cooperation and interaction to keep people from falling into a digital distraction.

Saturday, January 9, 2010

New Year’s Resolutions of a Technology Director

As we enter 2010, my mind is pulled in two seemingly disparate yet interrelated directions. After all the project planning, goal setting for staff members and financial budgeting in preparation for 2010, it is time to focus on two guiding principles: staying above the fray and sweating the small stuff.

Staying Above the Fray

Anyone in an IT leadership position knows how easy it is to get caught up in the plethora of daily issues confronting a typical technology department. There is never a shortage of small tweaks that need to be made, patches that need to be applied, configurations that could be better, infrastructure upgrades that need to be made and so on. Each of these things is of critical concern to someone. It is very easy to find yourself engaged in never-ending meetings arranged to address small fires. These types of issues are important and no doubt contribute to the overall stability of your operation. They also represent the types of issues for which you as a leader should hold your people accountable and insist on execution without your immediate involvement. Continuously jumping into daily operations is a huge distraction from your ability to set the broader course of technology operations. As a leader, focus on the overall program of projects your different teams are involved in. Just like the conductor of a symphony, ensure that each team is working separately yet toward a common goal that you have defined. Make sure your team understands the overall vision for your organization's technology operations, give them the tools, training and support they need to execute, and stay out of the way. Keep an ear to the ground and an eye to the sky, making sure the teams are not straying from the goal, that energy is being spent in the correct areas and that progress is being made. Be flexible in your vision, making sure that you understand your organization's business objectives, and adjust course when necessary. This all sounds great in theory, just like the resolution to lose 50 pounds by summer. Sticking with it on a daily basis when temptations arise is harder than it seems.

Sweat the Small Stuff

A more accurate depiction of this goal is probably sweating the correct small stuff. Topping this list is a focus on customer service and support levels with respect to technology operations. The perceptions of users will drive our realities in technology operations. Pursue a relentless commitment to customer service. Take every opportunity you can get to help an end user: listen to them, talk to them and embrace their criticisms. The conversations your end users have regarding their satisfaction with your organization's technology operations can serve as your canary in the coal mine. Make every effort to stay in tune with the general feeling of your user base. Try as much as possible to interact with end users and help them through their day. Going the extra mile and doing the small things for users will buy you much, much more than it costs, so invest heavily.