Sunday, October 11, 2009

Give Me One Good Reason To Go To Windows 7

If you are like most IT managers in a midsized shop, you probably skipped Windows Vista. Windows XP probably still serves your needs well. It is true that at some point you will start hitting compatibility issues as software updates for XP cease, but that is likely a while away given XP's massive install base. And while your end users likely demand new features in Office, driving an upgrade to Office 2007, they care less about the underlying operating system, as long as it is stable. So, give me one good reason to move my enterprise to Windows 7, right? OK, I will give you four.


OK, one of the first things to realize when talking about new features in Windows 7 is that most of them depend on new features in Windows Server 2008.

BranchCache

BranchCache speeds up access to content for field sites by caching it at the branch. It can work in two modes, distributed or hosted. But don't just take my word for it, watch the video.


DirectAccess

DirectAccess allows mobile users to connect to business resources without VPN connections. This is similar to the functionality provided by RPC over HTTPS for Outlook or an edge server for Office Communicator. It is driven by a certificate architecture and depends heavily on new features available in Server 2008. Check it out.

BitLocker to Go

BitLocker was introduced in Windows Vista and improved on in Windows 7. BitLocker lets you do full disk encryption on laptop or desktop drives. BitLocker to Go adds the ability to encrypt USB drives and even lets you enforce your USB encryption policy through group policies. Very cool.

Enterprise Federated Search

Windows 7 enhances the desktop search capability to incorporate searches for unstructured data across and outside the enterprise. Prebuilt connectors to SharePoint 2007 make this a really cool feature.

There are many other improvements and new features in Windows 7. Here is a one page chart that compares Windows 7 features to those in both Vista and XP. It seems clear to me that Windows 7 offers enough benefit to make the change in the enterprise. Keep in mind, however, that many of the new "Windows 7 features" only really become available once you implement Windows Server 2008, so start planning...

Wednesday, September 30, 2009

Growth of the CCIE Population

I have always thought that one of the things that propelled Cisco to its prominence was its focus on training. After all, train more engineers and they will recommend your product. It's interesting to see how the number of CCIEs has grown over the years.

Here is a cool Network World Blog with a more detailed analysis of the numbers.

Network World Blog

Sunday, September 27, 2009

Tool for Calculating Computer Room Power Requirements

Have you ever been asked to calculate how much power you will need for a computer room? Maybe you are opening a new field site, standing up a call center or planning for a data center. I have always found it time consuming to chase down the power requirements of the specific gear I plan to install in order to come up with a reasonable estimate. APC offers a great web-based tool for estimating critical IT loads. The tool has the power needs of a plethora of commonly used gear, and you can build a solution that will give you your overall power needs. The end result is a recommended APC UPS system (of course), but the power requirements are also handy for estimating generator needs and the like. The tool is available at

APC Calculation Tool
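If you just need a back-of-the-envelope number before firing up the tool, the underlying arithmetic is simple: add up the wattage of each piece of gear times its quantity, then pad the total for growth. A minimal sketch in Python, where the device list and wattages are hypothetical placeholders rather than real vendor specs:

```python
# Back-of-the-envelope critical load estimate for a computer room.
# Wattages below are illustrative placeholders -- use vendor specs
# or measured draw for real planning.

def estimate_load(inventory, growth_headroom=0.2):
    """Return (critical_watts, planned_watts) for a list of
    (device_name, watts_each, quantity) tuples."""
    critical = sum(watts * qty for _, watts, qty in inventory)
    planned = critical * (1 + growth_headroom)
    return critical, planned

inventory = [
    ("1U rack server", 350, 10),
    ("48-port switch", 150, 4),
    ("SAN shelf", 400, 2),
]
critical, planned = estimate_load(inventory)
print(f"Critical load: {critical} W, with 20% headroom: {planned:.0f} W")
```

The value of the APC tool, of course, is that it already knows the wattages, so you can skip the spec-sheet scavenger hunt.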

Thursday, September 3, 2009

Hey Intermec, show us your back side...

Any IT manager who works for a distribution, trucking or delivery company no doubt deals with ruggedized mobile devices. A big part of daily operations for mobile field workers is the ability to print at customer locations. This is generally accomplished by mounting some type of ruggedized printer inside a vehicle in a secure and safe fashion. As an I.T. manager, then, it is important that you know exactly how new printer models are built to mount in truck cabs and check-in rooms. It would be logical for Intermec (one of the leaders in ruggedized devices and printers) to make this type of information clear on their product web site. Wouldn’t you think there would be clear photos of the rear panel of each printer showing how specifications have changed? Apparently, Intermec doesn’t think so. I recently had to RMA over $100K of fixed mount printers because the rear mounting panel in the new model had been altered. Had this been clear on the web site, it could have been avoided. Hey, Intermec, we mount your printers and really need to see how new models will impact our current mounting solutions. So, here is a tip for your web site: show us your rear!

Sunday, August 30, 2009

Motivating Skilled I.T. Professionals

As an I.T. manager, you no doubt struggle with how to keep your skilled employees motivated. I have always thought it was a fallacy to assume that high pay in and of itself would keep highly skilled people around. Pay serves as a "dis-motivator", not a motivator. That is, if people are not paid fairly at market rates, they will start to look elsewhere. Fair market rate pay in and of itself, however, is not enough to keep good talent. So what does it take? I recently came across this video by Daniel Pink that I thought does a pretty good job of outlining some things to consider when managing (although he would not like that term) a creative workforce.

Friday, June 19, 2009

Get Rid of Custom Scripts to Reduce Risk and Improve Scalability

What are the advantages and disadvantages of custom scripts in your daily technology operations? At first thought, using custom scripts for daily tasks such as database backups or FTP transmissions seems like a great way to keep down costs. Why pay for a product such as NetApp's SnapManager for SAP when at first glance all it does is put the database in hot backup mode and then leverage SAP tools such as brbackup to execute the backup? The true "techie" balks at paying for "fancy push buttons" when a good old shell script can accomplish the same thing. As a technology manager, however, you need to consider what the true costs and risks of custom scripts in your daily technology operations actually are. I make the argument here that the fewer custom scripts you have in your environment, the better off you are.

Let me be clear on what I mean here by "custom script". I am referring to either UNIX shell scripts or Windows VBScript and PowerShell scripts. Let me also be clear on what I mean by "daily operations". Here I am referring to repetitive tasks that happen every day in your environment, such as database backups and file transfers. I am not referring to one-time administrative bulk operations such as Active Directory maintenance, for example.
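To make the category concrete, here is a deliberately minimal sketch of the kind of script I have in mind. Everything in it (the backup path, the dump flags, the retention policy) is a hypothetical placeholder, and that is exactly the point: the choices such a script encodes tend to live only in its author's head.

```python
# Hypothetical nightly backup script -- the kind of one-off that
# quietly becomes a dependency on whoever wrote it. Paths, flags
# and retention policy below are illustrative placeholders.

BACKUP_DIR = "/backups/nightly"
RETENTION_DAYS = 14  # why 14? only the original author knows

def backup_command(db_name, stamp):
    """Build the dump command for one database on one date."""
    target = f"{BACKUP_DIR}/{db_name}-{stamp}.dump"
    # Magic flags encode tribal knowledge that a packaged product
    # would surface in a documented configuration screen:
    return ["pg_dump", "--format=custom", "--file", target, db_name]

print(backup_command("erp", "2009-06-19"))
```

A packaged backup product externalizes every one of those decisions as documented, supportable configuration; the script buries them.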

Custom scripting introduces risk by building dependencies on the knowledge of the individual who creates the script.

We all know that we should have solid backups for individuals on our technology teams. In large organizations with hundreds of technology workers, it is common to be two or three deep on every position. That is not the world most of us live in, however. In the typical midsize organization with a 20-something headcount in IT, there are generally one or two "super stars" who have the technical horsepower to automate daily operations with custom scripts. The more you allow these custom scripts to permeate your daily operations, the more risk you bear should one of your super stars leave. Off-the-shelf products for daily operations counteract that risk in two ways. First, through best practice configurations and maintenance contracts, they ensure you have an 800 number you can call and expect support. With a large vendor's support organization behind you, you should be able to sleep easier at night. Second, through GUI interfaces and published best practice documentation, they make it more likely that more of your technical staff, not just your superstars, will have a solid understanding of how systems are backed up, files are transferred and so on. The key concept here is transparency. It should be clear to everyone how things get done on a daily basis.

Custom scripting makes auditability of daily operations much more difficult.

Many technology managers today are feeling the pains of compliance with rigorous audit and security standards such as SOX and PCI. Rarely will custom scripts be able to provide the robust set of operational reporting necessary for regulatory compliance. Auditors will also have certain packages that they are familiar with, and specific things they look for relative to the most popular security and backup software vendors. Presenting auditors with a plethora of custom scripting and reporting that doesn’t "fit" their model will likely not bode well. I once had a technology manager tell me that auditors are some of the most unimaginative people he had ever met. There is some real truth in that sentiment.

Custom scripts are not as scalable as off the shelf applications.

Even if you have a stable group of IT superstars, after a while custom scripts begin to get out of hand. After a certain amount of enterprise growth, it becomes difficult to manage the interconnected web that custom scripts can weave. Custom scripts may also not be capable of handling the sheer volume enterprise growth may bring to your organization. So, while custom scripts may seem cheaper in the short-term, their long term scalability should be called into question.

In summary, while true-blue techies often espouse the benefits of custom scripts in daily operations, as a technology manager you should be aware of their true impact on your enterprise. Keep in mind the risks and scalability issues you may encounter by leveraging custom scripting versus standard products. Be aware that the true "fully loaded cost" of custom scripts may be higher than what you will pay for standard packaged software.

Wednesday, June 10, 2009

Asking “Should” A System Be Virtualized Is Not The Same As Asking "Can" It Be Virtualized

Like most IT managers, I recently had a virtualization assessment performed in my organization's data center environment. It was the garden variety assessment, with consultants coming in and installing capacity planning software from VMware. We gathered statistics for several weeks and then formulated a list of virtualization candidates. Servers with average processing and memory loads within certain limits were classified as virtualization candidates and placed on the schedule to be brought into the ESX environment. There is nothing wrong with this exercise, and it is absolutely necessary. It is, however, just the first step in identifying your virtualization candidates.

A recent IT manager panel discussion I attended on “Cloud Computing” made me realize how one dimensional the typical virtualization assessment is. One major topic of debate was security in the “cloud”. Who has access to the data? Where is the data? These types of questions become paramount when dealing with regulatory issues such as PCI compliance. One IT manager on the panel shared an experience where auditors denied PCI compliance simply because their environment was virtualized. Other managers shared experiences where corporate politics and enterprise architecture standards prohibited systems from being virtualized. All this made me realize that to truly gauge a system as a virtualization candidate, a multidimensional set of criteria needs to be developed. Simply asking “can” a system be virtualized is not enough.

So, after your consultants show you the presentation with their capacity planner results, your job is just beginning. You need to take each of your virtualization candidates and understand which business processes are enabled or impacted by that system. Understand the regulatory, political and architectural implications of virtualizing that particular machine. Make sure you do a holistic virtualization assessment before you start cashing the checks you think you will get from your virtualization savings.
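One way to operationalize that holistic assessment is to treat the capacity numbers as just one gate among several. The sketch below is purely illustrative; the field names and thresholds are my own placeholders, not any vendor's standard:

```python
# A multidimensional virtualization screen: capacity numbers alone
# answer "can" it be virtualized; the extra flags speak to "should".
# Field names and thresholds are illustrative, not a standard.

def virtualization_verdict(candidate):
    # The "can" test: the usual capacity-planner resource screen.
    if candidate["avg_cpu_pct"] > 60 or candidate["avg_mem_pct"] > 70:
        return "no: resource profile too heavy"
    # The "should" tests: regulatory, architectural and political gates.
    blockers = []
    if candidate["pci_in_scope"]:
        blockers.append("PCI audit scope")
    if candidate["architecture_exception"]:
        blockers.append("architecture standard")
    if candidate["political_owner_objects"]:
        blockers.append("business-owner objection")
    return "yes" if not blockers else "review: " + ", ".join(blockers)

server = {"avg_cpu_pct": 15, "avg_mem_pct": 30,
          "pci_in_scope": True, "architecture_exception": False,
          "political_owner_objects": False}
print(virtualization_verdict(server))  # review: PCI audit scope
```

The point is simply that a lightly loaded server can still be a poor candidate; the non-capacity dimensions deserve equal billing in the assessment.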

Wednesday, May 27, 2009

Demonstrating the Business Value of IT Infrastructure

A consistent theme in thinking around I.T. strategy is that your I.T. strategy must be aligned with your organization's business strategy. The term I.T. strategy is often used broadly to mean the overall set of activities and projects in which an I.T. organization is engaged. A recent McKinsey article titled “How CIOs Should Think About Business Value” describes I.T. as adding value to an organization at two complementary levels. The “core asset value” of I.T., consisting of hardware and software, creates tangible asset value for an organization. I.T.'s “value in use” varies from organization to organization and is a measure of how well I.T. is leveraged to enable core business strategies. The term “I.T. in use” here again is defined as a holistic set of I.T. activities.

The problem with such a sweeping definition is that in most medium to large sized organizations there are usually two parallel streams of strategic thought and planning occurring inside I.T. There is “application strategy”, the long-term planning of an organization's software landscape. The capabilities of software will serve to enable key business processes, so application strategy is often seen by business executives as being synonymous with I.T. strategy. But where will the software run? How cost effective is an organization in its execution of the activities required to support software functions? Answering these questions is the goal of “infrastructure strategy”. Infrastructure strategy is the long-term planning of technology investments such as networking, storage and data center design. The infrastructure investment portfolio should be closely aligned with both an organization's business objectives and its long-term application strategy. Collectively, application strategy and infrastructure strategy should coalesce to engender value creation from I.T. Due to the different skill sets involved, the two I.T. domains are generally planned separately. It is up to the I.T. infrastructure leader to ensure that his or her investment plans are geared towards building an I.T. infrastructure that is synergistic with the application and business landscape.

Infrastructure strategy rarely gets discussed at joint planning sessions between business and I.T. leaders. Metaphors such as “The Cloud” and “The Grid”, as well as analogies such as “Utility Computing”, have all added to the notion that technology infrastructure is a commodity with little intrinsic value. In the McKinsey article “Where I.T. Infrastructure and Business Strategy Meet”, the authors suggest that thinking of I.T. infrastructure as a commodity is a mistake. McKinsey suggests that the individual pieces of an infrastructure, such as storage devices and networking components, may be commodities. However, the manner in which these technologies are designed, integrated and managed combines to form a whole that is greater than the sum of the parts. It is incumbent on the I.T. infrastructure leader to help his or her business leaders “see” the vision of how the infrastructure strategy will generate business value. So how can the infrastructure manager clearly represent the relationship between technology investments such as storage arrays and business objectives such as supply chain execution? A good tool for connecting the infrastructure and business dots is the use of “Strategy Maps”.

A strategy map is a tool developed by Harvard Business School professors Robert Kaplan and David Norton, the developers of the Balanced Scorecard approach to business management. The balanced scorecard evaluates an organization's performance by examining key performance indicators in four perspectives: financial, customer, internal, and learning & growth. You can learn more about the balanced scorecard here. Strategy maps can be considered a method for visually displaying the set of organizational activities to be executed in each perspective and the impact these processes will have on each other. You can find a detailed discussion of strategy maps in Kaplan and Norton's book “Strategy Maps”. At a high level, strategy maps can provide you a one slide representation of how your I.T. infrastructure strategy ties directly to your business strategy. I will walk through an example strategy map explaining how the various pieces relate. You can download the example map here.

When Kaplan and Norton discuss the placement of I.T. assets on a strategy map, they classify systems into four categories: transactional, transformational, analytical and infrastructure. In my example, I focus more on mapping infrastructure investments to business strategies than on classifying the infrastructure investments into categories. The goal is to show how your seemingly unrelated technology infrastructure and business strategies complement one another. At the top of this simplified example you have a clearly stated business goal of growing U.S. market share by 10 percent by 2012. Directly under that goal are four operational initiatives that the business has defined as paramount for achieving it. This example shows an application level strategy focused around SAP and the SAP suite of business applications. For each business level objective, a corresponding SAP application is identified as mapping to the features necessary to achieve the goal. For example, in order to leverage a skilled sales force, an organization will need a human capital management system for skill development, talent identification, training and performance appraisals. Similarly, excellence in supply chain execution will require a system such as SAP APO to streamline warehouse, production and logistics functions. The concept of mapping systems to business needs can be applied to any vendor suite as well as custom developed applications. The bottom part of the example strategy map serves to connect specific I.T. infrastructure investments to the application and business layers. In this example, certain infrastructure investments such as consolidated storage and virtualization will serve to enable features across the entire application portfolio. For example, in an SAP environment, production systems are regularly copied to quality systems in order to accommodate testing of new features against up to date transactional data sets. The interrelated nature of SAP environments often necessitates what is termed a federated system copy, meaning that all quality systems (HCM, APO, etc.) will need to be refreshed at once. The speed and agility with which this can be done will directly impact the speed with which new development can be tested and moved to production. The time it takes to move new features to production impacts the time it takes the business to realize strategic benefit. Virtualization will have a direct impact on an organization's ability to scale its SAP based operation while maintaining a steady level of OPEX spending on servers. Large ERP landscapes such as SAP and Oracle often result in server sprawl, as each landscape requires multiple systems for development, test and production. Virtualization will help provide those landscapes, and the business value they create, at a competitive cost. Initiatives such as strengthening business relationships and implementing a more integrated supplier network will necessitate targeted infrastructure investments such as enhanced network edge security. Mapping investments in technologies such as intrusion detection systems and firewalls to a strategic business imperative clarifies their relationship.
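The layered relationships in the example map can also be written down as plain data, which makes the bottom-up question, "which business initiatives does a given infrastructure investment ultimately serve?", mechanical to answer. The initiative and system names below loosely follow the example map; SAP SRM for the supplier network is my own hypothetical addition:

```python
# The example strategy map expressed as a simple data structure:
# business goal -> operational initiatives -> enabling application
# -> underpinning infrastructure investments. Names are illustrative.

strategy_map = {
    "goal": "Grow U.S. market share 10% by 2012",
    "initiatives": {
        "Leverage a skilled sales force": {
            "application": "SAP HCM",
            "infrastructure": ["consolidated storage", "virtualization"],
        },
        "Excel at supply chain execution": {
            "application": "SAP APO",
            "infrastructure": ["consolidated storage", "virtualization"],
        },
        "Integrate the supplier network": {
            "application": "SAP SRM",  # hypothetical choice
            "infrastructure": ["network edge security"],
        },
    },
}

def initiatives_served(investment):
    """Walk the map bottom-up: which business initiatives does a
    given infrastructure investment ultimately serve?"""
    return [name for name, node in strategy_map["initiatives"].items()
            if investment in node["infrastructure"]]

print(initiatives_served("virtualization"))
```

Even this toy traversal makes the commodity-infrastructure objection easier to rebut: every line item in the infrastructure budget can name the business initiatives it carries.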

The decisions you make around your I.T. infrastructure strategy can either serve as a conduit for business value creation or as an impediment to it. It is important that as an I.T. infrastructure manager you help your business leaders understand how seemingly commodity technology investments relate directly to business strategy. The use of strategy maps provides a clear visual representation of the relationship between business strategy and I.T. infrastructure capabilities. This mapping should also help I.T. infrastructure managers think clearly about their own strategy development, focusing on business value rather than bits and bytes.

Saturday, May 9, 2009

Nothing New When Evaluating Web 2.0 Technologies

I have recently finished reading "Groundswell: Winning in a World Transformed by Social Technologies", in which Forrester researchers Charlene Li and Josh Bernoff outline the set of strategic considerations businesses face when evaluating the broad category of technologies labeled Web 2.0 tools. Groundswell provides a tangible set of adoption guidelines for businesses looking to leverage technologies such as blogs, wikis and social networks, both for customer facing applications and for use inside the organization. At the end of Chapter 2, "Jujitsu and the Technologies of the Groundswell", the authors outline what they call the Groundswell technology test. The test consists of several points to consider when evaluating the adoption of a new Web 2.0 technology. Taken together, the points provide a litmus test designed to tease out technologies that will have staying power from those that may fail to gain significant adoption. While the authors provide this framework in the context of evaluating the new set of Web 2.0 tools, the considerations are no different from what should be considered by any technology director evaluating any new technology in his or her enterprise. This post will show how Web 2.0 evaluation techniques are in many ways the same techniques that you as a technology manager should have been using all along. The technologies change; good practice and the principles of thoughtful consideration are timeless.

The Groundswell Technology Test

1. Does it allow people to connect with each other in new ways?

The principle here is that if a tool allows people to interact in new ways that are interesting, then it has the potential to gain widespread adoption. As people will want to use this interesting new interaction medium, the technology will spread virally as existing users recruit new participants. This should sound familiar to any technology director who has managed a large scale technology deployment in his or her enterprise. Let's use Voice over Internet Protocol (VoIP) as an example. By allowing phone calls to be placed over the internet or over your organization's private data network, VoIP offers huge cost reduction. The key to engendering these cost savings, however, is user adoption and large scale participation. As you plan to introduce VoIP in your enterprise, you have to figure out how to foster excitement over the new medium and how to introduce features that give your users new tools and communication options they will want to use. If the tools allow folks to interact in ways they could not before (e.g. video calling, instant messaging, click to dial), then they will gain adoption in the organization. Give folks something new and useful that they want to use and you are likely to be successful.

2. Is it effortless to sign up for?

The principle here is that most Groundswell technologies are free and can be accessed through existing means. Folks can sign up for accounts on services such as Facebook or MySpace at no cost and access these mediums through their existing PCs or even their mobile phones. People don’t have to think much about using the new technology; it is intuitive and available where they already spend their time. To tie this point back to traditional enterprise technologies, let's think through the implementation of Business Intelligence (BI). One of the biggest barriers to gaining acceptance of a new BI solution in your organization will be the technology's ease of use. When we say BI must be "effortless to sign up for", this means it must not require extensive training, the use of completely new tools or time spent away from traditional work activities. To ensure success, BI tools must leverage the existing skill sets of users (e.g. Microsoft Excel) and be available where they are, such as through embedded links in commonly used Microsoft Office applications. BI must be "free" in terms of the psychological impact on your end users.

3. Does it shift power from institutions to people?

The new breed of social networking technologies allows people to draw on information from the collective group, information that is not pushed downward from institutions with vested interests. It is true that information is power. Any technology that shifts information from the few to the many ultimately has the potential to weaken the perception of superiority of those at the top. This attribute, the ability to garner information that gives oneself the appearance of being knowledgeable and informed, is a critical aspect that will drive the acceptance of any enterprise wide technology. If by adopting a new inventory control system a middle manager is able to have direct insight into inventory numbers and appear more knowledgeable, he or she will certainly use it. BI is a great example of a technology that makes corporate information readily available, allowing folks at all organizational levels to glean insights into the corporation's true performance. Give people the ability to get their hands on information that helps them appear smarter, gives them a chance at promotion or makes them feel more important, and they will take the opportunity. So when looking at that new warehouse management system, ask yourself, "will I give my warehouse manager information that maybe only his superior has today?" If the answer is yes, he or she will jump at the system.

4. Does the community generate enough content to sustain itself?

In social networking, users have to have a reason to come back. Services like Facebook and MySpace long ago reached the tipping point of sustainable content generation. Services like Wikipedia have thousands of contributors adding new content every day, making it a vibrant and sustainable service. As the authors of Groundswell point out, services such as social networks and wikis often need a kick start; content needs to be infused into the service to generate initial momentum building towards a self-sustaining existence. When evaluating enterprise technologies such as VoIP, OLTP systems or any other line of business system, the same question of content should be considered. If you are implementing a point solution meant to address the needs of a specific department, then users will likely get value from the new technology without total enterprise adoption. Larger systems such as enterprise directory services or ERP systems will only generate their true value when a significant number of users or business units are on board. So when evaluating new technologies for your organization, ask yourself to what degree you will need user adoption across the enterprise for the technology to become vibrant and integrated into everyone's work routine. Systems that touch a significant number of employees in the course of normal business are likely to become self-sustaining out of necessity. While systems such as ERP will survive for a certain time period out of necessity even if they provide little content back to the end users, over time satellite systems will emerge to fill the content gaps. Users will start to wonder why they are using two systems, and the original ERP system's value will be questioned. If users don't see others using your system, and more importantly if decisions are not being made based on the content your system provides, users will not come back.

5. Is it an open platform that invites partnerships?

With respect to Web 2.0 technologies, the authors of Groundswell correctly point out that tools which tap into the collective creative power of the broad technology community will evolve quicker than those that don’t. Companies like Facebook that open their platform to developers will see new applications being built for their service and find users adopting it in new and creative ways. The concept of innovation through open access should be applied when evaluating any large scale enterprise platform, be it ERP systems, VoIP systems or storage systems. Vendors that provide APIs for their software and partner with Independent Software Vendors (ISVs) to allow new products to be built around their core platform will see their products flourish. You as a technology manager will benefit from having more options around supporting technologies like backup systems and system administration. Systems do not need to be totally "Open Source", meaning that anyone can modify the source code. Systems that guard the code driving their core business functionality but provide access to that functionality to external vendors through interfaces or web services will give your organization a broader range of opportunities and accommodate a more flexible business environment.
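Taken together, the five questions amount to a checklist you can run any candidate technology through. A sketch follows; the question wording comes from the test above, while the flag-anything-answered-no scoring rule is my own simplification, not the authors':

```python
# The five-question Groundswell technology test as a checklist.
# Question wording follows the test above; the scoring rule
# (flag any "no" for discussion) is a simplification of my own.

GROUNDSWELL_TEST = [
    "Does it allow people to connect with each other in new ways?",
    "Is it effortless to sign up for?",
    "Does it shift power from institutions to people?",
    "Does the community generate enough content to sustain itself?",
    "Is it an open platform that invites partnerships?",
]

def evaluate(answers):
    """answers: list of five booleans, one per test question.
    Returns "promising", or the list of questions answered no."""
    flagged = [q for q, ok in zip(GROUNDSWELL_TEST, answers) if not ok]
    return "promising" if not flagged else flagged

# A hypothetical internal wiki rollout: strong on new connections
# and openness, weak on self-sustaining content.
print(evaluate([True, True, True, False, True]))
```

The output is the list of questions that deserve a hard look before committing, which is really all a litmus test like this is meant to produce.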

The new wave of Web 2.0 technologies such as blogs, wikis and social networks does require careful consideration prior to implementation. The considerations, however, are not vastly different from those you should be familiar with through your experience evaluating more traditional enterprise technologies. As mentioned earlier, the technology changes but the core management principles remain the same. Understand the nuances of emerging technologies such as those discussed in Groundswell, but don’t lose sight of the core principles that have served you well.

Friday, April 24, 2009

The Impact of Current Wireless Trends on Mobile Field Operations

There has been a lot of press lately around the concept of open access to broadband wireless networks. Much of this was fueled by the Federal Communications Commission (FCC) decision to reclaim and auction off a portion of the 700 MHz radio band. When Google put their hat in the ring as a potential bidder and at the same time started hyping their open source mobile operating system known as Android, the media hype hit a fever pitch. At roughly the same time the 700 MHz drama was unfolding, QUALCOMM was working on a less hyped wireless broadband modem dubbed Gobi (Global Mobile Internet) that allows for connectivity to multiple broadband carrier networks. Gobi is a shift from traditional broadband cards, which were programmed by the manufacturer to communicate on a particular carrier's wireless frequency. So, in the past you had to decide which carrier you wanted to commit to before ordering your device; with Gobi that is no longer true.
There is no lack of press on what these developments mean to the fleet of mobile information workers dashing from hotel to airport lobby. But what does all this fast paced, sexy wireless technology innovation mean to technology managers responsible for large fleets of mobile, blue collar field workers using devices from non-consumer hardware manufacturers such as Intermec? What will be the impact on cost structures for the technology that supports direct store delivery (DSD), shipping, field service repair and the like? I try to give some guidance on these points below.

How Does Open Access Help My Daily Operations?

The short answer (and the one always favored by consultants) is that it depends. Let me first briefly define what open access really means. When the FCC decided to auction off a portion of the 700 MHz frequency, it attached a small catch: whoever won the bid would be required to allow ANY device to connect on the frequency. This has been compared to the Carterfone decision made by the FCC in 1968, which ordered the then telecommunications monopoly AT&T to allow consumers to buy any handset to hook to the AT&T network. Prior to the Carterfone decision, the rotary handsets in homes were rented equipment from AT&T. So, companies like Verizon Wireless, who in the past had tight control over the devices that connected to their network, would now be forced to loosen their standards. This means that any device that passes a set of standard tests outlined by the carriers would be allowed to connect. This is expected to open up a new round of innovation in devices such as water meters and vending machines, giving producers of these devices new communication options and the ability to build services around those options. So, the open access movement may in fact produce smarter field equipment that enables businesses to cut costs. Time will tell. What is the impact of open access on the current set of mobile devices you have in the field today being used for daily operations? Not much, nothing really. It is conceivable that over the next five years the open access of 3G networks will make it attractive for niche hardware players to get into the ruggedized device market. So perhaps as you go to negotiate your next round of major hardware purchases you will have more options. The reality, however, is that the buzz around open access to 3G networks is really overplayed with respect to traditional workhorse devices such as Intermec and Symbol. As you can order most of these devices today programmed to connect to the carrier of your choice, any real impact on your choice of devices in this market niche is probably five or more years away.

Gobi Devices are Coming

In late 2009, major manufacturers of rugged mobile field devices may introduce Gobi broadband cards, allowing you to choose your provider by reprogramming the card. Now this is big news. Let's compare this to how things are done today. Today, you go through an analysis of which major carrier has the best average coverage across all of your field locations or service areas. You pick the one you think has the best coverage and negotiate contract terms. Once you agree on a wireless contract, you order your devices with that carrier's chipset. Let's say your wireless contract is two years. At the end of those two years you have hundreds or maybe thousands of devices in the field serving daily operations. Are you really likely to switch out all these devices to move to another carrier? No, and trust me, your carrier is well aware of that fact. After your initial negotiation you lose a ton of leverage when it comes time to renew. With the Gobi wireless cards, that all changes. Switching providers becomes much less painful, so you maintain some of your leverage in subsequent contract talks. You are also shielded against situations where carriers merge or get sold. Or maybe a new carrier comes online in a particular part of your geography that you want to use at that one warehouse; with the Gobi cards you can easily switch devices to that carrier. So, what impact does this development have on your current operations? Again, the answer is not much. The Gobi cards should absolutely factor into your next round of hardware purchases, however, and if you are considering making a big purchase now it might even be a good idea to wait a few months.


Despite all the current media hype around open 3G network access and the Google mobile operating system, the true impact on your current field operations is little to none. It is not likely that the open access rules enacted by the FCC will have a short-term impact on your procurement or platform decisions with respect to ruggedized field devices. The opportunity for long-term innovation does exist, however, and should be monitored. The advent of the Gobi technology from QUALCOMM will have a significant future impact on the flexibility you have with respect to wireless broadband carriers. Again, Gobi technology will hit the market in ruggedized devices in late 2009. If you are considering a hardware purchase now, the benefits of Gobi are great enough that you may want to consider waiting it out.

Sunday, April 19, 2009

Beyond the Hype of ROI: A Real World Model

How many times have you read vendor literature promising that their technology would “decrease TCO and maximize ROI”? All technology managers have, at some point, been promised Return on Investment (ROI) should they choose to invest in a particular technology. We all know that ROI is a “good” thing and that our business owners and senior management like it. Most technology managers understand in general what ROI means in that it affords their organization a chance to recoup or gain money by investing in a particular technology. What many technology managers do not know, however, is that ROI is not a purely theoretical concept. ROI is a specific number derived through a numerical analysis of the costs and benefits of a particular technology. My goal in this article is to give technology managers a high-level understanding of what exactly ROI is and to arm them with the facts to ask pointed questions of vendors who spout flowery prose to their management teams. I will work through a simple example, explaining the important concepts at a high level. At the end of this article I also point you to a blank spreadsheet model that you can use to do your own ROI analysis. This is a model I have used many times, and it has served me well with respect to cutting through vendor hype.

It’s All About the Journey

Before we start working through the example I want to emphasize a critical point. The actual number that pops out the bottom of the ROI model is much less important than the interactive conversations you will have with your stakeholders while deriving the numbers to plug into the model. By forcing folks to sit down and think through the potential benefits and possible pitfalls of a particular technology investment, you can be sure your decisions will be well thought out. The hours you will spend with folks debating things like “How much time will we actually save as a result of this technology?” or “Are we introducing cost elsewhere in the organization as a result of this investment?” are the most valuable dimension of this process. The numbers are important, yes. However, getting agreement and buy-in on the assumptions being made to derive the numbers is absolutely critical to success.

Finance 101 in Ten Minutes or Less

You are a technology person, not a bean counter. You speak TCP/IP and .Net, not debits and credits, I get it. The cold fact is however that the folks in your organization holding the purse strings are Finance people. If you want to gain their respect and trust you need to make an attempt to speak their language. After all, you should never be making technology investments for technology’s sake. Every technology investment should have a solid business driver and a financial benefit for your organization. If you have made it to the level of Technology Manager or Director you have probably figured that out by now.
There are three main documents that serve to explain the financial performance of an organization: the Income Statement, the Balance Sheet and the Cash Flow Statement. In order to execute an ROI analysis you do not need to know the gritty details of these documents, but you do need to know at a high level what each is used for. While you won't directly use these documents, your ROI model is ultimately demonstrating the effect your proposed technology investment will have on each. Here we go; I know this is not as exciting as reading a networking white paper or debugging C# code, but stick with me….

Income Statement - At a high level this document shows the difference between revenue and cost. The result is a figure known as Net Income. Net Income is NOT the same as cash, because some of the “cost” items on the Income Statement do not represent real money being paid out (e.g. depreciation). Just remember that Net Income is the difference between revenue and cost but does not represent the amount of cash a company has.

Balance Sheet - When you think about the Balance Sheet, think assets. Assets are the things a company owns. The Balance Sheet also lists liabilities, the things a company owes. At a high level the Balance Sheet shows the difference between assets and liabilities: if a company sold everything it owned (its assets), would it have enough cash to cover everything it owes (its liabilities)? All you really need to know in terms of ROI analysis is that the Balance Sheet is where your fixed assets such as servers and networking gear will go.

Cash Flow Statement - Cash is king, and the Cash Flow Statement essentially defines the size of your organization's throne. As the name implies, it shows the flow of cash in and out of an organization during a time period. For our purposes here, and for your purposes as a technology manager, just remember that the Cash Flow Statement shows how cash rich an organization is.

OK, that was rough. In the next section we will start working through a tangible example and you will truly see that only a basic understanding of the documents outlined above will carry you through a detailed ROI analysis of your technology investment.

The ROI of Virtualization

Let’s work through an example. In order to follow along you can download the PDF of the analysis here. The link to a blank version of the spreadsheet model is available at the end of this article. OK, let’s first look at the impact your virtualization investment will have on your organization’s Net Income.

The Income Statement

There will be both benefits and costs to your investment, represented in the model as Increased Operating Profit and Increased Operating Cost, respectively. The Increased Operating Profit section is where you list the financial benefits of your virtualization investment: items that will reduce expense in areas of IT operations. As a practical matter, you should not enter these numbers directly in the ROI model. In the blank model you will find a tab called work. Use that area as your scratch pad, enter all your numbers there and have them roll up to your model. Remember, a huge part of the benefit you will receive from this process is in debating the actual amounts for each benefit. You will spend most of your time in the work tab actively debating what you will really get out of your investment. The Increased Operating Cost section is where you list the costs associated with your investment.
The cost section has two parts. The Depreciation cost is a calculated figure derived from your investment in fixed assets (described below). Basically, you do not need to enter anything for the depreciation line item; it is calculated for you in the model. Just in case you are curious, depreciation is an accounting method for matching the realization of an item's expense to the realization of its benefit. The costs you want to enter on the work tab and actively debate are the ones that populate the SG&A (Selling, General and Administrative) line items. These costs are NOT the one-time costs of buying the hardware and software needed for your project. They represent increased run rates in your IT operations that will be engendered by your investment. For example, you may incur additional yearly fees, such as conference expenses, that you did not have prior to the investment.
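To make the depreciation line item concrete, here is a minimal sketch in Python. It assumes simple straight-line depreciation with no salvage value; the actual spreadsheet may use a different schedule, and the five-year life and dollar figure below are illustrative assumptions only.

```python
# A minimal sketch of the model's depreciation line item, assuming
# straight-line depreciation with no salvage value. The five-year life
# and the $150,000 purchase are illustrative assumptions.
def straight_line_depreciation(fixed_assets, useful_life_years=5):
    """Equal yearly depreciation charge for a fixed-asset purchase."""
    return fixed_assets / useful_life_years

# Depreciating $150,000 of virtualization hardware over five years
# yields a $30,000 charge on the Income Statement each year.
yearly_charge = straight_line_depreciation(150_000)
```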
The final line items of the Income Statement section are EBIT, Tax and Net Income. Earnings Before Interest and Tax (EBIT) is an accounting measure meant to capture a company's true operating profit before money is either added through investment income (e.g. income made through interest on savings accounts rather than through actual business operations) or taken away by non-operating expenses such as tax. In the model, EBIT is a calculated figure equal to operating profit minus operating cost. The model then subtracts a standard 35% U.S. corporate tax rate from the increase in operating profit created by your project (Uncle Sam has to get his piece). Your organization's tax rate may be lower; you can usually get it from either your controller or your CFO. The difference between EBIT and Tax is Net Income. Net Income represents, from an accounting perspective, the yearly benefit your technology investment will have on your organization's performance. Of course, a positive impact on Net Income is good, but it is not the ultimate factor that should drive your investment decisions. Remember earlier I said cash is king? The reason I emphasize that Net Income shows the benefit of your investment from an accounting perspective is that you should ultimately be concerned with the impact of your investment on your organization's cash flow. What good is a positive Net Income if you don't have cash to pay the light bill? The rest of the model is concerned with getting at the actual increased cash flow your investment will provide.
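The Income Statement arithmetic described above can be sketched in a few lines of Python. The function name and every dollar figure here are illustrative assumptions, not values from the actual spreadsheet model:

```python
# A sketch of the Income Statement section: EBIT is increased operating
# profit minus increased operating cost (SG&A plus depreciation), tax is
# taken on EBIT, and Net Income is what remains. The 35% tax rate and
# all dollar figures are illustrative assumptions.
def income_statement(operating_profit, sga_cost, depreciation, tax_rate=0.35):
    ebit = operating_profit - (sga_cost + depreciation)
    tax = ebit * tax_rate
    net_income = ebit - tax
    return ebit, tax, net_income

# $100,000 of yearly benefit, a $20,000 new SG&A run rate and a
# $30,000 depreciation charge leave $32,500 of yearly Net Income.
ebit, tax, net_income = income_statement(100_000, 20_000, 30_000)
```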

The Balance Sheet

In our ROI model, the Balance Sheet section consists of one line item: fixed assets. This is the amount you are asking for with respect to your technology investment. The fixed asset number is generally the amount of CAPEX (capital expense) you will be asking your CFO to approve for your project. Again, as a practical matter, you should generally populate a separate tab with your investment costs and have them roll up to the model so that you can easily manipulate things as you negotiate with your vendor (you will negotiate with your vendor, right?). The fixed asset number drives the calculation of depreciation in the Income Statement section, and you will see it again as Net Capital Spending in the Project Cash Flows section below.

Project Cash Flows

OK, this is about to get a little hairy, so stay with me, because we are now getting to the heart of the whole ROI subject. Most of the line items in this section are auto-populated from figures earlier in the model: EBIT, Depreciation and Tax are pulled directly from preceding values. Operating Cash Flow is a derived value equal to EBIT plus depreciation minus tax. Why? Remember, EBIT is what the company made before adding interest or taking away tax, and it includes the depreciation figure. Depreciation is an accounting entity, NOT a true cash expense. That is a key point. We are concerned with the amount of actual currency we are taking in and paying out, cash money! So, to get at the true cash being generated we have to take depreciation out. We take depreciation out by adding it BACK to EBIT because, remember, we took it away before as an expense in the Income Statement. Now we take away the amount that we paid for tax, because that is real cash (try giving Uncle Sam an accounting entity…). What we come out with is Operating Cash Flow, the TRUE amount of cash that your investment will generate (or consume) for your organization every year. This is a critical figure, especially for smaller organizations or for those with long accounts receivable cycles (e.g. organizations that take a long time to make a product and get paid for it but have to incur expenses while all that is happening; think Boeing making airplanes). The last two line items of the Project Cash Flows section are Net Capital Spending and Increase in Net Working Capital. Net Capital Spending represents the amount of cash your organization will spend on your investment in a year. For the first year it is the same as the amount you plan on spending on fixed assets. If your investment calls for any additional fixed asset spending in subsequent years, this is where you would show that spending.
If you put in additional capital spending after your initial investment, be sure to adjust the depreciation accordingly to reflect the cumulative effect of the fixed asset addition. The final line item, Increase in Net Working Capital, is rarely used for technology investments. The best example is a project that will temporarily increase inventory on the warehouse floor for the duration of the project. It is assumed that the cash spent on the increased working capital in year one is redeemed at the end of the project. Don't worry about this too much; again, it is rare that a technology investment will use this line item.
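The Project Cash Flow logic above (add depreciation back to EBIT, subtract the real cash paid in tax, then subtract capital spending and any working capital increase) can be sketched as follows. All figures are illustrative assumptions:

```python
# A sketch of the Project Cash Flow section. Depreciation is added back
# to EBIT because it is not a cash expense; tax is subtracted because it
# is real cash. Net capital spending and any working capital increase
# then come out of the year's cash flow.
def operating_cash_flow(ebit, depreciation, tax):
    return ebit + depreciation - tax

def project_cash_flow(ocf, net_capital_spending, increase_in_nwc=0):
    return ocf - net_capital_spending - increase_in_nwc

ocf = operating_cash_flow(50_000, 30_000, 17_500)  # $62,500 of true yearly cash
year_one = project_cash_flow(ocf, 150_000)         # negative in year one, when
                                                   # the fixed assets are bought
```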
Great, now you have the cash flow your project will generate for your organization for each year over the next five years. It is now time for the grand finale: the introduction of the concept known as the time value of money and how it is used to calculate the ROI of your project.

The Time Value of Money

A bird in the hand is worth two in the bush, right? In the same vein, a dollar today is worth more than a dollar you get three years from now. Why? Because the dollar you get today can be invested and earn interest. So, to compare apples to apples, the value of the cash you give me today versus the cash you will give me three years from now, I have to discount those future dollars to reflect the lost interest. The way you do that is to use a formula that takes each year's cash flow and discounts it back to the value of dollars received in year one of your project. The formula is already in the model; all you need to do is plug in your organization's WACC (Weighted Average Cost of Capital) in the work tab. You can get this number from your controller or CFO, and it represents what it costs your organization to acquire capital (technically it is the weighted average between what it costs your organization to raise equity and what it costs to raise debt). At a minimum, someone in your organization will be able to tell you the typical discount rate used for ROI analysis (it is often more of an art than a science, and an organization may simply have a number it prefers to use).
Once you plug in the WACC value, all of your project cash flows will be discounted and you will see a number appear for NPV (Net Present Value). NPV is the sum of the discounted cash flows for each year in your project and gives you the total amount of cash your investment will generate in today's dollars. Thus, NPV gives you a way to compare the amount of cash you will pay today for your investment with an equivalent amount of cash, in today's dollars, that your investment will generate. In fact, when you divide the NPV by the amount of your investment, the result is ROI! ROI is expressed as a percentage and reflects the return your investment will give your organization on the money you are asking them to invest.
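Here is a rough sketch of that discounting step, assuming the convention described above in which the year-one cash flow is taken at face value and each later year is discounted one more period by the WACC (your spreadsheet's exact convention may differ; the 10% WACC and the cash flows below are illustrative assumptions):

```python
# A sketch of the discounting step: each year's project cash flow is
# discounted back by the WACC, the discounted flows are summed into NPV,
# and NPV divided by the initial investment gives ROI.
def npv(cash_flows, wacc):
    """cash_flows[0] is year one, taken at face value; each later
    year is discounted by one more period."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows))

def roi(cash_flows, wacc, investment):
    return npv(cash_flows, wacc) / investment

flows = [40_000, 55_000, 55_000, 55_000, 55_000]  # five years of project cash flow
project_npv = npv(flows, 0.10)                    # total value in today's dollars
project_roi = roi(flows, 0.10, 150_000)           # fractional return on the ask
```

Note that a dollar in year five is worth noticeably less than a dollar in year one: at a 10% WACC, the final $55,000 contributes only about $37,600 of present value.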


The goal of this article was to give technology managers a high-level understanding of the mathematical computation of ROI. I hope that after reading this article you will be more informed and in a better position to ask pointed questions of sales folks offering great “ROI”. While a Certified Public Accountant (CPA) or other finance professional would quibble with the specifics of what I have explained here, there is no doubt that this model is a solid, high-level tool for evaluating technology investments. It has worked for me for years, and I have used it to evaluate everything from storage systems to complete Enterprise Resource Planning (ERP) implementations. I cannot stress enough that the exercise of thinking critically about the benefits and costs your investment will create is the highest generator of value in the ROI analysis process. I hope you use this model to help drive meetings and discussions and to gain alignment among key stakeholders in your organization.

You can download a blank copy of the spreadsheet model here.

Tuesday, April 14, 2009

How to Manage a Team During the Adoption of Virtualization

Virtualization is one of those disruptive technologies that will no doubt change the long-term look and feel of all corporate datacenters. So it is no wonder that when you start to explore implementing virtualization you will find a plethora of technical best practices from your network vendor, your storage vendor and your application providers. The one area in which you may find a paucity of information, however, is in ensuring your internal IT team is prepared for the shift in datacenter philosophy virtualization will bring. It is easy to confuse being prepared for the philosophy shift with being technically trained on virtualization technology. While training is paramount for ensuring your team buys into the virtualization philosophy, in and of itself it does not ensure team commitment. Often, virtualization will impact so many dimensions of your datacenter operations that even trained team members may find the change in operational strategy daunting. Below are three points I have had to manage as a result of adopting the virtualization philosophy in my organization.

1. Why do we need to make so much change as a result of this? Things were working fine before virtualization…..

You will find that adopting the virtualization philosophy in your datacenter affords you a fantastic opportunity to rethink nearly every dimension of your technical landscape. Be sure to understand your organization's business strategy and where you may need to go in terms of technical infrastructure to support that strategy. Use this opportunity to align your datacenter strategy with your organization's goals and to create a scalable technology infrastructure. Be aware, however, that doing this will surely upset some folks' apple carts and cause them to question the need for all the change. Common arguments will include things like “It works like it is” and “We are introducing more complexity”. The reality is that both of these arguments may very well be true. Your team has probably done a good job building the current environment. As you build additional scalability into your infrastructure in areas such as datacenter network design, storage design and standard server configurations, operational management may indeed become slightly more complex. Don't hide from this; acknowledge your team's concerns as valid.
Marshall Goldsmith published a book in 2007 titled “What Got You Here Won't Get You There: How Successful People Become Even More Successful”. The high-level gist of the book is that folks often mistakenly believe that by continuing to do the things they have always done, they will continue to excel. And why not? It has worked to this point. The reality, however, is that the needs at the next level are often different. Circumstances change and require flexibility. This is true in terms of both business acumen and your technical infrastructure. Sure, it has worked to date, but in order to continue to be successful you have to adapt to the changing needs of your organization. Be proud of what you have done; be excited about the opportunity to do better in the future.

2. A silent indifference to best practices…..

There is a saying that goes “Standards are great because there are so many to choose from”. The same can be said about best practices. As you start your virtualization project you will find that every vendor at every point in your infrastructure will likely have best practice guidelines. There is absolutely no paucity of information here, much of which will be very detailed. It is likely that as you start adopting the virtualization philosophy, many of these best practices will initially seem irrelevant. Following detailed steps such as ensuring disk partition alignment or setting a plethora of detailed advanced parameters on your storage device will likely engender, at best, audible groans from your team and, at worst, a silent indifference. You will need to reiterate to your technical team that you are building a foundation for the future. The tweaks being made now may not impact performance for two or more years; however, getting it right now will be easier than fixing it once your entire datacenter is live. A good methodology here is to assign certain best practice readings to specific team members. Hold them accountable for presenting these practices to their peers and soliciting constructive debate on the best way to proceed. By instilling a sense of ownership in the solution being architected, you will help solidify the buy-in of your team and ultimately ensure the overall architecture is built around established best practices.

3. The end goal seems unreachable and we have many competing priorities…..

Gartner has established a well-documented model for new technology adoption known as the hype cycle. The principle is that new technologies get hyped to a peak of inflated expectations, which leads to a rush to implement without a clear vision. This hurried adoption leads to the trough of disillusionment. Once early adopters begin to make real use of the technology in a tangible way, adoption levels off and real value is gleaned from the technology. In some ways you can expect to see a similar pattern among your team with respect to the adoption of virtualization technology. Initially, as your team is getting trained and setting up your first sandbox environment, you may see a lot of excitement about the potential of virtualization. Your team may embrace the classes and spend a good deal of time learning, reading and discussing virtualization. Many lofty ideas will be debated and a general buzz may exist around the project. As you move into your first phase of actually converting certain servers, you may see this enthusiasm continue as you start picking off the low-hanging fruit in the datacenter. The tricky part comes when all the low-hanging fruit has been picked and you have to start climbing ladders to get the prize. Maybe the next round of servers can't be taken down during production hours, or there is debate on whether systems should be rebuilt or P2V'd. The team may start posing a lot of questions about how to move forward and may even get discouraged about how to plot the road ahead. Competing priorities may begin to take on more weight in the minds of team members as the initial euphoria of virtualization turns into a practical work effort. This is where you have to emphasize the importance of eating an elephant one bite at a time. Sit with your team and help plot out a detailed road map for conversion.
Set practical, incremental goals that can be reached in reasonable time frames. This will help keep the momentum going while alleviating the discouragement surrounding the seemingly overwhelming task ahead. Emphasize key areas that will demonstrate practical value to the business and make sure you publicize your team’s success regarding each critical milestone.

In summary, the adoption of a virtualization philosophy has the potential to deliver enormous value to your business. In order to realize that value you, as a manager, will have to manage many team issues that can be harder to solve than the purely technical ones. Listen to your team's concerns, help them see the opportunity that virtualization affords them, and help plot a pragmatic path that keeps them engaged and excited. Remember, once all the technology is in house and the lights are blinking, it is ultimately your team who will determine the true success of your virtualization efforts.