Is your organization global or international? Do you know the difference? Does it matter in terms of how you plan for and carry out technology operations such as support, procurement, hosting or messaging? You should know, and yes, it makes a huge difference to the structure of your technology operations. Trying to operate a global technology strategy at a company that is truly an international entity will surely pit business needs against your technology goals, setting the stage for failure. So are you global or international? I am sure there are volumes of business school literature on this topic, but here is a simplified distinction that is immediately relevant to your technology planning efforts.
Global companies produce standard products or services that are not tailored to the specific tastes or needs of particular country markets. Usually, such products or services will not be differentiated by brand recognition and may be either commodities that serve as inputs to other products or services, or standard outputs that don't vary across markets. An example could be an organization that produces chemicals that go into other industrial products to produce a final result. While certain aspects such as specific packaging or regulatory considerations may vary from market to market, there is more commonality across markets than difference. This has a huge impact on how you are able to design and structure your technology systems and associated services. There will likely be a single set of business systems that have been slightly customized to fit the needs of specific markets. A centralized infrastructure and associated support model may function well in a global organization. Such centralized services could potentially go beyond commoditized IT services such as email hosting to include more advanced services such as centralized Service Desks. Can all technology services be centralized in a global organization? Of course not; local market needs will always dictate some degree of local flexibility. In a global organization, however, the technology synergies across operations in different regions around the world will outweigh the need for local differentiation.
International companies may operate in markets around the globe, but tend to offer distinct products or services in each market that are customized to local tastes or market requirements. Products or services such as consumer goods that rely heavily on brand recognition tend to be heavily tailored to local tastes, customs and trends. For example, a packaged meat company operating in Asia, Europe, Latin America and the US will likely have highly differentiated SKUs for each specific market. In addition to product differentiation, consumer goods often have very distinct and varying methods of distribution across global markets. For example, in the US, consumer goods may be delivered to the central warehouses of large big box retailers, who then ship products through their own logistics chains to grocery shelves. In some Asian markets, the dominance of small, local shops may dictate a Direct Store Delivery (DSD) model where product is delivered straight from the manufacturer to the store shelf. In international organizations, the need for local market flexibility will dictate a need for market-specific technology services. This does not mean there are no technology synergies to be had at international organizations. Again, commodity services such as email hosting or centralized infrastructure support, along with leveraging buying power through centralized procurement, may represent true opportunities for working globally while acting locally.
I have obviously painted this topic with a broad brush, but hopefully the point is clear. Understand the needs of your organization in terms of its operations around the world. Don't try to centralize services where local flexibility is needed in order to win in the local market. Don't leave services local when they represent cost savings or opportunities for scale if managed centrally. Understand that there are times when being global and being international are not the same, and that a misaligned technology strategy can hamper your organization's success in either case.
Tuesday, October 5, 2010
Sunday, September 12, 2010
Evaluating Staff with High Level Questions
As a Technology Director, I want all my staff to be successful. I would like to see all my managers go on to become Directors, Vice Presidents or CTOs. How can you tell if you are coaching your managers to obtain the skills they will need? What is a simple test for understanding where a particular manager may need more work? Here is a simple test: ask them a question. Before you roll your eyes and click out of here, allow me to explain.
As you move up the technology profession career ladder, both the types of questions you get asked and the types of answers you are expected to give change significantly. A Service Desk representative gets asked a myriad of specific technology questions on a daily basis. At the level of Service Desk representative, success is defined by how well the individual is able to articulate instructions clearly and provide direct input on very tangible and immediate problems. A Network Engineer may be approached by his or her manager and asked to provide a design for a specific business case. The engineer may be given a loose set of guidelines along with some specific constraints. The Network Engineer produces a solution formulated from a combination of technical expertise, past experience and critical thinking. The questions asked of the Network Engineer are less tangible and immediate than what typically confronts the Service Desk representative. While answers given by the Service Desk representative to an end user may generally be right or wrong (they fix the issue or not), answers given by the Network Engineer may not be so easily classified as "correct" or "incorrect". A technology manager responsible for more than one discipline, such as Networking and Server Operations, may get asked even broader questions by his or her Director. The Director may not be (and usually is not) looking for the specifics that went into the answer but is simply interested in the input he or she needs to guide their thinking on broad topics that affect strategic technology direction (e.g., hosting options, outsourcing options, budgeting). The Director may not get asked anything at all! The CTO or VP to whom the Director reports may have broad responsibility for business applications, development, project management and more. At the VP level, the Technology Director should be a trusted advisor who stays abreast of technology market trends and opportunities. It is expected that the Director has a strategy and direction for the organization's critical IT infrastructure that aligns with business goals. The Director will often simply be asked to present his or her plan in terms of budgeting needs, resource commitments, business disruption and ROI/TCO models. The Director is rarely instructed to look into specific technologies like desktop virtualization; however, if an opportunity exists for the organization in the area of desktop virtualization, it is expected that the Director has examined it and has a well-reasoned opinion. At a certain level, people stop asking you specific technical questions and start looking to you to be an intelligent, informed and articulate advisor. A successful Technology Director never finds himself or herself waiting to be told what to do!
So, is your Server Operations Manager learning the skills he or she will need to sit in the Director's chair? Is he or she prepared, and do they understand the types of deliverables they will need to provide? It is your job as a Technology Director to make sure you are giving your team opportunities for growth. While there may not be any immediate promotion opportunities in your organization, people will feel good about working for you if you can provide them with skills that they feel and understand will benefit them at some point in their career. So ask your managers the same types of questions you will get asked: "Please provide me with a projected capital spending budget for your area," or "Please prepare your thoughts on the best practices for our company with respect to server platforms." Some managers will find these questions strange; they will see you as being "too high level" and start to grump about how "you don't understand the specifics". These people aren't ready. If they push back on you and give answers like "I don't know the details," or if you hear "some of it is up to you," you need to coach this person. You may be surprised that some managers will provide you with very good market analysis, strategic plans and proposed budgets. These are the managers who are on the right track, and you should foster their energy among the larger management team. Hold their work up as examples and reward them accordingly. Of course, better yet are the managers who proactively provide you with insight without you specifically asking for it!
Labels: Coaching IT Staff, IT Management, IT staffing, Leadership
Monday, August 23, 2010
You Don't Understand Your IT Costs
Do you know what your IT costs are? I hear many technology Directors now saying, absolutely! They will rattle off their IT CAPEX budget for the current year and perhaps the last two or three years. They probably have a line item view of the IT P&L with a deeper view into some areas such as maintenance contracts, consulting expense and supplies. For the majority of technology Directors, however, who came up through the ranks of network engineering, database administration or something equivalent, that is where the understanding ends. Many do not understand how CAPEX ultimately ends up on their P&L (through depreciation) and will struggle when asked to classify their total IT spending into categories such as business application expense versus collaboration application expense. Ultimately, however, being able to classify your total IT spending into categories that you can then easily use to benchmark your spending levels against industry standards is fundamentally what is meant by "understanding" your IT spend. Below are three practical tips that can be used by Directors just trying to get a handle on overall IT spend or by newcomers trying to truly understand how their new IT operation functions.
Know Your Asset Classes
Every company will categorize capital spending into classes. These classes are used by accounting to group specific assets together for depreciation purposes. For example, IT may have the asset classes "PCs", "Servers" and "Software". The depreciation on your P&L is the sum of the depreciation from each of these asset classes. The more detailed your asset classes, the better; this will enable you to answer more specific questions about how your IT OPEX breaks down. For example, the total amount of your IT OPEX dedicated to PCs for 2009 may be equal to the amount of your total maintenance contracts dedicated to PC hardware and software PLUS the amount of depreciation carried on your P&L that is directly attributable to PC purchases. To know the amount of depreciation directly attributable to PC purchases, you must of course have an asset class specific to PCs. So, go to your accounting department and ask for an IT depreciation report by asset class. See if you get the level of detail you need to make sense of the depreciation on your P&L; if not, talk to your accounting manager and ask about creating IT asset classes that match your desired categorization strategy.
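To make the arithmetic concrete, here is a minimal Python sketch of that roll-up; the asset classes and dollar amounts are hypothetical placeholders rather than figures from any real P&L.

# Minimal sketch: OPEX attributable to one asset class (PCs) is the
# maintenance spend for that class plus the depreciation for that class.
# All figures are hypothetical placeholders.

# Annual maintenance spend by asset class (from your tracking sheet)
maintenance_by_class = {"PCs": 42_000, "Servers": 95_000, "Software": 180_000}

# Annual depreciation by asset class (from accounting's depreciation report)
depreciation_by_class = {"PCs": 150_000, "Servers": 220_000, "Software": 310_000}

def opex_for_class(asset_class):
    """Total OPEX attributable to an asset class = maintenance + depreciation."""
    return (maintenance_by_class.get(asset_class, 0)
            + depreciation_by_class.get(asset_class, 0))

print(f"PC OPEX for the year: ${opex_for_class('PCs'):,}")  # $192,000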
Know Your Maintenance Costs
Anyone who has had responsibility for managing maintenance contracts within a medium to large IT operation knows what a challenge this can be. You may literally have hundreds of hardware and software contracts, all with different renewal dates. Very large IT operations may have a dedicated person and a sophisticated IT management software package dedicated to keeping track of maintenance contracts. In the typical midsize IT operation, this duty falls to the technology director, who is generally armed with a self-created spreadsheet for tracking. Here are some tips. First, get a feel for which of your contracts are "pre-paid" expenses and which ones get expensed in the period they are due. Usually larger contracts (read Microsoft or SAP) get amortized over the course of a year, so from an accounting perspective they affect net income evenly across the year. The cash is of course paid when the invoice comes, but the "expense" for these contracts is spread evenly across your P&L throughout the year. This is important from a budgeting standpoint: you should create a list of your "pre-paid" maintenance contracts and have accounting start the amortization of these amounts on the first day of your fiscal year. You should classify these expenses on your tracking sheet into categories that match your asset classes. Once you have a depreciation report by asset class along with a detailed view of your large maintenance contracts, both categorized into your desired OPEX classifications, you are well on your way to being able to categorize your total yearly IT OPEX. You can't forget your smaller maintenance contracts that are simply expensed as they are received. In aggregate, these can add up to be significant, so you need to capture them in detail on your maintenance contract tracking sheet. Ask your accounting manager for a report detailing the line items that made up the maintenance contracts on your IT P&L. Generally, maintenance line items listed as "accruals" will be the "chunks" of the larger maintenance contracts being spread evenly throughout the year. There should be a detailed enough description on all other line items for you to ascertain what the expense was for (e.g., web filtering software). Classify these contracts into your desired categories as well and you should have a firm grasp on your yearly IT maintenance contract commitment.
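As a quick illustration of the amortization point, here is a minimal sketch of how a prepaid contract hits the P&L evenly across the fiscal year; the contract amount is a hypothetical placeholder.

# Minimal sketch: a prepaid maintenance contract amortized evenly across
# the fiscal year. The cash leaves when the invoice is paid, but the P&L
# sees one twelfth of the expense each month. Amount is hypothetical.

annual_contract_cost = 240_000      # e.g., a large enterprise software contract
months_in_fiscal_year = 12

monthly_expense = annual_contract_cost / months_in_fiscal_year

for month in range(1, months_in_fiscal_year + 1):
    print(f"Month {month:2d}: expense recognized = ${monthly_expense:,.0f}")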
Understand your IT GL Codes
General Ledger (GL) codes are used by your accounting department to classify expenses into the line items that will be shown on your IT P&L. For example, if you look at your P&L and see a line item called Wide Area Network, chances are your company has an IT GL code labeled Wide Area Network. As bills come in from your WAN carrier, they are "coded" to this GL account. It is important that you have IT GL codes at an appropriate level of detail that will allow you to easily classify your P&L costs into your desired IT OPEX categories. For example, if one of your desired categories for tracking IT OPEX is "Customer Support Operations", it would be ideal to have a GL code labeled exactly that and have all expenses related to that activity coded to that GL account. Avoid GL codes like "miscellaneous"; these create black holes into which IT expenses disperse, never to be understood again. Certain GL codes may be dictated by the need for your organization to report expenses in certain ways for regulatory or tax purposes. This means you will likely never see your IT P&L arranged entirely into the categories you desire. For example, you will likely always have GL codes for "Salaries", "Benefits", and so on. So you will always have some legwork to do to completely organize your P&L into your desired OPEX tracking categories, but with a set of IT GL accounts that match your goals as closely as possible, your job will be much easier.
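Here is a minimal sketch of that roll-up from GL-coded line items into your own OPEX categories; the GL codes, names and amounts are hypothetical placeholders, not a real chart of accounts.

# Minimal sketch: mapping GL-coded P&L line items into custom OPEX
# tracking categories. Codes and amounts are hypothetical placeholders.

gl_to_category = {
    "6100-WAN": "Network Services",
    "6110-Telecom": "Network Services",
    "6200-ServiceDesk": "Customer Support Operations",
    "6300-Salaries": "Personnel",
}

pnl_lines = [("6100-WAN", 18_500), ("6200-ServiceDesk", 9_200), ("6300-Salaries", 74_000)]

totals = {}
for gl_code, amount in pnl_lines:
    # Anything without a mapping is flagged rather than lost in a "miscellaneous" black hole
    category = gl_to_category.get(gl_code, "Unclassified")
    totals[category] = totals.get(category, 0) + amount

print(totals)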
With these three steps you will be well on your way to having a full understanding of your IT OPEX. Having this understanding, and being able to classify your IT expenses into categories that can then be used to benchmark your IT operations against industry norms, will gain you huge credibility among your management team and will give you a clear set of goals to manage towards.
Labels: IT Benchmarking, IT Expense, IT Finance, IT Management, IT Planning, IT strategy
Friday, August 6, 2010
Being a True Business Partner as an Infrastructure & Operations Manager
Several things I have been working on lately seem to have a common theme: defining "what is enough", whether that is redundancy, process, response time or something else. I have spent some time studying several of the popular Infrastructure and Operations maturity models as defined by industry leaders such as Gartner, Forrester or IDC. All of these models make the not-so-implicit assumption that you "should" be driving your IT department "up" the model. In Gartner's model, for example, your IT operation is not considered a "business partner" until you have ascended through all the various layers, each requiring higher levels of process discipline, scalability and business service uptime. On the surface it seems hard to argue against continuous improvement as it relates to your IT operations. I certainly believe you should always search for better ways to run your IT operation or to build your IT infrastructure; however, it seems an overgeneralization to imply that you ALWAYS need more process, more redundancy and so on. I would make the argument that to be a true partner to the business, you as an IT leader should focus on defining exactly what your business is trying to accomplish in the marketplace and target your level of IT infrastructure and operations rigor accordingly. It may be tough for the die-hard technology professional to accept that a certain degree of risk is acceptable or that it is OK to not be on the latest hardware or software. What really enables you to drive value for the business is understanding the level at which your business "needs" you to operate and ensuring that you do not operate below or ABOVE that level. Operating below an acceptable level in terms of service levels, infrastructure scalability or redundancy certainly puts your business and its go-to-market strategy at risk. Operating above where your business needs you to be may divert capital and resources away from more value-added activities whose benefit to the business outweighs the risks you may be taking in certain areas of your IT operations. So before you go to your CFO and start showing him or her charts and models outlining where you SHOULD be as an IT organization, make sure you have a clear understanding of where your business NEEDS you to be.
Saturday, July 10, 2010
Picking an SAP Business Warehouse Accelerator (BWA) Hardware Provider
My team recently completed an evaluation of the SAP Business Warehouse Accelerator (BWA) product from three different vendors, and I felt the insights gleaned during this process were worth sharing. First, if you are not an SAP customer, and more specifically an SAP Business Warehouse customer, you have probably never heard of the SAP BWA offering. The BWA is SAP's "in-memory database" solution designed to speed up complex analytical queries submitted by information workers. The way this works is that data is moved from the SAP BW OLAP database into the BWA columnar database, held in the BWA's RAM, during a load and fill phase. End user requests are directed to the BWA, which provides much faster response times than the traditional OLAP database located on regular spinning disks. SAP markets the BWA as an "appliance", which is a complete misnomer. Here is what the BWA truly consists of:
Hardware
A dedicated infrastructure committed solely to the BWA workload. The infrastructure consists of dedicated disks (in most cases in the form of a dedicated SAN chassis) plus a blade chassis and associated blades. The BWA blades are delivered as a SUSE Linux cluster with one hot spare server blade. The term dedicated is very important here. SAP is very strict on what they support from a hardware perspective. They have certified specific solutions from various hardware vendors such as HP, Dell, IBM and Cisco, with all of these solutions required to be solely dedicated to the BWA environment. Already have a SAN with space? Too bad, you can't use it. Already have a data center full of IBM or HP blade chassis with room to handle the additional capacity of BWA? So sorry, you can't use it. The entire technology solution is required by SAP to be totally segmented from your other data center workloads. A dedicated subnet is also required between the BWA and the message service associated with your BW application server logon group. The dedicated networking can take the form of a VLAN riding across your existing data center network. Not only can you not share the BWA infrastructure with other data center workloads, you can't share the infrastructure between your BWA test and BWA production environments. So you bought a BWA solution with a dedicated SAN and blade server chassis and you want to put a couple of BWA test blades in the same chassis as your production blades and leverage the same SAN as production? Sorry, that is not supported. Simply take the hardware quote you got for all that dedicated BWA production infrastructure and multiply it by two, and that's your cost for a test and production SAP BWA environment. SAP does not support any intermingling of your test and production BWA infrastructure. The notion here that SAP does not "support" a shared infrastructure between BWA test and BWA production is important. This does not mean a shared infrastructure won't work; it will work just fine. The gotcha, however, is that if you encounter performance problems on the BWA, contact SAP for support, and they discover you have shared the test and production infrastructures, you will politely be asked to separate those environments before you receive any assistance. This of course goes directly against the movement towards efficient and virtualized data centers. SAP has been slow to accept alternative delivery models for the BWA product, primarily because what it is selling with the BWA is performance. The BWA software is not cheap, but it really does deliver amazing performance results. Unlike other SAP products like CRM, where what SAP is selling is streamlined and efficient business processes, with the BWA SAP is selling a promise of enhanced query performance and thus faster decision support. Regardless, it feels like this promise could be fulfilled in a more elegant fashion that does not require IT managers to invest in infrastructure when they may already have computing capacity.
Software
SAP licenses the BWA software "by the blade". This is an unfortunate choice of terminology on SAP's part because it adds to the confusion that exists in the market about the BWA product. By the term blade, SAP is NOT referring to the physical server blades you have purchased for your blade chassis. SAP defines a BWA "blade" as a logical unit consisting of 4 GB of RAM. When you size your specific BWA environment, you will be asked to run some reports in your existing BW system and provide those to the hardware providers you have chosen to work with. Each hardware provider who offers an SAP-certified BWA "architecture" has a group who can use the BW report information to help you size your BWA environment in terms of the amount of RAM you will need to "accelerate" your specific quantity of data. The minimum BWA size is four blades, or 16 GB of RAM. So to determine your software costs, you can divide your overall BWA RAM size by four and multiply the result by the SAP price per blade. Currently, SAP does not charge for the BWA test environment software, only production.
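A quick sketch of that arithmetic, using the 4 GB-per-blade definition above; the sized RAM amount and the price per blade are hypothetical placeholders, not SAP list prices.

# Minimal sketch of the BWA software cost calculation described above.
# Sizing output and price per "blade" are hypothetical placeholders.

GB_PER_LICENSED_BLADE = 4          # SAP's logical licensing unit, per the post

sized_bwa_ram_gb = 96              # hypothetical output of the vendor sizing exercise
price_per_blade = 25_000           # hypothetical price per licensed "blade"

licensed_blades = sized_bwa_ram_gb / GB_PER_LICENSED_BLADE
software_cost = licensed_blades * price_per_blade

print(f"{licensed_blades:.0f} licensed blades -> ${software_cost:,.0f} (production only)")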
What we found when looking at different vendors for the SAP BWA hardware "appliance" is that even though SAP has a strict certification process for various combinations of blade server chassis and storage arrays, you will still get inconsistent answers from vendors on what is and is not supported by SAP. The big area we saw this in was whether you can share test and production infrastructure. Some vendors said yes, others said no. The final word came from SAP, who clearly states a shared infrastructure is not supported (again, not that it won't work). Another dimension to consider when talking to hardware vendors is how well defined you feel their respective SAP BWA product is. For example, some of the vendors we met with seemed to be trying to put together an offering "on the fly" based on their latest initiatives, stating they would simply need to check with SAP on certain configurations. Keep in mind, SAP has a stringent certification process around the BWA "appliance" offering. It should be very straightforward: a given combination of storage and servers is either certified or not, period. You should also get a comfort level around the particular hardware provider's service offering as it relates to the SAP BWA implementation. The "appliance" is sold as turnkey, meaning that you as a customer cannot install it yourself. In addition to certifying a hardware combination, a vendor also certifies a service offering for installing and configuring the BWA solution. Because you have to use this install service, it is important to ask and understand how a vendor will deliver it. Do they have a certified team or will the work be contracted to a third party? Again, the definition of the vendor's install service should be clear and straightforward, without requiring a lot of back and forth and "checking".
Thursday, June 17, 2010
Are You Over Virtualizing Your Technology Infrastructure?
In a recent keynote address at Gartner's 2010 Infrastructure and Operations Summit, Raymond Paquet, Managing VP at Gartner, made the point that consistently over the last eight years, 64% of the typical IT budget has been dedicated to tactical operations such as running existing infrastructure. A driving factor behind this consistent trend is the tendency of technology managers to over-deliver on infrastructure services to the business. Paquet makes the point that technology managers should always be focused on providing the right balance between the cost of a technology solution and the quality of service it provides to the business. As the quality of a technology service increases, the cost of that service will naturally increase. There is often still a gap between the quality of service a business requires and what is architected and built by technology professionals. The key is to understand the business's requirements in terms of parameters such as uptime, scalability and recoverability and to architect an infrastructure aligned with these requirements. Sounds simple, right? Often, however, technology professionals find themselves (usually inadvertently) swept up in implementing technology solutions that at certain levels of adoption add value to a business but at unnecessary levels of adoption serve to pull budgets and resources away from more value-added activities. Virtualization is a case in point.
The gap between the quality of technology services required by the business and the cost to deliver them can be observed today in the tenacious drive by many technology managers to virtualize 100% of their IT infrastructure. Gartner research VP Phillip Dawson points out what he calls the "Rule of Thirds" with respect to virtualizing IT workloads. The theory is that one third of your IT workload will be "easy" to virtualize, another one third will be "moderately difficult", with the remaining third being "difficult" to virtualize. Most technology organizations today have conquered the easy one third and have begun working on the moderately difficult workloads. The cost in terms of effort and dollars to virtualize the final one third of your IT workload increases exponentially. Are you really adding value to your business by driving 100% of your environment into your virtualization strategy? The recommendation from Dawson is to first simplify IT workloads as much as possible by driving towards well-run x86 architectures. Once simplified, IT workloads with low business value should be evaluated for outsourcing or retirement. Retirement may consist of moving the service the application provides into an existing ERP system, thus leveraging existing infrastructure. Workloads with high business value will likely now (after simplification) be cost effective to bring into your virtualization strategy.
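As a rough illustration only, the triage described above could be encoded as a simple decision rule; the categories and outcomes below are my own simplification for the sketch, not Gartner's or Dawson's actual model.

# Illustrative sketch: a simple per-workload decision rule reflecting the
# "simplify first, then virtualize, outsource or retire" triage described above.
# Categories and outcomes are assumptions made for illustration.

def virtualization_decision(business_value, difficulty):
    """business_value: 'low' or 'high'; difficulty: 'easy', 'moderate' or 'difficult'."""
    if business_value == "low":
        return "evaluate for outsourcing or retirement (e.g., fold into an existing ERP system)"
    if difficulty == "difficult":
        return "simplify toward a standard, well-run x86 architecture first, then revisit"
    return "bring into the virtualization strategy"

print(virtualization_decision("low", "difficult"))
print(virtualization_decision("high", "easy"))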
So be selective in the applications you target for virtualization. Be sure that all steps have been taken to simplify each application's technology infrastructure so that it does not sit in the upper third of cost and effort to virtualize. Outsource or retire complex applications where applicable rather than force-fitting them into your own virtualization strategy. Look for ways to retire low business value applications with complex technology architectures (such as moving them into an existing ERP package). The bottom line is that all technology investments and initiatives must be aligned with your organization's needs and risk tolerance. Virtualization is no different. Making unnecessary efforts to virtualize certain applications may be doing more harm than good.
Saturday, June 5, 2010
Software Licensing Models Must Evolve to Match Innovation in Computing Resource Delivery
Software licensing has never been one of my favorite topics. It has grabbed my attention lately, however, as a result of what I perceive as a gap between the evolution of infrastructure provisioning models and software vendor licensing models. As infrastructure virtualization and server consolidation continue to dominate as the main trends in computing delivery, software vendors seem bent on clinging to historical licensing models based on CPU. In today's world of dynamically allocated processing power, designed to serve SaaS delivery models and flexible business user capabilities, software providers still insist that we as providers of computing services be able to calculate exactly how many "processors" we need to license or exactly how many unique or simultaneous users will access our systems. In addition, many software salespeople use inconsistent and confusing terminology that sows confusion among business people and even among some CIOs.
Is it a CPU, a Core or a Slot?
This is where I see the most confusion and inconsistency in language, even among software sales reps. Here is a little history of how things have gotten more complex. It used to be (in the bad old days) that one slot equaled one CPU, which had one core. So, when software was licensed by the CPU it was simple to understand how many licenses you needed. You just counted the number of CPUs in the server that was going to run your software and there you had it. As chip makers continued to innovate, this model started getting more complex. For example, when multi-core technology was introduced it became possible to have a CPU in a slot that actually constituted multiple cores, or execution units. This innovation continues today and is evident in the latest six and even eight core Nehalem and Westmere architectures. So now software sales reps had to be asked, "what do you mean by CPU?"
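To see why the question matters, here is a small sketch of how the same hypothetical cluster yields very different "CPU" counts depending on whether the vendor means sockets or cores; the numbers are made up and no specific vendor's licensing rules are implied.

# Illustrative sketch: "licensed by the CPU" becomes ambiguous once a single
# socket holds multiple cores. Counts below are hypothetical.

sockets_per_server = 2
cores_per_socket = 6        # e.g., a six-core part of the era described above
servers_in_cluster = 3

if_counted_by_socket = sockets_per_server * servers_in_cluster                      # 6
if_counted_by_core = sockets_per_server * cores_per_socket * servers_in_cluster     # 36

print(f"'CPUs' if the vendor means sockets: {if_counted_by_socket}")
print(f"'CPUs' if the vendor means cores:   {if_counted_by_core}")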
Processing Power Gets Distributed
At nearly the same time that the chips inside our servers were getting more powerful and more complex, our server landscapes were getting more complex as well. Corporate enterprises began migrating away from centralized mainframe computing to client/server delivery models. New backend servers utilizing multi-core processors began to be clustered using technologies such as Microsoft Cluster Server in order to provide high availability for critical business applications. Now the question of "how many CPUs do you have?" became even more complex.
ERP Systems Drove Complexity in Server Landscapes
In the late 1990s, many businesses introduced Enterprise Resource Planning (ERP) systems designed to consolidate sprawling application landscapes consisting of many different applications for functions such as finance, inventory and payroll. The initial server architecture for ERP systems generally consisted of a development system, a quality system and a production system, each running on distinct computing resources. Most business users only accessed the production system, while users from IT could potentially access all three systems. At this point, software sales reps began speaking of licensing your "landscape", and many ERP providers offered named user licensing. A named user could access the ERP system running on development, quality or production regardless of the fact that these were distinct systems.
ERP Technical Architecture Options Drive Confusion
ERP systems still run on databases and require appropriate database licensing. ERP packages such as SAP offer technical design options allowing the databases used for the ERP development, quality and production systems to be run separately in a one-for-one configuration or in a clustered environment with one or two large servers providing the processing power for all of the system databases. With the database licensing needed to run the ERP system, there was still a breakeven point where licensing the database by "CPU" could come out cheaper than licensing the database by named user. And so as ERP implementations continued, software salespeople now spoke to technology and business managers about "licensing their landscape by named users with options for CPU based pricing based on cores". The fog was starting to set in. In fact, these options became so confusing that when asked what the letters SAP stood for, many business managers would reply "Shut up And Pay". The situation only got worse in the early 2000s when ERP providers like SAP saw the adoption rate of new core ERP systems begin to slow. At this point, the majority of enterprise organizations already had some sort of ERP system in place. ERP providers branched out into other supporting systems such as CRM, Supply Chain Management or Business Analytics. Now the "landscape" of applications became even more complex, with named users crossing development, quality and production systems across multiple sets of business systems.
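Here is a minimal sketch of the breakeven calculation between named-user and per-CPU database licensing mentioned above; all prices and counts are hypothetical and do not reflect any vendor's actual price list.

# Minimal sketch: breakeven point between per-"CPU" and named-user database
# licensing. All figures are hypothetical placeholders.

price_per_named_user = 800
price_per_cpu = 40_000
cpus_required = 4

per_cpu_cost = price_per_cpu * cpus_required
named_user_breakeven = per_cpu_cost / price_per_named_user

print(f"Per-CPU licensing costs ${per_cpu_cost:,}")
print(f"Named-user licensing is cheaper below {named_user_breakeven:.0f} users")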
Virtualization - The Final Straw
In the 2007 timeframe, a major shift in the way computing power was allocated to business systems began to appear in enterprise data centers. Virtualization technology from companies such as VMware began to gain mainstream adoption in enterprise-class IT operations. Virtualization gave rise to terms such as Cloud Computing, meaning that the processing power for any given business application was provided from a pooled set of computing resources, not tied specifically to any one server or set of servers. An individual virtual server now depended on virtual CPUs, which themselves did not have any necessary direct relationship to a CPU or core in the computing "Cloud".
Despite this colossal shift in the provisioning of IT computing power, many software vendors clung to their licensing models, insisting on knowing how many "CPUs" you would use to run their software. Even today in 2010, when virtualization has been well vetted and proven to be a viable computing delivery model at all tiers of an enterprise's technical architecture, some software vendors either insist on sticking to outdated license models or simply issue vague statements of support for virtualization. These vague statements of support often seem to be based more on resistance to changing traditional licensing models than on any clearly stated technical facts.
After a long and storied evolution of computing power delivery from single core, single CPU machines to Cloud Computing, it seems that software providers, ranging from database vendors to providers of business analytics, have been slow to innovate in terms of licensing models. Enterprise-class providers of business software such as SAP or Oracle, who claim to have embraced virtualization yet continue to either license certain product sets by CPU or issue only vague statements of virtualization support, seem to be struggling to provide innovative pricing structures aligned with the new realities of computing power delivery.
I am sure they will get there, but for now I still get emails from software sales representatives quoting prices for products “by the CPU”. Sigh…..