You may think your IT dollars are going toward new and exciting systems, but roughly 60 to 70 cents of every dollar go to the applications that existed when you first joined the firm.
Among the many challenges facing a firm's CIO is changing the balance between the dollars spent on maintaining legacy systems, known as KTLO ("keeping the lights on"), and the funds available to spend on new technology and solutions.
Generally, the industry-accepted ratio for this balance of new and old is in the range of 60 to 70 percent for maintenance and 30 to 40 percent for "investment projects". With the economic downturn over the last few years, the CIO has seen the investment dollars squeezed hard.
The maintenance figure of 60 to 70 percent has remained remarkably resistant to change. Technologies that promised a profound impact on the bottom line, such as code reusability, service-based architectures, and object frameworks, have failed to shift the balance, although they do provide greater capacity to manage change, in itself a huge value.
Outsourcing may have a short-term impact on costs, but it has not been shown to have any significant long-term impact on the fundamental drivers, and may in fact have the opposite effect over time. Methodologies such as Lean show promise, but can require significant investment and management resolve. There is also the difficulty of providing business management with measures that prove the investment is justified.
Approaches such as agile are very effective at drastically improving the quality of the end product, because they allow an iterative idea-development process rather than the rigid, often misinterpreted specification process of waterfall. The real benefits come from applying modern development engineering processes to automate as much of the process as possible, collapsing the time from idea to implementation (and iteration) by 40 percent or more. But while this works well for new efforts using the latest technologies, it doesn't help in the legacy world where most of the costs live. Agile approaches also imply constant iteration, yet once past the initial implementation the team often moves on to other things, and the effort reverts to a traditional maintenance waterfall approach without the knowledge of the original architects.
The Maintenance Impact
One of the issues that current cost-saving approaches are trying to tackle is the syndrome of maintenance begetting maintenance. Simply put, the rigor of the initial design and structure is lost as layers of maintenance are applied, and this pushes the application towards legacy. Also, in times of expense squeezes, maintenance can be the target of cuts that often consign many necessary changes to the “quick fix” category. Often these changes never get the “long term fix” necessary to improve maintainability.
This inevitably pushes up maintenance costs over time and moves the system toward un-maintainability, and eventually into the realm where CIOs must seek a cost-justified replacement. All of this erodes the value of the discretionary spend if part of it must go to replacing support for an existing business function with a contemporary-technology solution whose only ROI is reduced maintenance costs. This can be a hard sell, especially to a business asked to pay for something it thinks it already has.
The Technology Impact
The continuing changes in the choices of languages, development schemas, databases, middleware, and so on have a significant impact on costs. Setting standards only addresses the future, not choices made in the past. Maintaining standards is a fraught exercise, and layers of “standards” just recreate the legacy problem.
One way of mapping these is an "inverse bell curve": emerging technologies that are initially high cost (but should have an associated ROI) become commoditized and cheaper in mid-life, then grow expensive again as they enter their post-legacy phase and the knowledge itself becomes legacy. Keeping costs in line with the "commodity" part of the curve will extend a system asset's lifetime, and decisions about its future need to be made as soon as it moves into legacy.
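The shape of that curve can be sketched as a simple lookup. The phase labels and cost multipliers below are illustrative assumptions chosen only to show the "inverse bell" shape, not industry data:

```python
# Toy model of the "inverse bell curve": relative run cost of a
# technology by lifecycle phase. All figures are assumptions for
# the sake of illustration, not measured industry numbers.
RELATIVE_COST = {
    "emerging": 1.8,      # scarce skills, immature tooling (but new ROI)
    "commodity": 1.0,     # cheapest point: broad skills, stable tooling
    "legacy": 1.4,        # skills begin to thin out
    "post-legacy": 2.0,   # the knowledge itself is now legacy and scarce
}

def run_cost(base_cost: float, phase: str) -> float:
    """Scale a system's baseline annual run cost by its lifecycle phase."""
    return base_cost * RELATIVE_COST[phase]

# A system costing 100 units to run at the commodity point costs
# roughly double once it slides into the post-legacy phase.
print(run_cost(100.0, "commodity"))
print(run_cost(100.0, "post-legacy"))
```

The point of the model is the decision trigger: once a system's phase moves past "commodity", its projected run cost starts climbing, which is when replacement or re-architecting decisions should be made.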
The Effect of Outsourcing Maintenance
Keeping to the design principles by providing clear, well-documented code and using trained staff can have a significant impact on the lifetime and cost of a solution, even if the technology is "old hat." One goal of outsourcing maintenance is to hold the lid on what would otherwise be the rising costs of using "post-legacy" technologies.
Although the industry tends to focus on the gross savings of outsourcing — a U.S. programmer's salary versus a programmer's salary in, say, Bangalore — it often ignores the costs of getting the arrangement in place and operating it. Nor does anyone mention that IT has a miserable track record in managing multi-site development, especially where some of the providers are 10,000 miles away or locked into a controlling outsourcing agreement. Add to that the fact that the business context is often lost, so the business fit decays faster over time than it would if the maintenance were local to the user.
However, a structured approach to outsourcing may not only help reduce the costs of maintenance but also affect the longevity of a system, and over time this will change the dynamic of how the IT dollar is split.
The “New” Dollar Dilemma
Having dealt with the maintenance portion of the IT dollar, let's look at what remains for technology investment. Despite being only 30 to 40 cents of the initial dollar, this can still be a significant amount.
Two factors affect how well this money is spent:
- Choosing the things that have the greatest potential business impact;
- Managing the work so that the expectations are met in terms of function, time and cost.
In both cases, IT and business management do not have a great track record. The IT governance process in many firms lacks rigor and usually happens only during annual budget planning. Many choices are based on "chummy" agreements, or on which part of the business is the top earner today or has the best senior management connections.
The State of IT Governance
It is unusual to see an IT governance process that takes a balanced view of all requirements and has continuity: one that brings them to a set of common metrics surfacing those with the greatest impact and the lowest risk of failure to deliver, and then manages them throughout the entire plan cycle, creating a high degree of efficiency.
Industry efforts to quantify the efficiency lost in the actual spending of the "new" dollars, caused by poor choices, management, and execution, put it somewhere around 35 percent. Making the wrong choice, and then failing to deliver, eats away about a third of the "new" dollars. This means the IT "dollar in your wallet", which holds only about 30 cents of "investment" value, loses a third of that, leaving only about 20 cents of real value to invest.
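The arithmetic behind that 20-cent figure can be laid out explicitly. The shares below are the article's rough industry estimates, not measured data:

```python
# Illustrative arithmetic only: the 70% maintenance share and 35%
# efficiency loss are the article's rough industry estimates.
it_dollar = 1.00
maintenance_share = 0.70                 # "keeping the lights on"
investment = it_dollar * (1 - maintenance_share)       # ~30 cents left
efficiency_loss = 0.35                   # poor choices and failed delivery
effective_investment = investment * (1 - efficiency_loss)

# Roughly 0.20 of the original dollar survives as real investment value.
print(f"{effective_investment:.3f}")
```

Note how the two factors compound: trimming either the maintenance share or the delivery loss raises the effective investment value, but trimming both is what changes the picture materially.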
Moving Toward a Solution
In addition to addressing project management, quality and cost issues, enlightened firms are beginning to work on the entire IT governance process. To do this they must create what is often called a “balanced scorecard” for each new investment. This defines benefits, impacts and risks associated with each initiative. Also they must install an overall management approach that not only assesses each of these and gives approval (or not), but also tracks progress and benchmarks results to original projections.
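A scorecard of this kind can be thought of as a small data structure plus a ranking rule. The field names and the risk-adjusted scoring formula below are illustrative assumptions, not a standard; real scorecards weight many more dimensions:

```python
# A minimal sketch of a per-initiative "balanced scorecard" record.
# Fields, figures, and the scoring formula are illustrative only.
from dataclasses import dataclass

@dataclass
class InitiativeScorecard:
    name: str
    projected_benefit: float   # expected value, in some common currency unit
    cost: float                # estimated delivery cost, same unit
    risk: float                # 0.0 (safe) .. 1.0 (likely to fail)

    def risk_adjusted_value(self) -> float:
        """Benefit net of cost, discounted by the risk of failure to deliver."""
        return (self.projected_benefit - self.cost) * (1.0 - self.risk)

# Hypothetical portfolio for illustration.
portfolio = [
    InitiativeScorecard("CRM replacement", projected_benefit=5.0, cost=3.0, risk=0.4),
    InitiativeScorecard("Reporting upgrade", projected_benefit=2.0, cost=0.5, risk=0.1),
]

# Surface the initiatives with the greatest impact at the lowest risk.
ranked = sorted(portfolio, key=InitiativeScorecard.risk_adjusted_value, reverse=True)
for item in ranked:
    print(f"{item.name}: {item.risk_adjusted_value():.2f}")
```

The same record then supports the tracking half of governance: benchmarking delivered results against the original `projected_benefit`, `cost`, and `risk` entries rather than letting projections evaporate after approval.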
No simple fixes are available. Any approach employed will initially impact the dollars available. However, here are a few action items that must have a place in the solution:
- Employ a review process that evaluates each system in the portfolio on where it stands in the technology "bell curve", where the business sits in its own lifecycle, its business fit, and its cost structure, so that an overall "Buy, Sell, Hold" picture can be built;
- Evaluate outsourcing of post-legacy applications and technologies with a view to replacement or re-architecting. This will help avoid the tail of the “inverse bell-curve”;
- Tighten management and cost metrics to get the most value from outsourcing. Otherwise, in the long term the situation may become worse as the knowledge is not only geographically separated but "owned" by a third party;
- Create a consistent and rigorous project management approach that stress tests assumptions and costs against those initially defined, as well as the design and delivery components;
- Design new solutions with future maintenance in mind, applying the same levels of quality and process;
- Install a fully fledged, transparent IT governance process with a regular review cycle to monitor continuing efficiency. Projects may span multiple years and must be viewed against the backdrop of changing business needs, as well as the simple ability to deliver what was required;
- Stop wayward or “low efficiency” efforts before they become a future maintenance burden;
- Build an organizational structure that ensures decisions impacting long-term costs are taken at the right level, by people with the skills to understand the impact;
- Use consistent approaches across the managed domain, so that a real basis for comparison, and oversight, is established.
What are some IT governance challenges that your organization faces?