by Hub Vandervoort
Last Thursday night I was treated to two pleasures simultaneously: seeing the Celtics trounce the Knicks 104 to 59, and being vindicated over a blog entry by David Linthicum. What better place than a skybox full of SOA analysts and reporters to discuss "measures for improvement," given the Celts' amazing, measured improvement? Among others, Dennis Gaughan of AMR Research kindly acknowledged that my point had been taken out of context and was sound.
This all started when Joe McKendrick interviewed me (MP3) for an InfoWorld SOA Executive Forum supplement (PDF) and podcast about how Progress customers had realized ROI through their SOAs. In each of my examples, which Joe referenced in a recent blog post, the measured change in business performance was objective and illustrated by before and after measures.
Some examples used cost savings; others used regulatory compliance, revenue, risk mitigation, customer satisfaction, competitive advantage, or key IT performance metrics. The IT backlog example simply illustrated one way a customer of ours had discovered to measure agility, a notoriously hard concept to quantify. In this case it was measured, benchmarked, and tied to activities that would bring clear improvement over earlier measurements. The point was: genuine models for assessing ROI and SOA success must be contextual and concretely comparative, not an abstract equation masquerading as a universal silver bullet.
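To make that concrete, here is a minimal sketch of what a backlog-as-agility measure could look like in code. All names and numbers are hypothetical, not the customer's actual figures; the point is only that the metric is benchmarked and comparative:

```python
# A minimal sketch (hypothetical names and numbers) of turning IT backlog
# into a benchmarked agility measure: snapshot the backlog before the SOA
# initiative, snapshot it again after, and report the measured change.

from dataclasses import dataclass

@dataclass
class BacklogSnapshot:
    label: str
    open_requests: int          # change requests awaiting implementation
    avg_turnaround_days: float  # mean time from request to delivery

def agility_change(before: BacklogSnapshot, after: BacklogSnapshot) -> dict:
    """Compare two snapshots; negative percentages indicate improvement."""
    return {
        "backlog_change_pct": 100 * (after.open_requests - before.open_requests)
                                  / before.open_requests,
        "turnaround_change_pct": 100 * (after.avg_turnaround_days - before.avg_turnaround_days)
                                     / before.avg_turnaround_days,
    }

if __name__ == "__main__":
    before = BacklogSnapshot("pre-SOA benchmark", open_requests=240, avg_turnaround_days=45.0)
    after = BacklogSnapshot("12 months post-SOA", open_requests=150, avg_turnaround_days=28.0)
    print(agility_change(before, after))
    # {'backlog_change_pct': -37.5, 'turnaround_change_pct': -37.8 (approx.)}
```

Nothing about this is universal; it works only because this customer's business accepted backlog and turnaround as meaningful measures, and because both were captured before and after.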
Unfortunately, Linthicum singled out one example for his [mis]interpretation and, in labeling it 'too rudimentary,' implied that Progress is so naive it would view IT backlog as a single measure for use in all circumstances, or as the definitive measure of SOA success. Tony Baer (OnStrategies) clearly interpreted me correctly, blogging that this "can be a sure indicator of success (assuming…": not the only or best indicator, but one that, given certain assumptions, may be ideal.
While I agree with David that "the rubber meets the road in determining the ROI before you fund your SOA" and that "most business leaders are going to demand a business plan," the formulation of that plan makes a difference. It must show how change will occur in an unambiguous, quantified way, using measures that are institutionally accepted as meaningful and that can be practically obtained. Then it must show how improvement will be realized by stating the current benchmark and the beneficial change that executing the plan will bring to it. Linthicum calls this a "rear-view mirror approach," but the alternatives are less effective.
ROI assertions often lack objectivity in one of two ways: they establish a theoretical model that looks sound but offers no way to make an objective, quantitative measurement, or they compare outcomes to a hypothetical approach that will never be followed. Either way, the comparative results will never be manifest and will always be open to subjective interpretation. You must use numerical data, sensibly obtained, and measure before and after to see whether predicted results were achieved. Otherwise, the exercise is simply academic.
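Here is what that before-and-after discipline looks like reduced to arithmetic. This is a minimal sketch with invented numbers, not a Progress tool or methodology: state the benchmark, state the promised improvement, then test the measured outcome against both the prediction and the project's cost.

```python
# A minimal sketch (hypothetical numbers) of before-and-after ROI
# verification: everything here is a measurement or a stated prediction,
# so the result is not open to subjective interpretation.

def realized_roi(benchmark_cost: float, measured_cost: float,
                 project_cost: float) -> float:
    """ROI as net measured savings relative to what the project cost."""
    savings = benchmark_cost - measured_cost
    return (savings - project_cost) / project_cost

def prediction_met(predicted_savings: float, benchmark_cost: float,
                   measured_cost: float) -> bool:
    """Did the measured improvement reach what the business plan promised?"""
    return (benchmark_cost - measured_cost) >= predicted_savings

# Hypothetical example: annual integration maintenance cost.
benchmark = 2_000_000.0   # measured before the project (the stated benchmark)
measured = 1_400_000.0    # measured one year after go-live
project = 500_000.0       # cost of the SOA project itself
predicted = 500_000.0     # savings the business plan promised

print(f"Realized ROI: {realized_roi(benchmark, measured, project):.0%}")      # 20%
print(f"Prediction met: {prediction_met(predicted, benchmark, measured)}")    # True
```

If either the benchmark or the after-measurement is missing, neither function can be evaluated, which is exactly the failure mode described above.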
In his Real World SOA blog, Linthicum lays out his 'more sophisticated' model. It requires accounting for the degree of change over time, the ability to adapt to change, and the relative value of change in order to compute the value of agility to a business. However, I believe that beyond classrooms (or high-level consulting gigs), this isn't practical, for three reasons.
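To make the objection concrete, consider a hypothetical rendering of a model built from those three factors. The structure, names, and weights below are assumed for illustration, not Linthicum's published formula. Every input is an estimate someone must supply rather than a number anyone can measure, so two analysts can defensibly produce very different answers:

```python
# A hypothetical rendering (assumed structure; not Linthicum's actual
# formula) of an agility-value model built from the three named factors.
# Note that every input is a subjective estimate, not a measurement.

def agility_value(degree_of_change_per_year: float,  # estimated, 0..1
                  ability_to_adapt: float,           # estimated, 0..1
                  relative_value_of_change: float    # estimated $ impact
                  ) -> float:
    """Value of agility: how much changes x how well we adapt x what change is worth."""
    return degree_of_change_per_year * ability_to_adapt * relative_value_of_change

# Two equally defensible sets of estimates, wildly different conclusions:
print(agility_value(0.30, 0.6, 5_000_000))  # 900000.0
print(agility_value(0.50, 0.8, 5_000_000))  # 2000000.0
```

There is no benchmark to state and nothing to re-measure after go-live, so the model can never be confirmed or refuted.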
After deep involvement in delivering over 400 "real-world" SOA infrastructure projects using Progress technology, I can safely say that not one would have been cost-justified or approved this way. And if one had been, I can think of no way to reasonably measure the implementation after go-live to prove whether the results were achieved.
SOA What? Don't think there's a silver bullet for ROI assessment, and don't waste time implementing some theoretical model. Take a practical approach instead: pick measures the business already accepts as meaningful, benchmark them before you start, and measure again after go-live to prove the predicted change was realized.
Last Thursday night no one doubted the Celtics' improvement. Everyone knew, and not through theoretical comparison to what might have been had the Celtics drafted different players. Nor did anyone conclude the improvement resulted from subjective estimates of their ability to change. We knew because they won by 45 points, and we could explain their improvement over last season using field-goal, free-throw, three-point, and rebound measures: stats no one would dispute, and measures everyone agreed were appropriate for this context.