Structure in Threes: Metrics and Measurements

Spent the last few weeks divided between multiple projects and goals, which gave me time to think about the topic of measurements.  Often in the IT field I see great effort spent proving value or performance with the ultimate goal of justifying existence.  When I hear about a new metrics initiative, in most organizations it seems to boil down to how we can show with numbers that we’re meeting our targets.  All too often I think back to the radio show “A Prairie Home Companion.”  The closing words of the monologue are “Well, that’s the news from Lake Wobegon, where all the women are strong, all the men are good looking, and all the children are above average.”

This leads me to believe either we’ve set the bar too low or we’re measuring the wrong things.  About ten years ago I was at a senior leadership meeting at one of the well-known technology powerhouses.  One of the senior executives had, by edict, required a “serious” effort to measure performance.  The problem occurred not with the desire to measure, but with the implementation.  There was no goal other than measuring.  Thus each division, sub-division, unit, etc. identified metrics to report on: in total over 1,000 individual metrics, which cost the corporation a lot of headcount to capture, collate, massage, and report at leadership meetings.  When I asked if any of these metrics were used to manage the organization or inform executives to make better decisions, I saw a lot of staring at shoes.  I asked why measure things if you’re not going to use the measurements to inform your decisions and directions; more shoe examination.  Needless to say, I was not invited back to a senior leadership meeting for a while.

Fast forward a few years: the corporation is not doing as well as it once was and IT budgets are under fire.  I was brought back by a new leadership team, under cover; that is, I would dial in unadvertised and then consult with select members of the leadership team regarding my impressions and insights.  I think I was given this opportunity because during the previous incident I had predicted what would happen to the IT organization on its present course.  It was not a feat of great insight; it was merely a matter of connecting the dots.  In any event, I spent time with them, not specifically telling them what or how to measure, but why.  The old maxim “you get the behavior you measure” I’ve found almost true.  Often people measure what makes them look best.  Metrics are often tied to performance rewards, which eliminates them as diagnostic tools.  This is a little like the joke about the airplane captain announcing on the PA system:  “I have bad news and good news.  The bad news is we’re lost. The good news is we’re making great time.”   Eventually there will be a crash.

Which brings me back to the metrics and measurements discussion.  In establishing a measurement program, one should first identify the strategic goals and objectives of the organization, then create metrics around those.  Next, identify Key Performance Indicators (KPIs) that support those goals and objectives.  I look at KPIs a little differently than most, I guess.  I think the label says it all: these are “indicators” of performance, not the actual performance.  They are rather like road signs telling you your goal is 100 miles ahead, then 50 miles, etc., with your speedometer indicating 50 miles an hour.  Together these KPIs help you determine if you’re on the right path and operating at the right levels of efficiency and effectiveness.

Which again gets back to a metrics program.  A useful program has a mixture of “strategic” and “operational” metrics.  The strategic metrics tell you where you’re going; the operational metrics let you know you’re making progress toward your goal with your resources.
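As a toy illustration of the lesson from that 1,000-metric fiasco: before anyone is asked to collect a metric, the inventory can be checked for orphans, metrics that inform no strategic goal at all.  This is a minimal sketch with invented metric names, not a real measurement program:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    kind: str              # "strategic" or "operational"
    goal: Optional[str]    # the strategic goal this metric informs, if any

def orphaned(metrics):
    """Metrics that inform no goal: measurement for measurement's sake."""
    return [m.name for m in metrics if m.goal is None]

portfolio = [
    Metric("revenue per customer", "strategic", "grow share of wallet"),
    Metric("deploy frequency", "operational", "shorten time to market"),
    Metric("tickets closed", "operational", None),  # collected, but informs nothing
]

print(orphaned(portfolio))  # -> ['tickets closed']
```

Anything the orphan check flags is a candidate for the chopping block before it costs headcount to capture, collate, and massage.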

IT Planning and Agile

Some of the new organizations I’m dealing with are going crazy with “Agile”.  The hype cycle is in full force.  Soon Agile will be advertised to put the kids to bed on time, solve world peace, and feed the world.  The problem I see is not with Agile but with its close cousin “Addle,” which is often what I see organizations implementing (i.e., the worst of each methodology glued together).   I will point out that I am neither a zealot for Waterfall nor for Agile; it’s a tool, and like other power tools –notice the woodworking reference– it should be used carefully.  Too often I’ve seen Agile used as an excuse for poor or no planning and/or poor development practices.

If one reads the original materials, nowhere do they say don’t plan.  Actually, they indicate you need plans, just not at the level of precision and scope previously and wrongly drummed into people’s heads.  Agile recommends planning in horizons short enough that the problem doesn’t change before you finish planning (accuracy), and to a granularity (precision) that is just enough to get the job done.  Another way to look at it:

If my problem was to cross the river and you spent five years planning to build a bridge, by the time you actually built the bridge I may have crossed the river by boat and now be faced with climbing a mountain (the problem has changed).  If I want something to sit on I may want a chair, but it need not be designed and constructed to one-millionth of an inch tolerance, which would cost a significant amount.  I could probably get by with 1/32 of an inch, produced at lower cost but yielding the same level of performance I desire.  So in the end Agile is a balancing act that I believe still requires planning; just different types of planning and a whole lot of thinking.

 

Modern IT Portfolio Management: Risk: System Definition

System Definitional Risk

One of the serious risks on the architecture side is system definition. This can be partitioned into completeness (e.g., maturity level, from none to integrated), validation level (from none, through the author’s bench check, up to simulation of requirements), and requirements freshness (i.e., things change; was what was needed specified and validated too long ago?).
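A rough sketch of how these three dimensions might be rolled into a single risk score for a portfolio spreadsheet.  The ordinal scales, the equal weighting, and the one-year staleness horizon below are my own assumptions, not from any standard:

```python
from datetime import date

# Hypothetical ordinal scales; the labels are mine, not from a standard.
MATURITY = ["none", "drafted", "reviewed", "integrated"]
VALIDATION = ["none", "bench check", "peer review", "simulation of requirements"]

def definition_risk(maturity, validation, last_validated, today,
                    stale_after_days=365):
    """Score 0.0 (low risk) .. 1.0 (high risk) across the three dimensions."""
    m = 1 - MATURITY.index(maturity) / (len(MATURITY) - 1)        # completeness
    v = 1 - VALIDATION.index(validation) / (len(VALIDATION) - 1)  # validation
    age = (today - last_validated).days
    f = min(age / stale_after_days, 1.0)  # freshness penalty saturates at 1.0
    return round((m + v + f) / 3, 2)      # equal weights: an assumption

# A drafted, bench-checked definition last validated 17 months ago:
print(definition_risk("drafted", "bench check",
                      date(2012, 1, 1), date(2013, 6, 1)))  # -> 0.78
```

The point is less the arithmetic than making the three dimensions explicit so stale, under-validated definitions surface before they bite.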


Benefits Realization

Been a bit busy stoking the home fires of late.  Attended COFES; amazing conversations as usual.  This month I’ve been focused on several areas of applied business architecture research: IT Portfolio Management, Benefits Management, and Complexity Management.  All three are related to my Structure in Threes project.

  • I continue to develop the portfolio model section by section, along with a working prototype, and have started considering the technology and system to offer to the market.
  • Benefits Management this month is really a parallel track: both R&D for ensuring a portfolio action supports enterprise goals, and applied practice on the projects I’m working on at Microsoft.  The past few weeks I’ve been creating a Benefits Dependency Network for one of the subprojects.  I’ll be reviewing and revising that today with stakeholders, as well as creating a draft Benefits Management Plan to help ensure the initiatives realize the promised benefits.  Part of that will be a Results Chain Contribution Matrix, a Benefits Distribution Matrix, and a Stakeholder Management Plan.  Most of these artifacts I’ll recommend to my group for future projects.
  • Complexity Management R&D is part of the BPR/M activities at work as well as Portfolio Management R&D.  Had a great discussion with Dr. Jacek Marczyk discussing elements of complexity.  We’ll have lots more to discuss.  I like his high-level model:  Structure Elements x Uncertainty = Complexity.    I had previously separated uncertainty from the equations I was developing:  Business Process Complexity = {Information Complexity} x {Activity Complexity}, using BPMN models as the base to calculate each factor.  As of yesterday I revised the calculations from a standard node-count basis to also include network linkages between nodes in each factor.   Later this week I’ll look at how to include Dr. Marczyk’s perspective of accounting for uncertainty.  I think I may also expand on that and use some of Courtney, Day, Schoemaker, and Primozic’s research into risk and uncertainty.  They have a lot of good material that could apply to the problem space.
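A minimal sketch of that revised calculation, assuming each BPMN factor has already been reduced to a list of nodes and the linkages between them.  The equal weighting of nodes and edges, and the toy process, are my assumptions for illustration:

```python
def factor_complexity(nodes, edges):
    """A factor's complexity: node count plus network linkages between nodes."""
    return len(nodes) + len(edges)

def process_complexity(info_nodes, info_edges, act_nodes, act_edges):
    """Business Process Complexity = {Information Complexity} x {Activity Complexity}."""
    return (factor_complexity(info_nodes, info_edges)
            * factor_complexity(act_nodes, act_edges))

# Toy process: 3 data objects with 2 associations, 4 activities with 3 flows
print(process_complexity(
    ["order", "invoice", "receipt"], [(0, 1), (1, 2)],
    ["receive", "check", "bill", "ship"], [(0, 1), (1, 2), (2, 3)]))  # -> 5 * 7 = 35
```

Folding in uncertainty per the Structure Elements x Uncertainty view would then be a matter of multiplying each factor by an uncertainty term, which is the part still to be worked out.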

Structure in Threes: Budgeting and Planning -observations and muses

The beatings will continue until morale improves, or so goes the typical planning cycle each year.  Having been an observer of the process in many enterprises for 30 years, and an unwilling participant at times, I would classify these activities as anything but planning.  They are more like the high school senior prom, with all the politics and drama leading up to the final ceremonies.  Management spends hours of its own as well as staff time building a case to justify the group’s existence rather than actually looking at how and what to contribute to the enterprise’s bottom line.  Then comes the “re-planning” when funding and headcount don’t match political expectations.

It doesn’t have to be this war dance each year.  Developing a real enterprise portfolio management system could remedy this; however, methodology zealots (aka methodologists, with whom I’m sad to say I associate) often get caught up in the minutiae rather than the goals.  A joke I’m fond of telling during presentations when asked about strict adherence to any methodology might clue you into my worries about methodologies that too often become dogma:  What’s the difference between a terrorist and a methodologist? Answer: You can negotiate with a terrorist.

Much of the research and design work I’ve done over the years is now starting to converge:  process modeling (IDEF0 & BPMN), simulation and analysis of systems (System Dynamics and the Viable System Model), Digital Nervous System (AD), IT economics (ISIS, Information Economics/BEAM, REJ, and VRF), and now integrating financial portfolio management concepts with VA/VE and systems engineering concepts, with the focus on designing enterprises as one would design any other product or service.  As I near completion of my basic research for Modern IT Portfolio Management, I look forward to thinking about how to deploy and enable adoption.  Now that the basics of a digital nervous system are in place, it’s time to create the “brain of the firm,” which I see as a portfolio management system connected to the DNS, driven by people making fact-based decisions.

The one caveat to this approach is taking it too far: enterprises are composed of people, as are markets.  And people are emotional, making decisions that attempt to gratify those emotions.  Thus portfolio management needs to be applied with enough flexibility to not only accommodate this phenomenon but exploit it in the positive sense.  Then perhaps budgeting and planning will be more of a constructive than a destructive activity, creating intrapreneurs in the company.

Structure in Threes: Building vs Planning

One of the problems systemic to the design and planning world is captured in the famous quote:  “It’s done; it’s just a small matter of implementation (or programming, or etc.).”  All too often I’ve seen designers or planners consider the job done once the design is finished.  However, there is often a lot of work left to get it accomplished, even if there are no changes to the design.  This got me thinking about the terminology used in regard to projects.  To me, implementation is a larger concept than just deployment, which seems to be the typical thinking around projects, especially in IT: “The application has been installed; we’re done.”  True implementation carries with it both deployment and adoption; anything less is just dumping.

The proposed remedy is to include adoption KPIs in project metrics, not just development cost and schedule.  This would foster a greater concern that what is built is actually used.
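For instance, a project status report could refuse to claim “implemented” until an adoption KPI is met, not merely cost and schedule.  The status labels and the 80% threshold below are invented for illustration:

```python
def project_status(cost_ok, schedule_ok, active_users, target_users,
                   adoption_threshold=0.8):
    """'Deployed' is not 'implemented': require adoption, not just delivery."""
    if not (cost_ok and schedule_ok):
        return "at risk"
    adoption = active_users / target_users
    return "implemented" if adoption >= adoption_threshold else "deployed only"

# On budget and on schedule, but only 120 of 500 intended users on board:
print(project_status(True, True, active_users=120, target_users=500))  # -> 'deployed only'
```

With a rule like this in the dashboard, “the application has been installed” stops being a finish line.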

Today’s Applied Research Agenda

Today’s agenda: R&D around capacity management for services and processes.  My presentation on metrics and measurement for processes went well, but I know risk and capacity management are significant gaps in this field.  Somehow everyone has gotten the opinion that Moore’s Law will bail them out… a hope the other engineering disciplines know leads to unsustainability.  Received an old book I ordered from the Amazon marketplace [Computer Systems Performance Modeling, 1981 –Sauer] that covers some of this in regard to computer systems.  Coupled with Meadows’ Limits to Growth (the System Dynamics that Forrester introduced), I believe I can develop an approach to monitor, measure, and manage processes in more than the reactive way that is currently the industry norm.  While it will not likely be Nobel Prize-winning stuff, I think it will help make Microsoft more competitive and responsive to customer needs.
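One classic building block from that era of performance modeling is the M/M/1 queue, and it transfers directly from computer systems to process steps.  A sketch, with invented arrival and service rates, of why waiting for Moore’s Law fails near saturation:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; blows up as utilization -> 1."""
    rho = arrival_rate / service_rate  # utilization
    if rho >= 1:
        raise ValueError("demand exceeds capacity: the queue grows without bound")
    return 1 / (service_rate - arrival_rate)

# A process step that completes 10 requests/hour, offered 8/hour:
print(round(mm1_response_time(8, 10), 2))    # -> 0.5 hours
# The same step offered 9.5/hour takes 2 hours per request --
# a small demand increase near saturation swamps any capacity gain.
print(round(mm1_response_time(9.5, 10), 2))  # -> 2.0
```

Monitoring utilization against this curve is one way to manage capacity proactively instead of reacting after the backlog forms.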

Unified Business Modeling and Architecture Capability: Using a common language across multiple business tools

Started building a capacity planning model to evaluate processes I’m modeling in BPMN.  Contacted a colleague who shares the Model-Driven process execution vision, with the idea of building a capacity and financial modeling add-on to his execution engine.  I can see a real benefit in the ability to model, analyze, and execute processes using the same graphical language.  Currently the tool suite for business architects is a cobbling-together of various tools, which means multiple translations and transcriptions from one system to another.  The result is a loss of productivity and agility, increased configuration management overhead, and reduced benefits to the business.  This is similar to the problem programmers have had with using multiple tools not meant to work together.  What’s needed for the business architect is a common tool suite or framework that integrates the various models and variables, making them available for multiple uses: different types of analysis, execution, monitoring, and reporting.  I had this vision years ago during my CIO Workbench / Activity Directory conceptual design days.  Now may be the time to build it.

Structure in Threes: Process Value

About two decades ago I was fortunate enough to collaborate with several brilliant people in IBM manufacturing research.  One of them, Dr. Arno Schmachpfeffer, had coauthored a paper in the IBM Journal of Research called “Integrated Manufacturing Modeling System.”   One of the key aspects of the paper was a taxonomy of activities in a process.  I was struck by the simplicity of the taxonomy and its ability to catalog any process activity into one of four categories: Rest, Move, Make, and Verify.  After working with this taxonomy for a while, using it to catalog activities I had previously captured in IDEF0 for various BPR engagements, I came up with several simple insights:

  1. Most business ventures derive their value through the execution of one of these activities. For example, a product development firm creates most of its value through make activities, a consulting firm typically through verify activities, and an airline through move activities.
  2. Extending that insight further, one can determine the efficiency of a process by inventorying, classifying, and analyzing the ratios of the activities in the processes these firms use to create their value.  Comparing the ratio of the firm’s primary value-creating activity to the quantity of other activities provides the BPR equivalent of an asset efficiency ratio in finance.

Throughout the years –as a personal research project– I have been inventorying, cataloging, and analyzing the processes I have reengineered.  These past few months, as I started to look into valuation of services and processes, the question has come up often: how does one create a valuation for a process?  Initially I was looking for a hard formula based upon standard accounting practices.  However, after considerable application of such concepts as Activity Based Costing (ABC), I came to the conclusion that the formula may be standardized but the actual values of the parameters would change.  That is, one could use the ratio of primary value-add activities to non-value-add activities to determine the allocation of value applied to each process.
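A minimal sketch of that ratio, assuming activities have already been classified into the four categories.  The sample process and the choice of “make” as the primary value-creating activity are invented for illustration:

```python
from collections import Counter

def value_ratio(activities, primary):
    """Ratio of the firm's primary value-creating activity to all others:
    a BPR analogue of an asset efficiency ratio."""
    counts = Counter(kind for _, kind in activities)
    other = sum(n for kind, n in counts.items() if kind != primary)
    return counts[primary] / other if other else float("inf")

# (activity, category) pairs as a classified IDEF0 inventory might yield:
process = [("stage parts", "rest"), ("convey", "move"), ("assemble", "make"),
           ("weld", "make"), ("inspect", "verify"), ("store", "rest")]

# A product development firm derives its value from "make":
print(round(value_ratio(process, "make"), 2))  # -> 0.5  (2 make vs 4 other)
```

A low ratio suggests the process spends most of its effort on activities that do not create the firm’s primary value, which is exactly where reengineering attention belongs.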

While this is a simplistic approach, it enables the process analyst and the portfolio manager to work together to determine the value of services through a hierarchy without having to get too detailed in data collection.  The next step in this relative-allocation approach is to add adjustments for non-value-add activities that are required or mandated (e.g., safety and regulatory compliance).  However, a case can be made for calling such activities value-add, as they enable a firm to fulfill its mission and requirements.  Thus compliance and safety activities are feature requirements of a product or service; without meeting them, the product or service does not perform as required.

This month’s agenda is to merge my activity ratio spreadsheet with the value portion of the IT Portfolio Management spreadsheet.

Structure in Threes: Capability Models

Most of yesterday was dedicated to continuing to fix my wife’s iPhone contacts and syncing with the desktop.  By 10pm I had finally reloaded a restored copy of her contacts to both laptop and phone.  Later today it’s back to AppleCare to restore her apps.  Apple is still suggesting using iCloud to sync multiple devices, but at this juncture it’s unlikely she will trust any cloud provider.  This has taught her a lesson businesses are either about to learn the hard way or have Enterprise Architects, like myself, developing disaster contingency and continuity plans for prior to jumping to the cloud:  changes to the technology you run your business on require serious change management.  And even then, a poor implementation can require many hours to recover from.  Another lesson, or side benefit: she’s starting to see what my career really is about: enabling others, preventing crises, and recovering from crises.  The unfortunate thing about the Enterprise Architecture profession is that goals one and two are what enable goal three, but goal three is the only one that gets others’ attention.

This morning, before jumping back into iPhone recovery, I’m reading through the COBIT 5 management guide.  It’s rather odd how most standards, processes, and models now take the form of the CMMI maturity model.  While I’m a supporter of the maturity model construct, it has its deficiencies; or rather, I should say poor adaptation of the concept leaves deficiencies in the applied domain.  Often I’ve seen organizations adapt the maturity model construct for their domain of expertise, but as a marketing tool to imply gaps which their product or service naturally fills.  However, those capabilities are often not completely filled by the product or service, as capabilities have to be built with those tools, knowledge, processes, and behaviors.

The COBIT 5 generic capability model points to level characteristics, generic enablers per level, and generic enabler capabilities.  However, it appears the rating of conformance is a subjective exercise.  Maybe that is acceptable at present, as this field becomes more defined.  However, linking performance to organizational design then becomes more subjective also.  It may be that the design methodology I’m working on will have to include the concept of tolerance and performance ranges (similar to physical product engineering) for the results of selecting the various design attributes.  I’ll have a better handle on that when I encode the governance model into a spreadsheet and link it to the organizational design spreadsheet.  The results should give me more robust assessment criteria as well as a diagnostic tool for clients that will guide them to more specific areas to address rather than a shotgun approach.
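One way tolerance ranges might look in that assessment spreadsheet: each enabler rated as a (low, high) interval rather than a single point score, then aggregated into an overall band.  The 1-5 scale and the sample ratings are invented; this is purely a sketch of the idea:

```python
def capability_band(ratings):
    """Aggregate per-enabler (low, high) ratings into an overall band,
    preserving assessor uncertainty instead of collapsing it to one number."""
    lows, highs = zip(*ratings)
    return (sum(lows) / len(lows), sum(highs) / len(highs))

# Three generic enablers rated on a 1-5 scale with assessor uncertainty:
low, high = capability_band([(2, 3), (3, 4), (2, 4)])
print((round(low, 2), round(high, 2)))  # -> (2.33, 3.67)
```

A wide band is itself diagnostic: it points to the specific enablers whose assessment most needs tightening, rather than inviting a shotgun response to a single averaged score.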