Working Insights: Process Analysis presentation creation and the movies

Finished my presentation deck for Wednesday’s Business Architect call and sent it out to my manager, a colleague, and other friends for comments.  The topic is business process analysis methods.  It’s a short 20-minute presentation on the benefits of analyzing processes and some methods of value for doing so.  While developing it I rediscovered what I already knew: it is difficult to compress two hours of material into less than twenty minutes.  It makes me truly appreciate movie editors and directors…it takes a lot of skill to cut things down to the bare minimum and no more.

Which has me reconsidering the BPR/M compliments I get in the form of: “I’ve never seen anyone but you be able to diagram a process on the fly while we’re discussing it” or “How can you trim these processes down so easily that there are minimum steps and each one contributes real value?”  Often I just smile and say thank you, but I really don’t think much of it; it’s just what I do.  But I guess that’s because, like editors and movie directors, I’ve spent years honing my craft, learning how to cut away what’s not necessary to accomplish the job; in one case telling a story, and in mine making a highly efficient and effective process for an enterprise.  I’m sure by now my colleague Sarah is smiling, nodding her head, and saying “of course.”

BPR/M: Tools and Techniques

I hate to start out discussing tools regarding BPR/M right off the bat.  However, it looks like I’ll need to find some of my old BPR/M applications for this latest project.  While the client’s repository can scale to a large size, the configuration management and analysis tools leave a lot to be desired.  There’s something to be said for building your own tools for the job you’re doing.  The MS Access Process DB application I built years ago and continually modify looks like it will have yet another year or two of life.  The last modification was to add ITIL packaging, enabling me to box up processes for my FEDE client’s Shared Services organization.

This latest engagement looks to be a reduced set of the same issues, without the built-in organizational resistance.  With this engagement the internal clients are already looking for a service.  What’s missing is a solid methodology to move from an IT Product Delivery to a Service Delivery paradigm.  I’ll box up all the pieces into a Service Level Package (SLP) this week to demonstrate the concept.  This may help the IT organization’s clients visualize what is to be delivered and how the pieces fit together to accomplish the results they would like.  I’ll see this afternoon whether I get buy-in from the project team.  Either way, I expect to build the SLP to make it easier to create a comprehensive solution.

Structure in Threes: Process Value

About two decades ago I was fortunate enough to collaborate with several brilliant people at IBM working in manufacturing research.  One of them, Dr. Arno Schmachpfeffer, had coauthored a paper in the IBM Journal of Research called “Integrated Manufacturing Modeling System.”  One of the key aspects of the paper was a taxonomy of activities in a process.  I was struck by the simplicity of the taxonomy and its ability to catalog any process activity into one of four categories: Rest, Move, Make, and Verify.  After working with this taxonomy for a while, using it to catalog activities I had previously created in IDEF0 for various BPR engagements, I came up with several simple insights:

  1. Most business ventures derive their value through the execution of one of these activities. For example, a product development firm creates most of its value through Make activities, a consulting firm typically from Verify activities, and an airline from Move activities.
  2. Extending that insight further, one can determine the efficiency of a process by inventorying, classifying, and analyzing the ratios of the activities in the processes these firms use to create their value.  Comparing the ratio of the firm’s primary value-creating activity to the quantity of other activities provides the BPR equivalent of an Asset Efficiency Ratio in finance.

Throughout the years, as a personal research project, I have been inventorying, cataloging, and analyzing the processes I have been reengineering.  These past few months, as I started to look into the valuation of services and processes, one question has come up often: how does one create a valuation for a process?  Initially I was looking for a hard formula based upon standard accounting practices.  However, after considerable application of concepts such as Activity Based Costing (ABC), I came to the conclusion that the formula may be standardized but the actual values of the parameters would change.  That is, one could use the ratio of primary value-add activities to non-value-add activities to determine the allocation of value applied to each process.
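The ratio approach above can be sketched in a few lines of code. This is a minimal illustration assuming the Rest/Move/Make/Verify taxonomy; the function name, the sample process, and the choice of “make” as the primary activity are my own invented examples, not figures from any engagement.

```python
# Minimal sketch: compute the ratio of a firm's primary value-creating
# activity to all activities in a process, per the Rest/Move/Make/Verify
# taxonomy. All names and data below are illustrative.
from collections import Counter

def value_add_ratio(activities, primary="make"):
    """Ratio of the primary value-creating activity to total activities."""
    counts = Counter(a.lower() for a in activities)
    total = sum(counts.values())
    return counts[primary] / total if total else 0.0

# Example: a product-development process inventoried activity by activity
process = ["make", "verify", "move", "make", "rest", "make", "verify"]
print(f"Primary value-add ratio: {value_add_ratio(process):.2f}")
```

A higher ratio suggests more of the process is spent on the firm’s value-creating work, the BPR analog of an asset efficiency ratio.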

While this is a simplistic approach, it enables the Process Analyst and the Portfolio Manager to work together to determine the value of services through a hierarchy without having to get too detailed in data collection.  The next aspect of using this relative allocation approach is to add adjustments for non-value-add activities that are required or mandated (e.g., safety and regulatory compliance).  However, a case can be made for calling such activities value-add, as they enable a firm to fulfill its mission and requirements.  Thus compliance and safety activities are feature requirements of a product or service; without meeting them, the product or service does not perform as required.

This month’s agenda is to merge my activity ratio spreadsheet with the value portion of the IT Portfolio Management spreadsheet.

Structure in Threes: One Enterprise’s Capability is another’s Function

Spent a portion of yesterday going through my project archive (two four-drawer lateral file cabinets), mostly to clear out duplicated or outdated materials, though some of it was information mining for the book.  Typically on engagements I spend a fair amount of time understanding clients’ mental models, language, and corporate culture, with the objective of operating and communicating within the organizational structure with as little disruption as possible.  This assists my clients: I go to the effort of translating concepts into their language so they don’t have to spend the cycles, which allows them to focus on exploiting the solutions and recommendations I propose and get value from them faster.  Which gets to the point of this post.

The mysterious language of strategy-speak.  Pick up any book on strategy from the past decade or two and you’ll see a discussion of the importance of Corporate Capabilities.  The problem, though, is that the definitions are rather fuzzy.  Is CRM a function, a capability, both, a software product, or now a service?  Clearly if we are discussing the “ability” of an organization to manage customer relationships effectively, we’re talking capability.  If we discuss the organizational unit tasked with performing tasks associated with managing customer relationships, it would be labeled a business function; which becomes even more confusing when we add the software products meant to enable performance of those tasks, which could be either a product or a service.  And you thought your teenager’s language was confusing.  Those that enter the discussion without context need to spend extra effort to decipher it, if they can, from the dialog.

Why bring up this semantic problem?  It becomes a critical matter when designing an optimum organizational design.  I stress designing the “optimum” design; as in product engineering, there are many possible configurations.  However, there are far fewer configurations that fit the specific enterprise’s working environment, corporate culture, and resources.  These are the high-level design input parameters which affect the design decisions.  Unfortunately, there hasn’t been a metrics-driven approach toward organizational design (something I’m in the process of remedying now through this book) that would include usage of concepts such as Taguchi’s Design of Experiments for design optimization.  However, to use such an approach one needs to standardize terms such as these, along with the attributes and metrics associated with them.  Which again brings me back to my archive and the broad spectrum of models and definitions I’m wading through to create a consistent ontology for Enterprise Design.

I had hoped to avoid the Yet Another Framework (YAF) trap, but I’m still seeing wide variance between associations’ and societies’ languages, as broad as what I experienced working on the ISO 10303 (STEP) standard.  My goal is not to create still one more framework, but rather the transformations similar to those John Zachman mentions as key when he discusses his framework.  These would not be between cells but between other frameworks’ constructs.  The point behind such activity is to make the specific representation system any enterprise uses transparent to the design effort.


These materials though will likely be in the appendix or associated supporting website for the book, as it will likely be a long term effort and would delay development of the core design methodology.

Structure in Threes – Resources

This morning I got an email from a colleague asking about capacity planning, which is not exactly the Candlestick Charting applied R&D I had scheduled for today, but he has an active engagement I’m helping him on, as I have cycles to spare during my search for a new role.  It seems the more things change, the more they stay the same.  Maybe this is why I decided to write the book.  I keep getting asked the same “how-to” questions (not in the academic sense): how to design an IT organization, or a Marketing function, or a Consulting group, etc.  It seems despite all the literature out there, there appears to be no practical hands-on guidance in this arena, or if there is, it’s disguised as something else.

His question was how to calculate resources for an IT organization.  Several years ago I built a calculator that did part of this on a smaller scale, for a limited set of services.  I had wanted to scale the methodology up; that was part of my reason for rejoining Microsoft.  The approach is actually very simple.  Most of the methodology’s high-level design is in my SharePoint Saturday presentations on Slideshare.  The basic approach is a derivative of ITIL/ITSM concepts using the modeling techniques I’ve used over the past 25+ years of engagements reengineering business processes and functions, when I get called in to fix failed BPR projects (a lot more often than these large corporations would like to admit).  Usually the failures can be traced back to other issues, which is a post for another time.  However, occasionally the technical design of the function, process, or group has been just a fantasy of moving boxes on the org chart, or a swag as to the resources needed.  After it becomes apparent the restructuring hasn’t worked as planned, maybe on the 2nd or 3rd try, management reaches out to find someone like me who has successfully done this multiple times over the decades and doesn’t plan on making the engagement his or her retirement home.  The difference between a corporate raider (Chainsaw Al) and myself is that I’m looking at how to reuse and remission resources effectively rather than cut staff to the bare minimum.  I see reduced resource requirements as new capacity to apply to initiatives that were passed over during the previous budget cycle for lack of capacity.  Enough management philosophy.

On the technical design approach, I start out with value streams and business processes, which are supported by IT Services.  I use a loose definition of IT Services as a mixture of People, Processes, and Technology, a little bit more than a subscription to a computer transaction capability.  From this root I decompose the activities and transactions down to the IT Assets (People, Processes, and Technology) needed to support the activities.  With a little modeling, predictive analytics, and simulation, a fairly robust set of scenarios [scenarios as in Scenario Planning, Royal Dutch Shell] can be created to define the range of capacity needs.  These can then be translated into a capacity model for IT Planning.  I’ve presented these concepts before to people who somehow think this is all academic theory. 🙂  However, had they spent time researching, they would have found this has been done for years.  During my days at IBM I created a lot of capacity planning models for customers to forecast mainframe utilization and growth.  The basic logic works and continues to do so.  I’m not sure why the consulting field does not pick up on this; perhaps because it is more quantitative and the field likes to give more qualitative answers.  Below is a portion of a capacity planning spreadsheet I created several years ago for a SharePoint Shared Services organization.

Service Resource Calculator
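The spreadsheet itself isn’t reproduced here, but the core calculation it performs can be sketched as follows. This is a minimal illustration assuming the decompose-services-into-assets approach described above; the service names, per-unit asset figures, and demand scenarios are all invented for illustration.

```python
# Minimal sketch of a service resource calculator: decompose each IT
# service into the assets it consumes per unit of demand, then roll
# demand scenarios up into total capacity requirements.
# All service names and figures below are illustrative.

SERVICES = {
    # service: assets consumed per unit of demand
    "site_provisioning": {"admin_hours": 0.5, "servers": 0.01},
    "user_support":      {"admin_hours": 0.25},
}

def capacity_needed(demand_scenario):
    """Aggregate asset requirements across services for one demand scenario."""
    totals = {}
    for service, units in demand_scenario.items():
        for asset, per_unit in SERVICES[service].items():
            totals[asset] = totals.get(asset, 0.0) + per_unit * units
    return totals

# Three planning scenarios, as in Royal Dutch Shell-style scenario planning
scenarios = {
    "low":  {"site_provisioning": 100, "user_support": 1000},
    "base": {"site_provisioning": 250, "user_support": 2500},
    "high": {"site_provisioning": 500, "user_support": 6000},
}
for name, demand in scenarios.items():
    print(name, capacity_needed(demand))
```

Running all three scenarios gives the range of capacity needs, which can then feed an IT planning model.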
Structure in Threes –Modern IT Portfolio Management: Research

Spent today on a variety of tasks, home and career.  Finished pressure-washing the back porch, put the final coat on the souvenir cup rack I built on Sunday, and started brainstorming how I could apply Candlestick Charting to managing Business and IT Portfolios.  I reason that if you can follow trends in the market, a similar set of dynamics may be at work in the business and IT ecosystems.  So far the portfolio research I’ve been doing relating IT functions and their components to business capabilities is proving out.  Last week I went through my project archive to review the properties and attributes at play in the various reengineering projects I’ve done.  I’ve noticed a few patterns surfacing, and now I’ll be working on analyzing these for correlation, then cause and effect.  The results should enhance the Modern IT Portfolio Management methodology I’m writing about.

Tomorrow I’ll go back to reviewing the book outline and detailing it to the next level.  I’ve been away from the project long enough to have fresh eyes looking at it.

Information Management Methodology: Part 2 – Are you sure Kelly Johnson started this way?

Last week I started documenting the “what” with an eye towards the “how” I’ve used during past engagements.   This post will not be the “how” but will list some of the methods I’ve assembled through the years that will eventually become the “how”.

In the previous post I mentioned listing the forms and other artifacts in an enterprise, or quanta as I call them.  I’ve used an abstract term, quanta, for these collections of information because their physical or electronic equivalents bring other biases and aberrations.  I’m only interested in the pure semantics and relationships of this information.  However, to get to these I first have to collect the artifacts.

The technique I’ve used is much the same as the one created by Taylor a century ago: observation.  My twist on observation is that I typically submerge myself in the processes that operate on this information.   When I designed systems for the management of aerospace tubing and wire harnesses, I followed the process. However, I followed it from the inside.  What I mean by that is I became the tube.   Not in the physical sense: I put a Post-it note on my forehead and walked the entire process, from when the engineer was first assigned the task to when the tube rolled off the assembly line and was installed in the aircraft.

Mind you, I got a lot of laughs from all those concerned.  Some of the management I dealt with were sure I was two steps away from a rubber room.  However, by walking the process I got more direct information and an appreciation of the context of how the information was created and handled.  This is something that is missing when one only examines a data dictionary or other sterile reports that contain information about information.  True, I will eventually remove some of the attributes as I create my quanta, but that information will be saved and associated with the quanta later.

When I had completed my little trip around the corporation gathering this information, I charted it out: all the flows to and from, activity steps, state changes, and name changes.   Once I had a comprehensive diagram, I invited representatives from all the groups associated with the information I had just collected to review it.  At that time whiteboards were not as ubiquitous as they are now.  Instead I had access to an HP plotter, so I created a forty-five-foot plot of the information’s journey in the corporation from birth to death.

The interesting thing about the meeting was, first, that people got, maybe for the first time ever, a comprehensive view of the entire journey; second, discussions emerged asking why certain information was needed or why it wasn’t available.  In a short three hours (the meeting was scheduled for only one, but people did not want to leave) I had a marked-up diagram that not only had more detail (business rules, etc.) about the information, but on the fly the group had determined how to reduce the process steps, cycle time, and non-value-add information.

This short project synopsis illustrates the first objective: get intimate with the information.  Analysts that handle the information at a distance miss details and insights.  This was illustrated by the fact that all of the reviewers came away with a greater appreciation of what it took to design and manufacture a tube.  Prior to that it was just data passing around; after this point it represented a product in some stage of creation.  It was the interface between people and the task they performed.

Rediscovering Information Management Methodology

About a century ago (no, I’m not that old) companies were using advanced information management tools called paper, file folders, and filing cabinets.   Much of the daily course of business was concerned with moving physical parts and products.  The movement and storage of information was also a physical effort; the paper and files routed throughout the organization contained the information needed to monitor and control it.

Then a gentleman named Frederick Taylor came on the scene.  People either praise him or curse him now, as he introduced the practice of scientific management, which was instrumental in launching the management consulting profession.  Later Marvin Bower of McKinsey fame improved upon the practice.

Mr. Taylor’s approach seems like a simple idea today, but it radically changed the landscape of industrial work, and today it is influencing information work (aka the Information Worker).  The approach was to find the most efficient set of steps to accomplish a task, document it, and train others to follow it, thereby optimizing the time-to-output ratio.

Today we’re poised to accomplish similar automation for workers using information.  The problem that arises is that those in I.T. are ill prepared to accomplish this task.  Sure, developers and I.T. staff can lay down code to create a sequence of steps a computer will repeat at lightning speed till the CPU fries itself in some distant time.   The problem comes into play when you realize most programmers know software, software tools, and hardware, but don’t know your business.

This is where business architects and analysts come in.  Analysts spend time learning how businesses conduct work and reduce this down to a set of descriptions in a standard language that programmers can further translate into computer-ese.   Architects identify patterns and replicate them to accomplish similar work across the corporation.  Thus they are, for lack of a better term, super-analysts.

This entire preamble is to reintroduce a simple concept.  When management consultants were called in years ago, one of the techniques they used was to toss out every form in the place.  Next they would ask staff to create forms only when they needed them, containing only the specific information needed to accomplish the immediate job.  This had the ability to streamline processes.  Over the past few decades, variations of this approach have hit the BPR and IT domains with mantras like “simplify and automate.”  The question becomes: at what level of simplification or generalization should one stop?

Tonight I’m working on developing two SharePoint implementation architectures which will become these companies’ operational infrastructures. [Yes, I’m crazy enough to do two projects simultaneously.]  Fortunately both are new, small firms, so much of the politics and complexity have not developed yet.

So how to start?  My typical approach has been to identify the quanta of information that are produced and used throughout the organization, then document how these are created, distributed, used, and disposed of.  Now my data-oriented friends will cheer at that.  However, my process-oriented friends will point out that I’m documenting processes also.  And if the rest of my friends across the Zachman Information Architecture spectrum are reading: yes, I go through all the columns, with special emphasis on why.

Since I’m translating this to a SharePoint implementation, I’ll define these quanta as “Content Types” and define the attributes that are required for each. [My Information Architecture colleague Carol Corneby will be happy.]   This is where business architecture and SharePoint design overlap and transition.   SharePoint developers will now start to understand what must be built.  Concepts such as information flows and stores will be converted to libraries (which are nothing but viewports limiting enterprise data to the subset a business person using the system needs) and workflows that are subsets of the business process or the value chain an enterprise manages to operate the business.
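The quanta-to-content-type translation can be sketched in code. This is a minimal illustration, not SharePoint’s actual object model; the `ContentType` and `Library` classes and the Invoice example are invented to show the idea of quanta carrying required attributes and libraries acting as viewports.

```python
# Minimal sketch: information quanta modeled as SharePoint-style content
# types with required attributes, and libraries as viewports over them.
# Class names and the Invoice example are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class ContentType:
    """One quantum of information, with the attributes it must carry."""
    name: str
    required_attributes: list = field(default_factory=list)
    parent: str = "Document"  # content types inherit, as in SharePoint

@dataclass
class Library:
    """A viewport limiting enterprise data to the subset a user needs."""
    name: str
    content_types: list = field(default_factory=list)

invoice = ContentType("Invoice", ["InvoiceNumber", "Vendor", "Amount", "DueDate"])
ap_library = Library("AccountsPayable", [invoice])
print(ap_library)
```

A real implementation would define these through SharePoint itself; the point here is that the business-architecture artifacts map onto structures a developer can build.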

The methods for identifying and translating these abstractions, and the conversion process itself, are what I hope to eventually explicate this year.  I’ve been doing this so long, as have peers like Ruven Gotz, that we just do it without a second thought.  As we continue our SharePoint Salons, our objective is to accomplish this and disseminate the information to others.

Enterprise Taxonomy: Wicked Problems

One of the interesting things about taxonomy work is that it exposes the value systems and mental models of the participants.  Take for instance a simple organizational design and development project.  Various terms are thrown around in a casual manner and have different meanings depending upon your background.  The term Business Function has been used to mean the activity or action taken within a company (this could be a synonym for process) or the association of people with a common purpose and management, commonly called groups, divisions, or departments (e.g., the HR Department).  While the term or label used often implies a result that both definitions produce, when working to define a taxonomy this ambiguity, and the automatic context shifting people do unconsciously, result in serious nightmares for information systems.


Above is a simple data model of an enterprise I’m working on today.  The goal is to define a model of how an organization functions.  There are many models out there: Value Chain (Porter), Value Net (Slywotzky), Division (Sloan), and Generic Enterprise (McKinsey), to name a few.  These models all create taxonomies in which to classify and place instances of a working company’s structure into a framework for thinking.  However, the ontology beneath these taxonomies is often hidden.  Thus without the context of the authors, the terms’ definitions diffuse into ambiguity, eventually becoming overloaded and creating chaos.  Did you mean the activity or the group of people when I say function?  Information systems are not smart enough yet to ask programmers or end users what you mean by that term.

 While this sounds like a fairly esoteric and academic issue, I assure you it’s not.  The project is real.  The various companies and stakeholders are trying to decide how to restructure to gain competitive advantage, or at the least gain efficiencies to capitalize on new opportunities on the horizon.   Without a common framework to organize around, companies become misaligned and operate very poorly.  In the worst-case scenarios, departments work at cross purposes or even counter to each other.  In the 80s I watched two divisions literally undo each other’s work RIGHT IN FRONT OF THE CUSTOMER.  It was painful to watch as a professional.   When I debriefed both organization heads, both were positive they were following the company’s direction.

 The next step in this activity would be to associate processes, capabilities, and other resources to these functions.  However, the context switch between activity-based models and management-structure (org chart) models results in many misalignments and wicked problems.  If a Business Function is a people-association structure, at which decomposition layer into smaller units do you assign roles, which are the linkage between key processes, the value stream, and the competencies needed to perform these activities within the “function”?   Add to this mixture that multiple functions and multiple competencies all participate in multiple value streams and processes, and it quickly becomes a web of complexity.
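How quickly the web grows can be shown with even a toy model. The functions and value streams below are invented for illustration; the point is only that many-to-many participation links multiply fast.

```python
# Minimal sketch: many-to-many participation of functions in value
# streams. The entities and links below are invented for illustration.
links = {
    # (function, value_stream) pairs
    ("Sales", "Order to Cash"), ("Finance", "Order to Cash"),
    ("Sales", "Lead to Opportunity"), ("Marketing", "Lead to Opportunity"),
    ("Finance", "Procure to Pay"), ("Operations", "Procure to Pay"),
}
functions = {f for f, _ in links}
streams = {s for _, s in links}
# Just 4 functions and 3 streams already yield 6 participation links,
# before roles, competencies, or processes are layered in.
print(len(functions), len(streams), len(links))
```

Each added dimension (roles, competencies, processes) multiplies the associations, which is why an explicit ontology matters before loading the model into a tool.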

 Again, while it sounds fairly esoteric, I had to be brought in to straighten out this issue when one corporation decided to implement another’s process.  They wanted to keep how the company was structured, but they also wanted to use the process.  It took several months to rationalize the differences in terms and frameworks for the corporation to successfully deploy the process.   The alternative was to turn the corporation inside out to match the model the processes and software had.  The choice to rationalize rather than change organizational structures was one of finding the least objectionable alternative.

All of this needed to be resolved before we could place the taxonomy into SharePoint’s Term Store for publication to various farms.