Working Insights: Process Analysis presentation creation and the movies

Finished my presentation deck for Wednesday’s Business Architect call and sent it out to my manager, a colleague, and other friends for comments.  The topic is business process analysis methods.  It’s a short 20-minute presentation on the benefits of analyzing processes and some methods of value for doing so.  While developing it I rediscovered what I already knew: it’s difficult to compress two hours of material into less than 20 minutes.  Makes me truly appreciate movie editors and directors…it takes a lot of skill to cut things down to the bare minimum and no more.

Which has me reconsidering my thoughts around the BPR/M compliments I get in the form: “I’ve never seen anyone but you be able to diagram a process on the fly while we’re discussing it” or “How can you trim these processes down so well that there are minimum steps and each one contributes real value?”  Often I just smile and say thank you, but I really don’t think much of it; it’s just what I do.  But I guess it’s because I’ve spent years honing my craft, like editors and movie directors learning how to cut away what’s not necessary to accomplish the job; in one case telling a story, and in mine making a highly efficient and effective process for an enterprise.  I’m sure by now my colleague Sarah is smiling, nodding her head, and saying of course.

BPR/M: Tools and Techniques

I hate to start out discussing tools regarding BPR/M right off the bat.  However, it looks like I’ll need to find some of my old BPR/M applications for this latest project.  While the client’s repository can scale to a large size, the configuration management and analysis tools leave a lot to be desired.  There’s something to be said for building your own tools for the job you’re doing.  The MS Access Process DB application I’d built years ago and continually modify looks like it will have yet another year or two of life.  The last modification was to add ITIL packaging, enabling me to box up processes for my FEDE client’s Shared Services organization.

This latest engagement looks to be a reduced set of the same issues, without the built-in organizational resistance.  With this engagement the internal clients are already looking for a service.  What’s missing is a solid methodology to move from an IT Product Delivery to a Service Delivery paradigm.  I’ll box up all the pieces into a Service Level Package (SLP) this week to demonstrate the concept.  This may help the IT organization’s clients visualize what is to be delivered and how the pieces fit together to accomplish the results they would like.  I’ll see this afternoon whether I get buy-in from the project team.  Either way I expect to build the SLP, to make it easier to create a comprehensive solution.
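To make the SLP idea concrete, here is a minimal sketch of how such a package might be represented as a structured record. All the names and targets below are hypothetical illustrations, not the actual contents of the client’s package:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceLevelPackage:
    """A bundle of processes, roles, and service targets offered as one unit."""
    name: str
    processes: list = field(default_factory=list)        # constituent processes
    roles: list = field(default_factory=list)            # staffing roles involved
    service_targets: dict = field(default_factory=dict)  # e.g. {"turnaround": "..."}

# Hypothetical example package
slp = ServiceLevelPackage(
    name="Site Provisioning",
    processes=["Request Intake", "Site Creation", "Handover"],
    roles=["Service Manager", "SharePoint Admin"],
    service_targets={"turnaround": "2 business days"},
)
print(slp.name, len(slp.processes))
```

Boxing the pieces into one record like this is the point of the exercise: the client sees the processes, the people, and the promised service levels as a single deliverable rather than loose parts.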

Structure in Threes: Process Value

About two decades ago I was fortunate enough to collaborate with several brilliant people at IBM working in manufacturing research.  One of them, Dr. Arno Schmachpfeffer, had coauthored a paper in the IBM Journal of Research called Integrated Manufacturing Modeling System.  One of the key aspects of the paper was a taxonomy of activities in a process.  I was struck by the simplicity of the taxonomy and its ability to catalog any process activity into one of four categories: Rest, Move, Make, and Verify.  After working with this taxonomy for a while, using it to catalog activities I had previously created in IDEF0 for various BPR engagements, I came up with several simple insights:

  1. Most business ventures derive their value through the execution of one of these activities. For example, a product development firm creates most of its value through Make activities, a consulting firm typically through Verify activities, and an airline through Move activities.
  2. Extending that insight further, one can determine the efficiency of a process by inventorying, classifying, and analyzing the ratios of the activities in the processes these firms use to create their value.  Comparing the ratio of the firm’s primary value-creating activity to the quantity of other activities gives the BPR equivalent of an Asset Efficiency Ratio in finance.
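The ratio in the second insight is simple to compute once activities are classified. Here is a small sketch; the activity names and classifications are invented for illustration:

```python
from collections import Counter

# Classify each process activity into the four-category taxonomy:
# Rest, Move, Make, Verify.  Activity names here are hypothetical.
activities = [
    ("Receive order",        "Rest"),
    ("Route to fabrication", "Move"),
    ("Machine part",         "Make"),
    ("Inspect part",         "Verify"),
    ("Stage for shipping",   "Rest"),
    ("Ship to customer",     "Move"),
]

def value_ratio(activities, primary):
    """Ratio of the primary value-creating activity type to all other activities."""
    counts = Counter(category for _, category in activities)
    other = sum(counts.values()) - counts[primary]
    return counts[primary] / other if other else float("inf")

# For a product-development firm, whose value comes from Make:
print(value_ratio(activities, "Make"))  # 1 Make vs. 5 others -> 0.2
```

A low ratio like this would flag the process as spending most of its activity on something other than its primary value creation, the rough analog of a poor asset efficiency ratio.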

Throughout the years –as a personal research project– I have been inventorying, cataloging, and analyzing the processes I have been reengineering.  These past few months, as I started to look into valuation of services and processes, the question has come up often: how does one create a valuation for a process?  Initially I was looking for a hard formula based upon standard accounting practices.  However, after considerable application of concepts such as Activity Based Costing (ABC), I came to the conclusion that the formula may be standardized but the actual values of the parameters would change.  That is, one could use the ratio of primary value-add activities to non-value-add activities to determine the allocation of value applied to each process.

While this is a simplistic approach, it enables the Process Analyst and the Portfolio Manager to work together to determine the value of services through a hierarchy without having to get too detailed in data collection.  The next aspect of using this relative allocation approach is to add adjustments for non-value-add activities that are required or mandated (e.g., safety and regulatory compliance).  However, a case can be made for calling such activities value-add, as they enable a firm to fulfill its mission and requirements.  Compliance and safety activities are thus feature requirements of a product or service; without meeting them, the product or service does not perform as required.
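The relative allocation idea can be sketched in a few lines. All the process names, activity counts, and the service value below are hypothetical; the point is only the mechanics of allocating by value-add ratio:

```python
# Allocate a service's value across its processes in proportion to each
# process's share of value-add activities.  All figures are hypothetical.
processes = {
    "Order Entry": {"value_add": 3, "non_value_add": 2},
    "Fulfillment": {"value_add": 6, "non_value_add": 1},
    "Billing":     {"value_add": 1, "non_value_add": 3},
}

service_value = 100_000  # total value attributed to the service

# Each process's value-add ratio, then its normalized share of the total
ratios = {name: p["value_add"] / (p["value_add"] + p["non_value_add"])
          for name, p in processes.items()}
total = sum(ratios.values())
allocation = {name: service_value * r / total for name, r in ratios.items()}

for name, value in allocation.items():
    print(f"{name}: {value:,.0f}")
```

Mandated compliance or safety activities could then be handled either by an adjustment factor or, per the argument above, by reclassifying them as value-add before computing the ratios.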

This month’s agenda is to merge my activity ratio spreadsheet with the value portion of the IT Portfolio Management spreadsheet.

Structure in Threes: One Enterprise’s Capability is another’s Function

Spent a portion of yesterday going through my project archive (two four-drawer lateral file cabinets).  Mostly to clear out duplicated or outdated materials, but some of it was information mining for the book.  Typically on engagements I spend a fair amount of time understanding clients’ mental models, language, and corporate culture, with the objective of operating and communicating in the organizational structure with as little disruption as possible.  This assists my clients: I go to the effort of translating concepts into their language so they don’t have to spend the cycles, which allows them to focus on exploiting the solutions and recommendations I propose and to get value from them faster.  Which gets to the point of this post.

The mysterious language of strategy-speak.  Pick up any book on strategy from the past decade or two and you’ll see a discussion on the importance of Corporate Capabilities.  The problem, though, is that the definitions are rather fuzzy.  Is CRM a function, a capability, both, a software product, or now a service?  Clearly if we are discussing the “ability” of an organization to manage customer relationships effectively, we’re talking capability.  If we discuss the organizational unit tasked with performing tasks associated with managing customer relationships, it would be labeled a business function; which becomes even more confusing when we add the software products meant to enable performance of those tasks, which could be either a product or a service.  And you thought your teenager’s language was confusing.  Those that enter the discussion without context need to spend extra effort to decipher it, if they can, from the dialog.

Why bring up this semantic problem?  It becomes a critical matter when designing an optimum organization.  I stress designing the “optimum” design; like product engineering, there are many possible configurations.  However, there are far fewer configurations that fit the specific enterprise’s working environment, corporate culture, and resources.  These are the high-level design input parameters which affect the design decisions.  Unfortunately, there hasn’t been a metrics-driven approach toward organizational design –something I’m in the process of remedying now through this book– that would include usage of concepts such as Taguchi’s Design of Experiments for design optimization.  However, to use such an approach one needs to standardize on these terms, and on the attributes and metrics associated with them.  Which again brings me back to my archive and the broad spectrum of models and definitions I’m wading through to create a consistent ontology for Enterprise Design.

I had hoped to avoid the Yet Another Framework (YAF) trap, but I’m still seeing wide variance between associations’ and societies’ languages, as broad as what I experienced working on the ISO 10303 (STEP) standard.  My goal is not to create still one more framework, but rather the transformations similar to those John Zachman mentions as key when he discusses his framework.  These would not be between cells but between other frameworks’ constructs.  The point behind such activity is to make the specific representation system any enterprise uses transparent to the design effort.

 

These materials, though, will likely end up in the appendix or the book’s supporting website, as this will likely be a long-term effort and would delay development of the core design methodology.

Structure in Threes – Resources

This morning I got an email from a colleague asking about capacity planning, which is not exactly the Candlestick Charting applied R&D I had scheduled for today, but he has an active engagement I’m helping him on while I have cycles to spare during my search for a new role.  It seems the more things change, the more they stay the same.  Maybe this is why I decided to write the book.  I keep getting asked the same questions about “how to” –not in the academic sense– design an IT organization, a marketing function, a consulting group, etc.  Despite all the literature out there, there appears to be no practical hands-on material in this arena, or if there is, it’s disguised as something else.

His question was how to calculate resources for an IT organization.  Several years ago I built a calculator that did part of this on a smaller scale: a limited set of services.  I had wanted to scale the methodology up; part of my reason for rejoining Microsoft.  The approach is actually very simple.  Most of the methodology’s high-level design is in my SharePoint Saturday presentations on Slideshare.  The basic approach is a derivative of ITIL/ITSM concepts using the modeling techniques I’ve used for the past 25+ years of engagements reengineering business processes and functions when I get called in to fix failed BPR projects –a lot more often than these large corporations would like to admit.  Usually the failures can be traced back to other issues –which is a post for another time.  However, occasionally the technical design of the function, process, or group has been just a fantasy of moving boxes on the org chart, or a swag as to the resources needed.  After it becomes apparent the restructuring hasn’t worked as planned, maybe on the 2nd or 3rd try, management reaches out to find someone like me who has successfully done this multiple times over the decades and doesn’t plan on making the engagement his or her retirement home.  The difference between a corporate raider (Chainsaw Al) and myself is that I’m looking at how to reuse and remission resources effectively rather than cut staff to the bare minimum.  I see reduced resource requirements as new capacity to apply to initiatives that were passed over during the previous budget cycle for lack of capacity.  Enough management philosophy.

On the technical design approach, I start out with value streams and business processes, which are supported by IT Services.  I use a loose definition of IT Services as a mixture of people, processes, and technology; a little bit more than a subscription to a computer transaction capability.  From this root I decompose the activities and transactions down to the IT assets (people, processes, and technology) needed to support the activities.  With a little modeling, predictive analytics, and simulation, a fairly robust set of scenarios [scenarios as in Scenario Planning at Royal Dutch Shell] can be created to define the range of capacity needs.  These can then be translated into a capacity model for IT planning.  I’ve presented these concepts before to people who somehow think this is all academic theory. 🙂  However, had they spent time researching they would have found this has been done for years.  During my days at IBM I created a lot of capacity planning models for customers to forecast mainframe utilization and growth.  The basic logic works and continues to do so.  I’m not sure why the consulting field does not pick up on this; perhaps because it is more quantitative and the field likes to give more qualitative answers.  Below is a portion of a capacity planning spreadsheet I created several years ago for a SharePoint Shared Services organization.
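The decomposition from services down to assets and scenarios can be sketched in code. The service names, effort figures, volumes, and hours-per-FTE constant below are all hypothetical assumptions, not values from the actual spreadsheet:

```python
# Roll activity volumes up from services to the roles that support them,
# under low/expected/high demand scenarios.  All figures are hypothetical.

# Hours of effort each role spends per transaction of each service
effort_per_txn = {
    "Site Provisioning": {"Admin": 1.5, "Support": 0.5},
    "Content Migration": {"Admin": 4.0, "Support": 2.0},
}

# Forecast monthly transaction volumes per scenario
scenarios = {
    "low":      {"Site Provisioning": 20, "Content Migration": 2},
    "expected": {"Site Provisioning": 40, "Content Migration": 5},
    "high":     {"Site Provisioning": 60, "Content Migration": 10},
}

HOURS_PER_FTE = 140  # assumed productive hours per person per month

def fte_needs(volumes):
    """Translate transaction volumes into full-time-equivalent staff per role."""
    hours = {}
    for service, txns in volumes.items():
        for role, h in effort_per_txn[service].items():
            hours[role] = hours.get(role, 0) + h * txns
    return {role: total / HOURS_PER_FTE for role, total in hours.items()}

for name, volumes in scenarios.items():
    print(name, {role: round(f, 2) for role, f in fte_needs(volumes).items()})
```

Running the same roll-up across all three scenarios gives a range of staffing needs rather than a single point estimate, which is the Scenario Planning point: you plan for the band, not the guess.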

Service Resource Calculator


Structure in Threes –Modern IT Portfolio Management: Research

Spent today on a variety of tasks, home and career.  Finished pressure washing the back porch, put a final coat on the souvenir cup rack I built on Sunday, and started brainstorming how I could apply Candlestick Charting to managing business and IT portfolios.  I reason that if you can follow trends in the market, a similar set of dynamics may be at work in business and IT ecosystems.  So far the portfolio research I’ve been doing, relating IT functions and their components to business capabilities, is proving out.  Last week I went through my project archive to review the properties and attributes at play in the various reengineering projects I’ve done.  I’ve noticed a few patterns surfacing, and now I’ll be working on analyzing these for correlation, then cause and effect.  The results should enhance the Modern IT Portfolio Management methodology I’m writing about.

Tomorrow I’ll go back to reviewing the book outline and detailing it to the next level.  I’ve been away from the project long enough to have fresh eyes looking at it.

Information Management Methodology: Part 2 – Are you sure Kelly Johnson started this way?

Last week I started documenting the “what” with an eye towards the “how” I’ve used during past engagements.   This post will not be the “how” but will list some of the methods I’ve assembled through the years that will eventually become the “how”.

In the previous post I mentioned listing the forms and other artifacts in an enterprise, or quanta as I call them.  I’ve used an abstract term, quanta, for these collections of information because the physical or electronic equivalents bring other biases and aberrations.  I’m only interested in the pure semantics and relationships of this information.  However, to get to these I first have to collect the artifacts.

The technique I’ve used is much the same as the one created by Taylor a century ago: observation.  My twist on observation is that I typically submerge myself into the processes that operate on this information.  When I designed systems for the management of aerospace tubing and wire harnesses, I followed the process.  However, I followed it from the inside.  What I mean by that is I became the tube.  Not in the physical sense.  I put a Post-it note on my forehead and walked the entire process, from when the engineer was first assigned the task to when the tube rolled off the assembly line and was installed in the aircraft.

Mind you, I got a lot of laughs from all those concerned.  Some of the management I dealt with were sure I was two steps away from a rubber room.  However, by walking the process I got more direct information and an appreciation of the context in which the information was created and handled.  This is something that is missing from only examining a data dictionary or other sterile reports that contain information about information.  True, I will eventually remove some of the attributes as I create my quanta, but that information will be saved and associated with the quanta later.

When I had completed my little trip around the corporation gathering this information, I charted it out: all the flows to and from, activity steps, state changes, and name changes.  Once I had a comprehensive diagram I invited representatives from all the groups associated with the information I had just collected to review it.  At that time whiteboards were not as ubiquitous as they are now.  Instead I had access to an HP plotter, so I created a forty-five-foot plot of the information’s journey through the corporation from birth to death.

The interesting thing about the meeting was, first, that people for the first time –maybe ever– got a comprehensive view of the entire journey; second, discussions emerged asking why certain information was needed or why it wasn’t available.  In a short three hours –the meeting was scheduled for only one, but people did not want to leave– I had a marked-up diagram that not only had more detail (business rules, etc.) about the information, but on the fly the group had determined how to reduce process steps, cycle time, and non-value-add information.

This short project synopsis illustrates the first objective: get intimate with the information.  Analysts that handle information at a distance miss details and insights.  This was illustrated by the fact that all of the reviewers came away with a greater appreciation of what it took to design and manufacture a tube.  Prior to that it was just data passing around; after this point it represented a product in some stage of creation.  It was the interface between the people and the tasks they performed.