Make It So: The Dream of Cloud Computing, Part 1: Cost and Service Models

Lately I’ve been embroiled in a series of debates with colleagues on the topic of cloud computing.  These “active” discussions revolve around a few themes: is cloud new or just timeshare rebranded, is there a market for I.T. professionals after cloud, the security red herring, the claim that everyone is going to the cloud, and finally the implications for businesses.

First off, cloud has been partitioned into multiple capability models.  Recently a Microsoft cloud evangelist provided a Microsoft taxonomy of the market that included Hosting, IaaS, PaaS, and SaaS.  This continuum is based upon the balance of maintenance and operational responsibility the customer must take on.

  • At the hosting level, a customer offloads the ownership and maintenance of the physical hardware.  The vendor provides assurances that the hardware will be available for use; customers remain responsible for the operating systems, servers, applications, etc.  In essence, hardware costs shift from capital assets (CAPEX) to operational expenses (OPEX).
  • IaaS takes responsibility to the next level, with vendors providing the infrastructure for the customer.  This removes the next layer of expenses from the capital side of the customer’s balance sheet and converts maintenance costs to operational expense (OPEX again).
  • PaaS raises the bar of “capabilities” a vendor provides, offering an environment in which to develop and support the application systems that customers use to perform line-of-business activities.  Again, capital investments and maintenance costs become operational expense.  This does come with a trade-off; Azure, as an example, imposes several limitations on what an organization can build and on the development approaches it can use.  If the systems to be developed or migrated fit within this profile, a PaaS model could prove beneficial to a corporation, as these expenses can expand and contract in alignment with business operations.
  • SaaS raises the bar still one more time by providing customers with business systems that support line-of-business functions as a service.  Examples have been around for a while, such as payroll and, more recently, Customer Relationship Management (CRM).  This model removes application maintenance responsibilities from the customer at the expense of customization and flexibility.  The customer conforms to the application, as opposed to the application conforming to the customer’s way of doing business.  The iceberg underneath this model is that the customer is standardizing functions within its value chain.  This could remove a competitive advantage by substituting a commoditized capability.  If a customer doesn’t have a competitive advantage in those functions, or does not see them as providing one, this model may be of value.
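The CAPEX-to-OPEX shift that runs through all four models can be sketched with a toy cost comparison.  All figures below (purchase price, maintenance, monthly fee) are invented placeholders, not vendor pricing; the point is only that one path front-loads cost as an asset while the other spreads it as an operating expense that can expand and contract.

```python
def capex_cumulative(purchase, annual_maintenance, years):
    """Up-front hardware purchase (a capital asset) plus yearly upkeep."""
    return purchase + annual_maintenance * years

def opex_cumulative(monthly_fee, years):
    """Pay-as-you-go service fee; no up-front asset on the balance sheet."""
    return monthly_fee * 12 * years

# Hypothetical figures: $50k of servers with $5k/yr upkeep,
# vs. a $1,500/month hosted equivalent.
for years in (1, 3, 5):
    print(years,
          capex_cumulative(50_000, 5_000, years),
          opex_cumulative(1_500, years))
```

Under these invented numbers, hosting is cheaper in year one but the curves cross as the subscription accumulates, which is exactly the trade-off a CIO has to evaluate against expected usage.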

These service models promise benefits, and the savvy CIO and enterprise architects should be looking at them as alternatives to on-premises solutions.  However, they need to be evaluated in balance with other enterprise concerns: security, flexibility, and risk.  While I would profess that the cloud carries no more significant access risk than today’s on-premises solutions, you are placing YOUR data and information in the hands of another organization.

There are stories and reports of concern about Google reading everyone’s email for marketing purposes, as well as other providers’ usage of your intellectual assets.  This topic should not be addressed lightly, given that there is no fiduciary-like responsibility or standard these vendors are contractually or legally bound to.  These service agreements are not much more than extensions to software EULAs.  We all know how well those work for customers, and it is the wise customer that investigates the purchase before the signature is made.

This suggests that customers should take an active part in owning their information as an asset, which in turn implies that content strategy and information management competency will be core capabilities all organizations need to develop and explicitly act upon at the executive level.

The Big Elephant in the SharePoint Room

Now that all the “SharePoint Saturday The Conference” excitement has died down, I looked at both the agenda and attendance and came to the obvious conclusion: we’re still more enthralled with technology than content.  I won’t say that is necessarily a bad thing; however, for a technology that professes to help manage content, as a community we’re still more focused on widgets, parts, and branding than on actually helping the end user manage content.

I hear lots of pushback on this topic, both at the conference and at work: we’re here to provide a tool; it’s the end user’s responsibility to use it correctly.  The only problem with this answer is that if you don’t help the end-user community see how the technology enables them to accomplish a task better and in a new way, you’ll end up with the same problems wearing a new face.

How many SharePoint implementations are just web front ends to G-drives?  I’ve done several snap surveys and audits and see this trend continuing.  In the effort to deploy the latest and greatest technology, we’ve lost the rationale for its usage in the first place.  In order to be successful in delivery, we’ve sandbagged its capabilities.

Helping end users understand their role in information management is not an insurmountable task.  It does, however, suggest that SharePoint community members need to move out of their comfort zone of lists, sites, and web parts and into the more abstract areas of content strategy, information architecture beyond UI design, and how content’s lifecycle needs to be managed, beyond slamming a quick-and-dirty article into a content management system (CMS) or an I.T. launch process.

Simple Records Management Using Out-of-the-Box SharePoint 2010

My team at Satory developed a simple, fast, out-of-the-box records management solution using SharePoint 2010.  Here’s the video:

http://www.youtube.com/watch?v=I3KBBETo6Gs

A complete Value Case for projects

One of the hardest tasks for technical (I.T.) staff seems to be developing a business case for a project, or documenting the actual benefits that occur after deployment.  Investment Strategies for Information Systems (ISIS) from IBM and Rapid Economic Justification (REJ) from Microsoft are methodologies that guide the creation of a business case.

Neither of these, however, extends past justifying the project initially.  Thus, once the project is given the green light, all measurements and promises of benefits typically fade from memory.  A project is then judged successful or not on delivery time and budget.  A poor trade, in my opinion.  Measuring delivery rather than capturing the actual results misses the very point business executives want to know.  This in turn leads senior executives to question the value of I.T. and to continue asking for reductions, as they view I.T. as a cost center.

I’ll be covering some of the concepts and approaches I helped develop for each of these methodologies at SharePoint Saturday The Conference DC: http://www.spstc.org

 

SharePoint Saturday The Conference D.C. Presentation Marathon

I’m putting the final touches on presentations for SharePoint Saturday The Conference D.C.  I’m doing a mini presentation marathon for the SPS Community [http://www.spstc.org] and Satory Global [www.satory.com].

After the conference, the presentations will be posted on the SharePoint Saturday site, SlideShare, and this blog.  I’ll add links here for those interested.

For a sneak peek at the topics presented, check out this blog.

A huge collection of authors and speakers will be presenting at the conference.  It’s so big it’s not practical to list on the blog; here’s the link: http://www.spstc.org/SitePages/Speakers.aspx

Using SharePoint to enable ITIL: Consequences of unmanaged services

Since technology appears simple to operate, the assumption is that it’s simple to build and manage.  What end users don’t see today is all the complexity behind a product or service.  The consequence of operationally mature but technically naïve stakeholders and end users is having to justify investments in information technology.  While it’s not unreasonable to ask for such justification, it’s difficult due to the complexities and variables of how the services are created and used.  Adding to this complexity, the financial measurement and analysis of services is still an immature field.  Only within the past few decades have accounting systems started addressing services and other abstract assets.  However, how to measure benefits still raises strong disagreements between CIOs and CFOs.

In addition, the expectation of I.T. services is that they are as stable and reliable as other mature services: telephone, water, and electricity.  As a result, organizations have moved past I.T. as a “tool” adjunct to how work is accomplished; they have woven, or are weaving, information technology into the very fabric of the business, and in some cases are integrating I.T. into the products and services they deliver.  Two examples: First, look under the hood of any automobile lately and you’re faced with a mass of connections; even the mechanic that tunes up your car now plugs it into a diagnostic computer.  Second, want to deal with the government or a financial institution?  You’re likely to either go online for forms or visit an ATM.  When was the last time you actually saw and spoke to a teller?

The level of integration of computer technology has skyrocketed, and with it the dependency upon services hosted on that technology via various network connections and backend infrastructure, including the people that support all the onboarding, operations, and creation of those services.

This dependency has created increased risk.  United Airlines’ recent reservation system outage cost them dearly, not only in operations costs but in goodwill, penalties, and legal issues from passengers denied access to their flights.  An estimate of the damage has not been publicly announced, but you can be sure it was significant.

Without a good handle on managing services, I can foresee a day when a business fails due to service interruption or I.T. failure.  Imagine how long it would take Amazon to crash if its infrastructure failed.  For Amazon, I.T. failure cannot be an option; they don’t have a physical storefront, and they are totally reliant on technology to point them to the closest inventory to fulfill an order.  I would expect a 12-hour failure like United’s would be devastating to the bottom line.

I.T. product and service failures could in the future lose the shield of disclaimers and fitness-for-use loopholes.  The end-user licenses that have been used to limit liability may eventually be used as evidence that the product is unsafe or unfit to be sold, or that the company has engaged in false advertising.  Thus I.T. management may face legal exposures similar to those of senior corporate executives under SOX and GAAP regulations.

Some of these service issues have gotten management mindshare and attention; witness the strong efforts around I.T. governance and service delivery models.  Still, other efforts only cobble together some form of service agreements and measurements.

Meanwhile, customers (internal and external) are judging I.T. on their own performance criteria, which may or may not match the Service Level Agreements (SLAs) these organizations have stitched together, often around criteria that customers do not see and therefore do not measure against (e.g., network or server availability vs. service availability).  What good is either measure if the two are not in sync enough to let me use the service?  Say the server was up 20 hours a day and the network was up 21 hours a day, but only 1 hour of the network outage overlapped the server outage; the complete service, which required both, was up only 18 hours.  Server and network availability would be reported at a poor 83% and 88% respectively, yet the service availability the customer experienced was a dismal 75%.  That’s a quarter of the time the customer cannot use the service.  Think of it as one in four times you go to use your car it wouldn’t start, or worse yet, 6 out of every 24 hours a plane flies it just stops.  Hope you’re not over the Rockies during that window.
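The arithmetic in that example is worth making concrete.  This small sketch models the same hypothetical day: the service is only up during the hours when both the server and the network are up, so the composite availability comes out worse than either component’s figure.

```python
def availability(up_hours, total=24):
    """Fraction of the day a component (or the whole service) is usable."""
    return len(up_hours) / total

day = set(range(24))
server_up = day - {0, 1, 2, 3}       # server down 4h -> up 20h (~83%)
network_up = day - {3, 4, 5}         # network down 3h -> up 21h (~88%)
service_up = server_up & network_up  # service needs both -> up 18h (75%)

print(availability(server_up), availability(network_up), availability(service_up))
```

Only one outage hour overlaps (hour 3), so the combined downtime is 6 hours, which is exactly why component-level SLAs can all look acceptable while the end-to-end service does not.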

Thus the consequences of having unmanaged and poorly managed I.T. services are increasing.

Technology Management Revisited

I don’t wish to come across like some “back in my day” type of person reminiscing about the good old days, for several reasons.  First, when I was starting out, the powers that be thought it would be a good idea to buddy me up with many of the senior technical staff and fellows within the company.  This had a profound effect on how I think today.  Second, my employers engaged me in active participation, brainstorming solutions to a wide range of big corporate problems, rather than having me as a go-fer.  The lessons I learned from these experiences helped shape who I am and are still with me today, though I’m on the other side of the fence now, mentoring others starting out or growing their careers at Satory Global.

First lesson: that old guy you think is slow may be thinking a lot deeper about the problem than you are.  It may be that he or she has recognized a pattern from the past; experience is being able to recognize when you’re about to make the same mistake again.  Or maybe they see a learning experience that can be passed along to you.

Second lesson: my mentors never gave me the answers; they asked me questions that provoked me to think deeper about the problem.  At Toyota these people are called Ninja Mentors.  I was fortunate to have many over my career at multiple companies.

This was a rather exhilarating experience.  Later, as my father’s generation was retiring, I was asked to work on capturing some small percentage of the knowledge of these grand old wizards.  Today I still see people trying to recreate hard-learned knowledge that was captured in another technology and is not recognized as the same problem, perhaps because the corporate knowledge retired with the person who held it.  Sad but true.

Yesterday I posted about technology adoption vs. information management.  We could go into a long discussion as to whether the cloud is nothing more than timeshare rebranded, but it’s a waste of time.  The technology trend is here; the important factor is whether it’s managed to improve business productivity or just following the latest fad.  In that line of thought, here is a reposting, slightly updated, of an article I wrote in 1998.  I am actively working on updating the last methodology with the intent of sharing it at one of the SharePoint Saturday events.

 Technology Management and Investment

By Brian K. Seitz © 2011

Why “technology management?” While it has not been a cause for business failure, or at least not identified as such, the inability to manage technology is becoming a significant factor in a company’s success. This in turn has made developing technology management one of a company’s core competencies. It is a critical skill that technical professionals must develop. 

 To understand why technology management is so critical, all one has to do is scan the newspapers and trade journals for examples of how it has failed. These examples include catastrophic results from failure to manage the technology, the current wave of forecasted unforeseen effects of technological decisions, and Herculean efforts to mitigate those impending effects on the corporation.

 The year-2000 problem is just the most recent incident in a still immature industry and discipline. However, such technologies as the internal-combustion engine, the telephone, pesticides, and fertilizers have effects and consequences beyond the scope and duration that inventors and exploiters of the technology had considered.

 By now some of you may be starting to question whether this is an article about the subject of technology management, an attempted political rationale for world planning, or another anti-technological manifesto. Let me assure you that it is neither of the latter two. My home is filled with the latest electronic entertainment gadgets and wizardry. My family owns several automobiles, and with the exception of a rather large library and paper document file system, most of my written composition and information processing is done on one of several computers on our home network. What else would you expect from someone that works at Microsoft?

 What should you expect from an article purportedly about technology management? In short, more questions than answers. I could spend literally hours discussing the finer points of some esoteric theory of management or give mounds of statistical data to show why one forecasting technique is better than another one. However, I don’t see that it would hold much interest or relevance, except to a few specialized domain experts. It is not my objective to make you an expert in the field or to present the latest fad technique. Rather, I seek to present to you a broad overview of technology management and its potential benefits and possible pitfalls and to raise some points when considering technology management.

 Technology Management Techniques

  • Architecture
  • Systems Engineering
  • Microsoft Solutions Framework 2.0
  • Investment Strategies for Information Systems (ISIS)
  • Information Economics
  • Total Cost of Ownership (TCO)
  • Portfolio Management (Investment Model)
  • Rapid Economic Justification (REJ)

 What is Technology Management?

Technology management, in the opinion of this author, is the set of processes used to plan the selection, usage, and demise of technologies.  This sounds like a very broad statement, where almost any activity of a technical nature could apply, and it is.  Most technical and business professionals are engaged in technology management through the selection and design of the technologies used to design a product or products.  We are thus engaged in this process consciously, unconsciously, or by abdication to circumstances.  This observation is not an absolute.  There are various processes I would suggest have managing technology among their objectives.  However, I would also suggest that many of us use these processes only semiconscious of their effects.  I refer to the architectural process, systems engineering, and several design processes such as QFD, VA/VE, and the like.  While these have the primary goal of designing a product, various aspects of these techniques ask the practitioner to select properties and attributes that determine the qualities the final product should possess.  In most of these techniques it is assumed that the practitioner has an understanding and perspective of the qualities the final product should have in a larger context.

 Who cares and why?

So does anyone care? The first and most obvious answer has already been hinted at earlier. With the advent of the “Global Economy,” corporations have geared up to this new challenge through improved management of all resources. The effective management of technology and intellectual property has been seen as a competitive advantage. Technology management has thus become a critical competency requirement in most organizations.  There are two important points that need to be made with regard to the technologies one needs to manage: the technology a corporation develops and delivers as part of a product to a customer, and the technologies which are developed to develop the product. 

Simply, return on investment is why corporations care. Today every resource at the command of a company is examined for how well it contributes to the bottom line. Capital, equipment, people, and technology, all are now being viewed through the same financial lens. Likewise the philosophy that all resources should be managed has also changed the corporate psyche. 

In the past, corporations have swung like a pendulum between two extremes, full lifecycle planning and organic growth, in how they manage technologies.  The fact of the matter is there is no perfect technology management methodology.

 Technology Management Techniques

During my early career at Rockwell one of the first Technology Management techniques I used was “architecture” as opposed to design.

A design is a specification for the creation of a product, while an architecture is the set of rules for the selection and usage of elements that constrain the process of design.  Architecture constrains the design activity such that it produces only designs of similar characteristics and qualities, like a product family.

What is useful about this methodology is its ability to focus simultaneously on both the whole and its constituents. Unlike many other techniques architecture seeks to strike a balance between these two perspectives (analysis and synthesis). Too often laypersons speak of architecture when they mean the design of a product. While architectural principles enable one to address this issue, using the metaphor to the extreme, in this context, puts him or her in jeopardy of creating a monument. It looks nice, but doesn’t have relevance or function for the present conditions.

 Systems engineering, a close cousin to architecture, seeks to decompose a product into a collection of subsystems or components which when assembled meet the overall product requirement. The basis of this technique is the decomposition and allocation of requirements to the various elements that compose the final product. These elements are traditionally organized in a hierarchy that is related to a similar decomposition and allocation hierarchy of requirements. 

Individual techniques within this methodology assist the designer in analysis and decision-making from a predictable and repeatable set of rules.  Another important concept within systems engineering is the focus upon the management of interfaces.  The rationale is that properly constructed interfaces isolate the implementation of a component from the rest of the system; therefore a component can be replaced with another as long as it presents the same interface and characteristics.  This concept makes modularization and delegation possible, since the interface is the only place where the system as a whole is affected by its pieces.
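The interface principle can be illustrated with a minimal sketch.  The `Storage` class and its methods below are invented for illustration; the point is that the caller depends only on the interface, so any conforming implementation can be swapped in without affecting the rest of the system.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The interface: the only contract the rest of the system depends on."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One interchangeable implementation; a database-backed one could
    replace it as long as it honors the same interface."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def record_order(store: Storage, order_id: str, detail: str) -> str:
    # This caller knows only the Storage interface, never the implementation.
    store.save(order_id, detail)
    return store.load(order_id)

print(record_order(InMemoryStorage(), "A-100", "2 widgets"))
```

Because `record_order` touches only the interface, swapping `InMemoryStorage` for any other conforming component leaves the system as a whole unaffected, which is precisely the modularization argument above.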

Other models of managing technology have derivations from the investment and financial community. The four most popular techniques are:

  • Portfolio Management
  • Investment Strategy for Information Systems (ISIS)
  • Information Economics
  • Rapid Economic Justification (REJ)

These methods are concerned less with technical issues than value and prioritization clarification. These techniques address establishment of benefit or value, assessment of risk and uncertainty, prioritization, and decision-making.

Portfolio management: technologies are clustered into groups based upon similar attributes, and decisions about managing these technologies are made based upon those groupings.  The most famous of these techniques is the Boston Consulting Group’s matrix, which classifies members into groups: Dog, Cash Cow, Rising Star, etc.

 

Investment Strategy for Information Systems: an asset allocation and justification methodology developed by IBM in the 1980s to assist MIS directors.  The benefit here is the ability to make decisions based upon the relative values of the various members.  The disadvantage is the oversimplification of issues to a single value decision that may not address other factors with significant effects.

Information Economics: a technique developed by Parker and Benson of IBM.  It provides the practitioner a means to quantify the various factors which affect decisions and to establish a relative weighting of their importance.  Though not a purely objective approach, these methods give the decision maker visibility into the logic behind the decision made.  Like portfolio management, these techniques group technologies based upon attributes and the relative weight of each attribute.  The result of using these methodologies, especially in a group, is more than just a ranking of alternatives: it forces one to examine the values placed upon decision criteria and apply them equally across all options.
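A minimal sketch of this kind of weighted scoring follows.  The criteria, weights, and candidate projects are invented for illustration and are not taken from Parker and Benson; what matters is that the value judgments (the weights) are explicit and applied equally across all options.

```python
# Invented criteria; the weights sum to 1.0 and encode the decision
# maker's explicit value judgments.
weights = {"strategic_fit": 0.40, "risk_reduction": 0.25, "cost_savings": 0.35}

# Invented candidate projects, scored 0-10 per criterion.
options = {
    "upgrade_erp": {"strategic_fit": 8, "risk_reduction": 6, "cost_savings": 4},
    "new_crm":     {"strategic_fit": 6, "risk_reduction": 4, "cost_savings": 7},
}

def weighted_score(scores, weights):
    """Apply the same weights to every option's criterion scores."""
    return sum(scores[c] * w for c, w in weights.items())

ranking = sorted(options, key=lambda o: weighted_score(options[o], weights),
                 reverse=True)
print(ranking)
```

The ranking itself is less important than the debate the weights provoke: changing the weight on strategic fit in the sketch above can flip the outcome, which surfaces exactly the value discussion these methodologies are meant to force.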

 This is where the Microsoft Solutions Framework (MSF 2.0) comes in. It is not a methodology in the strictest sense; it provides a map of techniques applicable to the various phases of the Software Development Life Cycle (SDLC). Many of the elements of the Framework can be employed or not. These techniques address various phases of software technology management such as planning, selection, and coordinating design and construction activities. In addition there are elements (templates) which guide and assist professionals in activities such as installation. These templates reduce project plans to a parametric exercise, thus enabling a manager to focus upon non-repetitive activities instead of the simple day-to-day mechanics.

 What MSF does is allow a CIO, for instance, to manage the technology within an infrastructure of an organization. It’s important to note, what MSF doesn’t do is force you to choose a specific technique, therefore enabling you to choose the most appropriate techniques for your organization.

Rapid Economic Justification: a technique developed in conjunction with Microsoft Corporation during the late ’90s that sought to create a fast, logical, and mathematically correct model for evaluating the benefits and costs of IT projects.  The key factor was establishing a baseline of benefits in metrics that could be traced back to logical assumptions believable by CFOs.  I am still working on a derivative of the methodology (BEIS) that adds portfolio management, options theory, and a more robust means of measuring value.
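As a hedged illustration of the kind of baseline such a justification reduces to, here is a standard net-present-value calculation over an invented cash-flow stream.  The figures are placeholders, not REJ outputs; only the discounting arithmetic is standard.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 (up-front) amount."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: $120k up-front cost, then three years of benefits
# whose amounts would each be traced back to stated, CFO-believable assumptions.
flows = [-120_000, 50_000, 60_000, 60_000]
print(round(npv(0.10, flows), 2))
```

The discipline REJ adds is not the formula but the traceability: each benefit figure in the stream must be defensible on its own, so the resulting NPV is only as credible as its weakest assumption.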

Originally Published in START Magazine March/April, 1998, Minor Revision 2011 to add REJ

Technology Adoption vs. Information Management

One of the things I’ve realized over the years is that popular technology adoption does not equate to business productivity.  It may actually be the reverse: when a technology becomes all the rage and takes on fad-like status, it rapidly gets out of control, resulting in all manner of side effects.

SharePoint is one such example, being touted as the application-backlog silver bullet: “End users can build their own collaboration applications themselves, leaving I.T. to work on infrastructure and enterprise applications.”  I think I heard this story before with MS Access and Excel.  Now CIOs and business executives are left with thousands of unmanaged applications, with critical corporate information distributed on workstations and laptops across the enterprise.

This is not a critique of MS Office or SharePoint, far from it; it’s more a condemnation of how adoption is managed, or rather mismanaged, in organizations.  Despite all the talk of governance and adoption maturity levels, most of these are applied to the installation of the technology, not its productive use.  I watch in amazement as technologists, I.T. consultants, and sales representatives promote technology virtues with all sorts of hyperbole, knowing that an organization is lucky to get one fifth of those benefits.  Not because the technology can’t deliver, but because the organization can’t incorporate usage best practices as well as installation best practices.

A simple proof point: how many MS Access databases and Excel spreadsheets are used in your organization to manage core work?  How many are managed as a critical asset?  How much of the information is duplicated in some form or version of the truth?  And now you’re going to enable end users to do the same in SharePoint by creating their own sites?

This is not a call to ban end-user site generation; it is more a suggestion that education and training on information management for information workers (i.e., most of us now) is becoming a more critical need than simple computer literacy.  Either we learn how to manage our information better or we’ll drown in an information glut of our own making.  Consider joining organizations such as AIIM and DAMA, which are providing some thought leadership in this area.

Satory Global LLC
Twitter: @bseitz
www.satory.com

Brain Drain (reposted)

Ayn Rand foreshadowed it in her bestseller, Atlas Shrugged, but what can be done about brain drain?  The NEO model offers a new kind of vision for leveraging intellectual property and assets.  During the past twenty-odd years I have held various titles and positions, almost always with the same small twist to them: I am typically asked to take on the most troublesome areas of a business or project.  Part of this is by choice and part of this is by reputation.  In days past, people like myself were called corporate troubleshooters or consultants.  We had very vague titles and charters to go forth and do good work, fixing whatever we saw as a problem within the business.  It is from that perspective, as a dispassionate observer, that I offer the following thoughts on business practices today.

Nowadays, in the fast-paced dot-com and post-large-corporation world, the apparent need for deep knowledge, skills, and refinement appears to have vanished.  If a process or product doesn’t work right out of the gate, the rationale goes something like this: patch it quickly; if it doesn’t survive in the market, dump it fast.  Businesses have become the ultimate example of what was called the disposable generation, tossing products, companies, and people like used Kleenex.

However, like all trends and fads, this too shall pass.  As corporations like Microsoft, Boeing, Cisco, and Sun Microsystems field new technologies, and other corporations wish to field the same, the need for skills beyond typical project-management competency becomes more apparent.

Shortage of Skilled Workers

For evidence of this, we need look no further than the advertised shortage in the I.T. field.  The statistics you see from Gartner Group, IDC, and the like all show the same painful result: the majority of IT projects fail to meet the expectations of the business, and in too many cases just plain fail in some technical aspect.  While internally this failure is rationalized away by statements like “the technology just wasn’t available yet” or a plethora of other excuses, the simple fact is that the deep knowledge and skills, those that make an employee part of the enterprise versus just another set of arms and legs, are not there anymore.  Neither is the commitment of employees to develop such deep knowledge, since corporations are increasingly taking a short-range view and, as a result, have broken the perceived employer/employee bond.

Corporations are reaping what they sowed years ago as they broke faith with employees.  As such, employers are now forced not only to compensate higher, but also to give away a piece of the pie, just to attract or retain the basic or core skills needed to produce.  Industrial karma, you might say.

Atlas Shrugged

What isn’t showing, but was foreshadowed in Ayn Rand’s book Atlas Shrugged, is what has been labeled brain drain at various corporations.  This occurs within a business’s lifecycle when, in the technical employee’s eyes, the motivating factors versus the de-motivating factors, such as lack of freedom, interest, and fun in the job, and low-to-average financial compensation, do not add up to a net gain.  What occurs is compliance to rules and measures, but not commitment to the business’s success, as employees see no linkage between their efforts, business performance, and their professional success.  This results not only in the mass trend employers are seeing now, difficulty in finding needed skills, but in a more wicked issue hiding like an undercurrent beneath the visible employment waters.

Essentially, the best and brightest are voting on a corporation’s perceived short- and long-term viability with their feet. Look at the dot-com phenomenon: shortly after many of these companies IPO, the intellectual horsepower in the corporation leaves and the company falls. Mergers and acquisitions are another area in which these conditions abound. In days past, smart businessmen knew enough to insist that the key talent and staff stay long enough for the company to be assimilated, which at least gave it a fighting chance to survive. However, these organizations too are now suffering the realities of the shift in power in the employment contract they themselves pushed to change.

The Matrix (I’m going to show you a new world…)

What does that mean? A new business model is needed. One that truly integrates employees and employers in a common purpose and direction. Enter NEO, a code name for a new collaborative environment named for a character in the pop film, “The Matrix”.

The interesting aspect of the NEO model is that it is not “a” model per se but several models integrated into a dynamic context. This context is the recognition that the electronic infrastructure (a.k.a. the Web) is a vehicle to link people together, enabling them to collaborate. This collaboration occurs in three parallel dimensions: E-space, P-space, and I-space.

E-space is the rushing world of bits and bytes, the data processing and telecommunications into which the I.T. community submerges itself, divorcing itself from the rest of the world. It can be characterized by the terms access, connectivity, and bandwidth.

P-space is the physical world we all belong to and are aware of. It contains the phenomena that scientists spend their days describing and that engineers capitalize upon to build products the rest of the world finds valuable.

I-space is the abstract dimension of ideas, information, knowledge, and principles. It is where creativity and imagination live. It is less investigated and documented than the other dimensions, and the rules and laws that govern it are therefore harder to understand. NEO operates on all three planes; however, it is primarily an I-space entity, manifested in P-space via the E-space entities that represent I-space properties and behaviors.

So what does all this second-year philosophy rhetoric have to do with how an engineering services company works? To quote the movie The Matrix, “Imagine a world…” NEO is a set of rules describing how an enterprise without boundaries works, an enterprise based upon several simple principles: 1) People should be compensated for the value they bring to the corporation, not the time they dwell there. 2) Knowledge and information are the means to collaboration and have a measurable value just like P-space entities (i.e., copyrights, patents, and other intellectual property). 3) Value is in the eye of the recipient; what I value may not be the same as what you value, nor have the same priority.

Having shown the linkage between philosophy and business management, extending it yields several models which describe how this boundary-less enterprise is structured and behaves. First is a model of compensation called “Personalized Employment™”. Next is a model of intellectual capital called “the Registry™”, which is intimately tied to a model of how to manage assets of every class, called Interlocking Portfolio Management™.

Glimpses of this model have been seen within the technical community during the past decade.

Show Me the Money (Personalized Employment)

Devices like options, grants, incentive pay, and improvement and performance bonuses are now part of the typical high-tech compensation plan. Each has its pros and cons. For employers, options, grants, and incentive pay represent a way to motivate an employee while risking very little up front. This compensation is based upon the demonstrated performance of the company, the employee, or both. In many new companies this is the way to attract or retain talent without having to lay out significant cash, and it works well as long as the corporation is on a fast growth path. However, just like other forms of compensation, it can set up an internal competition between employees as they vie for a share of the pie. Look at the exodus from corporations once their double-digit growth is over.

For the employee, these devices can also be boom or bust. If the corporation does well and your performance is perceived as critical to its success, gold flows into your pockets, or at least the promise of gold. Options valued at x dollars may not be worth a tenth of that when it is time to exercise them. Witness Microsoft’s recent stock drop and the adjustment it made to its stock option plan to retain employees.

Compensation within NEO is characterized by the concept of personalized employment. It is similar in principle to the cafeteria-style benefits programs that many large corporations have instituted, but extended to create a flexible total compensation package. Remember, value is in the eye of the recipient. This is balanced against the value contribution an employee makes to the corporation.

Within NEO, each contributor is compensated in several ways according to the value-add they bring to a project. The first level of compensation is on a deliverables basis, no different from work for hire or other contracted labor. The difference is that the compensation is for that unit of work only: if someone can do the job, with the same quality, in one day versus two weeks, it makes no difference. The compensation is for the result, not the level of effort. The next level of compensation is a little trickier to follow. It is based upon the original value-add a contributor creates, plus any value added beyond the project to other corporate assets outside the scope of the original work unit. That value-add is determined as a percentage of the project’s entire value-add. A portion of the revenue generated over the project’s life is segmented into a pool for contributors to share, and that pool is distributed to contributors based upon the value-add percentage determined earlier. In effect, the contributor now has a royalty check or annuity for the life of the project.
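The two compensation levels described above can be sketched numerically. All of the figures, names, and the 10% pool percentage below are hypothetical illustrations, not values from the NEO model itself; only the structure (fixed deliverable fees plus a revenue pool split by value-add percentage) comes from the article.

```python
# Hedged sketch of the two NEO compensation levels described above.
# Figures, contributor names, and the pool percentage are hypothetical.

def royalty_shares(period_revenue, pool_pct, value_add_pct):
    """Distribute a revenue pool to contributors by value-add percentage."""
    pool = period_revenue * pool_pct
    # Round to cents so the payouts are clean currency amounts.
    return {name: round(pool * pct, 2) for name, pct in value_add_pct.items()}

# Level 1: deliverable-based pay -- a fixed price per unit of work,
# regardless of whether it took one day or two weeks.
deliverable_fees = {"alice": 5_000, "bob": 5_000}

# Level 2: royalty pool -- e.g. 10% of this period's project revenue,
# split by each contributor's assessed share of the project's value-add.
shares = royalty_shares(
    period_revenue=200_000,
    pool_pct=0.10,
    value_add_pct={"alice": 0.60, "bob": 0.40},
)

print(shares)                                        # {'alice': 12000.0, 'bob': 8000.0}
print(deliverable_fees["alice"] + shares["alice"])   # 17000.0
```

The royalty computation repeats each period for the life of the project, which is what turns the value-add percentage into the annuity the article describes.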

Asset Management (The Registry and Interlocking Portfolio Management)

NEO takes advantage of a unique asset management technique called Interlocking Portfolio Management™ (IPM). Unlike the typical view of asset management, NEO views almost every object or entity as an asset which potentially should be managed. This includes physical, fiscal, and intellectual types of objects. These assets are classified and categorized into subsidiary portfolios to be managed. Each portfolio is then managed and cross-analyzed to identify opportunities and liabilities within the entire portfolio (i.e., synergy).

This is achieved through a classification matrix and an assessment of the value of these assets in an economic context. Each asset is monitored as to its past, present, and future value, singly and jointly when combined with other assets, through a dependency analysis and several other proprietary economic models. The results are then displayed on what’s called an Enterprise Dashboard so that portfolio managers can decide what positions to take regarding these assets: divest, retain, or reinvest.
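The classify-value-position loop above can be sketched in a few lines. The asset names, values, and the crude trend rule below are hypothetical stand-ins; the article's actual economic models are proprietary and not reproduced here.

```python
# Hedged sketch of the IPM idea: classify assets into portfolios by class,
# then derive a divest/retain/reinvest position from their value trend.
# Assets, values, and the position rule are hypothetical illustrations.
from collections import defaultdict

assets = [
    # (name, asset_class, past_value, present_value, projected_value)
    ("mainframe", "physical", 900, 400, 100),
    ("patent-17", "intellectual", 200, 500, 800),
    ("cash-fund", "fiscal", 300, 300, 300),
]

def position(past, present, future):
    """Crude trend rule: divest declining assets, reinvest in growing ones."""
    if future < present < past:
        return "divest"
    if future > present:
        return "reinvest"
    return "retain"

# Classify into subsidiary portfolios and attach a recommended position.
portfolios = defaultdict(list)
for name, cls, past, present, future in assets:
    portfolios[cls].append((name, position(past, present, future)))

for cls, entries in sorted(portfolios.items()):
    print(cls, entries)
```

A real IPM implementation would add the cross-portfolio dependency analysis; this sketch only shows the per-asset classification and positioning that feeds the dashboard.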

The primary function of the Registry is to provide a means to capture, encode, and disseminate the information and knowledge within the NEO sphere of influence. Thus the Registry and IPM represent a new kind of knowledge-based system, one that is capital-driven. (And, as such, one that we might expect capitalist Ayn Rand to advocate as a viable means of combating brain drain.)

Of course, it isn’t Rand who is responsible for attracting the brainpower necessary to fuel innovation in your company, and it isn’t Rand who is prepared to offer her brainpower at some agreed-upon ‘price.’ But maybe one of these scenarios describes you. If so, and if you’re interested in learning more about how the NEO model works, contact the author at briankseitz@earthlink.com, or check out the Intellectual Arbitrage Group’s website: http://www.intelarbgrp.org or http://home.earthlink.net/briankseitz.

Originally Published in MCADCAFE August 2000

SharePoint as an ITIL Implementation Tool

Why should I care about ITIL?

[CxOs, especially CIOs, should care in the wake of rising I.T. costs, the strategic importance of I.T. as an enabler, and the growing chasm between business needs and I.T.-provided capabilities]

Business Environment

Information Technology has evolved from the automation of financial transactions, once called EDP (electronic data processing), into a strategic component of most businesses, and in many cases part of the products and services the company delivers.

The computer industry has matured over the decades from EDP to MIS to, now, I.T. During that evolution the communities the technology supports have widened and deepened, to the point where almost every function and role within an enterprise has been touched by it.

Computer technology initially automated repetitive accounting tasks, increasing the productivity of the accounting department. In the next generation of adoption (MIS), managers were able to create and monitor financial plans; the computer was still being used to “crunch” vast quantities of numbers. During this and the previous generation, computer resources were typically big centralized mainframes.

The next generation of information technology adoption came with the advent of microchips and the microprocessor, making workstations and personal computers possible. Select members of company staffs now had two sets of resources: mainframes and personal workstations. While the tools were initially primitive, a larger pool of professionals and amateurs focused upon creating a richer and more flexible suite of functions, ushering in the personal productivity initiatives that still dominate the market today.

From here, variations on the theme of connecting computers together created the next generation (networks). Computers were connected either peer-to-peer or in a client-server model, enabling people to copy content around the network to share with others.

These advancements in usage came with liabilities, however: more effort was needed to maintain multiple computers, their software, and the network connections.

While the product development process for the information technology industry has evolved over the past several years, attempting to catch up with demand, the operations area has not kept pace.

Most I.T. operations are little more advanced than where they started. Instead of JCL or EXEC, operations now use scripting languages. The delivery model is still: after Q.A., install, run, optimize, and stabilize. There is a natural tension between development and operations due to their differing roles, responsibilities, and measurements.

As such, many of the applications created years ago are still in operation, patched together when they should have been sunset or displaced. This is a lesson hard-goods manufacturing has learned: there are not too many manual lathes and milling machines around anymore; after its effective life, such a tool is salvaged and replaced. I.T. has not adopted a real lifecycle model yet, except where purchased software is concerned. The end result is a legacy wake of support that keeps growing.

As more solutions involve a configuration of multiple components, end users are experiencing increasing amounts of operational brownouts and blackouts. These service interruptions, however, are not reflected accurately in the statistics presented to the user community compared to what users actually experience. This discrepancy is due to a difference in how services are measured: for the end user, service is an end-to-end process, while most I.T. organizations report on individual components.

A simple example: the server CPU was up 99.9% of the time, the Storage Area Network (SAN) was online 97%, the network 96%, the application 95%, and the DBMS 98%. The logical assumption is that the user experienced 95% service availability. What the user actually experienced was 86.6%, because availability is the result of all the components operating together; any break in the availability chain results in a loss of service. The formula for calculating this:

Service Availability = Server × SAN × Network × DBMS × Application
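Multiplying the example's component availabilities reproduces the 86.6% figure:

```python
# End-to-end service availability: every component in the chain must be
# up, so component availabilities multiply rather than average.
from math import prod

components = {
    "server": 0.999,
    "san": 0.97,
    "network": 0.96,
    "application": 0.95,
    "dbms": 0.98,
}

service_availability = prod(components.values())
print(f"{service_availability:.1%}")  # 86.6%
```

Note that the weakest component (95%) is an upper bound, not the answer; each additional component in the chain pulls the end-to-end figure further down.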

Thus the management of technology vs. the mechanics of technology is becoming a critical set of skills. The business skills needed to understand and manage risk, finances, communications and customer relationships have become critical success factors for technologists.  It’s no longer enough for a Programmer or DBA or System Administrator to just have technology competency.

As technology becomes easier to operate from an end-user perspective, stakeholders slip into complacency regarding the effort needed to develop, operate, and manage the complexities behind an apparently simple application. The magic button leads users to conclude that it’s easy for anyone to build the system: since operating it is so simple, building it must be also.

An overused example, but still relevant: how many people really understand the technology needed to operate a car, let alone a current-model car with all its electronics? Yet people dismiss the amount of complexity inherent in the automotive transportation system, from vehicles to fuel distribution to roads and traffic control systems.

Each component of the system is itself a complex system, and coordinating the interfaces, interactions, and operations of these components is an ever-increasing task, one that has not received the same attention to automation or innovation as line-of-business applications, even though information technology is now a line of business unto itself.

Delivery of I.T.’s value is evolving in advanced organizations from a product development to a service performance model.

Prior to the introduction of I.T. as a service this past decade, I.T. organizations modeled themselves after product factories, focusing on metrics such as speed to market, product features, product robustness, and cost. On the operational and support side, only minor efforts were made. However, as with physical products, the glory and rewards have traditionally gone to the creative and design side of the equation, along with the funding. Support and operations rarely get accolades; they are usually being asked to squeeze one more cycle out of an underpowered system.

Thus those in the know kept a fair distance from support or operations. If you were a new employee, chances are your first assignment was in operations or on the helpdesk, with the rationalization that it would give you a well-rounded view of how the organization worked. As such, it became, or was thought of as, the dumping ground for novice and underperforming employees; a similar fate befell mechanical engineers who were sent off to manufacturing rather than design roles.

This scenario may change soon. As in the engineering environment, where the democratization of design (e.g., mass customization) has enabled end users to create their own solutions with the tools at hand, this change in dynamics is occurring rather rapidly in the I.T. environment as well. SharePoint is one such platform that is changing the game.

What that means is I.T. organizations will have to rethink value and how that value is delivered.