Know What You Got: Just what, exactly, should be inventoried?

[tl;dr Application inventories linked to team structure, coupled with increased use of meta-data in the software-development process tied to architectural standards (such as functional, data and business ontologies), are key to achieving long-term agility and complexity management.]

Among the biggest challenges of managing a complex technology platform are knowing when to safely retire or remove redundant components, and knowing when there is a viable reusable component (instantiated or not) that can be used in a solution. (In this sense, a ‘component’ can be a script, a library, or a complete application – or a multitude of variations in between.)

There are multiple ways of looking at the inventory of components that are deployed. In particular, components can be viewed from the technology perspective, or from the business or usage perspective.

The technology perspective is usually referred to as configuration management, as defined by ITIL. There are many tools (known as ‘CMDBs’) which, using ‘fingerprints’ of known software components, can automatically inventorise which components are deployed where, and their relationships to each other. This is a relatively well-known problem domain – although years of poor deployment practice mean that random components can be found running on infrastructure months or even years after the people who created them have moved on. Although the ITIL approach is eminently sensible in principle, in practice it is always playing catch-up, because deployment practices are only improving slowly in legacy environments.

Current cloud-aware deployment practices, as embodied in dev-ops, are the antithesis of this approach: all aspects of deployment are captured in scripts and treated as source code in their own right. The full benefits of the ITIL approach to configuration management will therefore only be realised when the transition to this new approach to deployment is fully complete (alongside the virtualisation of data centres).

The business or usage perspective is much harder: typically this is approximated by establishing an application inventory, and linking various operational accountabilities to that inventory.

But such inventories suffer from key failings – specifically:

  • The definition of an application is very subjective and generally determined by IT process needs rather than what makes sense from a business perspective.
  • Application inventories are usually closely tied to the allocation of physical hardware and/or software (such as operating systems or databases).
  • Applications tend to evolve and many components could be governed by the same ‘application’ – some of which may be supporting multiple distinct business functions or products.
  • Application inventories tend to be associated with running instances rather than (additionally) with components that could be instantiated.
  • Application inventories may capture interface dependencies, but often do not capture component dependencies – especially when applications may consist of components that are also considered to be a part of other applications.
  • Application and component versioning are often not linked to release and deployment processes.

The consequence is that the application inventory is very difficult to align with technology investment and change. It is therefore not obvious, from a business perspective, which components could or should be involved in any given solution, or whether there are potential conflicts that could lead to excessive investment in redundant components and consequent under-investment in potentially reusable components.

Related to this, businesses typically wish to control the changes to ‘their’ application: the thought of the same application being shared with other businesses is usually something only agreed as a last resort, under extreme pressure from central management. The default approach is for each business to be fully in control of the change management process for their applications.

So IT rationally interprets this bias as a licence to create a new, duplicate application rather than put in place a structured way to share reusable components, one in which technical dependencies can be managed without visibly compromising business-line change independence. This is rational because safe, scalable component reuse is still a very immature capability that most organisations do not yet have.

So what makes sense to inventory at the application level? Can it be automated and made discoverable?

In practice, processes which rely on manual maintenance of inventory information that is isolated from the application development process are not likely to succeed – principally because the cost of ensuring data quality will be prohibitive.

Packaging & build standards (such as Maven, Ant, Gradle, etc) and modularity standards (such as OSGi for Java, Gems for Ruby, ECMAScript modules for JavaScript, etc) describe software components, their purpose, dependencies and relationships. In particular, disciplined use of software modules may allow applications to self-declare their composition at run-time.
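As a rough sketch of what self-declaration could look like, the Java snippet below enumerates the OSGi-style manifest headers visible on an application’s classpath. It assumes components ship standard OSGi headers (Bundle-SymbolicName, Bundle-Version, Require-Bundle) in their manifests; the class name is illustrative.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Sketch: a running application enumerating OSGi-style manifest headers
// visible on its classpath, so it can self-declare its composition.
// Assumes components declare standard OSGi headers in META-INF/MANIFEST.MF.
public class ComponentSelfInventory {
    public static void main(String[] args) throws IOException {
        Enumeration<URL> manifests = ComponentSelfInventory.class
            .getClassLoader().getResources("META-INF/MANIFEST.MF");
        while (manifests.hasMoreElements()) {
            try (InputStream in = manifests.nextElement().openStream()) {
                Attributes attrs = new Manifest(in).getMainAttributes();
                String name = attrs.getValue("Bundle-SymbolicName");
                if (name != null) { // only components that declare themselves
                    System.out.printf("%s %s (requires: %s)%n", name,
                        attrs.getValue("Bundle-Version"),
                        attrs.getValue("Require-Bundle"));
                }
            }
        }
    }
}
```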

Non-reusable components, such as business-specific (or context-specific) applications, can in principle be packaged and deployed similarly to reusable modules, also with searchable meta-data.

Databases are a special case: these can generally be viewed as reusable, instantiated components – i.e., they may be a component of a number of business applications. The contents of databases are probably best described through open standards such as RDF. However, other components (such as API components serving a defined business purpose) could also be described using these standards, and linked to discoverable API inventories.
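As an illustrative sketch only – the inv: namespace and its properties below are invented for the example, not an established ontology – such a component description could be built with Apache Jena and published into a searchable inventory:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DCTerms;

// Sketch: describing an instantiated, reusable component (a database) as
// RDF so it can be linked into a discoverable inventory. The namespace
// and usedBy property are hypothetical, for illustration only.
public class ComponentDescription {
    public static void main(String[] args) {
        String ns = "http://example.org/inventory#"; // hypothetical namespace
        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("inv", ns);

        Resource paymentsDb = model.createResource(ns + "payments-db")
            .addProperty(DCTerms.title, "Payments Database")
            .addProperty(DCTerms.description,
                "Instantiated, reusable component shared by several applications")
            .addProperty(model.createProperty(ns, "usedBy"),
                model.createResource(ns + "retail-payments-app"))
            .addProperty(model.createProperty(ns, "usedBy"),
                model.createResource(ns + "corporate-payments-app"));

        model.write(System.out, "TURTLE"); // serialise for a searchable inventory
    }
}
```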

So, what precisely needs to be manually inventoried? If all technical components are inventoried by the software development process, the only components that remain to be inventoried must be outside the development process.

This article proposes that what exists outside the development process is team structure. Teams are usually formed and broken up in alignment with business needs and priorities. Some teams are in place for many years; some may only last a few months. Regardless, every application component must be associated with at least one team, and teams must be responsible for maintaining/updating the meta-data (in version control) for every component used by that team. Where teams share multiple components, a ‘principal’ team owner must be appointed for each component to ensure consistency of component meta-data, to handle pull-requests etc for shared components, and to oversee releases of those components. Teams also have relevance for operational support processes (e.g., L3 escalation, etc).

The frequency of component updates will reflect development activity: projects which propose to update infrequently changing components can expect higher risk than projects whose components change frequently, as infrequent change may indicate a lack of current knowledge/expertise in the component.

The ability to discover and reason about existing software (and infrastructure) components is key to managing complexity and maintaining agility. But relying on armies of people to capture data and maintain quality is impractical. Traditional roadmaps (while useful as a communication tool) can deviate from reality in practice, so keeping them current (except for communication of intent) may not be a productive use of resources.

In summary, application inventories linked to team structure, and increased use of meta-data in the software-development process that is linked to broader architectural standards (such as functional, data and business ontologies), are key to achieving agility and long-term complexity management.


Achieving modularity: functional vs volatility decomposition

Enterprise architecture is all about managing complexity. Many EA initiatives tend to focus on managing IT complexity, but there is only so much that can be done there before it becomes obvious that IT complexity is, for the most part, a direct consequence of enterprise complexity. To recap, complexity needs to be managed in order to maintain agility – the ability for an organisation to respond (relatively) quickly and efficiently to changes in markets, regulations or innovation, and to continue to do this over time.

Enterprise complexity can be considered to be the activities performed and resources consumed by the organisation in order to deliver ‘value’, a metric usually measured through the ability to maintain (financial) income in excess of expenses over time.

Breaking down these activities and resources into appropriate partitions that allow holistic thinking and planning to occur is one of the key challenges of enterprise architecture, and there are various techniques to do this.

Top-Down Decomposition

The natural approach to decomposition is to first understand what an organisation does – i.e., what are the (business) functions that it performs. Simply put, a function is a collection of data and decision points that are closely related (e.g., ‘Payments’ is a function). Functions typically add little value in and of themselves – rather, they form part of an end-to-end process that delivers value for a person or legal entity in some context. For example, a payment on its own means nothing: it is usually performed in the context of a specific exchange of value or service.

So a first course of action is to create a taxonomy (or, more accurately, an ontology) to describe the functions performed consistently across an enterprise. Then, various processes, products or services can be described as a composition of those functions.

If we accept (and this is far from accepted everywhere) that EA is focused on information systems complexity, then EA is not responsible for the complexity relating to the existence of processes, products or services. The creation or destruction of these are usually a direct consequence of business decisions. However, EA should be responsible for cataloguing these, and ensuring these are incorporated into other enterprise processes (such as, for example, disaster recovery or business continuity processes). And EA should relate these to the functional taxonomy and the information systems architecture.

This can get very complex very quickly due to the sheer number of processes, products and services – in all their variations – that most organisations have. So it is important to partition or decompose the complexity into manageable chunks to facilitate meaningful conversations.

Enterprise Equivalence Relations

One way to do this at the enterprise level is to group functions into partitions (aka domains) according to synergy or autonomy (as described by Roger Sessions), for all products/services supporting a particular business. This approach is based on the mathematical concept of equivalence relations. Because different functions in different contexts may have differing equivalence relationships, functions may appear in multiple partitions. One role of EA is to assess and validate whether those functions are actually autonomous, or whether there is the potential to group apparently duplicate functions into a new partition.
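To make this concrete, the sketch below (in Java, with invented function names and an invented keying rule) treats partitioning as grouping by an equivalence key: functions that map to the same key are deemed equivalent and fall into the same partition.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch: partitioning business functions into domains via an equivalence
// relation, represented here as a key-extraction function – functions that
// map to the same key are treated as equivalent and land in one partition.
// The function names and the keying rule are illustrative assumptions.
public class FunctionPartitioning {
    record BusinessFunction(String name, String businessContext) {}

    static Map<String, List<BusinessFunction>> partition(
            List<BusinessFunction> functions,
            Function<BusinessFunction, String> equivalenceKey) {
        return functions.stream().collect(Collectors.groupingBy(equivalenceKey));
    }

    public static void main(String[] args) {
        List<BusinessFunction> functions = List.of(
            new BusinessFunction("Payments", "Retail"),
            new BusinessFunction("Payments", "Corporate"),
            new BusinessFunction("Onboarding", "Retail"));

        // Here equivalence is "same business context". 'Payments' appears in
        // two partitions, and EA can assess whether the apparent duplicates
        // warrant grouping into a new, shared partition.
        partition(functions, BusinessFunction::businessContext)
            .forEach((domain, fns) -> System.out.println(domain + " -> " + fns));
    }
}
```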

Once partitions are identified, it is possible to apply ‘traditional’ EA thinking to a particular partition, because that partition is of a manageable size. By ‘traditional’ EA, I mean applying Zachman, TOGAF, PEAF, or any of the myriad methodologies/frameworks that are out there. More specifically, at that level, it is possible to establish a meaningful information systems strategy or goal for a particular partition that is directly supporting business agility objectives.

The Fallacy of Functional Decomposition

Once you get down to the level of a partition, functional decomposition becomes less useful for architecting solutions. The natural tendency for architects would be to build reusable components or services that realise the various functions that comprise the partition. In fact, this may be the wrong thing to do. As Juval Löwy demonstrates in his excellent webinar, this may result in more complexity, not less (and hence less agility).

When it comes to software architecture, the real reason to modularise your architecture is to manage volatility or uncertainty – and to ensure that volatility in one part of the architecture does not unnecessarily impact another part of the architecture over time. Doing this allows agility to be maintained, so volatile parts of the application can, in fact, change frequently, with low impact on other parts of the application.
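A minimal sketch of what this looks like in code, with purely illustrative names: the volatile concern (here, a settlement limit that changes with regulation) lives behind a stable interface, so its churn never ripples out to callers.

```java
// Sketch: encapsulating a volatile concern behind a stable seam so that
// frequent change in one part of the system does not ripple outward.
// All names are illustrative assumptions, not an established API.

// Stable contract: callers depend only on this, which rarely changes.
interface SettlementPolicy {
    boolean canSettle(String counterparty, long amountMinorUnits);
}

// Volatile detail: regulatory limits change frequently; swapping or
// revising this implementation does not touch any caller.
final class RegulatoryLimitPolicy implements SettlementPolicy {
    private final long limitMinorUnits;

    RegulatoryLimitPolicy(long limitMinorUnits) {
        this.limitMinorUnits = limitMinorUnits;
    }

    @Override
    public boolean canSettle(String counterparty, long amountMinorUnits) {
        return amountMinorUnits <= limitMinorUnits;
    }
}
```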

When looking at a software architecture through this lens, a quite different set of components/modules/services may become evident than those which would otherwise be obvious using functional decomposition – the example in the webinar demonstrates this very well. A key argument used by Juval in his presentation is that (to paraphrase him somewhat) functions are, in general, highly dependent on the context in which they are used, so splitting them out into separate services may require making often impossible assumptions about all the possible contexts in which those functions could be invoked.

In this sense, identified components, modules or services can be considered to be providing options in terms of what is done, or how it is done, within the context of a larger system with parts of variable volatility. (See my earlier post on real options in the context of agility to understand more about options in this context.)

Partitions as Enterprise Architecture

When each partition is considered with respect to its relationship with other partitions, there is a lot of uncertainty around how different partitions will evolve. To allow for maximum flexibility, every partition should assume every other partition is a volatile part of its architecture, and design accordingly. This allows each partition to evolve (reasonably) independently with minimum fixed co-ordination points, without compromising the enterprise architecture by having different partitions replicate the behaviours of partitions they depend on.

This then allows:

  • Investment to be expressed in terms of impact to one or more partitions
  • Partitions to establish their own implementation strategies
  • Agile principles to be agreed on a per partition basis
  • Architectural standards to be agreed on a per partition basis
  • Partitions to define internally reusable components relevant to that partition only
  • Partitions to expose partition behaviour to other partitions in an enterprise-consistent way

In generative organisation cultures, partitions do not need to be organisationally aligned. However, in other organisation cultures (pathological or bureaucratic), alignment of enterprise infrastructure functions such as IT or operations (at least) with partitions (domains) may help accelerate the architectural and cultural changes needed – especially if coupled with broader transformations around investment planning, agile adoption and enterprise architecture.


Achieving Agile at Scale

[tl;dr Scaling agile to the enterprise level will require rethinking how portfolio management and enterprise architecture are done, to ensure success.]

Agility, as a concept, is gaining increasing attention within large organisations. The idea that business functions – and in particular IT – can respond quickly and iteratively to business needs is an appealing one.

The reasons why agility is getting attention are easy to spot: larger firms are getting more and more obviously unagile – i.e., the ability of business functions to respond to business needs in a timely and sustainable manner is getting progressively worse, even as a rapidly evolving competitive and technology-led commercial environment is demanding more agility.

Couple that with the heavy cost of failing to meet ever increasing regulatory compliance obligations, and ‘agile’ seems a very good idea indeed.

Agile is a great idea, but when implemented at scale (in large enterprise organisations), it can actually reduce enterprise agility, rather than increase it, unless great care is taken.

This is partly because Agile’s origins come from developing web applications: in these scenarios, there is usually a clear customer, a clear goal (to the extent that the team exists in the first place), and relatively tight timelines that favour short or non-existent analysis/design phases. Agile is perfect for these scenarios.

Let’s call this scenario ‘local agile’. It is quite easy to imagine a situation where every team, in response to the question ‘are you doing agile?’, says ‘Yes, we do!’. So if every team is doing ‘local agile’, does that mean your organisation is now ‘agile’?

The answer is No. Getting every team to adopt agile practices is a necessary but insufficient step towards achieving enterprise agility. In particular, two key factors need to be addressed before a firm can truly be said to be ‘agile’ at the enterprise level. These are:

  1. The process by which teams are created and funded, and
  2. Enterprise awareness

Creating & Funding (Agile) Teams

Historically, teams have usually been created as a result of projects being initiated: the project passes investment justification criteria, the project is initiated, and a team is put in place, led by the project manager. This process has traditionally been owned entirely by the IT organisation, irrespective of which other organisations were stakeholders in the project.

At this point, IT’s main consideration is, will the project be delivered on time and on budget? The business sponsor’s main consideration is, will it give us what we need when we need it? And the enterprise’s consideration (which is often ignored) is who is accountable for ensuring that the IT implementation delivers value to the enterprise. (In this sense, the ‘enterprise’ could be either a major business line with full P&L responsibility for all activities performed in support of their business, or the whole organisation, including shared enterprise functions).

Delivering ‘value’ is principally about ensuring that on-going or operational processes, roles and responsibilities are adjusted to maximise the benefits of a new technology implementation – which could include organisational change, marketing, customer engagement, etc.

However, delivering ‘value’ is not always correlated to one IT implementation; value can be derived from leveraging multiple IT capabilities in concert. Given the complexity of large organisations, it is often neither desirable nor feasible to have a single IT partner be responsible for all the IT elements that collectively deliver business value.

On this basis, it is evident that how businesses plan and structure their portfolio of IT investments needs to change dramatically. In particular,

  1. The business value agenda must be outcome-focused and explicit about which IT capabilities are required to enable it, and
  2. IT investment must be focused around the capability investment lifecycle that IT is responsible for stewarding.

In particular, ‘capabilities’ (or IT products or services) have a lifecycle: this affects the investment and expectations around those capabilities. And some capabilities need to be more ‘agile’ than others – some must be agile to be useful, whereas for others, stability may be the over-riding priority, and therefore their lack of agility must be made explicit – so agile teams can plan around that.

Enterprise Awareness

‘Locally’ agile teams are a step in the right direction – particularly if the business stakeholders all agree they are seeing the value from that agility. But often this comes at the expense of enterprise awareness. In short, agility in the strict business sense can often only deliver results by ignoring some stakeholders’ interests. So ‘locally’ agile teams may feel they must minimise their interactions with other teams – particularly if those teams are not themselves agile.

If we assume that teams have been created through a process as described in the previous section, it becomes more obvious where the team sits in relation to its obligations to other teams. Teams can then make appropriate compromises to their architecture, planning and agile SDLC to allow for those obligations.

If the team was created through ‘traditional’ planning processes, then it becomes a lot harder to figure out what ‘enterprise awareness’ is appropriate (except perhaps for IT-imposed standards or gates, which only contribute indirectly to business value).

Most public agile success stories describe very well how they achieved success up to – but not including – the point at which architecture becomes an issue. Architecture, in this sense, refers to either parts of the solution architecture which can no longer be delivered via one or two members of an agile team, or those parts of the business value chain that cannot be entirely delivered via the agile team on its own.

However, there are success stories (e.g., Spotify) that show how ‘enterprise awareness’ can be achieved without limiting agility. For many organisations, transitioning from existing organisation structures to new ‘agile-ready’ structures will be a major challenge, and far harder than simply having teams ‘adopt agile’.

Conclusions

With the increased attention on Agile, there is fortunately increased attention on scaling agile. Methodologies like Disciplined Agile Delivery (DAD) and Large-Scale Scrum (LeSS), coupled with portfolio concepts like the Scaled Agile Framework (SAFe), propose ways in which Agile can scale beyond the team and up to the enterprise level, without losing the key benefits of the agile approach.

All scaled agile methodologies call for changes in how Portfolio Management and Enterprise Architecture are typically done within an organisation, as doing these activities right is key to the success of adopting Agile at scale.


Microservices in the Enterprise: Friend or Foe?

The Tai-Dev Blog

A micro approach to a macro problem?

The microservice hype is everywhere, and although the industry can’t seem to agree on an exact definition, we are repeatedly told that moving away from a monolithic application to a Service-Oriented Architecture (SOA) consisting of small services is the correct way to build and evolve software systems. However, there is currently an absence of traditional ‘Enterprise’ organisations talking about their adoption of microservices. This blog post is a preview to a larger article, which explores the use of microservices in the Enterprise.

Interfaces – Good contracts make for good neighbours

Whether you are starting a greenfield microservice project or are tasked with deconstructing an existing monolith into services, the first task is to define the boundaries and corresponding Application Programming Interfaces (APIs) of your new components.

The suggested granularity of a service in a microservice architecture is finer in comparison with what…



The Meaning of Data

[tl;dr The Semantic Web may be a CDO’s best friend in their efforts to help the business realise the full value of enterprise data.]

Large organisations with complex legacy infrastructure are faced with a dilemma: the technology that allowed them to grow and capture market share has reached a turning point in terms of the value returned from additional spending. The only practical investments that can be made are to shore up (protect) those investments by performing essential maintenance, focusing on security, and (perhaps) moving to lower-cost, cloud-based infrastructure. Mandatory (usually regulatory) enhancements also need to be done.

But in terms of protecting existing revenue or capturing new markets, legacy applications are often seen as a burden. Businesses are increasingly looking outside their core IT capability to address their needs – especially in technologies that can help pinpoint opportunities in the first place (i.e., analysing and processing the digital foot-prints left by customers and prospective customers, whether those foot-prints are internal to the firm or external).

Technology is both the problem and the solution. And herein lies the dilemma: internal IT organisations cannot possibly advise on all possible technology-enabled solutions for a business. Rather, businesses need to become more “tech-savvy” – a term which is bandied about a lot these days.

What does “tech-savvy” actually mean, and where is the line drawn between a “tech-savvy” business person, and the professional IT service provider? And what form should this ‘line’ take?

Arguably, most businesses have always been tech-savvy: they knew by-and-large where technology was necessary to grow their business. So for the most successful firms, there is a *lot* of technology. Does that mean those businesses are tech-savvy?

Yes, if those firms are able to manage the complexity of their IT – in other words, to be able to adapt their IT to shifting market needs, and to incorporate/absorb innovations into their architecture without significantly reducing that agility. In practice, few firms have such mastery of their IT (and, by implication, their processes and business information systems).

So a new form of “tech savvy” is needed, that allows business folks to leverage opportunities to both find customers and meet their needs (profitably), while preventing a build-up of unsustainable complexity of processes and systems.

In essence, tech-savvy business folk need to get better at understanding data. And IT needs to get much better at making meaningful business data available to their business – irrespective of where that business data may actually reside.

What does this mean in practice? It’s all about semantics – something previously in the domain of data geeks. But major initiatives like the semantic web (led by Tim Berners-Lee, of WWW fame) are making semantics useful to traditionally non-technical people.

The tools available to business folks for discovering what information (i.e., data + context) exists are very poor, and even when the required data is found (i.e., it is known where it is), it can be difficult to extract it and use it in a productive way.

A good example of this may be observed in the proliferation of Excel spreadsheets and Access databases supporting critical business functions. These tools are necessary because the data those areas need from various systems is not where they need it, when they need it – and that is often due to business needs changing faster than IT can capture, absorb and deliver requirements.

Over time (at least in principle), all meaningful business data will (must) end up being processed exclusively by controlled enterprise systems – but this doesn’t mean waiting until IT has made the necessary changes before effectively governing that data and making maximum use of it.

A key principle behind the semantic web is that data can be anywhere. In an enterprise scenario, that means the data could exist in:

  • an internal enterprise system
  • a file system (e.g., Excel sheet or Access database)
  • a website (file download, or website screen scraping)
  • a commercial information provider (Reuters, Bloomberg, etc)
  • a business partner/supplier
  • etc

To take advantage of this, meta-data (i.e., data about data) needs to be made available to businesses, and it needs to be business-relevant and agnostic to internal IT systems and processes.

More than this, the tools needed to discover useful information, as well as to retrieve and process that information, need to be available to tech-savvy business folk. The right set of tools, suitably agnostic to specific architectures and systems, can allow businesses to explore new opportunities and business models, while allowing the IT systems and platforms to catch up and evolve over time – noting that there is no presumption that the eventual source of useful business data originates from, or is stewarded by, internal systems managed by IT.

Such tools also need to enforce compliance to ensure only the right people get access to the right data, and (if necessary) limit the ability for people to pull data outside of the retrieval platform. In essence, they provide a controlled sandbox in which businesses can distill value from information, whatever its source.

Many organisations may approach this by implementing a ‘data lake’. This may (perhaps) be a workable solution for all data assets managed by the IT organisation. But it is not feasible for the many sets of data which are not managed by the IT organisation, but which still have business utility.

Emerging standards and technologies are evolving to meet this need for a data-centric view of information systems – in particular, RDF, OWL and related ‘ontology’ languages, as well as standards like SPARQL which enable the discovery and retrieval of data originating from multiple sources. But tools are still very primitive and generally require too much technology savviness. However, efforts like the Callimachus Project give a hint at the potential of data-driven applications.

At some point, it will not be unreasonable to ask an IT organisation to expose all of its data assets semantically via SPARQL end-points (with appropriate access controls), and to provide tools that allow businesses to explore that data and (where permissible) incorporate it into spreadsheets, models and other tools in ways that let the business realise value from that data without requiring IT change requests. Developing the capabilities to understand semantic data, and to use it to commercial advantage, will take time, but it will be a worthwhile investment – and arguably one that should be made before folks start spending money on ‘big data’ projects that have no wider context.
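As an illustrative sketch (the endpoint URL and the inv: vocabulary are hypothetical, and Jena API details may vary by version), querying such a SPARQL end-point from Java with Apache Jena might look like this:

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

// Sketch: a "tech-savvy" consumer querying an IT-exposed SPARQL endpoint
// to discover which components serve a given application. The endpoint
// and the inv: vocabulary are invented for illustration.
public class DataDiscovery {
    public static void main(String[] args) {
        String endpoint = "http://example.org/sparql"; // hypothetical endpoint
        String query =
            "PREFIX inv: <http://example.org/inventory#> " +
            "SELECT ?component ?title WHERE { " +
            "  ?component inv:usedBy inv:retail-payments-app ; " +
            "             <http://purl.org/dc/terms/title> ?title . " +
            "}";

        // Run the SELECT remotely and print each matching component.
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("component") + " : "
                    + row.getLiteral("title").getString());
            }
        }
    }
}
```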

In fact, I would go so far as to say that any provider (startup or established) delivering technology services to a business should provide SPARQL endpoints as a matter of course. Making data useful and available to the business will help the business realise the value of data and get more involved in how it is captured, processed and stored in the future – and make it easier to incorporate 3rd party solution providers into business operations.

In a nutshell, the Semantic Web may be the CDO’s best friend in their efforts to help the business realise the full value of enterprise data.


The Importance of Principles in Managing Complexity

[tl;dr The first step to managing complexity is to define what is important. Principles are a great way to reach consensus on what is important, and establishing governance around adherence to those principles is the first step to getting on top of change-related complexity.]

Managing complexity is not easy, whether you are in a large organisation, or in a relatively green-field environment where growth is rapid and complexity is a far-off problem you hope to have some day…

The fact is, if your organisation suffers from a complexity problem (in technology, and implicitly, processes and data), and you are still in (a profitable) business, then it is that complexity which, for better or for worse, has been the instrument of that organisation’s success. As a consequence, it is quite hard to convince people to implement complexity management (read ‘enterprise architecture’) before complexity actually becomes a problem.

Sometimes, however, unmanaged complexity brings an organisation’s growth to a grinding halt, in the worst cases leading to eventual shutdown. (In this context, an organisation can be an independent business or a business unit within a larger company.)

Historically, investment in IT incurred large capital costs, so usually there was a large focus on realising value from that investment – i.e., exploiting it to the maximum. Exploiting technology usually means adding more and more functionality to existing systems in ways that meet business imperatives. But business imperatives tend to be fairly short-term in nature. Without guiding principles, the act of exploiting technology leads to ever more expensive change-related costs.

For any self-respecting IT organisation, a core set of principles, actively governed, is an absolute necessity. Principles transcend solution architectures and technologies, and enable project autonomy as decisions can be made locally without requiring ‘central’ approval, provided principles are being adhered to.

Such principles – call them ‘implementation principles’ – should guide how IT organisations do their work and manage change.

But many principles that stay only within the IT organisation may break in the face of overwhelming business imperatives: on the basis that the (immediate) business need comes first, IT often takes it on the chin and accumulates technical debt.

So it is important to agree principles that business stakeholders accept and support, and which they are willing to discuss openly in the context of delaying projects, increasing costs, prioritising change, or driving organisational change.

Let’s call these principles ‘value principles’ – they are focused on how the business can realise value from its investments, particularly in IT.

In order to understand what this means in practice, it may be worth returning to the ‘Zero CapEx IT’ concept introduced in my last post. What if all IT was sourced through (virtual or actual) 3rd parties, so that all of IT was delivered as software-as-a-service?

In this case, the ‘implementation principles’ would be whatever guidelines or standards were used by the SaaS provider to ensure they could deliver change reliably, and maintain service according to agreed levels. The SaaS provider has a vested interest in the client getting value out of their service, but the only levers they have to ensure that is their ability to react in a timely manner to change requests.

From the business’ perspective, adopting an IT (SaaS) service requires structural change to ensure the business can take advantage of the technology. This may mean assigning roles and responsibilities, defining new activities, maybe creating whole new departments, etc. The business may be paying for the service, but unless it pro-actively maximises what the service can offer in terms of what the business needs, it may not get the value. The SaaS provider (i.e., IT) can do little to address this situation, and may end up losing the client if the client does not take action.

On the other hand, if a client is pro-actively engaged, then they will likely want regular changes so they can continually optimise the value of the service to their operations. This puts the onus on the provider to remain agile.

In a corporate environment, where the accountability between value and implementation is often blurred, there is usually little optionality with respect to who your provider is – or, from a provider perspective, who your client is. In fact, the client/provider perspective in a corporate environment can be damaging, as it can reinforce an us-vs-them culture, leading to mistrust and an ‘order-taking’ mentality developing between the client and the provider.

In this environment, complexity will thrive because, after all, the client is tied in and therefore has to keep paying for the service. There is little incentive to be agile: if the client wants more, sooner, they have to pay for it.

If, on the other hand, the client retains an active, supportive interest in how their IT is delivered, as well as what is delivered, then they will get the IT they want.

If the client is not willing to agree certain base-line principles with their (IT) provider, and to put in place structures to enforce them (including a formal waiver process), then one can expect little to improve.

On the other hand, if the client or business partner willingly participates in establishing principles and related governance forums, this will set the tone for how the business/IT relationship evolves going forward.

At the very least, ‘value principles’ (such as, for example, “The business will commit to implementing the necessary structural changes to maximise IT investment”) should be agreed, and the consequence/implications of these to project planning should be understood.

Note that value principles are increasingly including concepts around data governance, which historically have been an ‘implementation’ principle – for example, the principle that ‘information is an asset’.

‘Implementation principles’ are principles which determine how (IT) change is decided and implemented, and how complexity is managed, by the provider. Where the business and technology are inter-twined (i.e., the business value proposition *is* the technology), these would simply form part of the business ‘value principles’. But where the business is agnostic as to the technology, and is only focused on value, then the ‘implementation principles’ are solely for the provider to define and enforce. An example of an ‘implementation’ principle may be ‘Control of technical diversity’, with consequent implications for solution architectures.

Adopting principles means that an organisation has some sense of what it wants to be, even if only aspirationally. It allows individuals or teams to feel comfortable making decisions on their own without feeling they have to get approval, which is key for agile cultures. Establishing formal business+IT forums to review principle waivers helps provide transparency with respect to project risk and platform risk. And it allows Technical Debt to be managed, as every waiver is a recognition of technical debt incurred.

In conclusion, managing complexity is hard. But Principles can make it manageable.


What if IT had no CapEx? A gedanken experiment.

[tl;dr Imagining IT with no CapEx opens up possibilities that may not otherwise be considered. This is a thought experiment and it may never be practical or desirable to realise, but seeking to minimise direct CapEx expenditure in portfolio planning may help the firm’s overall Agile agenda.]

The ‘no CapEx’ thought experiment is related to the ‘Lean Enterprise’ theme – i.e., how enterprises can be more agile and responsive to business change, and how this could fundamentally change an organisation’s approach to IT.

CapEx (or Capital Expenditure) usually relates to large up-front costs that are depreciated over a number of years. Investments such as infrastructure and software licenses are generally treated as CapEx. OpEx, on the other hand, refers to operational expenses, incurred on a recurring basis.

The interesting thing about CapEx is that it is a commitment – i.e., there is no optionality behind it. This means if business needs change, the cost of breaking that commitment could be prohibitively high. So businesses may be forced to keep pushing square pegs through round holes in their technology, reducing the potential value of the investment.

Agility is all about optionality – the deferring of (expensive) commitments to the latest possible moment, so that the commercial benefits of that commitment can be productively exploited for the life of that commitment.

Increasingly, it is becoming less and less clear to organisations, as technology rapidly advances, where to make these commitments that do not unnecessarily limit options in the future.

So, as a thought experiment, how would a CapEx-less IT organisation actually work, and what would it look like?

The first observation is that somebody, somewhere, has to make a CapEx investment for IT. For example, Amazon has made a significant CapEx investment in its datacentres, and it is this investment that allows other businesses to avoid having to make CapEx investments. Is this profitable for Amazon? It’s hard to tell: prices keep going down, and efficiency is improving all the time. In the long run, some profitable players will emerge. For now, organisations choose to use Amazon only because it is cheaper than the capital-investment alternative, over the length of the investment cycle.

Software licensing as a revenue stream is under pressure: businesses may be happier to pay for services to support the software they are using rather than for some license that gives them the right to use a specific version of a software package for commercial purposes. In the days when software was distributed to consumers and deployed onto proprietary infrastructure, this made sense. But more and more generally useful software is being open-sourced, and more and more ‘proprietary’ software is being delivered as a service. So CapEx related to software licenses will (eventually) not be as profitable as it once was (at least for the major enterprise software players).

So, in a CapEx-free IT environment, the IT landscape looks like an interacting set of software services that collectively deliver business value (i.e., give the businesses what they need in a timely manner and at a manageable cost).

The obligation of the enterprise is also clear: either derive value from the service, or stop using the service. In other words, exercise your option to terminate use of the service if the value is not there.

In this environment, do organisations actually need an IT strategy in the conventional/traditional sense? Isn’t it all about sourcing solutions and ensuring the solutions providers interact appropriately to collectively provide value to the enterprise, and that the enterprise interacts with the providers in such a way as to optimise change for maximum value? In the extreme case, yes.

For this model to work, providers need to be agile. They will likely have a number of clients, all requiring different solutions (albeit in the same domain). They need to be responsive, and they need to be able to scale. It is these characteristics that will differentiate among different providers.

All of this has massive implications with respect to data management, security and compliance. A strong culture of data governance coupled with sensible security and compliance policies could (eventually) allow these hurdles to be overcome.

The question for firms, then, is where does it make sense to have an internal IT capability? And, apart from some R&D capabilities, is it worth it? Especially as individual businesses could innovate with a number of providers at relatively low risk. Perhaps the model should be to treat each internal capability as potentially a future third-party service provider? Give the team the ability to innovate on agility, responsiveness and scalability. Retain the option in the future to let the ‘internal’ service provider serve other businesses, or to shut it down if it is not providing value.

It is worth bearing in mind that the concept of the traditional ‘end user’ is changing. The end-user is often someone quite adept at making technology do what they want, without being considered a traditional developer. Providing end-users with innovative ways to connect and orchestrate various (enterprise) services is expected. The difference between ‘local’ and ‘enterprise’ development is then emphasised, with local development being reactive and using enterprise services to ensure compliance etc. while still meeting localised needs.

Such an approach would maximise enterprise agility and also provide skilled technologists and even end users the ability to flex their technical chops in ways that businesses can exploit to their fullest advantage. The cost of this agility is that other organisations can use the same ‘building blocks’ to greater competitive advantage. As has always been the case, it is not so much the technology, as what you do with it, that really matters.

To summarise, while not always possible or even desirable, minimising (direct) CapEx expenses for IT is becoming a feasible way to achieve IT agility.
