Transforming IT: From a solution-driven model to a capability-driven model

[tl;dr Moving from a solution-oriented to a capability-oriented model for software development is necessary to enable enterprises to achieve agility, but has substantial impacts on how enterprises organise themselves to support this transition.]

Most organisations which manage software change as part of their overall change portfolio take a project-oriented approach to delivery: the project goals are set up front, and a solution architecture and delivery plan are created in order to achieve them.

Most organisations also fix project portfolios on a yearly basis, and deviating from this plan can often be very difficult – at least partly because such plans are intrinsically tied into financial planning and cost-saving techniques, such as the capitalisation of expenses, which reduce the bottom-line cost of the investment to the firm (even if they say nothing about the value added).

As the portfolio of change projects rises every year, driven by many external factors (business opportunities, revenue protection, regulatory demand, maintenance, exploration, digital initiatives, etc.), cross-project dependency management becomes increasingly difficult. Managing solution architecture dependencies within that overall dependency framework is more complex still.

What results is a massive set of compromises: solutions that are sub-optimal for pretty much every project, and an investment in technology so enterprise-specific that no other organisation could derive significant value from it.

Even where that sub-optimal technology yields significant value to the organisation as a whole, the benefit may be short-lived: the ability to change the architecture cost-effectively inevitably decreases over time, reducing agility and therefore the ability to compete.

So a balance needs to be struck between delivering enterprise value (even at the expense of individual projects) and maintaining relative technical and business agility. By relative I mean relative to peers in the same competitive sector – sectors which are themselves being disrupted by innovative technology firms that are highly specialist and agile within their domain.

The concept of ‘capabilities’ realised through technology ‘products’, in addition to the traditional project/programme management approach, is key to this. In particular, it recognises the following key trends:

  • Infrastructure- and platform-as-a-service
  • Increasingly tech-savvy workforce
  • Increasing controls on IT by regulators, auditors, etc
  • Closer integration of business functions led by ‘digital’ initiatives
  • The replacement of the desktop by mobile & IoT (Internet of Things)
  • The tension between innovation and standards in large organisations

Enterprises are adapting to all the above by recognising that the IT function cannot be responsible both for technical delivery and for ensuring that all technology-dependent initiatives realise their intended value.

As a result, many aspects of IT project and programme management are no longer driven from within the ‘core’ IT function, but by domain-specific change management functions. IT itself must consolidate its activities to focus on those that can only be performed by highly qualified and expert technologists.

The inevitable consequence of this transformation is that IT becomes more product-driven, where a given product may support many projects. As such, IT needs to be clear on how to govern change for that product, so as to lead it in a direction that is most appropriate for the enterprise as a whole, and not just for any particular project or business line.

A product must provide capabilities to the stakeholders or users of that product. In the past, those capabilities were entirely decided by whatever IT built and delivered: if IT delivered something that in practice wasn’t entirely fit for purpose, then business functions had no alternative but to find ways to work around the system deficiencies – usually creating more complexity (through end-user-developed applications in tools like Excel, etc.) and more expense (through having to hire more people).

By taking a capability-based approach to product development, however, IT can give business functions more options for working around inevitable IT shortfalls without compromising controls or data integrity – e.g., through controlled APIs and services.
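
As a minimal sketch of what such a controlled API might look like – in Python, with all names (Trade, book_trade, the currency and limit tables) being illustrative assumptions rather than any real system’s interface – the point is that controls and data integrity are enforced inside the capability, not in end-user spreadsheets:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trade:
    book: str
    notional: float
    currency: str

# Controls owned and maintained by IT; values here are made up.
APPROVED_CURRENCIES = {"USD", "EUR", "GBP"}
BOOK_LIMITS = {"rates-emea": 50_000_000}

def book_trade(trade: Trade) -> str:
    """Capability exposed to business functions: controls are applied
    at the API boundary, so users can safely build on top of it."""
    if trade.currency not in APPROVED_CURRENCIES:
        raise ValueError(f"currency {trade.currency} is not approved")
    if trade.notional > BOOK_LIMITS.get(trade.book, 0):
        raise ValueError(f"notional exceeds the limit for book {trade.book}")
    # ... persist to the golden-source store here ...
    return "BOOKED"
```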

So, while solutions may explode in number and complexity, the number of products can be controlled – with individual businesses being more directly accountable for the complexity they create, rather than ‘IT’.

This approach requires a step-change in how traditional IT organisations manage change. Techniques from enterprise architecture, scaled agile, and DevOps are all key enablers for this new model of structuring the IT organisation.

In particular, except for product strategy (where IT must lead), IT must get out of the business of deciding the relative value and importance of individual product changes requested by projects, which historically IT has been required to do. A governance structure controlling the ‘epics’ and ‘stories’ that drive product evolution gives projects and stakeholders some transparency into when the work they need will be done, and allows demand to be balanced fairly across stakeholders in accordance with their ability to pay.
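
As an illustration of the transparency this buys, consider a minimal sketch of such a backlog – the field names and figures are hypothetical, and real scaled-agile tooling is far richer – in which every story carries its requesting stakeholder and funding, so prioritisation is visible rather than decided opaquely inside IT:

```python
from dataclasses import dataclass

@dataclass
class Story:
    id: str
    epic: str
    stakeholder: str
    funded_amount: float  # demand balanced by ability to pay
    status: str = "proposed"

backlog = [
    Story("S-101", "client-onboarding", "wealth-mgmt", 120_000),
    Story("S-102", "client-onboarding", "retail", 80_000),
    Story("S-103", "regulatory-reporting", "compliance", 200_000),
]

# Every stakeholder can see where their work sits in the queue and why.
for story in sorted(backlog, key=lambda s: s.funded_amount, reverse=True):
    print(f"{story.id} [{story.epic}] for {story.stakeholder}: "
          f"{story.funded_amount:,.0f}")
```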

If changes implemented by IT do not end up delivering value, it should not be because IT delivered the wrong thing, but rather because the right thing was delivered for the wrong reason. As long as IT maintains its product roadmap and vision, such missteps can be tolerated. They cannot be tolerated, however, if every change weakens the ability of the product platform to change.

Firms which successfully balance the project and product views of their technology landscape will find that productivity rises, complexity falls and agility increases dramatically. This model also lends itself naturally to bounded domain development, microservices, container technologies and automated build/deployment – all of which will likely feature strongly in the enterprise technology platform of the future.

The changes required to support this are significant – in terms of financial governance, delivery oversight, team collaboration, and the roles of senior managers and leaders. But organisations must be prepared to make this transition, as historical approaches to enterprise IT software development are clearly unsustainable.


Know What You Got: Just what, exactly, should be inventoried?

[tl;dr Application inventories linked to team structure, coupled with increased use of meta-data in the software-development process that is linked to architectural standards such as functional, data and business ontologies, are key to achieving long-term agility and complexity management.]

Some of the biggest challenges in managing a complex technology platform are knowing when to safely retire or remove redundant components, and when there is a viable reusable component (instantiated or not) that can be used in a solution. (In this sense, a ‘component’ can be a script, a library, or a complete application – or any of a multitude of variations in between.)

There are multiple ways of looking at the inventory of deployed components. In particular, components can be viewed from the technology perspective, or from the business or usage perspective.

The technology perspective is usually referred to as configuration management, as defined by ITIL. There are many tools (known as ‘CMDBs’) which, using ‘fingerprints’ of known software components, can automatically inventory which components are deployed where, and their relationships to each other. This is a relatively well-understood problem domain – although years of poor deployment practice mean that random components can be found running on infrastructure months or even years after the people who created them have moved on. Although the ITIL approach is eminently sensible in principle, in practice it is always playing catch-up, because deployment practices are only improving slowly in legacy environments.
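
The fingerprinting idea itself is simple enough to sketch – here in Python, with a made-up catalogue; real CMDB tools are of course far richer:

```python
import hashlib
from pathlib import Path

# Catalogue of known component fingerprints (contents are illustrative).
KNOWN_COMPONENTS = {
    "4f2a9c...": "libpricing-2.3.1",
}

def fingerprint(artefact: Path) -> str:
    """Fingerprint an artefact by hashing its bytes."""
    return hashlib.sha256(artefact.read_bytes()).hexdigest()

def scan(deploy_root: Path) -> dict[str, str]:
    """Map each deployed artefact to a known component, or flag it as
    unidentified - the 'random components' problem described above."""
    inventory = {}
    for artefact in deploy_root.rglob("*.jar"):
        inventory[str(artefact)] = KNOWN_COMPONENTS.get(
            fingerprint(artefact), "UNIDENTIFIED"
        )
    return inventory
```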

Current cloud-aware deployment practices, encapsulated in DevOps, are the antithesis of this approach: all aspects of deployment are captured in scripts and treated as source code in their own right. The full benefits of the ITIL approach to configuration management will therefore only be realised when the transition to this new approach to deployment is complete (alongside the virtualisation of data centres).
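
A minimal sketch of deployment-as-code, assuming a containerised service (the image name and registry are hypothetical): the script below lives in version control next to the application source, so the record of what is deployed is the code itself rather than something discovered after the fact:

```python
#!/usr/bin/env python3
"""deploy.py - versioned alongside the application source."""
import subprocess

IMAGE = "registry.example.com/ledger-service:1.4.2"  # hypothetical

def deploy() -> None:
    # Build and publish an immutable, versioned artefact; re-running
    # the script converges on the same declared state.
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    subprocess.run(["docker", "push", IMAGE], check=True)

if __name__ == "__main__":
    deploy()
```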

The business or usage perspective is much harder: typically this is approximated by establishing an application inventory, and linking various operational accountabilities to that inventory.

But such inventories suffer from key failings – specifically:

  • The definition of an application is very subjective and generally determined by IT process needs rather than what makes sense from a business perspective.
  • Application inventories are usually closely tied to the allocation of physical hardware and/or software (such as operating systems or databases).
  • Applications tend to evolve and many components could be governed by the same ‘application’ – some of which may be supporting multiple distinct business functions or products.
  • Application inventories tend to record running instances rather than (additionally) components that could be instantiated.
  • Application inventories may capture interface dependencies, but often do not capture component dependencies – especially when applications may consist of components that are also considered to be a part of other applications.
  • Application and component versioning are often not linked to release and deployment processes.

The consequence is that the application inventory is very difficult to align with technology investment and change. It is therefore not obvious, from a business perspective, which components could or should be involved in any given solution, or whether there are potential conflicts that could lead to excessive investment in redundant components and consequent under-investment in potentially reusable ones.

Related to this, businesses typically wish to control changes to ‘their’ application: sharing an application with other businesses is usually agreed only as a last resort, under extreme pressure from central management. The default is for each business to be fully in control of the change management process for its applications.

So IT rationally interprets this bias as a licence to create a new, duplicate application rather than put in place a structured way to share reusable components, such that technical dependencies can be managed without visibly compromising business-line change independence. This is rational because safe, scalable reuse of components is still a very immature capability that most organisations do not yet have.

So what makes sense to inventory at the application level? Can it be automated and made discoverable?

In practice, processes which rely on manual maintenance of inventory information isolated from the application development process are unlikely to succeed – principally because the cost of ensuring data quality becomes prohibitive.

Packaging and build standards (such as Maven, Ant, Gradle, etc.) and modularity standards (such as OSGi for Java, Gems for Ruby, ECMAScript modules for JavaScript, etc.) describe software components: their purpose, dependencies and relationships. In particular, disciplined use of software modules may allow applications to self-declare their composition at run-time.
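
Python’s packaging metadata makes a convenient illustration of self-declared composition (the OSGi and Gems equivalents expose similar information); this sketch enumerates the installed components, their versions and declared dependencies without any manual inventory:

```python
from importlib.metadata import distributions

def declare_composition() -> dict:
    """Return installed components, their versions and declared
    dependencies - discoverable at run-time, not manually maintained."""
    return {
        dist.metadata["Name"]: {
            "version": dist.version,
            "requires": dist.requires or [],
        }
        for dist in distributions()
    }

if __name__ == "__main__":
    for name, info in declare_composition().items():
        print(name, info["version"])
```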

Non-reusable components, such as business-specific (or context-specific) applications, can in principle be packaged and deployed similarly to reusable modules, also with searchable meta-data.

Databases are a special case: they can generally be viewed as reusable, instantiated components – i.e., a single database may be a component of a number of business applications. The contents of databases are probably best described through open standards such as RDF. However, other components (such as API components serving a defined business purpose) could also be described using these standards, linked to discoverable API inventories.
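
As a rough sketch of such an RDF description, using the open-source rdflib library – the vocabulary (ex:Database, ex:servesFunction, etc.) is an illustrative stand-in for a real business ontology:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/inventory#")  # hypothetical ontology

g = Graph()
g.bind("ex", EX)
# A database instantiated as a component of several applications:
g.add((EX.clientDb, RDF.type, EX.Database))
g.add((EX.clientDb, EX.servesFunction, Literal("client onboarding")))
# An API component described with the same vocabulary:
g.add((EX.onboardingApi, RDF.type, EX.ApiComponent))
g.add((EX.onboardingApi, EX.dependsOn, EX.clientDb))

print(g.serialize(format="turtle"))
```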

So, what precisely needs to be manually inventoried? If all technical components are inventoried by the software development process, the only components that remain to be inventoried must be outside the development process.

This article proposes that what exists outside the development process is team structure. Teams are usually formed and broken up in alignment with business needs and priorities. Some teams are in place for many years; some may last only a few months. Regardless, every application component must be associated with at least one team, and teams must be responsible for maintaining and updating the meta-data (in version control) for every component used by that team. Where teams share components, a ‘principal’ team owner must be appointed for each component to ensure consistency of its meta-data, to handle pull requests for shared components, and to oversee releases of those components. Teams are also relevant to operational support processes (e.g., L3 escalation).
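
A minimal sketch of how such meta-data might be validated automatically (the data would live in files in version control; the layout and component names here are assumptions): every component must name at least one team, and its principal owner must be one of those teams:

```python
# In practice this would be parsed from meta-data files in the repo.
COMPONENTS = {
    "pricing-lib": {"teams": ["rates", "fx"], "principal": "rates"},
    "report-batch": {"teams": ["finance"], "principal": "finance"},
}

def validate(components: dict) -> list[str]:
    """Check the team-ownership rules described above."""
    errors = []
    for name, meta in components.items():
        if not meta.get("teams"):
            errors.append(f"{name}: no owning team")
        elif meta.get("principal") not in meta["teams"]:
            errors.append(f"{name}: principal owner must be one of its teams")
    return errors

assert validate(COMPONENTS) == []  # run as part of the build
```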

The frequency of component updates will reflect development activity: projects which propose to update infrequently changing components can expect higher risk than projects whose components change frequently, as infrequent change may indicate a lack of current knowledge and expertise in the component.
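
Version-control history gives this risk signal almost for free; a rough sketch (the component paths are hypothetical, and the script assumes it runs inside a git repository):

```python
import subprocess

def commits_since(path: str, since: str = "6 months ago") -> int:
    """Count recent commits touching a component path."""
    result = subprocess.run(
        ["git", "log", "--oneline", f"--since={since}", "--", path],
        capture_output=True, text=True, check=True,
    )
    return len(result.stdout.splitlines())

for component in ["pricing-lib/", "report-batch/"]:
    n = commits_since(component)
    flag = "higher change risk" if n == 0 else "active, current knowledge"
    print(f"{component}: {n} recent commits -> {flag}")
```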

The ability to discover and reason about existing software (and infrastructure) components is key to managing complexity and maintaining agility. But relying on armies of people to capture data and maintain quality is impractical. Traditional roadmaps (while useful as a communication tool) can deviate from reality in practice, so keeping them current (except for communication of intent) may not be a productive use of resources.

In summary, application inventories linked to team structure, and increased use of meta-data in the software-development process linked to broader architectural standards (such as functional, data and business ontologies), are key to achieving agility and long-term complexity management.
