Know What You Got: Just what, exactly, should be inventoried?

[tl;dr Application inventories linked to team structure, coupled with increased use of meta-data in the software-development process tied to architectural standards such as functional, data and business ontologies, are key to achieving long-term agility and complexity management.]

Some of the biggest challenges in managing a complex technology platform are knowing when redundant components can safely be retired or removed, and knowing whether a viable reusable component (instantiated or not) already exists that could be used in a solution. (In this sense, a ‘component’ can be a script, a library, or a complete application – or any number of variations in between.)

There are multiple ways of looking at the inventory of deployed components. In particular, components can be viewed from the technology perspective, or from the business or usage perspective.

The technology perspective is usually referred to as configuration management, as defined by ITIL. There are many tools (known as ‘CMDBs’) which, using ‘fingerprints’ of known software components, can automatically discover which components are deployed where, and their relationships to each other. This is a relatively well-understood problem domain – although years of poor deployment practice mean that random components can be found running on infrastructure months or even years after the people who created them have moved on. Although the ITIL approach is eminently sensible in principle, in practice it is always playing catch-up, because deployment practices in legacy environments are improving only slowly.

Current cloud-aware deployment practices embodied in DevOps are the antithesis of this approach: all aspects of deployment are captured in scripts and treated as source code in their own right. The full benefits of the ITIL approach to configuration management will therefore only be realised when the transition to this new approach to deployment is complete (alongside the virtualisation of data centres).

The business or usage perspective is much harder: typically this is approximated by establishing an application inventory, and linking various operational accountabilities to that inventory.

But such inventories suffer from some key failings, specifically:

  • The definition of an application is very subjective and generally determined by IT process needs rather than what makes sense from a business perspective.
  • Application inventories are usually closely tied to the allocation of physical hardware and/or software (such as operating systems or databases).
  • Applications tend to evolve, and many components may end up governed under the same ‘application’ – some of which may support multiple distinct business functions or products.
  • Application inventories tend to capture running instances rather than (additionally) components that could be instantiated.
  • Application inventories may capture interface dependencies, but often do not capture component dependencies – especially when applications may consist of components that are also considered to be a part of other applications.
  • Application and component versioning are often not linked to release and deployment processes.

The consequence is that the application inventory is very difficult to align with technology investment and change. It is therefore not obvious from a business perspective which components could or should be involved in any given solution, or whether there are potential conflicts that could lead to over-investment in redundant components and consequent under-investment in potentially reusable ones.

Related to this, businesses typically wish to control changes to ‘their’ applications: sharing an application with other businesses is usually agreed only as a last resort, under extreme pressure from central management. The default is for each business to be fully in control of the change-management process for its own applications.

So IT rationally interprets this bias as a licence to create a new, duplicate application rather than put in place a structured way to share reusable components – one in which technical dependencies can be managed without visibly compromising business-line change independence. This is rational because safe, scalable component reuse is still an immature capability that most organisations do not yet have.

So what makes sense to inventory at the application level? Can it be automated and made discoverable?

In practice, processes which rely on manual maintenance of inventory information that is isolated from the application-development process are not likely to succeed – principally because the cost of ensuring data quality will be prohibitive.

Packaging & build standards (such as Maven, Ant, Gradle, etc) and modularity standards (such as OSGi for Java, Gems for Ruby, ECMAsript for JS, etc) describe software components, their purpose, dependencies and relationships. In particular, disciplined use of software modules may allow applications to self-declare their composition at run-time.

Non-reusable components, such as business-specific (or context-specific) applications, can in principle be packaged and deployed similarly to reusable modules, also with searchable meta-data.

Databases are a special case: these can generally be viewed as reusable, instantiated components – i.e., they may be a component of a number of business applications. Their contents are probably best described through open standards such as RDF. Other components (such as API components serving a defined business purpose) could also be described using these standards, linked to discoverable API inventories.
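
As an illustrative sketch only (using Apache Jena; the component URI, team name and property choices are placeholders rather than an agreed ontology), such a description might be generated as follows:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.DCTerms;
    import org.apache.jena.vocabulary.RDFS;

    // Illustrative sketch: describe a deployed component in RDF so that it can be
    // published to a discoverable inventory. URIs and property choices are placeholders.
    public class ComponentDescriptor {

        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();

            // Hypothetical identifier for an API component serving a defined business purpose
            Resource component = model.createResource("https://example.org/components/customer-api");
            component.addProperty(RDFS.label, "Customer API")
                     .addProperty(DCTerms.description, "Read access to the customer master data set")
                     .addProperty(DCTerms.creator, "customer-platform-team")   // owning team (hypothetical)
                     .addProperty(DCTerms.hasVersion, "2.3.1");

            // Serialise as Turtle; in practice this might be pushed to a catalogue or triple store
            model.write(System.out, "TURTLE");
        }
    }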

So, what precisely needs to be manually inventoried? If all technical components are inventoried as part of the software-development process, then whatever remains to be inventoried must sit outside that process.

This article proposes that what exists outside the development process is team structure. Teams are usually formed and disbanded in line with business needs and priorities; some teams are in place for many years, some may only last a few months. Regardless, every application component must be associated with at least one team, and teams must be responsible for maintaining/updating the meta-data (in version control) for every component they use. Where teams share components, a ‘principal’ owning team must be appointed for each component to ensure consistency of its meta-data, to handle pull requests for shared components, and to oversee its releases. Teams also have relevance for operational support processes (e.g., L3 escalation).
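
As a hypothetical sketch, the per-component ownership meta-data held in version control might be as simple as the following (field names are illustrative, not a proposed standard):

    import java.util.List;

    // Hypothetical shape of the ownership meta-data kept alongside each component
    // in version control; field names are illustrative, not a standard.
    public record ComponentOwnership(
            String componentId,             // e.g. the module's symbolic name
            String principalTeam,           // single team accountable for meta-data and releases
            List<String> contributingTeams, // other teams that use or change the component
            String escalationContact        // hook into operational support (e.g. L3 escalation)
    ) {
        public ComponentOwnership {
            // Enforce the rule that every component has exactly one principal owning team
            if (principalTeam == null || principalTeam.isBlank()) {
                throw new IllegalArgumentException("every component must have a principal owning team");
            }
        }
    }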

The frequency of component updates will reflect development activity: projects which propose to change rarely-updated components can expect to carry higher risk than projects whose components change frequently, as infrequent change may indicate a lack of current knowledge or expertise in the component.

The ability to discover and reason about existing software (and infrastructure) components is key to managing complexity and maintaining agility. But relying on armies of people to capture data and maintain quality is impractical. Traditional roadmaps (while useful as a communication tool) can deviate from reality in practice, so keeping them current (except for communication of intent) may not be a productive use of resources.

In summary, application inventories linked to team structure, and increased use of meta-data in the software-development process linked to broader architectural standards (such as functional, data and business ontologies), are key to achieving agility and long-term complexity management.


Digital Evolution Architecture

[tl;dr The Digital Evolution Architect drives incremental technology portfolio investment towards achievable digital objectives.]

Enterprise architecture (EA) has been around for quite a few years. It has mostly been adopted by firms which recognised the rising cost of technology and the need to optimise their technology footprint: unfettered technology innovation across departments and teams may serve individual fiefdoms in the short term, but does the whole firm no good in the medium and long term.

But those were the days when (arguably) the biggest competitive concern of organisations was their peers: as long as you did technology better, and more cost-effectively, than your peers, then you were golden. So the smarter organisations improved their investment-management processes through the introduction of enterprise architecture. Portfolio construction and investment management were based not only on what the firm wanted to do, but on how it did it.

All well and good. For those industries that managed to get by so far without enterprise architecture, the drivers for change today are much more insidious. It is no longer your peers that are challenging your status quo: it is smaller, nimbler businesses; it is more demanding user expectations; it is enterprise-grade technology at pay-as-you-go prices; it is digital business processes that massively compress the time it takes to transact; it is regulators that want to know you are in control of those same processes; it is data as a strategic asset, to be protected and used as such.

These digital drivers require firms to adopt a formal digital agenda. This in turn will require firms to rethink how they manage their technology investments, and how they drive those investments through to successful execution.

One thing is clear: firms cannot go digital overnight. Firms have built up substantial value over the last 20 years in their post-mainframe technology portfolios, which support tried-and-tested processes. Now those processes are under stress, and it is not clear which (if any) of these systems (or firms) will have a place in the highly digital environment we can expect within the next 5–10 years.

Firms that recognise the challenges have multiple efforts underway within their IT function – including, but not limited to:

  • Reducing the cost of infrastructure through virtualisation and cloud adoption
  • Consolidation & modernisation of application portfolios
  • Investment in mobile development capabilities
  • Investment in ‘big data’ initiatives
  • Increased focus on security management
  • Improved SDLC and operational processes (DevOps)
  • etc

The problem is, all these initiatives tend to take place in their own silos, and end up being disconnected from the ‘big picture’ – if, that is, there is in fact a ‘big picture’ in the first place. Bringing all these disparate efforts together into a cohesive set of capabilities that allow a firm to thrive in a highly digital environment is non-trivial, but must be an explicit goal of any firm that has established a formal digital agenda.

To do this requires bringing together several well-established skills and disciplines that most firms either do not have or are still developing capabilities in:

  • strategy development,
  • investment governance,
  • enterprise architecture,
  • (scaled) agile development, and
  • ‘true’ service-oriented architecture.

(‘True’ service-oriented architecture means designing reusable services from the ground up, rather than just putting a service interface on top of legacy, monolithic, non-cloud architectures. ‘Microservices’ may be an acceptable term for this form of SOA.)

It also requires a recognition that a multi-tiered IT organisation is a necessity – i.e., that the traditional IT organisation cannot retain control over the entire technology spend of a firm, but instead should focus on those capabilities of cross-business significance (beyond obvious functions like HR, Finance, etc), and enable business (revenue)-aligned IT folks to build on top of these capabilities.

A digital architecture cannot be built overnight: the key skill is in understanding what the business needs to do now (i.e., in the timeframe of the next budgeting period), and in continually evolving the architecture towards where it needs to be in order to be ‘digital’ through the projects that are commissioned and executed, even as specific business needs constantly change and adapt.

This means that strategies for cloud adoption, application portfolio consolidation and modernisation, mobile app development, big data, security, DevOps, etc. all need to be implemented incrementally as part of this over-arching digital evolution approach.

The digital evolution architect (DEA) is the role or function that supports this transition: putting legacy applications on a clear evolutionary path, ensuring that progress does not leave its own legacy, and keeping the overall cost of change on a downward trend even in the face of continuous innovation in technology. It is the role described in the ‘Opportunities and Solutions’ phase of the TOGAF method for enterprise architecture, albeit with a very explicit digital mandate. As such, it requires close collaboration with those involved in business & data architecture, solution (information systems) architecture and technology architecture. But the DEA is primarily concerned with evolution towards coherent goals, not with deciding what the optimal target state is, and hence cares as much about portfolio management as about architecture.

For larger firms, the enterprise architecture function must identify which DEAs are needed where, and how they should collaborate. But DEAs should be accountable to the head of Digital Transformation, if the firm has one. (If a firm doesn’t have, or plan to have, such a function, then don’t expect it to be competitive – or even around – in 10 years, regardless of how big it is today.)

This blog will elaborate more on the skills and capabilities of the digital evolution architecture concept over the coming weeks.