Transforming IT: From a solution-driven model to a capability-driven model

[tl;dr Moving from a solution-oriented to a capability-oriented model for software development is necessary to enable enterprises to achieve agility, but has substantial impacts on how enterprises organise themselves to support this transition.]

Most organisations which manage software change as part of their overall change portfolio take a project-oriented approach to delivery: the project goals are set up front, and a solution architecture and delivery plan are created in order to achieve the project goals.

Most organisations also fix project portfolios on a yearly basis, and deviating from this plan can be very difficult for organisations to cope with – at least partly because such plans are intrinsically tied into financial planning and cost-saving techniques such as capitalisation of expenses, which reduce the bottom-line cost of the investment to the firm (even if this says nothing about the value added).

As the portfolio of change projects rises every year, driven by many factors (business opportunities, revenue protection, regulatory demand, maintenance, exploration, digital initiatives, etc.), cross-project dependency management becomes increasingly difficult. It becomes even more complex to manage solution architecture dependencies within that overall dependency framework.

What results is a massive set of compromises: solutions that are sub-optimal for pretty much every project, and an investment in technology so enterprise-specific that no other organisation could derive significant value from it.

While it is possible that even that sub-optimal technology can yield significant value to the organisation as a whole, this benefit may be short-lived, as the cost-effective ability to change the architecture must inevitably decrease over time, reducing agility and therefore the ability to compete.

So a balance needs to be struck between delivering enterprise value (even at the expense of individual projects) and maintaining relative technical and business agility. By ‘relative’ I mean relative to peers in the same competitive sector – sectors which are themselves being disrupted by innovative technology firms that are highly specialised and agile within their domain.

The concept of ‘capabilities’ realised through technology ‘products’, in addition to the traditional project/program management approach, is key to this. In particular, it recognises the following key trends:

  • Infrastructure- and platform-as-a-service
  • Increasingly tech-savvy workforce
  • Increasing controls on IT by regulators, auditors, etc
  • Closer integration of business functions led by ‘digital’ initiatives
  • The replacement of the desktop by mobile & IoT (Internet of Things)
  • The tension between innovation and standards in large organisations

Enterprises are adapting to all the above by recognising that the IT function cannot be responsible for both technical delivery and ensuring that all technology-dependent initiatives realise the value they were intended to realise.

As a result, many aspects of IT project and programme management are no longer driven out of the ‘core’ IT function, but by domain-specific change management functions. IT itself must consolidate its activities to focus on those activities that can only be performed by highly qualified and expert technologists.

The inevitable consequence of this transformation is that IT becomes more product driven, where a given product may support many projects. As such, IT needs to be clear on how to govern change for that product, to lead it in a direction that is most appropriate for the enterprise as a whole, and not just for any particular project or business line.

A product must provide capabilities to the stakeholders or users of that product. In the past, those capabilities were entirely decided by whatever IT built and delivered: if IT delivered something that in practice wasn’t entirely fit for purpose, then business functions had no alternative but to find ways to work around the system deficiencies – usually creating more complexity (through end-user-developed applications in tools like Excel etc) and more expense (through having to hire more people).

By taking a capability-based approach to product development, however, IT can give business functions more options and ways to work around inevitable IT shortfalls without compromising controls or data integrity – e.g., through controlled APIs and services, etc.
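
As a rough illustration of what ‘controlled APIs’ might look like in practice, the sketch below exposes a hypothetical product capability over HTTP. The product, endpoints and validation rules are invented for illustration only; the point is simply that business functions can build their own tooling on top of the capability without bypassing the product’s data controls.

```python
# A minimal sketch (not a reference implementation) of a product capability
# exposed as a controlled API. All names here are hypothetical illustrations.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

# Canonical data remains owned by the product; consumers never touch it directly.
_ACCOUNTS = {"A-100": {"owner": "ops-emea", "status": "active"}}

@app.route("/capabilities/accounts/<account_id>", methods=["GET"])
def get_account(account_id: str):
    """Read-only access: business functions can build local tooling on top
    of this endpoint without bypassing the product's data controls."""
    account = _ACCOUNTS.get(account_id)
    if account is None:
        abort(404)
    return jsonify(account)

@app.route("/capabilities/accounts/<account_id>/status", methods=["PUT"])
def set_status(account_id: str):
    """Writes go through validation, so local workarounds cannot corrupt
    the product's data integrity."""
    if account_id not in _ACCOUNTS:
        abort(404)
    new_status = (request.get_json(silent=True) or {}).get("status")
    if new_status not in {"active", "suspended", "closed"}:
        abort(400)  # reject anything outside the controlled vocabulary
    _ACCOUNTS[account_id]["status"] = new_status
    return jsonify(_ACCOUNTS[account_id])

if __name__ == "__main__":
    app.run(port=8080)
```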

So, while solutions may explode in number and complexity, the number of products can be controlled – with individual businesses being more directly accountable for the complexity they create, rather than ‘IT’.

This approach requires a step-change in how traditional IT organisations manage change. Techniques from enterprise architecture, scaled agile, and DevOps are all key enablers for this new model of structuring the IT organisation.

In particular, except for product strategy (where IT must be the leader), IT must get out of the business of deciding the relative value/importance of individual product changes requested by projects, which historically IT has been required to do. By imposing a governance structure to control the ‘epics’ and ‘stories’ that drive product evolution, projects and stakeholders have some transparency into when the work they need will be done, and demand can be balanced fairly across stakeholders in accordance with their ability to pay.
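
One illustrative (and deliberately simplistic) way of balancing demand in proportion to ability to pay is sketched below. The funding figures and the allocation rule are assumptions, not a prescribed method.

```python
# A minimal sketch of one way demand could be balanced across stakeholders
# in proportion to their funding ("ability to pay"). Figures are illustrative,
# and rounding effects are ignored for simplicity.

def allocate_capacity(total_points: int, funding_by_stakeholder: dict) -> dict:
    """Split a product team's delivery capacity (in story points) across
    stakeholders in proportion to what each contributes to the product."""
    total_funding = sum(funding_by_stakeholder.values())
    return {
        stakeholder: round(total_points * funding / total_funding)
        for stakeholder, funding in funding_by_stakeholder.items()
    }

# Example: a quarter with 120 points of capacity and three funding projects.
print(allocate_capacity(120, {"project-alpha": 500_000,
                              "project-beta": 250_000,
                              "project-gamma": 250_000}))
# -> {'project-alpha': 60, 'project-beta': 30, 'project-gamma': 30}
```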

If changes implemented by IT do not end up delivering value, it should not be because IT delivered the wrong thing, but rather because the right thing was delivered for the wrong reason. As long as IT maintains its product roadmap and vision, such mis-steps can be tolerated. But they cannot be tolerated if every change weakens the ability of the product platform to change.

Firms which successfully balance between the project and product view of their technology landscape will find that productivity increases, complexity is reduced and agility increases massively. This model also lends itself nicely to bounded domain development, microservices, use of container technologies and automated build/deployment – all of which will likely feature strongly in the enterprise technology platform of the future.

The changes required to support this are significant – in terms of financial governance, delivery oversight, team collaboration, and the roles of senior managers and leaders. But organisations must be prepared to make this transition, as historical approaches to enterprise IT software development are clearly unsustainable.


What if IT had no CapEx? A gedanken experiment.

[tl;dr Imagining IT with no CapEx opens up possibilities that may not otherwise be considered. This is a thought experiment and it may never be practical or desirable to realise, but seeking to minimise direct CapEx expenditure in portfolio planning may help the firm’s overall Agile agenda.]

The ‘no CapEx’ thought experiment is related to the ‘Lean Enterprise’ theme – i.e., how enterprises can be more agile and responsive to business change, and how this could fundamentally change an organisation’s approach to IT.

CapEx (or Capital Expenditure) usually relates to large up-front costs that are depreciated over a number of years. Investments such as infrastructure and software licenses are generally treated as CapEx. OpEx, on the other hand, refers to operational expenses, incurred on a recurring basis.
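
A simple worked example may help. The figures below are illustrative assumptions only, but they show how the same multi-year spend appears as CapEx (a depreciated, up-front commitment) versus OpEx (a recurring, cancellable expense):

```python
# A simple worked illustration (numbers are assumptions) of how the same
# 5-year spend looks as CapEx (an up-front, depreciated commitment) versus
# OpEx (a recurring, cancellable expense).

capex_purchase = 500_000          # one-off licence + hardware outlay
useful_life_years = 5
annual_depreciation = capex_purchase / useful_life_years   # straight-line

opex_subscription = 120_000       # equivalent as-a-service annual fee

for year in range(1, useful_life_years + 1):
    print(f"Year {year}: CapEx P&L charge = {annual_depreciation:,.0f}, "
          f"OpEx charge = {opex_subscription:,.0f}")

# The P&L charges look similar, but the CapEx cash is committed in year 1;
# the OpEx subscription can (in principle) be cancelled if needs change.
```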

The interesting thing about CapEx is that it is a commitment – i.e., there is no optionality behind it. This means if business needs change, the cost of breaking that commitment could be prohibitively high. So businesses may be forced to keep pushing square pegs through round holes in their technology, reducing the potential value of the investment.

Agility is all about optionality – deferring (expensive) commitments to the latest possible moment, so that when a commitment is finally made, its commercial benefits can be productively exploited for its full life.

As technology rapidly advances, it is becoming less and less clear to organisations where to make commitments that do not unnecessarily limit their options in the future.

So, as a thought experiment, how would a CapEx-less IT organisation actually work, and what would it look like?

The first observation is that somebody, somewhere, has to make a CapEx investment for IT. For example, Amazon has made a significant CapEx investment in its datacentres, and it is this investment that allows other businesses to avoid having to make CapEx investments. Is this profitable for Amazon? It’s hard to tell – prices keep going down, and efficiency is improving all the time. In the long run, some profitable players will emerge. For now, organisations choose to use Amazon only because it is cheaper than the capital-investment alternative, over the length of the investment cycle.

Software licensing as a revenue stream is under pressure: businesses may be happier to pay for services to support the software they are using rather than for some license that gives them the right to use a specific version of a software package for commercial purposes. In the days when software was distributed to consumers and deployed onto proprietary infrastructure, this made sense. But more and more generally useful software is being open-sourced, and more and more ‘proprietary’ software is being delivered as a service. So CapEx related to software licenses will (eventually) not be as profitable as it once was (at least for the major enterprise software players).

So, in a CapEx-free IT environment, the IT landscape looks like an interacting set of software services that collectively deliver business value (i.e., give the businesses what they need in a timely manner and at a manageable cost).

The obligation of the enterprise is also clear: either derive value from the service, or stop using the service. In other words, exercise your option to terminate use of the service if the value is not there.

In this environment, do organisations actually need an IT strategy in the conventional/traditional sense? Isn’t it all about sourcing solutions and ensuring the solutions providers interact appropriately to collectively provide value to the enterprise, and that the enterprise interacts with the providers in such a way as to optimise change for maximum value? In the extreme case, yes.

For this model to work, providers need to be agile. They will likely have a number of clients, all requiring different solutions (albeit in the same domain). They need to be responsive, and they need to be able to scale. It is these characteristics that will differentiate one provider from another.

All of this has massive implications with respect to data management, security and compliance. A strong culture of data governance coupled with sensible security and compliance policies could (eventually) allow these hurdles to be overcome.

The question for firms, then, is: where does it make sense to have an internal IT capability? And, apart from some R&D capabilities, is it worth it, especially as individual businesses could innovate with a number of providers at relatively low risk? Perhaps the model should be to treat each internal capability as a potential future third-party service provider: give the team the ability to innovate on agility, responsiveness and scalability, and retain the option to let the ‘internal’ service provider serve other businesses in future, or to shut it down if it is not providing value.

It is worth bearing in mind that the concept of the traditional ‘end user’ is changing. End users are often quite adept at making technology do what they want without being considered traditional developers, and providing them with innovative ways to connect and orchestrate various (enterprise) services is increasingly expected. This sharpens the distinction between ‘local’ and ‘enterprise’ development, with local development being reactive and relying on enterprise services to ensure compliance etc. while still meeting localised needs.
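
As a sketch of what such ‘local’ development might look like, the snippet below shows an end-user script orchestrating two hypothetical enterprise services through their published APIs rather than copying data into a spreadsheet. The gateway URL, endpoints and fields are invented for illustration; `requests` is a commonly used HTTP client library.

```python
# A minimal sketch of 'local' development: an end-user script that orchestrates
# enterprise services via their published APIs. The endpoints and fields are
# hypothetical; compliance and entitlement checks stay with the services.
import requests

BASE = "https://api.example-enterprise.internal"   # assumed service gateway

def overdue_invoices_by_region(region: str) -> list:
    """Combine two enterprise services locally, without bypassing their controls."""
    customers = requests.get(f"{BASE}/crm/customers",
                             params={"region": region}, timeout=10).json()
    overdue = []
    for customer in customers:
        invoices = requests.get(f"{BASE}/billing/invoices",
                                params={"customer_id": customer["id"],
                                        "status": "overdue"},
                                timeout=10).json()
        overdue.extend(invoices)
    return overdue

if __name__ == "__main__":
    print(len(overdue_invoices_by_region("EMEA")), "overdue invoices")
```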

Such an approach would maximise enterprise agility and also provide skilled technologists and even end users the ability to flex their technical chops in ways that businesses can exploit to their fullest advantage. The cost of this agility is that other organisations can use the same ‘building blocks’ to greater competitive advantage. As has always been the case, it is not so much the technology, as what you do with it, that really matters.

To summarise: while not always possible or even desirable, minimising direct CapEx for IT is becoming a feasible way to achieve IT agility.


The Learning CTO’s Strategic themes for 2015

Like most folks, I cannot predict the future. I can only relate the themes that most interest me. On that basis, here is what is occupying my mind as we go into 2015.

  • The Lean Enterprise: the cultural and process transformations necessary to innovate and maintain agility in large enterprises – in technology, financial management, and risk, governance and compliance. Includes using real-option theory to manage risk.
  • Enterprise Modularity: the state-of-the-art technologies and techniques which enable agile, cost-effective enterprise-scale integration and reuse of capabilities and services. Or SOA Mark II.
  • Continuous Delivery: the state-of-the-art technologies and techniques which bring together agile software delivery with operational considerations. Or DevOps Mark I.
  • Systems Theory & Systems Thinking: the ability to look at the whole as well as the parts of any dynamic system, and to understand the consequences/impacts of decisions on the whole or on those parts.
  • Machine Learning: using business intelligence and advanced data semantics to dynamically improve automated or automatable processes.

[Blockchain technologies are another key strategic theme which are covered in a separate blog, dappsinfintech.com.]

Why these particular topics?

Specifically, over the next few years, large organisations need to be able to

  1. Out-innovate smaller firms in their field of expertise, and
  2. Embrace and leverage external innovative solutions and services that serve to enhance a firm’s core product offerings.

And firms need to be able to do this while keeping a lid on overall costs, which means managing complexity (or ‘cleaning up’ as you go along, also known as keeping technical debt under control).

It is also interesting to note that for many startups these themes are generally not an explicit concern, as they tend to be ‘obvious’ and therefore need no additional management action. However, small firms may fail to scale successfully if they do not explicitly recognise where they are already doing these things, and ensure they maintain focus on these capabilities as they grow.

Also, for many startups, their success may be predicated on larger organisations getting on top of these themes, as otherwise the startups may struggle to get large firms to adopt their innovations on a sustainable basis.

The next few blog posts will explain these themes in a bit more detail, with relevant references.


The role of microservices in Enterprise Architectural planning

Microservices as an architectural construct is gaining a lot of popularity, and the term is slowly but surely finding its place among the lexicon of developers and software architects. The precise definition of ‘microservices’ varies (it is not a formal concept in computer science, as to my knowledge it is not grounded in any formal mathematical abstraction) but the basic concept of a microservice being a reusable lightweight architectural component doing one thing and doing it extremely well is a reasonable baseline.

What do ‘Microservices’ mean to organisations used to understanding their world in terms of Applications? What about all the money some organisations have already spent on Service Oriented Architecture (SOA)? Have all the Services now been supplanted by Microservices? And does all this stuff need to be managed in a traditional command-and-control way, or is it possible they could self-manage?

The answers to these questions are far from settled, but what follows is an approach, taken from a pragmatic enterprise perspective, that may help architects and planners begin to rationalise all these concepts.

Note that the definitions below do not focus on technologies so much as scenarios.

Applications

Applications are designed to meet the needs of a specific set of users for a specific set of related tasks or activities. Applications should not be shared in an enterprise sense. This means applications must not expose any re-usable services.

However, application functionality may depend on services exposed by many domains, and applications should use enterprise services to the fullest extent possible. To that extent, applications must focus on orchestrating rather than implementing services and microservices, even though it is generally through the act of building applications that services and microservices are identified and created.

Within a given domain, an Application should make extensive use of the microservices available in that domain.

Applications require a high degree of governance: i.e., for a given user base, the intent to build and maintain an application for a given purpose needs to be explicitly funded – and the domain to which that application belongs must be unambiguous. (Noting that the functionality provided by an application may leverage the capabilities of multiple domains.)
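
A minimal sketch of this orchestration-only rule is shown below. The onboarding scenario, services and microservice are hypothetical, and the stubs merely stand in for real (remote) capabilities.

```python
# A minimal sketch (names and scenario are hypothetical) of the rule that an
# application orchestrates services and microservices rather than implementing
# shared business logic itself.

class OnboardingApp:
    """An application serving one specific user group (a client-onboarding desk)."""

    def __init__(self, kyc_service, accounts_service, notify_microservice):
        # All capabilities are injected; the application owns none of them.
        self.kyc = kyc_service              # enterprise service from another domain
        self.accounts = accounts_service    # enterprise service from this domain
        self.notify = notify_microservice   # microservice local to this domain

    def onboard_client(self, client: dict) -> str:
        # Orchestrate: the KYC rules themselves live behind the KYC service.
        if not self.kyc.check(client):
            raise ValueError("KYC check failed")
        account_id = self.accounts.open_account(client)
        self.notify.send(client["email"], f"Account {account_id} opened")
        return account_id


if __name__ == "__main__":
    # Simple stubs standing in for the real services.
    class StubKYC:
        def check(self, client):
            return True

    class StubAccounts:
        def open_account(self, client):
            return "ACC-001"

    class StubNotify:
        def send(self, to, message):
            print(f"notify {to}: {message}")

    app = OnboardingApp(StubKYC(), StubAccounts(), StubNotify())
    print(app.onboard_client({"name": "Acme Ltd", "email": "ops@acme.example"}))
```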

Services

Services represent the exposed interfaces (in a technical sense) of organisational units or functional domains. Services represent how that domain wants information or services to be consumed and/or published with respect to other domains.

Services represent a contract between that domain and other domains, and reflect the commitment the domain has to the efficient operation of the enterprise as a whole. Services may orchestrate other services (cross-domain) or microservices (intra-domain). The relationship between applications and services is asymmetric: if a service has a dependency on an application, then the architecture needs reviewing.

Services require a high degree of governance: i.e., the commitments/obligations that a domain has on other domains (and vice-versa) needs to be formalised and not left to individual projects to work out. It means some form of Service Design methodology, including data governance, is required – most likely in parallel with application development.
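
As an illustration of a service as a governed contract, the sketch below uses a hypothetical ‘settlements’ domain. The operation names and types are invented; in practice the contract would more likely be expressed as an API specification plus a service-level agreement, but the shape is what matters here.

```python
# A minimal sketch of a cross-domain service contract, using a hypothetical
# 'settlements' domain. Only this interface is governed; whatever sits behind
# it (including any intra-domain microservices) remains private to the domain.
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class SettlementInstruction:
    trade_id: str
    amount: float
    currency: str


class SettlementService(Protocol):
    """The governed interface the settlements domain exposes to other domains."""

    api_version: str  # changes to the contract are versioned, not ad hoc

    def submit(self, instruction: SettlementInstruction) -> str:
        """Accept an instruction and return a settlement reference."""
        ...

    def status(self, reference: str) -> str:
        """Report the current state of a previously submitted instruction."""
        ...
```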

Microservices

Within a domain, many microservices may be created to support how that domain builds or deploys functionality. Microservices are optimised for the domain’s performance, and not necessarily for the enterprise, and do not depend upon microservices in other domains.

Applications that are local to a domain may use local microservices directly, but microservices should be invisible to applications or services in other domains.

However, some microservices may be exposed as Services by the domain where there is seen to be value in doing so. This imposes external obligations on the domain, which are enforced through planning and resourcing.

Microservices require a low degree of governance: the architectural strategy for a domain will define the extent to which microservices form part of the solution architecture for a domain. Microservices developers choose to make their services available and discoverable in ways appropriate to their solution architecture.

In fact, evidence of a successful microservices-based approach to application development is an indicator of a healthy technical strategy. Rather than require microservices to be formally planned, the existence of microservices in a given domain should be a hint as to what services could or should be formally exposed by the domain to support cross-divisional goals.

It is quite unlikely that meaningful (resilient, robust, scalable) Services can be identified and deployed without an underlying microservices strategy. Therefore, the role of a domain architect should be to enable a culture of microservice development, knowing that such a culture will directly support strategic enterprise goals.
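
For concreteness, the sketch below shows the shape of a domain-local microservice: one hypothetical responsibility (deduplicating identifiers), deliberately bound to the domain’s own infrastructure rather than exposed to the wider enterprise.

```python
# A minimal sketch of a domain-local microservice: it does one thing
# (deduplicating trade identifiers, a hypothetical example) and is deployed
# for use inside its own domain only - it is not an enterprise service.
from flask import Flask, jsonify, request

app = Flask(__name__)
_seen: set = set()

@app.route("/dedupe", methods=["POST"])
def dedupe():
    """Return only identifiers this domain has not processed before."""
    ids = (request.get_json(silent=True) or {}).get("ids", [])
    fresh = [i for i in ids if i not in _seen]
    _seen.update(fresh)
    return jsonify({"fresh": fresh, "duplicates": len(ids) - len(fresh)})

if __name__ == "__main__":
    # Bound to localhost to reflect that other domains should not call it directly.
    app.run(host="127.0.0.1", port=5001)
```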

Domains

A domain consists of a collection of applications, services and microservices, underpinning a business capability. It is an architectural partition of the organisation, which can be reflected in the firm’s organisational structure. It encapsulates processes, data and technology.

In particular, domains represent business capabilities and are clusters of business and technology – or, equivalently, business function and technology.

The organisational structure as it relates to domains must reflect the strategy of the firm; without this, any attempt at developing a technology strategy will likely fail to achieve the intended outcome.

So, the organisational structure is key, as without appropriate accountability and a forward-looking vision, the above concepts will not usefully survive ad-hoc organisational changes.

While organisational hierarchies may change, what a business actually does will not change very often (except as part of major transformation efforts). And while different individuals may take responsibility for different domains (or sets of domains), if an organisational change materially impacts the architecture, then architectural boundaries and roadmaps need to change accordingly so that they re-align with the new organisational goals.
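
A lightweight way to make these concepts concrete is to record them in a simple domain catalogue, as sketched below. The domains and component names are invented; the ‘planning check’ at the end merely illustrates the earlier point that exposed services without an underlying microservice base may warrant review.

```python
# A minimal sketch of a domain catalogue for planning purposes. Each domain
# maps to a business capability and owns its applications, exposed services,
# and internal microservices. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Domain:
    name: str
    business_capability: str
    applications: list = field(default_factory=list)    # user-facing, funded explicitly
    services: list = field(default_factory=list)         # governed, cross-domain contracts
    microservices: list = field(default_factory=list)    # internal, lightly governed

catalogue = [
    Domain("payments", "Move money between accounts",
           applications=["payments-ops-console"],
           services=["payment-initiation-v2"],
           microservices=["iban-validator", "fee-calculator"]),
    Domain("client-data", "Maintain the golden source of client records",
           applications=["client-onboarding"],
           services=["client-lookup-v1"],
           microservices=["address-normaliser"]),
]

# A simple planning check: a domain exposing services with no microservices
# behind them may warrant an architectural review.
for d in catalogue:
    if d.services and not d.microservices:
        print(f"Review {d.name}: services exposed without a microservice base")
```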

Summary

Partitioning an organisation’s capabilities is a necessary prerequisite to developing a technology strategy. A technology strategy can be expressed in terms of domains, applications, services and (the presence of) microservices.

The key is not to be overly prescriptive: allow innovation and creativity within and across domains, but provide sufficient constraints to keep a lid on complexity. The presence of microservices and a viable microservice strategy within a domain can be used as a good proxy for alignment with the enterprise technology strategy.


The true scope of enterprise architecture

This article addresses what is, or should be, the true scope of enterprise architecture.

To date, most enterprise architecture efforts have been focused on supporting IT’s role as the ‘owner’ of all applications and infrastructure used by an organisation to deliver its core value proposition. (Note: in this article, ‘IT’ refers to the IT organisation within a firm, rather than ‘technology’ as a general theme.)

The view of the ‘enterprise’ was therefore heavily influenced by IT’s view of the enterprise.

There has been an increasing awareness in recent times that enterprise architects should look beyond IT and its role in the enterprise – notably led by Tom Graves and his view on concepts such as the business model canvas.

But while in principle this perspective is correct, in practice it is hard for most organisations to swallow. So most enterprise architects are still focused on IT-centric activities – which most corporate folks are comfortable with, and for which there is still a considerable need.

But the world is changing, and this IT-centric view of the enterprise is becoming less useful, for two principal (and, ironically, technology-led) reasons:

  • The rise of Everything-as-a-Service, and
  • Decentralised applications

Let’s cover the above two major technology trends in turn.

The rise of Everything-as-a-Service

This is a reflection that more and more technology and technology-enabled capability is being delivered as a service – i.e., on-tap, pay-as-you-go and there when you need it.

This starts with basic infrastructure (so-called IaaS), rises up to Platform-as-a-Service (PaaS), then to Software-as-a-Service (SaaS), and finally to Business-Process-as-a-Service (BPaaS) and its ilk.

The implications of this are far-reaching, as this research from Horses for Sources indicates.

This trend puts the provision and management of IT capability squarely outside of the ‘traditional’ IT organisation. While IT as an organisation must still exist, its role will focus increasingly on the ‘digital’ agenda: client-centricity (a bespoke, optimised client experience), data as an asset (and related governance/compliance issues), and managing complexity (of service utilisation, service integration, and legacy application evolution/retirement).

Of particular note is that many firms may be willing to write down their legacy IT investments in the face of newer, more innovative, agile and cheaper solutions available via the ‘as-a-Service’ route.

Decentralised Applications

Decentralised applications are applications which are built on so-called ‘blockchain’ technology – i.e., applications where transactions between two or more mutually distrusting and legally distinct parties can be reliably and irrefutably recorded without the need for an intermediate 3rd party.

Bitcoin is the first and most visible decentralised application: it acts as a distributed ledger. ‘bitcoin’ the currency is built upon the Bitcoin protocol, and its focus is on being a reliable store of value. But there are many other protocols and technologies (such as Ethereum, Ripple, MasterCoin, Namecoin, etc).

Decentralised applications will enable many new and innovative distributed processes and services. Where previously many processes were retained in-house for maximum control, processes can in future be performed by one or more external parties, with decentralised application technology ensuring all parties are meeting their obligations within the overall process.
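
The core mechanism can be illustrated with a deliberately simplified sketch: each record commits to the previous one via a hash, so no single party can quietly rewrite the shared history. Real decentralised application platforms (Bitcoin, Ethereum, etc.) add consensus, digital signatures and economic incentives on top of this, none of which are shown here.

```python
# A deliberately simplified sketch of the tamper-evidence idea behind
# decentralised applications: each record commits to the one before it via a
# hash, so history cannot be quietly rewritten. No consensus or signatures here.
import hashlib, json

def add_block(chain: list, payload: dict) -> list:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return chain + [body]

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"payload": block["payload"], "prev_hash": block["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

chain: list = []
chain = add_block(chain, {"from": "party-A", "to": "party-B", "obligation": "deliver docs"})
chain = add_block(chain, {"from": "party-B", "to": "party-A", "obligation": "pay invoice"})
print(verify(chain))                              # True
chain[0]["payload"]["obligation"] = "nothing"     # attempt to rewrite history
print(verify(chain))                              # False - tampering is detectable
```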

Summary

In the short term, many EAs will be focused on helping optimise change within their respective organisations, and will remain accountable to IT while supporting a particular business area.

As IT’s role changes to accommodate the new reality of ‘everything-as-a-service’ and decentralised applications, EAs should be given a broader remit to look beyond traditional IT-centric solutions.

In the meantime, businesses are investing heavily in innovation accelerators and incubators – most of which are dominated by ‘as-a-service’ solutions.

It will take some time for the ‘everything-as-a-service’ model to play itself out, as there are many issues concerning privacy, security, performance, etc that will need to be addressed in practice and in law. But the direction is clear, and firms that start to take a service-oriented view of the capabilities they need will have a competitive advantage over those that do not.