Transforming IT: From a solution-driven model to a capability-driven model

[tl;dr Moving from a solution-oriented to a capability-oriented model for software development is necessary to enable enterprises to achieve agility, but has substantial impacts on how enterprises organise themselves to support this transition.]

Most organisations which manage software change as part of their overall change portfolio take a project-oriented approach to delivery: the project goals are set up front, and a solution architecture and delivery plan are created in order to achieve the project goals.

Most organisations also fix project portfolios on a yearly basis, and deviating from this plan can often be very difficult for organisations to cope with – at least partly because such plans are intrinsically tied into financial planning and cost-saving techniques such as capitalisation of expenses, which reduce the bottom-line cost of the investment to the firm (even if this says nothing about the value added).

As the portfolio of change projects rises every year, due to many extraneous factors (business opportunities, revenue protection, regulatory demand, maintenance, exploration, digital initiatives, etc), cross-project dependency management becomes increasingly difficult. It becomes even more complex to manage solution architecture dependencies within that overall dependency framework.

What results is a massive set of compromises: solutions that are sub-optimal for pretty much every project, and an investment in technology so enterprise-specific that no other organisation could possibly derive any significant value from it.

While it is possible that even that sub-optimal technology can yield significant value to the organisation as a whole, this benefit may be short lived, as the cost-effective ability to change the architecture must inevitably decrease over time, reducing agility and therefore the ability to compete.

So a balance needs to be struck between delivering enterprise value (even at the expense of individual projects) and maintaining relative technical and business agility. By relative I mean relative to peers in the same competitive sector – sectors which are themselves being disrupted by innovative technology firms that are very specialist and agile within their domain.

The concept of ‘capabilities’ realised through technology ‘products’, in addition to the traditional project/program management approach, is key to this. In particular, it recognises the following key trends:

  • Infrastructure- and platform-as-a-service
  • Increasingly tech-savvy work-force
  • Increasing controls on IT by regulators, auditors, etc
  • Closer integration of business functions led by ‘digital’ initiatives
  • The replacement of the desktop by mobile & IoT (Internet of Things)
  • The tension between innovation and standards in large organisations

Enterprises are adapting to all the above by recognising that the IT function cannot be responsible both for technical delivery and for ensuring that all technology-dependent initiatives realise the value they were intended to deliver.

As a result, many aspects of IT project and programme management are no longer driven out of the ‘core’ IT function, but by domain-specific change management functions. IT itself must consolidate its activities to focus on those activities that can only be performed by highly qualified and expert technologists.

The inevitable consequence of this transformation is that IT becomes more product driven, where a given product may support many projects. As such, IT needs to be clear on how to govern change for that product, to lead it in a direction that is most appropriate for the enterprise as a whole, and not just for any particular project or business line.

A product must provide capabilities to the stakeholders or users of that product. In the past, those capabilities were entirely decided by whatever IT built and delivered: if IT delivered something that in practice wasn’t entirely fit for purpose, then business functions had no alternative but to find ways to work around the system deficiencies – usually creating more complexity (through end-user-developed applications in tools like Excel etc) and more expense (through having to hire more people).

By taking a capability-based approach to product development, however, IT can give business functions more options and ways to work around inevitable IT shortfalls without compromising controls or data integrity – e.g., through controlled APIs and services, etc.
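
As a toy illustration of what a ‘controlled’ capability might look like (a sketch only – the PaymentCapability name and its operations are hypothetical, not drawn from any particular product), the capability is exposed as a narrow, audited contract rather than as raw data extracts:

```java
import java.math.BigDecimal;
import java.util.List;

// A capability is a narrow, stable contract: business functions build their own
// workflows against it (directly or via an API gateway) instead of exporting
// data into spreadsheets, so controls and data integrity stay with the product.
public interface PaymentCapability {

    /** Submit a payment instruction; validation and authorisation stay inside the product. */
    String submitPayment(String fromAccount, String toAccount, BigDecimal amount);

    /** Read-only view for downstream business-specific tooling and reporting. */
    List<String> paymentsForAccount(String account);
}
```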

So, while solutions may explode in number and complexity, the number of products can be controlled – with individual businesses being more directly accountable for the complexity they create, rather than ‘IT’.

This approach requires a step-change in how traditional IT organisations manage change. Techniques from enterprise architecture, scaled agile, and DevOps are all key enablers for this new model of structuring the IT organisation.

In particular, except for product strategy (where IT must be the leader), IT must get out of the business of deciding the relative value/importance of individual product changes requested by projects – something it has historically been required to do. By imposing a governance structure to control the ‘epics’ and ‘stories’ that drive product evolution, projects and stakeholders gain some transparency into when the work they need will be done, and demand can be balanced fairly across stakeholders in accordance with their ability to pay.

If changes implemented by IT do not end up delivering value, it should not be because IT delivered the wrong thing, but rather because the right thing was delivered for the wrong reason. As long as IT maintains its product roadmap and vision, such mis-steps can be tolerated. But they cannot be tolerated if every change weakens the ability of the product platform to change.

Firms which successfully balance between the project and product view of their technology landscape will find that productivity increases, complexity is reduced and agility increases massively. This model also lends itself nicely to bounded domain development, microservices, use of container technologies and automated build/deployment – all of which will likely feature strongly in the enterprise technology platform of the future.

The changes required to support this are significant – in terms of financial governance, delivery oversight, team collaboration, and the roles of senior managers and leaders. But organisations must be prepared to make this transition, as historical approaches to enterprise IT software development are clearly unsustainable.


The hidden costs of PaaS & microservice engineering innovation

[tl;dr The leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.]

The pace of innovation in the PaaS and microservice space is increasing rapidly. This, coupled with increasing pressure on ‘traditional’ organisations to deliver more value more quickly from IT investments, is causing a flurry of interest in PaaS enabling technologies such as Cloud Foundry (favoured by the likes of IBM and Pivotal), OpenShift (favoured by Red Hat), Azure (Microsoft), Heroku (Salesforce), AWS, Google App Engine, etc.

A key characteristic of all these PaaS solutions is that they are ‘devops’ enabled – i.e., both code and infrastructure deployment can be automated, opening the way to highly automated operational processes for applications built on these platforms.

For large organisations, or organisations that prefer to control their infrastructure (because of, for example, regulatory constraints), PaaS solutions that can be run in a private datacenter rather than the public cloud are preferable, as this preserves the option to deploy to external clouds later if needed/appropriate.

These PaaS environments are feature-rich and aim to provide a lot of the building blocks needed to build enterprise applications. Alongside them, framework initiatives such as Spring Boot, DropWizard and Vert.X aim to make it easier to build PaaS-ready applications.

Combined, all of these promise to provide a dramatic increase in developer productivity: the marginal cost of developing, deploying and operating a complete application will drop significantly.
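
To make the productivity point concrete: a complete, independently deployable Spring Boot service – a minimal sketch, with illustrative class and endpoint names – fits in a single file, with the embedded HTTP server and packaging handled by the framework:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete service: the framework supplies the embedded HTTP server,
// configuration handling and packaging, so only the business endpoint remains.
@SpringBootApplication
@RestController
public class GreetingService {

    @GetMapping("/greeting")
    public String greeting() {
        return "hello";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingService.class, args);
    }
}
```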

Due to the low capital investment required to build new applications, it becomes ever more feasible to move from a heavy-weight, planning-intensive approach to IT investment to a more agile approach in which a complete application can be built, iterated and validated (or not) in the time it takes to create a traditional requirements document.

However, this also has massive implications: left unchecked, the drift towards entropy will increase over time, and organisations could be severely challenged to effectively manage and generate value from the sheer number of applications and services that can be created on such platforms. So an eye must be kept on managing complexity from the very beginning.

Many of the above platforms aim to make it as easy as possible for developers to get going quickly: this is a laudable goal, and if more of the complexity can be pushed into the PaaS, then that can only be good. The consequence of this approach is that developers have less control over the evolution of key aspects of the PaaS, and this could cause unexpected issues as PaaS upgrades conflict with application lifecycles, etc. In essence, it could be quite difficult to isolate applications from some PaaS changes. How these frameworks help developers cope with such changes is something to closely monitor, as these platforms are not yet mature enough to have gone through a major upgrade with a significant number of deployed applications.

The relative benefit/complexity trade-off between established microservice frameworks such as OSGi and the easier-to-use solutions described above needs to be tested in practice. Specifically, OSGi’s more robust dependency model may prove more useful in enterprise environments than in environments which take a ‘move fast and break things’ approach to application development, especially if OSGi-based PaaS solutions such as JBoss Fuse on OpenShift and Paremus Service Fabric gain more popular use.

So: all well and good from the technology side. But even if the pros and cons of the different engineering approaches are evaluated and a perfect PaaS solution emerges, that doesn’t mean Microservice Nirvana can be achieved.

A recent article on the challenges of building successful micro-service applications, coupled with a presentation by Lisa van Gelder at a recent Agile meetup in New York City, has emphasised that even given the right enabling technologies, deploying microservices is a major challenge – but if done right, the rewards are well worth it.

Specifically, there are a number of factors that impact the success of a large scale or enterprise microservice based strategy, including but not limited to:

  • Shared ownership of services
  • Setting cross-team goals
  • Performing scrum of scrums
  • Identifying swim lanes – isolating content failure & eventually consistent data
  • Provision of circuit breakers & timeouts (anti-fragile) – see the sketch after this list
  • Service discoverability & clear ownership
  • Testing against stubs; customer driven contracts
  • Running fake transactions in production
  • SLOs and priorities
  • Shared understanding of what happens when something goes wrong
  • Focus on mean-time-to-repair (recover) rather than mean-time-to-failure
  • Use of common interfaces: deployment, health check, logging, monitoring
  • Tracing a user’s journey through the application
  • Collecting logs
  • Providing monitoring dashboards
  • Standardising common metric names
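
To make one of the items above concrete: a circuit breaker wraps a remote call so that repeated failures cause callers to fail fast (or fall back) rather than tie up threads waiting on timeouts. A minimal, library-free sketch – the class name and thresholds are illustrative, not a reference to any particular framework:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Simplistic circuit breaker: after N consecutive failures the circuit 'opens'
// and calls fail fast (via the fallback) until a cool-down period has elapsed.
public class CircuitBreaker<T> {
    private final int failureThreshold;
    private final Duration coolDown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public CircuitBreaker(int failureThreshold, Duration coolDown) {
        this.failureThreshold = failureThreshold;
        this.coolDown = coolDown;
    }

    public T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(coolDown))) {
            return fallback.get();           // circuit open: fail fast
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;         // success: close the circuit
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now();    // trip the breaker
            }
            return fallback.get();
        }
    }
}

// Usage (illustrative):
//   CircuitBreaker<String> cb = new CircuitBreaker<>(3, Duration.ofSeconds(30));
//   String quote = cb.call(() -> pricingClient.latestQuote(), () -> "STALE");
```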

Some of these can be technically provided by the chosen PaaS, but a lot depends on best practices being consistently applied within and across development teams. In fact, it is quite hard to capture these key success factors in traditional architectural views – something that needs to be considered when architecting large-scale microservice solutions.

In summary, the leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.

Know What You Got: Just what, exactly, should be inventoried?

[tl;dr Application inventories linked to team structure, coupled with increased use of meta-data in the software-development process that is linked to architectural standards such as functional, data and business ontologies, is key to achieving long term agility and complexity management.]

One of the biggest challenges in managing a complex technology platform is knowing when to safely retire or remove redundant components, or when there is a viable reusable component (instantiated or not) that can be used in a solution. (In this sense, a ‘component’ can be a script, a library, or a complete application – or a multitude of variations in-between.)

There are multiple ways of looking at the inventory of components that are deployed. In particular, components can be viewed from the technology perspective, or from the business or usage perspective.

The technology perspective is usually referred to as configuration management, as defined by ITIL. There are many tools (known as CMDBs) which, using ‘fingerprints’ of known software components, can automatically inventorise which components are deployed where, and their relationships to each other. This is a relatively well-known problem domain – although years of poor deployment practice mean that random components can be found running on infrastructure months or even years after the people who created them have moved on. Although the ITIL approach is eminently sensible in principle, in practice it is always playing catch-up because deployment practices are only improving slowly in legacy environments.

Current cloud-aware deployment practices encapsulated in dev-ops are the antithesis of this approach: i.e., all aspects of deployment are encapsulated in script and treated as source code in and of itself. The full benefits of the ITIL approach to configuration management will therefore only be realised when the transition to this new approach to deployment is fully completed (alongside the virtualisation of data centres).

The business or usage perspective is much harder: typically this is approximated by establishing an application inventory, and linking various operational accountabilities to that inventory.

But such inventories suffer from key failings – specifically:

  • The definition of an application is very subjective and generally determined by IT process needs rather than what makes sense from a business perspective.
  • Application inventories are usually closely tied to the allocation of physical hardware and/or software (such as operating systems or databases).
  • Applications tend to evolve and many components could be governed by the same ‘application’ – some of which may be supporting multiple distinct business functions or products.
  • Application inventories tend to be associated with running instances rather than (additionally) instantiable instances.
  • Application inventories may capture interface dependencies, but often do not capture component dependencies – especially when applications may consist of components that are also considered to be a part of other applications.
  • Application and component versioning are often not linked to release and deployment processes.

The consequence is that the application inventory is very difficult to align with technology investment and change, so it is not obvious from a business perspective which components could or should be involved in any given solution, and whether there are potential conflicts which could lead to excessive investment in redundant components, and/or consequent under-investment in potentially reusable components.

Related to this, businesses typically wish to control the changes to ‘their’ application: the thought of the same application being shared with other businesses is usually something only agreed to as a last resort and under extreme pressure from central management; the default approach is for each business to be fully in control of the change management process for its applications.

So IT rationally interprets this bias as a licence to create a new, duplicate application rather than put in place a structured way to share reusable components such that technical dependencies can be managed without visibly compromising business-line change independence. This is rational because safe, scalable reuse of components is still a very immature capability that most organisations do not yet have.

So what makes sense to inventory at the application level? Can it be automated and made discoverable?

In practice, processes which rely on manual maintenance of inventory information that is isolated from the application development process are not likely to succeed – principally because the cost of ensuring data quality will make it prohibitive.

Packaging & build standards (such as Maven, Ant, Gradle, etc) and modularity standards (such as OSGi for Java, Gems for Ruby, ECMAScript for JS, etc) describe software components, their purpose, dependencies and relationships. In particular, disciplined use of software modules may allow applications to self-declare their composition at run-time.
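
As a rough illustration of ‘self-declaring composition’ (a sketch only: it assumes components publish identity headers such as Implementation-Title/Implementation-Version, or OSGi’s Bundle-SymbolicName/Bundle-Version, in their jar manifests), a running application can enumerate its own constituent components at start-up:

```java
import java.io.InputStream;
import java.net.URL;
import java.util.Enumeration;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Enumerate the manifests visible on the classpath and print the component
// identity headers they declare - a crude, runtime-generated inventory.
public class ComponentInventory {
    public static void main(String[] args) throws Exception {
        Enumeration<URL> manifests =
                ComponentInventory.class.getClassLoader().getResources("META-INF/MANIFEST.MF");
        while (manifests.hasMoreElements()) {
            try (InputStream in = manifests.nextElement().openStream()) {
                Attributes attrs = new Manifest(in).getMainAttributes();
                String name = attrs.getValue("Bundle-SymbolicName");      // OSGi module identity
                if (name == null) name = attrs.getValue("Implementation-Title");
                String version = attrs.getValue("Bundle-Version");
                if (version == null) version = attrs.getValue("Implementation-Version");
                if (name != null) {
                    System.out.println(name + " " + (version == null ? "(unversioned)" : version));
                }
            }
        }
    }
}
```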

Non-reusable components, such as business-specific (or context-specific) applications, can in principle be packaged and deployed similarly to reusable modules, also with searchable meta-data.

Databases are a special case: these can generally be viewed as reusable, instantiated components – i.e., a database may be a component of a number of business applications. The contents of databases are probably best described through open standards such as RDF, etc. However, other components (such as API components serving a defined business purpose) could also be described using these standards, linked to discoverable API inventories.

So, what precisely needs to be manually inventoried? If all technical components are inventoried by the software development process, the only components that remain to be inventoried must be outside the development process.

This article proposes that what exists outside the development process is team structure. Teams are usually formed and broken up in alignment with business needs and priorities. Some teams are in place for many years, some may only last a few months. Regardless, every application component must be associated with at least one team, and teams must be responsible for maintaining/updating the meta-data (in version control) for every component used by that team. Where multiple teams share components, a ‘principal’ team owner must be appointed for each component to ensure consistency of component meta-data, to handle pull requests etc for shared components, and to oversee releases of those components. Teams also have relevance for operational support processes (e.g., L3 escalation, etc).
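
A lightweight way to keep such meta-data with the component itself is a small descriptor file in each component’s repository, validated as part of the build. The descriptor name and fields below (component-owner.properties, principal.team, component.name) are hypothetical, purely to illustrate the shape of the check:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Build-time check (hypothetical): every component directory must carry an
// ownership descriptor naming exactly one principal owning team.
public class OwnershipCheck {
    public static void main(String[] args) throws IOException {
        Path descriptor = Path.of(args.length > 0 ? args[0] : "component-owner.properties");
        Properties meta = new Properties();
        try (var in = Files.newInputStream(descriptor)) {
            meta.load(in);
        }
        String team = meta.getProperty("principal.team");
        if (team == null || team.isBlank()) {
            System.err.println("No principal owning team declared in " + descriptor);
            System.exit(1);
        }
        System.out.println("Component " + meta.getProperty("component.name", "(unnamed)")
                + " is owned by team " + team);
    }
}
```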

The frequency of component updates will be a reflection of development activity: projects which propose to update infrequently changing components can expect to carry higher risk than projects whose components change frequently, as infrequent change may indicate a lack of current knowledge/expertise in the component.

The ability to discover and reason about existing software (and infrastructure) components is key to managing complexity and maintaining agility. But relying on armies of people to capture data and maintain quality is impractical. Traditional roadmaps (while useful as a communication tool) can deviate from reality in practice, so keeping them current (except for communication of intent) may not be a productive use of resources.

In summary, application inventories linked to team structure, and increased use of meta-data in the software-development process that is linked to broader architectural standards (such as functional, data and business ontologies), are key to achieving agility and long-term complexity management.


Achieving modularity: functional vs volatility decomposition

Enterprise architecture is all about managing complexity. Many EA initiatives tend to focus on managing IT complexity, but there is only so much that can be done there before it becomes obvious that IT complexity is, for the most part, a direct consequence of enterprise complexity. To recap, complexity needs to be managed in order to maintain agility – the ability for an organisation to respond (relatively) quickly and efficiently to changes in markets, regulations or innovation, and to continue to do this over time.

Enterprise complexity can be considered to be the activities performed and resources consumed by the organisation in order to deliver ‘value’, a metric usually measured through the ability to maintain (financial) income in excess of expenses over time.

Breaking down these activities and resources into appropriate partitions that allow holistic thinking and planning to occur is one of the key challenges of enterprise architecture, and there are various techniques to do this.

Top-Down Decomposition

The natural approach to decomposition is to first understand what an organisation does – i.e., what are the (business) functions that it performs. Simply put, a function is a collection of data and decision points that are closely related (e.g., ‘Payments’ is a function). Functions typically add little value in and of themselves – rather they form part of an end-to-end process that delivers value for a person or legal entity in some context. For example, a payment on its own means nothing: it is usually performed in the context of a specific exchange of value or service.

So a first course of action is to create a taxonomy (or, more accurately, an ontology) to describe the functions performed consistently across an enterprise. Then, various processes, products or services can be described as a composition of those functions.

If we accept (and this is far from accepted everywhere) that EA is focused on information systems complexity, then EA is not responsible for the complexity relating to the existence of processes, products or services. The creation or destruction of these are usually a direct consequence of business decisions. However, EA should be responsible for cataloging these, and ensuring these are incorporated into other enterprise processes (such as, for example, disaster recovery or business continuity processes). And EA should relate these to the functional taxonomy and the information systems architecture.

This can get very complex very quickly due to the sheer number of processes, products and services – and their many variations – that most organisations have. So it is important to partition or decompose the complexity into manageable chunks to facilitate meaningful conversations.

Enterprise Equivalence Relations

One way to do this at enterprise level is to group functions into partitions (aka domains) according to synergy or autonomy (as described by Roger Sessions), for all products/services supporting a particular business. This approach is based on the mathematical concept of equivalence. Because different functions in different contexts may have differing equivalence relationships, functions may appear in multiple partitions. One role of EA is to assess and validate whether those functions are actually autonomous, or whether there is the potential to group apparently duplicate functions into a new partition.
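
For reference, the underlying mathematics is simple: a single equivalence relation always yields disjoint classes that jointly cover the set of functions, so partitions (domains) are well-defined; it is only the use of different relations in different contexts that lets the same function appear in more than one grouping.

```latex
% F = the set of business functions; \sim = the chosen "belongs with" (equivalence) relation
\[
\forall f,g,h \in F:\quad f \sim f,\qquad f \sim g \Rightarrow g \sim f,\qquad (f \sim g \wedge g \sim h) \Rightarrow f \sim h
\]
\[
[f] = \{\, g \in F : g \sim f \,\},\qquad \bigcup_{f \in F} [f] = F,\qquad [f] \ne [g] \Rightarrow [f] \cap [g] = \varnothing
\]
```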

Once partitions are identified, it is possible to apply ‘traditional’ EA thinking to a particular partition, because that partition is of a manageable size. By ‘traditional’ EA, I mean applying Zachman, TOGAF, PEAF, or any of the myriad methodologies/frameworks that are out there. More specifically, at that level, it is possible to establish a meaningful information systems strategy or goal for a particular partition that is directly supporting business agility objectives.

The Fallacy of Functional Decomposition

Once you get down to the level of a partition, functional decomposition becomes less useful as a way of architecting solutions. The natural tendency for architects would be to build reusable components or services that realise the various functions that comprise the partition. In fact, this may be the wrong thing to do. As Juval Löwy demonstrates in his excellent webinar, this may result in more complexity, not less (and hence less agility).

When it comes to software architecture, the real reason to modularise your architecture is to manage volatility or uncertainty – and to ensure that volatility in one part of the architecture does not unnecessarily negatively impact another part of the architecture over time. Doing this allows agility to be maintained, so volatile parts of the application can, in fact, change frequently, at low impact to other parts of the application.

When looking at a software architecture through this lens, a quite different set of components/modules/services may become evident than those which may otherwise be obvious when using functional decomposition – the example in the webinar demonstrates this very well. A key argument used by Juval in his presentation is that (to paraphrase him somewhat) functions are, in general, highly dependent on the context in which they are used, so splitting them out into separate services may require making often impossible assumptions about all the possible contexts in which the functions could be invoked.

In this sense, identified components, modules or services can be considered to be providing options in terms of what is done, or how it is done, within the context of a larger system with parts of variable volatility. (See my earlier post on real options in the context of agility to understand more about options in this context.)
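
As a toy illustration of volatility-based decomposition (all names here are hypothetical), the module boundary is drawn around the thing most likely to change – here, how a quote is priced – rather than around a business function, so the volatile part can be replaced without touching its consumers:

```java
// The volatile concern is isolated behind a stable interface; consumers depend
// only on the contract, so pricing rules can change frequently at low impact.
interface PricingStrategy {
    double price(double notional);
}

// One of potentially many short-lived implementations of the volatile part.
class PromotionalPricing implements PricingStrategy {
    @Override
    public double price(double notional) {
        return notional * 0.95;   // illustrative rule only
    }
}

// A stable consumer: unaffected by churn in pricing implementations.
class QuoteService {
    private final PricingStrategy pricing;

    QuoteService(PricingStrategy pricing) {
        this.pricing = pricing;
    }

    double quote(double notional) {
        return pricing.price(notional);
    }
}
```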

Partitions as Enterprise Architecture

When each partition is considered with respect to its relationship with other partitions, there is a lot of uncertainty around how different partitions will evolve. To allow for maximum flexibility, every partition should assume each other partition is a volatile part of their architecture, and design accordingly for this. This allows each partition to evolve (reasonably) independently with minimum fixed co-ordination points, without compromising the enterprise architecture by having different partitions replicate the behaviours of partitions they depend on.

This then allows:

  • Investment to be expressed in terms of impact to one or more partitions
  • Partitions to establish their own implementation strategies
  • Agile principles to be agreed on a per partition basis
  • Architectural standards to be agreed on a per partition basis
  • Partitions to define internally reusable components relevant to that partition only
  • Partitions to expose partition behaviour to other partitions in an enterprise-consistent way

In generative organisation cultures, partitions do not need to be organisationally aligned. However, in other organisation cultures (pathological or bureaucratic), alignment of enterprise infrastructure functions such as IT or operations (at least) with partitions (domains) may help accelerate the architectural and cultural changes needed – especially if coupled with broader transformations around investment planning, agile adoption and enterprise architecture.


Achieving Agile at Scale

[tl;dr Scaling agile at the enterprise level will need rethinking how portfolio management and enterprise architecture are done to ensure success.]

Agility, as a concept, is gaining increasing attention within large organisations. The idea that business functions – and in particular IT – can respond quickly and iteratively to business needs is an appealing one.

The reasons why agility is getting attention are easy to spot: larger firms are getting more and more obviously unagile – i.e., the ability of business functions to respond to business needs in a timely and sustainable manner is getting progressively worse, even as a rapidly evolving competitive and technology-led commercial environment is demanding more agility.

Couple that with the heavy cost of failing to meet ever increasing regulatory compliance obligations, and ‘agile’ seems a very good idea indeed.

Agile is a great idea, but when implemented at scale (in large enterprise organisations), it can actually reduce enterprise agility, rather than increase it, unless great care is taken.

This is partly because Agile’s origins come from developing web applications: in these scenarios, there is usually a clear customer, a clear goal (to the extent that the team exists in the first place), and relatively tight timelines that favour short or non-existent analysis/design phases. Agile is perfect for these scenarios.

Let’s call this scenario ‘local agile’. It is quite easy to imagine a situation where every team, in response to the question ‘are you doing agile?’, says ‘Yes, we do!’. So if every team is doing ‘local agile’, does that mean your organisation is now ‘agile’?

The answer is no. Getting every team to adopt agile practices is a necessary but insufficient step towards achieving enterprise agility. In particular, two key factors need to be addressed before a firm can truly be said to be ‘agile’ at the enterprise level. These are:

  1. The process by which teams are created and funded, and
  2. Enterprise awareness

Creating & Funding (Agile) Teams

Historically, teams have usually been created as a result of projects being initiated: the project passes investment justification criteria, the project is initiated and a team is put in place, led by the project manager. This process has also been owned entirely by the IT organisation, irrespective of which other organisations were stakeholders in the project.

At this point, IT’s main consideration is: will the project be delivered on time and on budget? The business sponsor’s main consideration is: will it give us what we need when we need it? And the enterprise’s consideration (which is often ignored) is: who is accountable for ensuring that the IT implementation delivers value to the enterprise? (In this sense, the ‘enterprise’ could be either a major business line with full P&L responsibility for all activities performed in support of their business, or the whole organisation, including shared enterprise functions.)

Delivering ‘value’ is principally about ensuring that ongoing or operational processes, roles and responsibilities are adjusted to maximise the benefits of a new technology implementation – which could include organisational change, marketing, customer engagement, etc.

However, delivering ‘value’ is not always correlated to a single IT implementation; value can be derived from leveraging multiple IT capabilities in concert. Given the complexity of large organisations, it is often neither desirable nor feasible to have a single IT partner be responsible for all the IT elements that collectively deliver business value.

On this basis, it is evident that how businesses plan and structure their portfolio of IT investments needs to change dramatically. In particular,

  1. The business value agenda is outcome focused and explicit about which IT capabilities are required to enable it, and
  2. IT investment is focused around the capability investment lifecycle that IT is responsible for stewarding.

In particular, ‘capabilities’ (or IT products or services) have a lifecycle: this affects the investment and expectations around those capabilities. And some capabilities need to be more ‘agile’ than others – some must be agile to be useful, whereas for others stability may be the overriding priority, and therefore their lack of agility must be made explicit – so agile teams can plan around it.

Enterprise Awareness

‘Locally’ agile teams are a step in the right direction – particularly if the business stakeholders all agree they are seeing the value from that agility. But often this comes at the expense of enterprise awareness. In short, agility in the strict business sense can often only deliver results by ignoring some stakeholders’ interests. So ‘locally’ agile teams may feel they must minimise their interactions with other teams – particularly if those teams are not themselves agile.

If we assume that teams have been created through a process as described in the previous section, it becomes more obvious where the team sits in relation to its obligations to other teams. Teams can then make appropriate compromises to their architecture, planning and agile SDLC to allow for those obligations.

If the team was created through ‘traditional’ planning processes, then it becomes a lot harder to figure out what ‘enterprise awareness’ is appropriate (except perhaps for IT-imposed standards or gates, which only contribute indirectly to business value).

Most public agile success stories describe very well how they achieved success up to – but not including – the point at which architecture becomes an issue. Architecture, in this sense, refers to either the parts of the solution architecture which can no longer be delivered via one or two members of an agile team, or those parts of the business value chain that cannot be entirely delivered via the agile team on its own.

However, there are success stories (e.g., Spotify) that show how ‘enterprise awareness’ can be achieved without limiting agility. For many organisations, transitioning from existing organisation structures to new ‘agile-ready’ structures will be a major challenge, and far harder than simply having teams ‘adopt agile’.

Conclusions

With the increased attention on Agile, there is fortunately increased attention on scaling agile. Methodologies like Disciplined Agile Delivery (DAD) and Large-Scale Scrum (LeSS), coupled with portfolio concepts like the Scaled Agile Framework (SAFe), propose ways in which Agile can scale beyond the team and up to enterprise level without losing the key benefits of the agile approach.

All scaled agile methodologies call for changes in how Portfolio Management and Enterprise Architecture are typically done within an organisation, as doing these activities right is key to the success of adopting Agile at scale.


Strategic Theme #2/5: Enterprise Modularity

[tl;dr Enterprise modularity is the state-of-the-art technologies and techniques which enable agile, cost effective enterprise-scale integration and reuse of capabilities and services. Standards for these are slowly gaining traction in the enterprise.]

Enterprise Modularity is, in essence, Service Oriented Architecture but ‘all the way down’; it has evolved a lot over the last 15-20 years since SOA was first promoted as a concept.

Specifically, SOA has historically been focused on ‘services’ – or, more explicitly, end-points, the contracts they expose, and service discovery. It has been less concerned with the architectures of the systems behind the end-points, which seems logically correct but has practical implications that have limited the widespread adoption of SOA solutions.

SOA is a great idea that has often suffered from epic failures when implemented as part of major transformation initiatives (although less ambitious implementations may have met with more success, but limited to specific domains).

The challenge has been the disconnect between what developers build, what users/customers need, and what the enterprise desires – not just for the duration of a programme, but over time.

Enterprise modularity needs several factors to be in place before it can succeed at scale. Namely:

  • An enterprise architecture (business context) linked to strategy
  • A focus on data governance
  • Agile cross-disciplinary/cross-team planning and delivery
  • Integration technologies and tools consistent with development practices

Modularity standards and abstractions like OSGi, Resource Oriented Computing and RESTful computing enable the development of loosely coupled, cohesive modules, potentially deployable as stand-alone services, which can be orchestrated in innovative ways using many technologies – especially when linked with canonical enterprise data standards.
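
As a small illustration of the module-level service idea (a sketch using the standard OSGi BundleActivator and service-registry API; the PaymentService contract and its trivial implementation are hypothetical), a bundle publishes its capability as a contract that other bundles discover via the registry, keeping coupling at the contract level:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical module contract: what this bundle offers to the rest of the enterprise.
interface PaymentService {
    String pay(String account, double amount);
}

// The bundle registers an implementation of the contract on start and removes it
// on stop; consumers in other bundles look up PaymentService, never the impl class.
public class PaymentActivator implements BundleActivator {

    private ServiceRegistration<PaymentService> registration;

    @Override
    public void start(BundleContext context) {
        PaymentService impl = (account, amount) -> "paid " + amount + " from " + account;
        registration = context.registerService(PaymentService.class, impl, null); // no service properties
    }

    @Override
    public void stop(BundleContext context) {
        registration.unregister();
    }
}
```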

The upshot of all this is that traditional ESBs are transforming: technologies like Red Hat Fuse ESB and open-source solutions like Apache ServiceMix provide powerful building blocks that allow ESBs to become first-class applications rather than simply integration hubs. (MuleSoft has not, to my knowledge, adopted an open modularity standard such as OSGi, so I believe it is not appropriate to use as the basis for a general application architecture.)

This approach allows for the eventual construction of ‘domain applications’ which, rather than just being integration hubs, actually host the applications (or, more accurately, application components) in ways that make dependencies explicit.

Vendors such as CA Layer7, Apigee and Axway are prioritising API management over more traditional, heavy-weight ESB functions – essentially, anyone can build an API internal to their application architecture, but once you want to expose it beyond the application itself, an API management tool will be required. These can be implemented top-down once the bottom-up API architecture has been developed and validated. Enterprise or integration architecture teams should be driving the adoption and implementation of API management tools, as these (done well) can be non-intrusive with respect to how application development is done, provided application developers take a micro-services led approach to application architecture and design. The concept of ‘dumb pipes’ and ‘smart endpoints’ is key here, to avoid putting inappropriate complexity in the API management layer or in the (traditional) ESB.

Microservices is a complex topic, as this post describes. But modules (along the lines defined by OSGi) are a great stepping stone, as they embrace microservice thinking without necessarily introducing distributed-systems complexity. Innovative technologies like Paremus Service Fabric, building on OSGi standards, help make the transition from monolithic modular architectures to (distributed) microservice architectures as painless as can reasonably be expected, and can be a very effective way to manage the evolution from monolithic to distributed (agile, microservice-based) architectures.

Other solutions, such as Cloud Foundry, offer a means of taking much of the complexity out of building green-field microservices solutions by standardising many of the platform components – however, Cloud Foundry, unlike OSGi, is not a formal open standard, and so carries risks. (It may become a de-facto standard, however, if enough PaaS providers use it as the basis for their offerings.)

The decisions as to which ‘PaaS’ solutions to build/adopt enterprise-wide will be a key consideration for enterprise CTOs in the coming years. It is vital that such decisions are made such that the chosen PaaS solution(s) will directly support enterprise modularity (integration) goals linked to development standards and practices.

For 2015, it will be interesting to see how nascent enterprise standards such as described above evolve in tandem with IaaS and PaaS innovations. Any strategic choices made in these domains must consider the cost/impact of changing those choices: defining architectures that are too heavily dependent on a specific PaaS or IaaS solution limits optionality and is contrary to the principles of (enterprise) modularity.

This suggests that enterprise modularity standards which are agnostic to specific platform implementation choices must be put in place, so that different parts of the enterprise can then choose implementations appropriate for their needs. Hence open standards and abstractions must dominate the conversation, rather than specific products or solutions.


The Learning CTO’s Strategic themes for 2015

Like most folks, I cannot predict the future. I can only relate the themes that most interest me. On that basis, here is what is occupying my mind as we go into 2015.

  • The Lean Enterprise – The cultural and process transformations necessary to innovate and maintain agility in large enterprises – in technology, financial management, and risk, governance and compliance. Includes using real-option theory to manage risk.
  • Enterprise Modularity – The state-of-the-art technologies and techniques which enable agile, cost effective enterprise-scale integration and reuse of capabilities and services. Or SOA Mark II.
  • Continuous Delivery – The state-of-the-art technologies and techniques which bring together agile software delivery with operational considerations. Or DevOps Mark I.
  • Systems Theory & Systems Thinking – The ability to look at the whole as well as the parts of any dynamic system, and understand the consequences/impacts of decisions to the whole or those parts.
  • Machine Learning – Using business intelligence and advanced data semantics to dynamically improve automated or automatable processes.

[Blockchain technologies are another key strategic theme, which is covered in a separate blog, dappsinfintech.com.]

Why these particular topics?

Specifically, over the next few years, large organisations need to be able to

  1. Out-innovate smaller firms in their field of expertise, and
  2. Embrace and leverage external innovative solutions and services that serve to enhance a firm’s core product offerings.

And firms need to be able to do this while keeping a lid on overall costs, which means managing complexity (or ‘cleaning up’ as you go along, also known as keeping technical debt under control).

It is also interesting to note that for many startups, these themes are generally not an explicit concern, as they tend to be ‘obvious’ and therefore do not need additional management action to address them. However, small firms may fail to scale successfully if they do not explicitly recognise where they are doing these already, and so ensure they maintain focus on these capabilities as they grow.

Also, for many startups, their success may be predicated on larger organisations getting on top of these themes, as otherwise the startups may struggle to get large firms to adopt their innovations on a sustainable basis.

The next few blog posts will explain these themes in a bit more detail, with relevant references.
