Strategic Theme #2/5: Enterprise Modularity

[tl;dr Enterprise modularity comprises the state-of-the-art technologies and techniques which enable agile, cost-effective enterprise-scale integration and reuse of capabilities and services. Standards for these are slowly gaining traction in the enterprise.]

Enterprise Modularity is, in essence, Service Oriented Architecture but ‘all the way down’; it has evolved a lot over the last 15-20 years since SOA was first promoted as a concept.

Specifically, SOA has historically focused on ‘services’ – or, more explicitly, end-points, the contracts they expose, and service discovery. It has been less concerned with the architectures of the systems behind the end-points, which seems logically correct but has practical implications that have limited the widespread adoption of SOA solutions.

SOA is a great idea that has often suffered epic failures when implemented as part of major transformation initiatives (although less ambitious implementations, limited to specific domains, may have met with more success).

The challenge has been the disconnect between what developers build, what users/customers need, and what the enterprise desires – not just for the duration of a programme, but over time.

Enterprise modularity needs several factors to be in place before it can succeed at scale. Namely:

  • An enterprise architecture (business context) linked to strategy
  • A focus on data governance
  • Agile cross-disciplinary/cross-team planning and delivery
  • Integration technologies and tools consistent with development practices

Modularity standards and abstractions like OSGi, Resource Oriented Computing and RESTful computing enable the development of loosely coupled, cohesive modules, potentially deployable as stand-alone services, which can be orchestrated in innovative ways using many technologies – especially when linked with canonical enterprise data standards.
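The core idea can be sketched in a few lines. The toy below (Python, purely illustrative – OSGi itself is a Java standard, and all names here are invented) shows the registry pattern those standards share: modules publish implementations of a contract, and consumers discover them by contract rather than by concrete class, so dependencies stay explicit and loose.

```python
from abc import ABC, abstractmethod

class QuoteService(ABC):          # the contract (the 'end-point')
    @abstractmethod
    def quote(self, symbol: str) -> float: ...

class Registry:
    """In-process analogue of a service registry."""
    def __init__(self):
        self._services = {}
    def register(self, contract, impl):
        self._services.setdefault(contract, []).append(impl)
    def lookup(self, contract):
        return self._services.get(contract, [])

registry = Registry()

# A provider module registers an implementation against the contract.
class StubQuoteService(QuoteService):
    def quote(self, symbol: str) -> float:
        return {"ABC": 101.5}.get(symbol, 0.0)

registry.register(QuoteService, StubQuoteService())

# A consumer module depends only on the contract, never on the provider.
def best_quote(symbol: str) -> float:
    providers = registry.lookup(QuoteService)
    return max(p.quote(symbol) for p in providers)

print(best_quote("ABC"))  # 101.5
```

Swapping the stub for a production provider needs no change to the consumer – that substitutability is what makes the modules independently deployable.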

The upshot of all this is that traditional ESBs are transforming: technologies like Red Hat Fuse ESB and open-source solutions like Apache ServiceMix provide powerful building blocks that allow ESBs to become first-class applications rather than simply integration hubs. (MuleSoft has not, to my knowledge, adopted an open modularity standard such as OSGi, so I believe it is not appropriate as the basis for a general application architecture.)

This approach allows for the eventual construction of ‘domain applications’ which, more than being integration hubs, actually host the applications (or, more accurately, application components) in ways that make dependencies explicit.

Vendors such as CA Layer7, Apigee and Axway are prioritising API management over more traditional, heavy-weight ESB functions – essentially, anyone can build an API internal to their application architecture, but once you want to expose it beyond the application itself, an API management tool is required. These can be implemented top-down once the bottom-up API architecture has been developed and validated. Enterprise or integration architecture teams should be driving the adoption and implementation of API management tools, as these (done well) can be non-intrusive with respect to how application development is done, provided application developers take a microservices-led approach to application architecture and design. The concepts of ‘dumb pipes’ and ‘smart endpoints’ are key here, to avoid putting inappropriate complexity in the API management layer or in the (traditional) ESB.
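The ‘dumb pipes, smart endpoints’ split can be made concrete with a toy sketch: the gateway below handles only cross-cutting concerns (authentication, routing), while all business logic stays in the endpoint it forwards to. The routes, keys and handlers are invented for illustration.

```python
VALID_KEYS = {"key-123"}   # hypothetical API keys managed by the gateway

def smart_endpoint(payload):
    # business logic belongs here, behind the gateway, not in it
    return {"total": sum(payload["amounts"])}

ROUTES = {"/billing/total": smart_endpoint}

def gateway(path, api_key, payload):
    """A 'dumb pipe': authenticate, route, forward – nothing else."""
    if api_key not in VALID_KEYS:              # cross-cutting: auth
        return {"status": 401}
    handler = ROUTES.get(path)                  # cross-cutting: routing
    if handler is None:
        return {"status": 404}
    return {"status": 200, "body": handler(payload)}   # no transformation

print(gateway("/billing/total", "key-123", {"amounts": [1, 2, 3]}))
# {'status': 200, 'body': {'total': 6}}
```

Note the gateway never inspects or reshapes the payload; the moment it starts to, complexity has leaked into the management layer.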

Microservices is a complex topic, as this post describes. But modules (along the lines defined by OSGi) are a great stepping stone, as they embrace microservice thinking without necessarily introducing distributed-systems complexity. Innovative technologies like Paremus Service Fabric, building on OSGi standards, help make the transition from monolithic modular architectures to (distributed) microservice architectures as painless as can reasonably be expected, and can be a very effective way to manage that evolution.

Other solutions, such as Cloud Foundry, offer a means of taking much of the complexity out of building green-field microservices solutions by standardising many of the platform components – however, Cloud Foundry, unlike OSGi, is not a formal open standard, and so carries risks. (It may become a de-facto standard, however, if enough PaaS providers use it as the basis for their offerings.)

The decisions as to which ‘PaaS’ solutions to build/adopt enterprise-wide will be a key consideration for enterprise CTOs in the coming years. It is vital that such decisions are made such that the chosen PaaS solution(s) will directly support enterprise modularity (integration) goals linked to development standards and practices.

For 2015, it will be interesting to see how nascent enterprise standards such as described above evolve in tandem with IaaS and PaaS innovations. Any strategic choices made in these domains must consider the cost/impact of changing those choices: defining architectures that are too heavily dependent on a specific PaaS or IaaS solution limits optionality and is contrary to the principles of (enterprise) modularity.

This suggests that Enterprise modularity standards which are agnostic to specific platform implementation choices must be put in place, and different parts of the enterprise can then choose implementations appropriate for their needs. Hence open standards and abstractions must dominate the conversation rather than specific products or solutions.

The Learning CTO’s Strategic themes for 2015

Like most folks, I cannot predict the future. I can only relate the themes that most interest me. On that basis, here is what is occupying my mind as we go into 2015.

  • The Lean Enterprise – The cultural and process transformations necessary to innovate and maintain agility in large enterprises – in technology, financial management, and risk, governance and compliance. Includes using real-option theory to manage risk.
  • Enterprise Modularity – The state-of-the-art technologies and techniques which enable agile, cost-effective enterprise-scale integration and reuse of capabilities and services. Or SOA Mark II.
  • Continuous Delivery – The state-of-the-art technologies and techniques which bring together agile software delivery with operational considerations. Or DevOps Mark I.
  • Systems Theory & Systems Thinking – The ability to look at the whole as well as the parts of any dynamic system, and understand the consequences/impacts of decisions on the whole or those parts.
  • Machine Learning – Using business intelligence and advanced data semantics to dynamically improve automated or automatable processes.

[Blockchain technologies are another key strategic theme; they are covered in a separate blog.]

Why these particular topics?

Specifically, over the next few years, large organisations need to be able to:

  1. Out-innovate smaller firms in their field of expertise, and
  2. Embrace and leverage external innovative solutions and services that serve to enhance a firm’s core product offerings.

And firms need to be able to do this while keeping a lid on overall costs, which means managing complexity (or ‘cleaning up’ as you go along, also known as keeping technical debt under control).

It is also interesting to note that for many startups these themes are generally not an explicit concern: they tend to be ‘obvious’, and so need no additional management action. However, small firms may fail to scale successfully if they do not explicitly recognise where they are already doing these things, and ensure they maintain focus on these capabilities as they grow.

Also, for many startups, their success may be predicated on larger organisations getting on top of these themes, as otherwise the startups may struggle to get large firms to adopt their innovations on a sustainable basis.

The next few blog posts will explain these themes in a bit more detail, with relevant references.

The true scope of enterprise architecture

This article addresses what is, or should be, the true scope of enterprise architecture.

To date, most enterprise architecture efforts have been focused on supporting IT’s role as the ‘owner’ of all applications and infrastructure used by an organisation to deliver its core value proposition. (Note: in this article, ‘IT’ refers to the IT organisation within a firm, rather than ‘technology’ as a general theme.)

The view of the ‘enterprise’ was therefore heavily influenced by IT’s view of the enterprise.

There has been an increasing awareness in recent times that enterprise architects should look beyond IT and its role in the enterprise – notably led by Tom Graves and his view on concepts such as the business model canvas.

But while in principle this perspective is correct, in practice it is hard for most organisations to swallow. So most enterprise architects are still focused on IT-centric activities, which most corporate folks are comfortable for them to do – and for which, indeed, there is still considerable need.

But the world is changing, and this IT-centric view of the enterprise is becoming less useful, for two principal (ironically, technology-led) reasons:

  • The rise of Everything-as-a-Service, and
  • Decentralised applications

Let’s cover the above two major technology trends in turn.

The rise of Everything-as-a-Service

This is a reflection that more and more technology and technology-enabled capability is being delivered as a service – i.e., on-tap, pay-as-you-go and there when you need it.

This starts with basic infrastructure (so-called IaaS), rises up to Platform-as-a-Service (PaaS), then to Software-as-a-Service (SaaS), and finally to Business-Process-as-a-Service (BPaaS) and its ilk.

The implications of this are far-reaching, as this research from Horses for Sources indicates.

This trend puts the provision and management of IT capability squarely outside the ‘traditional’ IT organisation. While IT as an organisation must still exist, its role will focus increasingly on the ‘digital’ agenda: namely, client-centricity (bespoke, optimised client experience), data as an asset (and related governance/compliance issues), and managing complexity (of service utilisation, service integration, and legacy application evolution/retirement).

Of particular note is that many firms may be willing to write down their legacy IT investments in the face of newer, more innovative, agile and cheaper solutions available via the ‘as-a-Service’ route.

Decentralised Applications

Decentralised applications are applications which are built on so-called ‘blockchain’ technology – i.e., applications where transactions between two or more mutually distrusting and legally distinct parties can be reliably and irrefutably recorded without the need for an intermediate 3rd party.

Bitcoin is the first and most visible decentralised application: it acts as a distributed ledger. The currency, ‘bitcoin’, is built upon the Bitcoin protocol, and its focus is on being a reliable store of value. But there are many other protocols and technologies (such as Ethereum, Ripple, MasterCoin, Namecoin, etc.).

Decentralised applications will enable many new and innovative distributed processes and services. Where previously many processes were retained in-house for maximum control, processes can in future be performed by one or more external parties, with decentralised application technology ensuring all parties are meeting their obligations within the overall process.
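The ‘reliably and irrefutably recorded’ property can be illustrated with a toy hash-chained ledger. This is a deliberately bare-bones sketch of the one underlying idea – each entry commits to the hash of the entry before it – and omits everything (consensus, signatures, distribution) that makes real blockchain technology work across mutually distrusting parties.

```python
import hashlib
import json

def entry_hash(entry):
    """Deterministic hash of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger, transaction):
    """Each new entry commits to the hash of the previous one."""
    prev = entry_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev": prev, "tx": transaction})

def verify(ledger):
    """Any tampering with history breaks the chain of hashes."""
    return all(ledger[i]["prev"] == entry_hash(ledger[i - 1])
               for i in range(1, len(ledger)))

ledger = []
append(ledger, {"from": "A", "to": "B", "amount": 10})
append(ledger, {"from": "B", "to": "C", "amount": 4})
print(verify(ledger))              # True
ledger[0]["tx"]["amount"] = 999    # one party quietly rewrites history...
print(verify(ledger))              # False – the chain no longer verifies
```

The point is that no single party can alter a recorded transaction without every other party being able to detect it.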


In the short term, many EAs will be focused on helping optimise change within their respective organisations, and will remain accountable to IT while supporting a particular business area.

As IT’s role changes to accommodate the new reality of ‘everything-as-a-service’ and decentralised applications, EAs should be given a broader remit to look beyond traditional IT-centric solutions.

In the meantime, businesses are investing heavily in innovation accelerators and incubators – most of which are dominated by ‘as-a-service’ solutions.

It will take some time for the ‘everything-as-a-service’ model to play itself out, as there are many issues concerning privacy, security, performance, etc that will need to be addressed in practice and in law. But the direction is clear, and firms that start to take a service-oriented view of the capabilities they need will have a competitive advantage over those that do not.


The long journey for legacy application evolution

The CTO of Sungard recently authored this article on Sungard’s technical strategy for its large and diverse portfolio of financial applications:

The technical strategy is shockingly simple on the surface:

  • All products are to provide hosted solutions, and
  • All new products must use HTML5 technology

This (one assumes) reflects the changing nature of how Sungard would like to support its clients – i.e., moving from bespoke shrink-wrapped applications deployed and run in client environments, to providing hosted services that are more accessible and cheaper to deploy and operate.

Executing this strategy will have, I believe, some interesting side effects, as it is quite difficult to meet those two primary objectives without addressing a number of others along the way – for example, the development of key capabilities such as:

  • Elasticity
  • Dev-Ops
  • Mobile
  • Shared Services
  • Messaging/Integration

Elasticity

Simply put, hosted services require elasticity if cost effectiveness is to be achieved. At the very least, the capability to quickly add server capacity to a hosted service is needed, but – especially when multiple hosted services are offered – the ability to reallocate capacity to different services at different times is especially key.

For legacy applications, this can be a challenge: most legacy applications are designed to work with a known (ahead-of-time) configuration of servers, and significant configuration and/or coding work is needed when this changes. There is also a reliance on underlying platforms (e.g., application servers) themselves being able to move to a truly elastic environment. In the end, some product architectures may be able to do no more than host a replica of the client’s installation in the provider’s data centres, shifting the operational cost from the client to the provider. I suspect it won’t be long before such products become commercially unviable.

Dev-Ops

Dev-ops is a must-have in a hosted environment – i.e., effective, efficient processes that support robust change from programmer check-in to operational deployment. All SaaS-based firms understand the value of this, and while it’s been talked about a lot in ‘traditional’ organisations, the cultural and organisational changes needed to make it happen are often lagging.

For traditional software vendors in particular, this is a new skill: why worry about dev-ops when all you need to do is publish a new version of your software for your client to configure, test and deploy – probably no more than quarterly, and often less frequently? So a whole new set of capabilities needs to be built up to support a strategy like this.

Mobile

HTML5 is ready-made for mobile. By adopting HTML5 as a standard, application technology stacks will be mobile-ready, across the range of devices, enabling the creation of many products optimised for specific clients or client user groups.

In general, it should not be necessary to develop applications specifically for a particular mobile platform: the kinds of functionality that need to be provided will (in the case of financial services) likely be operational in nature – i.e., workflows or dashboards, etc.

Applications specifically designed to support trading activities may need to be custom-built, but even these should be easier to build on a technology stack that supports HTML5 than on more traditional 2-tier fat-client and database architectures.

Traditional 3-tier architectures are obviously easier to migrate to HTML5, but in a large portfolio of (legacy) applications, these may be in a relative minority.

Shared Services

In order to build many more innovative products, new products need to build upon the capabilities of older or existing products – but without creating ‘drag’ (i.e., software complexity) that can slow down and eventually kill the ability to innovate.

The way to do this is with shared services, which all teams recognise and evolve, and on top of which they build their products. Shared services require better data governance and architectural oversight, to ensure the most appropriate services are identified, and to ensure the design of good APIs and reusable software. Shared services pave the way for applications to consolidate functionality at a portfolio level.

Messaging/Integration

On the face of it, the Sungard strategy doesn’t say anything about messaging or integration: after all, HTML5 is a client GUI technology, and hosted services are (in principle) primarily about infrastructure.

And one could certainly meet those stated objectives without addressing messaging/integration. However, once you have hosted services, you can start to engage clients, from a business development perspective, on a ‘whole capability’ basis – i.e., offer solutions based on the full range of hosted services. This will require a degree of interoperability between services, predicated primarily on message passing of some sort.

On top of that, with hosted services, connection points into client environments need to be more formally managed: it is untenable (from a cost/efficiency perspective) to have custom interface points for every client, so some standards will be needed.

Messaging/Integration is closely related to shared services (both are concerned with APIs, message content, etc.), but is more loosely coupled: the client should not (to the fullest extent possible) know or care about the technical implementation of the hosted services, and should instead focus on message characteristics and behaviour as they relate to the client’s internal platforms.


The two objectives set by Sungard are a reflection of what is both meaningful and achievable for a firm with a portfolio of its size and complexity. Products that adopt the directives only to the letter – and go no further – will eventually find they have no strategic future.

Product owners, on the other hand, who understand where the CTO is really trying to get to will thrive and survive in an environment where legacy financial applications – feature-rich as they are – risk being ousted by new internet-based firms able to leverage the best of the latest technologies and practices to scale and take market share very quickly from traditional vendors.

Portfolio owners on the buy-side (i.e., financial service firms) should take note. A similar, simply stated strategy may be all that is needed to get the ball rolling.

Reactive Development vs Legacy Applications

To follow up my earlier post on the Reactive Development movement, I went to a meet-up explaining the rationale behind the concepts – and, hopefully, why it deserves its own ‘manifesto’ – particularly when compared with the architectures of many legacy applications owned by large/mature organisations, which did not specifically apply such principles.

To recap, Reactive Development is focused around designing and building applications conforming with the following 4 architectural principles:

  • Responsive
  • Resilient
  • Message driven
  • Elastic

When a new phrase is coined in technology, there are usually commercial interests behind it, and ‘reactive development’ is no different: the hosts of this particular event (Typesafe) are front-and-centre of the so-called Reactive-software movement and the Reactive Manifesto.

But, leaving that aside, is there anything new here? Why should architects care, haven’t they been dealing with these challenges all along?

Below is my assessment of how ‘legacy’ applications stack up against the Reactive Manifesto; this gives a hint as to the challenge (and opportunity) facing organisations replacing their legacy architectures with more modern ones. (In this case, ‘legacy’ means applications built over the past 10-15 years, not the mainframe dinosaurs.)

Responsive

Historically, responsiveness was primarily determined from a user perspective: since many applications were built to be delivered via HTTP (HTML, JavaScript, etc.), this mostly concerned application server architectures (and, indirectly, database responsiveness). Application server architectures are optimised for user-led use cases – i.e., to build responsive GUIs supporting many concurrent users. The case for using complex application servers (i.e., middleware such as WebLogic, WebSphere, Tomcat, etc.) has historically been harder to make for non-GUI-oriented application development (i.e., where user session management is not a concern). In particular, these architectures generally assume that the ‘users’ are human, not other systems, which has a significant impact on how such systems are architected.

In summary, application responsiveness – excluding databases – has been driven primarily by GUI requirements, resulting in application architectures optimised for user use-cases which only need to meet ‘good enough’ performance criteria. This can be expensive to fix when ‘good enough’ is no longer good enough.

Resilient

In ‘traditional’ architectures, resilience has been provided via techniques such as clustering, replication or redundant fail-over capacity. Such approaches require static infrastructure configurations, and the middleware underpinning applications must be explicitly designed to support those configurations; this usually means proprietary, enterprise-strength vendor solutions. Such solutions in turn have a heavy bias towards expensive infrastructure with a high MTBF (mean-time-between-failures), as recovering from failure is still an expensive and labour-intensive process – even if the application continues to operate at a degraded level through the failure. Conversely, an approach focused on achieving a low mean-time-to-recovery (MTTR) would acknowledge infrastructure failure as likely to happen, and seek to minimise the time taken to get failed processes up and running again.

In summary, application resilience has historically relied on high MTBF rather than low MTTR, and consequently resilience comes at a high infrastructure and support cost.
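The low-MTTR stance can be sketched as a simple supervisor: assume the process will fail, detect the failure, and restart quickly, rather than buying infrastructure that ‘never’ fails. This is illustrative only – a real system would reschedule onto healthy infrastructure, with backoff and alerting.

```python
def supervise(task, max_restarts=3):
    """Run task; on failure, restart it quickly (the low-MTTR stance)."""
    restarts = 0
    while True:
        try:
            return task()
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise
            # in a real system: reschedule on healthy infrastructure,
            # apply backoff, emit an alert

calls = {"n": 0}
def flaky():
    """Simulates a process whose host dies twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("node died")
    return "recovered"

print(supervise(flaky))  # recovered (after two fast restarts)
```

Contrast this with the clustering approach above, where the same two failures would each have triggered an expensive, manual recovery exercise.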

Message Driven

Message-driven, in the context of Reactive Development, means non-blocking and location-transparent. Publish/subscribe messaging architectures conform to that definition, and certainly these exist today. But most architectures are not message driven: the use of RESTful APIs, for example, does not by default conform to this principle, as REST is synchronous and can be location-specific. However, it is certainly possible to build message-driven architectures on top of REST.
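A minimal sketch of those two properties: the publisher below never waits on any subscriber (non-blocking), and subscribers know only a topic name, never the publisher’s address (location transparency). This is an in-process toy, not a real broker.

```python
import queue
import threading

class Broker:
    """Toy pub/sub broker: topics decouple publishers from subscribers."""
    def __init__(self):
        self._topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self._topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        # returns immediately: the publisher never blocks on consumers
        for q in self._topics.get(topic, []):
            q.put(message)

broker = Broker()
inbox = broker.subscribe("prices")   # subscriber knows only the topic name
received = []

def consumer():
    received.append(inbox.get())     # blocks only the consumer

t = threading.Thread(target=consumer)
t.start()
broker.publish("prices", {"symbol": "ABC", "px": 101.5})
t.join()
print(received)   # [{'symbol': 'ABC', 'px': 101.5}]
```

Because neither side holds a reference to the other, either can be moved, replaced or scaled out without the other noticing – which is exactly what the resilience and elasticity principles need.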

The real point of the message-driven principle is to enable the other reactive principles of resilience, elasticity and responsiveness. It means that even if you have point-to-point messages, as long as they conform to Reactive Principles, you will get the Reactive Manifesto benefits of “load management, elasticity, and flow control” – benefits that are not easy to obtain with legacy static point-to-point feed-driven architectures.

In most enterprise environments, good pub/sub solutions have existed for some time. However, without a requirement for application designers to conform to the ‘message-driven’ principle, the question becomes: when you want to get data from System X to System Y, why incur the overhead of a (relatively expensive) enterprise messaging system when all that is needed is a simple file extract?

In summary, the lack of an historical message-driven focus for system design has resulted in a proliferation of complex and expensive-to-maintain point-to-point system dependencies, despite the existence of enabling technologies.

Elastic

Traditional architectures are not elastic. By elastic, we mean the ability to scale down as well as up. Legacy architectures are static: it can often take months to introduce new infrastructure into a production environment for a ‘legacy’ application.

The best ‘traditional’ architectures use load balancing and allow the easy addition of new servers into a farm (such as a web server farm or compute grid). This enables scaling up, but doesn’t particularly support scaling down: i.e., taking servers out of the farm and making them available for other applications is still a very manual, labour intensive exercise – and is definitely not something that can be done many times during the day, as demand and load varies.

In summary: legacy architectures may at best be able to scale up, but they cannot easily scale down, resulting in wasted capacity and unnecessarily high infrastructure costs.


My main take-away from the Reactive Development meet-up is really about how distributed computing has advanced over the last 20 years – or, more specifically, how distributed computing concepts are becoming key skills that your ‘average’ developer needs to grasp to cope with ‘modern’ application development.

Distributed computing in the enterprise is hard: you just have to look at CORBA, SOA and ESBs to see how history has treated ambitious attempts at distributed computing. Distributed technologies such as

  • Distributed shared mutable state,
  • Serialisable distributed transactions,
  • Synchronous RPC, and
  • Distributed Objects

have all failed to gain mass traction because they require specialist knowledge, which usually implies vendor lock-in when deployed at enterprise scale – which equates to long-term expense.

By applying Reactive principles, and using technologies that can specifically enable them, it should be possible to build enterprise-scale applications at significantly lower long-term cost and with increasing rather than decreasing business value.

Note: I have no view as yet as to whether vendors rallying behind the Reactive Manifesto are on the right track with their solutions. In my opinion, there is still too much emphasis on the run-time aspects of Reactive Development, and not enough on change planning and system design – something no vendor can solve for an organisation, but on which applying Reactive principles can have a very positive effect.

APIs, more APIs, and .. crypto-currencies

I recently attended #APIconUK, a conference on APIs organised by the folks at Programmable Web. This event was aimed at developers (bottom-up API development) rather than more traditional enterprise customers (top-down SOA types), and as such was very poorly attended by enterprise-minded folks. Which, IMO, is a bit short-sighted.

The conference was good – a nice mix of hands on, technical and strategic presentations with plenty of time to chat to people between sessions.

Sadly I couldn’t make all the sessions, but key themes I picked up were:

  • Emphasis on treating APIs as a product
  • Computational APIs using Wolfram Alpha’s Mathematica
  • The rise of Linked Data
  • API Security
  • Payment APIs (Stripe)
  • Monetisation of APIs using crypto-currencies
  • Mobile developers & APIs

Adding all these together, what does it really mean?

  • APIs must be treated as first-class citizens with respect to technology management: inventoried, marketed, governed and supported in much the same way applications have been in the past.
  • It is possible to rapidly build useful services using sophisticated but usually non-enterprise-standard tools like Mathematica, and expose the end result as a RESTful API which enterprise IT teams can then re-implement in standard technologies. This allows consumers of the API to gain early benefit, while giving enterprise IT time to implement it in a sustainable, cost-effective way over the medium term.
  • Linked Data – the idea of attaching semantics to API information – is key for Machine Learning. APIs have the opportunity to become dynamic and self-forming when coupled with semantic information via Linked Data standards. Machine Learning is, in turn, key to automating or significantly improving processes that have relied on traditional BI reporting methods.
  • API security is a huge concern, but lots of vendors and solutions are beginning to appear in this space. In essence, it is quick and easy for anyone to create an API (just build a RESTful interface), but API management tools can help control who accesses that API, etc. It also imposes a level of governance to avoid proliferation of dubiously useful APIs.
  • Payment APIs are the next big thing. Stripe demonstrated a live build-it-from-scratch example of how to use their payment APIs, and it is remarkably simple. It doesn’t even need a client-side SDK, as you can invoke the API via tools like curl. An excellent example of how a simple API can hide the complexity of the back-end processing that must go on to make it appear simple.
  • Monetisation of APIs could be huge to incentivise the development of reusable services in direct response to demand. This is a significant topic, but essentially the current API model relies on many (internet-based) API providers ultimately making money from advertising, and not from the API itself. Introducing an API currency which is exchanged whenever external APIs are used, or external services use your API, is an intriguing concept, which may be useful even in a ‘traditional’ corporate environment, and would allow service providers to monetise their services directly without having to rely on ‘eyeball’ oriented revenue.
  • Mobile developers today still tend to build the back-end in a way that is highly optimised for the mobile application – in other words, any APIs built (RESTful or otherwise) tend to be designed for the specific needs of the mobile app, and not provided as a general service. Mobile developers have the opportunity to take the lead in ensuring that any mobile app results in the publishing of one or more enterprise-relevant services that other (mobile) app developers could use. Clearly this has an impact on how systems are designed and built, which is likely why the focus on mobile hasn’t produced good enterprise-relevant APIs.
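On the Linked Data point above: attaching semantics can be as simple as adding a JSON-LD `@context` that maps each API field to a shared vocabulary. The payload below is invented; the vocabulary URLs are standard schema.org terms.

```python
import json

# An ordinary API payload, plus a JSON-LD @context mapping each field
# to a shared vocabulary term (here, schema.org).
payload = {
    "@context": {
        "name": "https://schema.org/name",
        "price": "https://schema.org/price",
    },
    "name": "ACME Widget",
    "price": 9.99,
}

# Round-trip through JSON, as an API consumer would receive it.
doc = json.loads(json.dumps(payload))

# A consumer (or a machine-learning pipeline) can now resolve each
# field to a well-defined meaning instead of guessing from field names.
print(doc["@context"]["price"])   # https://schema.org/price
```

Two APIs that both map a field to `https://schema.org/price` can be joined mechanically, with no human reading the documentation – which is what makes self-forming, ML-friendly APIs plausible.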

Where does this leave the enterprise CTO? Here’s my take:

1) Mobile apps are where a lot of attention is these days: make sure every mobile project catalogs its data and makes an effort to publish reusable APIs. Do not let every mobile app have its own back-end server implementation, unless it is essentially just an orchestration layer.

2) Encourage developers to build and catalog APIs. Even creating a simple, human-discoverable internal catalog like the one Programmable Web has would be a good start.

3) Understand your data from a business perspective, and document it using semantic web standards. This not only helps with IT planning, but is also directly usable in API and machine-learning initiatives, through standards like RDF and JSON-LD. Data architects are the best folks to tackle this.

4) Use API Management tools to help manage and control the creation and use of APIs.

5) Adopt principles like those proposed by the Reactive Development Movement to ensure that architectures behind APIs meet basic architectural standards, so they have some hope of coping with unplanned growth and utilisation.

6) Any app that does payments should use a well-defined Payments API. It is beyond the right time to develop internal Payment API standards for apps that have payment-processing requirements; the increasing use of alternative currencies will expedite the need for this, as many traditional or legacy systems will not be able to cope with these currencies, so the API should hide this. Plus, it will help manage the transition from current state to target application landscape for payments-related applications. A model-driven approach to payments is key here, especially as international payments standards take time to evolve.

7) APIs can enable innovative (read: non-standard) use of IT solutions, as long as the innovation can be encapsulated behind standard APIs. This gives the enterprise IT folks time to plan how to support the API, but still allow (limited) business adoption of the API for immediate business value.

With respect to how this all relates to SOA-type enterprise integration strategies, I’m still forming a view. Specifically, the internet itself does not have the concept of a ‘bus’. Someday it possibly will. But this is one space where the innovation still needs to come from enterprises, which still struggle with implementing successful SOA architectures that extend much beyond departmental boundaries.

Finally, the relationship between APIs and crypto-currencies is potentially so huge that I will cover it another day.

The Reactive Development Movement

A new movement, called ‘Reactive Development’, is taking shape.

The rationale behind creating a new ‘movement’, with its own manifesto, principles and related tools, vendors and evangelists, seems quite reasonable. It leans heavily on the principles of Agile, but extends them into the domain of architecture.

The core principles of reactive development are around building systems that are:

  • Responsive
  • Resilient
  • Elastic
  • Message Driven

This list may grow, but these four are a solid starting point.

What this movement seems to have (correctly) realised is that building such systems isn’t easy, and these properties don’t come for free: most folks tend to ignore these principles *unless* they are specific requirements of the application to be built. Usually, however, the above characteristics are what you wish your system had when you see your original ‘coherent’ architecture slowly falling apart as you continually accommodate other systems that depend on it (resulting in ‘stovepipe’ architectures, etc.).

IMO, this is the right time for the Reactive Movement. Most mature organisations are already feeling the pain of maintaining and evolving their stovepipe architectures, and many do not have a coherent internal technical strategy to fix this situation (or, if they do, it is often not matched with a corresponding business strategy or enabling organisational design – a subject for another post). And the rapidly growing number of SaaS platforms out there means that these principles are a concern for even the smallest startups, participating as part of a connected set of services delivering value to many businesses.

Most efforts around modularity (or ‘service-oriented architectures’) in the end try to deliver on the above principles, but do not bring together the myriad skills and capabilities needed to make them successful.

So, I think the Reactive Development effort has the opportunity to bring together many architecture skills – namely process/functional (business) architects, data architects, integration architects, application architects and engineers. Today, while the individual disciplines themselves are improving rapidly, the touch-points between them are still, in my view, very weak indeed – and often fall apart completely when it gets to the developers on the ground (especially when they are geographically removed from the architects).

I will be watching this movement with great interest.
