Reactive Development vs Legacy Applications

To follow up my earlier post on the Reactive Development movement, I went to a meet-up explaining the rationale behind the concepts – and, hopefully, why they deserve their own ‘manifesto’ – particularly when compared with the architectures of many legacy applications owned by large/mature organisations, which were not built with such principles in mind.

To recap, Reactive Development is focused on designing and building applications that conform to the following four architectural principles:

  • Responsive
  • Resilient
  • Message driven
  • Elastic

When a new phrase is coined in technology, there are usually commercial interests behind it, and ‘reactive development’ is no different: the hosts of this particular event (Typesafe) are front-and-centre of the so-called Reactive-software movement and the Reactive Manifesto.

But, leaving that aside, is there anything new here? Why should architects care – haven’t they been dealing with these challenges all along?

Below is my assessment of how ‘legacy’ applications stack up against the Reactive Manifesto; this gives a hint of the challenge (and opportunity) facing organisations that need to replace their legacy architectures with more modern ones. (In this case, ‘legacy’ means applications built over the past 10–15 years, not the mainframe dinosaurs.)


Responsive

Historically, responsiveness was primarily judged from a user perspective: since many applications were built to be delivered via HTTP (HTML, JavaScript, etc.), this mostly concerned application server architectures (and, indirectly, database responsiveness). Application server architectures are optimised for user-led use cases – i.e., to build responsive GUIs supporting many concurrent users. The case for using complex application servers (i.e., middleware such as WebLogic, WebSphere, Tomcat, etc.) has historically been harder to make for non-GUI-oriented application development (i.e., where user session management is not a concern). In particular, these architectures generally assume that the ‘users’ are human, and not other systems, which has a significant impact on how such systems are architected.

In summary, application responsiveness – excluding databases – has been driven primarily by GUI requirements, resulting in application architectures optimised for user use-cases which only need to meet ‘good enough’ performance criteria. This can be expensive to fix when ‘good enough’ is no longer good enough.


Resilient

In ‘traditional’ architectures, resilience has been provided via techniques such as clustering, replication or redundant fail-over capacity. Such approaches require static infrastructure configurations, and the middleware software underpinning applications must be explicitly designed to support such configurations; this usually means proprietary enterprise-strength vendor solutions. Such solutions in turn have a heavy bias towards expensive infrastructure with a high MTBF (mean time between failures), as recovering from failure is still an expensive and labour-intensive process – even if the application continues to operate at a degraded level through the failure. Conversely, an approach focused on achieving a low mean time to recovery (MTTR) would acknowledge infrastructure failure as likely to happen, and seek to minimise the time taken to get failed processes up and running again.

In summary, application resilience has historically relied on high MTBF rather than on low MTTR, and consequently resilience comes at a high infrastructure and support cost.
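The trade-off can be made concrete with the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). The figures below are invented purely for illustration, not taken from any real system:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative (made-up) figures:
# 'legacy' strategy: expensive kit that rarely fails, but slow manual recovery
legacy = availability(mtbf_hours=8760, mttr_hours=8)      # fails ~once a year
# 'reactive' strategy: cheap kit that fails often, but recovers near-instantly
reactive = availability(mtbf_hours=168, mttr_hours=0.01)  # fails ~weekly

print(f"legacy:   {legacy:.5f}")
print(f"reactive: {reactive:.5f}")
```

Even with weekly failures, the low-MTTR strategy can come out ahead – which is why acknowledging failure and optimising recovery time is the more economical bet.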

Message Driven

Message-driven, in the context of Reactive Development, means non-blocking and location-transparent. Publish/subscribe messaging architectures conform to that definition, and these certainly exist today. But most architectures are not message driven: the use of RESTful APIs, for example, does not by default conform to this principle, as REST calls are typically synchronous and addressed to a specific location. However, it is certainly possible to build message-driven architectures on top of REST.

The real point of the message-driven principle is that it enables the other reactive principles of resilience, elasticity and responsiveness. Even point-to-point messages, so long as they conform to Reactive principles, deliver the Reactive Manifesto benefits of “load management, elasticity, and flow control” – benefits that are not easy to obtain with legacy static point-to-point feed-driven architectures.
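As a sketch of what ‘non-blocking and location-transparent’ means in practice, here is a toy in-process broker (all names are invented; this is not any particular product). Publishers and subscribers share only a topic name, never a direct reference to each other, and publishing never blocks on subscriber processing:

```python
import queue
import threading
from collections import defaultdict

class Broker:
    """Toy message broker: location transparency via topic names,
    non-blocking publish via an internal queue and dispatch thread."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self._inbox = queue.Queue()
        threading.Thread(target=self._dispatch, daemon=True).start()

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        self._inbox.put((topic, message))  # returns immediately

    def _dispatch(self):
        while True:
            topic, message = self._inbox.get()
            for handler in self._subscribers[topic]:
                handler(message)
            self._inbox.task_done()

broker = Broker()
received = []
broker.subscribe("trades", received.append)
broker.publish("trades", {"symbol": "XYZ", "qty": 100})
broker._inbox.join()  # wait for delivery (demo convenience only)
print(received)
```

Because the publisher only sees `publish(topic, message)`, the subscriber could equally live in another process or on another machine without the publisher changing – which is the location transparency the principle asks for.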

In most enterprise environments, good pub/sub solutions have existed for some time. However, unless application designers are required to conform to the ‘message-driven’ principle, the pragmatic question when getting data from System X to System Y becomes: why accept the overhead of a (relatively expensive) enterprise messaging system when all that is needed is a simple file extract?

In summary, the lack of an historical message-driven focus for system design has resulted in a proliferation of complex and expensive-to-maintain point-to-point system dependencies, despite the existence of enabling technologies.


Elastic

Traditional architectures are not elastic. By elastic, we mean the ability to scale down as well as up. Legacy architectures are static: it can often take months to introduce new infrastructure into a production environment for a ‘legacy’ application.

The best ‘traditional’ architectures use load balancing and allow the easy addition of new servers into a farm (such as a web server farm or compute grid). This enables scaling up, but does little to support scaling down: taking servers out of the farm and making them available for other applications is still a very manual, labour-intensive exercise – and is definitely not something that can be done many times a day as demand and load vary.

In summary: legacy architectures may at best be able to scale up, but they cannot easily scale down, resulting in wasted capacity and unnecessarily high infrastructure costs.
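As an illustration of the elastic alternative (class name, thresholds and limits below are all invented, not from any product), an elastic pool shrinks as readily as it grows, continuously resizing itself to observed load:

```python
import math

class ServerFarm:
    """Toy autoscaler: sizes a pool of (virtual) servers so that each
    handles at most TARGET_RPS_PER_SERVER requests/sec. A legacy farm
    typically supports only the 'scale up' half of this loop, manually."""

    TARGET_RPS_PER_SERVER = 100

    def __init__(self, min_servers=1, max_servers=10):
        self.min_servers = min_servers
        self.max_servers = max_servers
        self.servers = min_servers

    def rebalance(self, observed_rps):
        # Elastic: the pool shrinks as readily as it grows.
        needed = math.ceil(observed_rps / self.TARGET_RPS_PER_SERVER)
        self.servers = max(self.min_servers, min(self.max_servers, needed))
        return self.servers

farm = ServerFarm()
farm.rebalance(450)  # demand spike: scale up to 5 servers
farm.rebalance(50)   # demand drops: scale back down to 1
```

The point is not the arithmetic but the feedback loop: a rebalance like this running every few minutes, with servers returned to a shared pool when released, is exactly what static legacy farms cannot do.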


My main take-away from the Reactive Development meet-up is how far distributed computing has advanced over the last 20 years – or, more specifically, how distributed computing concepts are becoming key skills that the ‘average’ developer needs to grasp to cope with ‘modern’ application development.

Distributed computing in the enterprise is hard: you just have to look at CORBA, SOA and ESBs to see how history has treated ambitious attempts at distributed computing. Distributed technologies such as

  • Distributed shared mutable state,
  • Serialisable distributed transactions,
  • Synchronous RPC, and
  • Distributed Objects

have all failed to gain mass traction because they require specialist knowledge, which usually implies vendor lock-in when deployed at enterprise scale – which equates to long-term expense.

By applying Reactive principles, and using technologies that can specifically enable them, it should be possible to build enterprise-scale applications at significantly lower long-term cost and with increasing rather than decreasing business value.

Note: I have no view as yet on whether the vendors rallying behind the Reactive Manifesto are on the right track with their solutions. In my opinion, there is still too much emphasis on the run-time aspects of Reactive Development, and not enough on change planning and system design – something no vendor can solve for an organisation, but on which the application of Reactive principles can have a very positive effect.

The Reactive Development Movement

A new movement, called ‘Reactive Development’, is taking shape.

The rationale for creating a new ‘movement’, with its own manifesto, principles and related tools, vendors and evangelists, seems quite reasonable. It leans heavily on the principles of Agile, but extends them into the domain of architecture.

The core principles of reactive development are around building systems that are:

  • Responsive
  • Resilient
  • Elastic
  • Message Driven

This list may grow, but these principles are a solid starting point.

What this movement seems to have (correctly) realised is that building such systems isn’t easy, and these qualities don’t come for free: most folks tend to ignore these principles *unless* they are specific requirements of the application to be built. Usually, however, the above characteristics are what you wish your system had when you see your original ‘coherent’ architecture slowly falling apart as you continually accommodate other systems that have a dependency on it (resulting in ‘stovepipe’ architectures, etc.).

IMO, this is the right time for the Reactive Movement. Most mature organisations are already feeling the pain of maintaining and evolving their stovepipe architectures, and many do not have a coherent internal technical strategy to fix this situation (or, if they do, it is often not matched with a corresponding business strategy or enabling organisational design – a subject for another post). And the rapidly growing number of SaaS platforms out there means that these principles are a concern for even the smallest startups, participating as part of a connected set of services delivering value to many businesses.

Most efforts around modularity (or ‘service-oriented architectures’) ultimately try to deliver on the above principles, but do not bring together all the myriad skills and capabilities needed to be successful.

So, I think the Reactive Development effort has the opportunity to bring together many architecture disciplines – namely process/functional (business) architects, data architects, integration architects, application architects and engineers. Today, while the individual disciplines themselves are improving rapidly, the touch-points between them are still, in my view, very weak indeed – and often fall apart completely when it comes to the developers on the ground (especially when they are geographically removed from the architects).

I will be watching this movement with great interest.