The hidden costs of PaaS & microservice engineering innovation

[tl;dr The leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.]

The pace of innovation in the PaaS and microservice space is increasing rapidly. This, coupled with increasing pressure on ‘traditional’ organisations to deliver more value more quickly from IT investments, is causing a flurry of interest in PaaS-enabling technologies such as Cloud Foundry (favoured by the likes of IBM and Pivotal), OpenShift (favoured by Red Hat), Azure (Microsoft), Heroku (Salesforce), AWS, Google App Engine, etc.

A key characteristic of all these PaaS solutions is that they are ‘devops’ enabled – i.e., both code deployment and infrastructure deployment can be automated, opening the way to highly automated operational processes for applications built on these platforms.

For large organisations, or organisations that prefer to control their infrastructure (because of, for example, regulatory constraints), PaaS solutions that can be run in a private datacenter rather than the public cloud are preferable, as this preserves the option to deploy to external clouds later if needed or appropriate.

These PaaS environments are feature-rich and aim to provide many of the building blocks needed to build enterprise applications. In parallel, framework initiatives such as Spring Boot, Dropwizard and Vert.x aim to make it easier to build PaaS-ready applications.
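
To illustrate how little ceremony these frameworks now require, here is a minimal sketch of a complete HTTP service using Spring Boot – assuming the spring-boot-starter-web dependency is on the classpath, and with purely illustrative class and endpoint names:

    // A complete HTTP microservice in one class (a sketch, not a reference
    // implementation). Assumes spring-boot-starter-web is on the classpath.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class GreetingService {

        // A single illustrative endpoint; in a PaaS environment the platform
        // handles routing, scaling and restarts around the running process.
        @RequestMapping("/greeting")
        public String greeting() {
            return "hello";
        }

        public static void main(String[] args) {
            // Starts an embedded HTTP server - no application server to install.
            SpringApplication.run(GreetingService.class, args);
        }
    }

Typically, a single-class service like this can be handed to a PaaS as little more than a build artifact plus a deployment manifest – which is precisely why the marginal cost of a new service is falling so fast.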

Combined, all of these promise to provide a dramatic increase in developer productivity: the marginal cost of developing, deploying and operating a complete application will drop significantly.

Due to the low capital investment required to build new applications, it becomes ever more feasible to move from a heavyweight, planning-intensive approach to IT investment to a more agile approach, where a complete application can be built, iterated and validated (or not) in the time it takes to create a traditional requirements document.

However, this also has massive implications: left unchecked, the drift towards entropy will increase over time, and organisations could be severely challenged to effectively manage and generate value from the sheer number of applications and services that can be created on such platforms. Complexity management therefore needs attention from the very beginning.

Many of the above platforms aim to make it as easy as possible for developers to get going quickly: this is a laudable goal, and if more of the complexity can be pushed into the PaaS, then that can only be good. The consequence of this approach is that developers have less control over the evolution of key aspects of the PaaS, and this could cause unexpected issues as PaaS upgrades conflict with application lifecycles, etc. In essence, it could be quite difficult to isolate applications from some PaaS changes. How these frameworks help developers cope with such changes is something to closely monitor, as these platforms are not yet mature enough to have gone through a major upgrade with a significant number of deployed applications.

The relative benefit/complexity trade-off between established microservice frameworks such as OSGi and the easier-to-use solutions described above needs to be tested in practice. Specifically, OSGi’s more robust dependency model may prove more useful in enterprise environments than in environments that take a ‘move fast and break things’ approach to application development, especially if OSGi-based PaaS solutions such as JBoss Fuse on OpenShift and Paremus Service Fabric gain wider adoption.

So: all well and good from the technology side. But even if the pros and cons of the different engineering approaches are evaluated and a perfect PaaS solution emerges, that doesn’t mean Microservice Nirvana can be achieved.

A recent article on the challenges of building successful microservice applications, coupled with a presentation by Lisa van Gelder at a recent Agile meetup in New York City, has emphasised that even given the right enabling technologies, deploying microservices is a major challenge – but if done right, the rewards are well worth it.

Specifically, there are a number of factors that impact the success of a large scale or enterprise microservice based strategy, including but not limited to:

  • Shared ownership of services
  • Setting cross-team goals
  • Performing scrum of scrums
  • Identifying swim lanes – isolating content failure & eventually consistent data
  • Provision of circuit breakers & timeouts (anti-fragile) – see the sketch after this list
  • Service discoverability & clear ownership
  • Testing against stubs; customer driven contracts
  • Running fake transactions in production
  • SLOs and priorities
  • Shared understanding of what happens when something goes wrong
  • Focus on mean time to repair (recover) rather than mean time to failure
  • Use of common interfaces: deployment, health check, logging, monitoring
  • Tracing a user’s journey through the application
  • Collecting logs
  • Providing monitoring dashboards
  • Standardising common metric names
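
To make the circuit breaker and timeout bullet concrete, here is a deliberately simplified sketch in Java – not production code, and not tied to any particular library (hardened implementations exist, e.g. Netflix Hystrix):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.function.Supplier;

    // Minimal circuit breaker: after a run of consecutive failures it 'opens'
    // and fails fast, giving the struggling downstream service time to recover.
    public class CircuitBreaker {
        private final int failureThreshold;
        private final Duration retryAfter;
        private int consecutiveFailures = 0;
        private Instant openedAt = null;

        public CircuitBreaker(int failureThreshold, Duration retryAfter) {
            this.failureThreshold = failureThreshold;
            this.retryAfter = retryAfter;
        }

        public <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
            if (isOpen()) {
                return fallback.get();           // fail fast while open
            }
            try {
                T result = remoteCall.get();
                consecutiveFailures = 0;         // success closes the breaker
                return result;
            } catch (RuntimeException e) {
                if (++consecutiveFailures >= failureThreshold) {
                    openedAt = Instant.now();    // trip the breaker
                }
                return fallback.get();
            }
        }

        private boolean isOpen() {
            if (openedAt == null) return false;
            if (Instant.now().isAfter(openedAt.plus(retryAfter))) {
                openedAt = null;                 // half-open: allow one trial call
                consecutiveFailures = failureThreshold - 1;
                return false;
            }
            return true;
        }
    }

Timeouts belong inside the remote call itself (e.g., a bounded HTTP client timeout), so that a slow dependency registers as a failure rather than silently blocking its callers.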

Some of these can be provided technically by the chosen PaaS, but much rests on best practices being consistently applied within and across development teams. In fact, these key success factors are quite hard to capture in traditional architectural views – something that needs to be considered when architecting large-scale microservice solutions.

In summary, the leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.

How ‘uncertainty’ can be used to create a strategic advantage

[TL;DR This post outlines a strategy for dealing with uncertainty in enterprise architecture planning, with specific reference to regulatory change in financial services.]

One of the biggest challenges for anyone involved in technology is balancing the need to address immediate requirements against the need to prepare for future change, at the risk of over-engineering a solution.

The wrong balance over time results in complex, expensive legacy technology that ends up being an inhibitor to change, rather than an enabler.

It should not be unreasonable to expect that, over time, and with appropriate investment, any firm should have significant IT capability that can be brought to bear on a multitude of challenges or opportunities – even those not thought of at the time.

Unfortunately, most legacy systems are so optimised to solve the specific problem they were commissioned to solve that they often cannot be easily or cheaply adapted for new scenarios or problem domains.

In other words, as more functionality is added to a system, the ability to change it diminishes rapidly:

[Chart: Agility vs Functionality]

The net result is that the technology platform already in place is optimised to cope with existing business models and practices, but generally incapable of (cost-effectively) adapting to new business models or practices.

Addressing this requires some forward thinking: specifically, what capabilities need to be developed to support where the business needs to be, given the large number of known unknowns? (Accepting that everybody is in the same boat when it comes to dealing with unknown unknowns.)

These capabilities are generally determined by external factors – trends in the specific sector, technology, society, economics, etc, coupled with internal forward-looking strategies.

An excellent example of where a lack of focus on capabilities has caused structural challenges is the financial industry. A recent conference at the Bank for International Settlements (BIS) highlighted the following capability gaps in how banks do their IT – at least as it relates to regulators’ expectations:

  • Data governance and data architecture need to be optimised in order to enhance the quality, accuracy and integrity of data.
  • Analytical and reporting processes need to facilitate faster decision-making and direct availability of the relevant information.
  • Processes and databases for the areas of finance, control and risk need to be harmonised.
  • Increased automation of the data exchange processes with the supervisory authorities is required.
  • Fast and flexible implementation of supervisory requirements by business units and IT necessitates a modular and flexible architecture and appropriate project management methods.

The interesting aspect about the above capabilities is that they span multiple businesses, products and functional domains. Yet for the most part they do not fall into the traditional remit of typical IT organisations.

The current state of technology is capable of delivering these requirements from a purely technical perspective: these are challenging problems, but for the most part they have already been solved, or are being solved, in other industries or sectors – sometimes at an even larger scale than banks have to deal with. Finding talent, however, is and remains an issue.

The big challenge, rather, is in ‘business-technology’: that amorphous space that is not quite business but not quite (traditional) IT either. This is the capability that banks need to develop: the ability to interpret what outcomes a business requires, and map that not only to projects, but also to capabilities – both business capabilities and IT capabilities.

So, what core capabilities are being called out by the BIS? Here’s a rough initial pass (by no means complete, but hopefully indicative):

  • Data Governance: increased focus on data ontologies, semantic modelling, linked/open data (RDF), machine learning, self-describing systems, and integration.
  • Analytics & Reporting: application of Big Data techniques for scaling timely analysis of large data sets, not only for reporting but also as part of feedback loops into automated processes; a data-science approach to analytics.
  • Processes & Databases: use of metadata in exposing capabilities that can be orchestrated by many business-aligned IT teams to support specific end-to-end business processes; databases exposed only via prescribed services; model-driven product development; business architecture.
  • Automation of data exchange: automation of all report generation, approval, publishing and distribution (i.e., throwing people at the problem won’t fix this).
  • Fast and flexible implementation: adoption of modular-friendly practices such as portfolio planning, domain-driven design, enterprise architecture, agile project management, and microservice (distributed, cloud-ready, reusable, modular) architectures.

It should be obvious from this list that it will not be possible or feasible to outsource these capabilities. Individual capabilities are not developed in isolation: they complement and support each other. Therefore they need to be developed and maintained in-house – although vendors will certainly have a role in successfully delivering them. And these skills are quite different from those that existing business & IT staff have (although some are evolutionary).

Nobody can accurately predict what systems need to be built to meet the demands of any business in the next 6 months, let alone 3 years from now. But the capabilities that separate the winners from the losers in a given sector are easier to identify. Banks in particular are under massive pressure: regulatory demands, major shifts in market dynamics, competition from smaller, more nimble alternative financial service providers, and rapidly diminishing technology infrastructure costs levelling the playing field for new contenders.

Regulators have, in fact, given banks a lifeline: those that heed the regulators and take appropriate action will actually be in a strong position to deal competitively with significant structural change to the financial services industry over the next 10+ years.

The changes (client-centricity, digital transformation, regulatory compliance) that all knowledge-based industries (especially finance) will go through will depend heavily on all of the above capabilities. So this is an opportunity for financial institutions to get 3 for the price of 1 in terms of strategic business-IT investment.


THE FUTURE OF ESBs (AND SOA)

There are some interesting changes happening in technology, which will likely fundamentally change how IT approaches technology like Enterprise Service Buses (ESBs) and concepts like Service Oriented Architecture (SOA).

Specifically, those changes are:

  • An increased focus on data governance, and
  • Microservice technology

Let’s take each in turn, and conclude by suggesting how this will impact how ESBs and SOA will likely evolve.

Data Governance

Historically, IT has had an inconsistent record with respect to data governance. For sure, each application often had dedicated data modellers or designers, but their data architectures tended to be very inward-focused. Integration initiatives tended to focus on specific projects with specific requirements, and data was governed only to the extent that it enabled individual project objectives to be achieved.

Sporadic attempts at creating standard message structures and dictionaries crumbled in the face of meeting tight deadlines for critical business deliverables.

ESBs, except in the most stable, controlled environments, failed to deliver the anticipated business benefits: heavy-weight ESBs turned out to be at least as un-agile as the applications they were intended to integrate, and since requirements on the bus evolve continually, application teams tended to favour reliable (or at least predictable) point-to-point solutions over enterprise solutions.

But there are three new drivers for improving data governance across the enterprise, and not just at the application level. These are:

  • Security/Privacy
  • Digital Transformation
  • Regulatory Control

The security/privacy agenda is the most visible, as organisations are extremely exposed to reputational risk if there are security breaches. An organisation needs to know what data it has and where it is, and who has access to it, in order to protect it.

Digital transformation means that every process is a digital-first process (or ‘straight-through-processing’ in the parlance of financial services). Human intervention should only be required to handle exceptions. And it means that the capabilities of the entire enterprise need to be brought to bear in order to provide a consistent connected customer experience.

For regulated industries, government regulators are now insisting that firms govern their data throughout that data’s entire lifecycle, not only from a security/privacy compliance perspective, but also from the perspective of being able to properly aggregate and report on regulated data sets.

The same governance principles, policies, processes and standards within an enterprise should underpin all three drivers – hence the increasing focus on establishing the role of ‘chief data officer’ within organisations, and on resourcing that role to materially improve how firms govern their data.

Microservice Technology

Microservice technology is an evolution of the modularity in monolithic application design that started with procedures, evolved through object-oriented programming, and then moved to packages/modules (JARs, DLLs, etc.).

Along the way there were attempts to extend the metaphor to distributed systems – e.g., RPC, CORBA, SOA/SOAP and, most recently, RESTful APIs – in addition to completely different ‘message-driven’ approaches such as that advocated by the Reactive development community.

Unfortunately, until fairly recently, most applications behind distributed end-points were architecturally monolithic – i.e., complex applications that needed to go through significant build-test-deploy processes for even minor changes, making it very difficult to adapt these applications in a timely manner to external change factors, such as integrations.

The microservices movement is a reaction to this, where the goal is to be able to deploy microservices as often as needed, without the risk of breaking the entire application (or, if something does break, with a simple rollback process). In addition, microservice architectures are inherently amenable to horizontal scaling, a key factor behind their use within internet-scale technology companies.

So, microservices are an architectural style that favours agile, distributed deployment.

As such, one benefit behind the use of microservices is that it allows teams, or individuals within teams, to take responsibility for all aspects of the microservice over its lifetime. In particular, where microservices are exposed to external teams, there is an implied commitment from the team to continue to support those external teams throughout the life of the microservice.

A key aspect of microservices is that they are fairly lightweight: the developer is in control. There is no need for specific heavyweight infrastructure – in fact, microservices favour anti-fragile architectures running on abundant low-cost infrastructure.

Open standards such as OSGi and abstractions such as Resource Oriented Computing allow microservices to participate in a governed, developer-driven context. And in the default (simplest) case, microservices can be exposed using plain-old RESTful standards, which every web application developer is at least somewhat familiar with.

Data Governance + Microservices = Enterprise Building Blocks

Combining the benefits of both data governance and microservices means that firms can, for the first time, start building up a real catalog of enterprise-reusable building blocks – but without the need for a traditional ESB, or traditional ESB governance. Microservices are developed in response to developer needs (perhaps influenced by data governance standards), and data standards can be used to describe, in an enterprise context, what those (exposed) microservices do.

Because microservices technologies allow ‘smart endpoints’ to be easily created and integrated into an application architecture, the need for a central ‘bus’ is eliminated. Developers can create many endpoints with limited complexity overhead, and over time can converge these into a small number of common services.
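
As a sketch of the ‘smart endpoints, dumb pipes’ idea, one microservice can consume another directly over plain HTTP with nothing but a standard client and no bus in between. The service URL below is hypothetical, and the example assumes Java 11+ for java.net.http:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class CustomerClient {
        private final HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))   // fail fast, don't queue
                .build();

        // Calls a (hypothetical) customer microservice directly; the 'pipe'
        // is plain HTTP, and all the intelligence sits in the endpoints.
        public String fetchCustomer(String id) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://customer-service/customers/" + id))
                    .timeout(Duration.ofSeconds(2))      // slow = failed
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }

Routing, retry and fallback logic live in the endpoints themselves; the network is just a transport.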

With respect to the Service Registry function provided by ESBs, the new breed of API Management tools may be sufficient to provide any lookup/resolution capabilities required (above and beyond those provided by the microservice architecture itself). API Management tools also keep complexity out of API development by taking care of monitoring, analytics, authentication, protocol conversion and basic throttling capabilities – for those APIs that require those capabilities.

Culturally, however, microservices require a collaborative approach to software development and evolution, with minimal top-down command-and-control intervention. Data governance, on the other hand, is necessarily driven top-down. So there is a risk of cultural conflict between top-down data governance and bottom-up microservice delivery: both sides need to be sensitive to the needs of the other, and be prepared to make compromises occasionally.

In conclusion, the ESB is dead…but long live (m)SOA.


Microservice Principles and Enterprise IT Architecture

The idea that the benefits embodied in a microservices approach to solution architecture are relevant to enterprise architecture is a solid one.

In particular, it allows bottom-up, demand-driven solution architectures to evolve, while providing a useful benchmark to assess if those architectures are moving in a way that increases the organization’s ability to manage overall complexity (and hence business agility).

Microservices cannot be mandated top-down the same way Services were intended to be. But fostering a culture and developing an IT strategy that encourages the bottom-up development of microservices will have a significant sustainable positive impact on a business’s competitiveness in a highly digital environment.

I fully endorse all the points Gene has made in his blog post.

Form Follows Function

[Image: Julia set fractal]

Ruth Malan is fond of noting that “design is fractal”. In a comment on her post “We Just Stopped Talking About Design”, she observed:

We need to get beyond thinking of design as just a do once, up-front sort of thing. If we re-orient to design as something we do at different levels (strategic, system-in-context, system, elements and mechanisms, algorithms, …), at different times (including early), and iteratively and throughout the development and evolution of systems, then we open up the option that we (can and should) design in different media.

This fractal nature is illustrated by the fact that software systems and systems of systems belonging to an organization exist within an ecosystem dominated by that organization which is itself a system of systems of the social kind operating within a larger ecosystem (i.e. the enterprise). Just as structure follows strategy then becomes a constraint on strategy…


The role of microservices in Enterprise Architectural planning

Microservices are gaining a lot of popularity as an architectural construct, and the term is slowly but surely finding its place in the lexicon of developers and software architects. The precise definition of ‘microservices’ varies (it is not a formal concept in computer science and, to my knowledge, is not grounded in any formal mathematical abstraction), but the basic concept of a microservice as a reusable, lightweight architectural component that does one thing and does it extremely well is a reasonable baseline.

What do ‘Microservices’ mean to organisations used to understanding their world in terms of Applications? What about all the money some organisations have already spent on Service Oriented Architecture (SOA)? Have all the Services now been supplanted by Microservices? And does all this stuff need to be managed in a traditional command-and-control way, or is it possible they could self-manage?

The answers to these questions are far from reaching consensus, but what follows is an approach, taken from a pragmatic enterprise perspective, that may help architects and planners begin to rationalise these concepts.

Note that the definitions below do not focus on technologies so much as scenarios.

Applications

Applications are designed to meet the needs of a specific set of users for a specific set of related tasks or activities. Applications should not be shared in an enterprise sense; this means applications must not expose any reusable services.

However, application functionality may depend on services exposed by many domains, and applications should use enterprise services to the fullest extent possible. To that extent, applications must focus on orchestrating rather than implementing services and microservices, even though it is generally through the act of building applications that services and microservices are identified and created.
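
Schematically – all names here are hypothetical – the distinction is that an application composes services exposed by domains, while implementing no domain logic of its own:

    // Contracts exposed by two different domains (hypothetical names).
    interface PaymentsService {
        void debit(String accountId, long amountInCents);
    }

    interface NotificationService {
        void notifyCustomer(String customerId, String message);
    }

    // The application owns the user-facing workflow and orchestrates the
    // domain services, but contains no domain logic of its own.
    class CheckoutApplication {
        private final PaymentsService payments;
        private final NotificationService notifications;

        CheckoutApplication(PaymentsService payments, NotificationService notifications) {
            this.payments = payments;
            this.notifications = notifications;
        }

        void checkout(String customerId, String accountId, long amountInCents) {
            payments.debit(accountId, amountInCents);
            notifications.notifyCustomer(customerId, "Payment received");
        }
    }

Seen this way, noticing which operations applications repeatedly ask of a domain is precisely how candidate services and microservices get identified.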

Within a given domain, an Application should make extensive use of the microservices available in that domain.

Applications require a high degree of governance: i.e., for a given user base, the intent to build and maintain an application for a given purpose needs to be explicitly funded – and the domain that that application belongs to must be unambiguous. (But noting that the functionality provided by an application may leverage the capabilities of multiple domains.)

Services

Services represent the exposed interfaces (in a technical sense) of organizational units or functional domains. Services represent how that domain wants information or services to be consumed and/or published with respect to other domains.

Services represent a contract between that domain and other domains, and reflect the commitment the domain has to the efficient operation of the enterprise as a whole. Services may orchestrate other services (cross-domain) or microservices (intra-domain). The relationship between applications and services is asymmetric: if a service has a dependency on an application, then the architecture needs reviewing.

Services require a high degree of governance: i.e., the commitments/obligations that a domain has towards other domains (and vice versa) need to be formalised and not left to individual projects to work out. This means some form of service design methodology, including data governance, is required – most likely in parallel with application development.

Microservices

Within a domain, many microservices may be created to support how that domain builds or deploys functionality. Microservices are optimised for the domain’s performance, and not necessarily for the enterprise, and do not depend upon microservices in other domains.

Applications that are local to a domain may use local microservices directly, but microservices should be invisible to applications or services in other domains.

However, some microservices may be exposed as Services by the domain where there is seen to be value in doing so. This imposes an external obligation on the domain, which is enforced through planning and resourcing.

Microservices require a low degree of governance: the architectural strategy for a domain will define the extent to which microservices form part of the solution architecture for a domain. Microservices developers choose to make their services available and discoverable in ways appropriate to their solution architecture.

In fact, evidence of a successful microservices based approach to application development is an indicator of a healthy technical strategy. Rather than require microservices to be formally planned, the existence of microservices in a given domain should be a hint as to what services could or should be formally exposed by the domain to support cross-divisional goals.

It is quite unlikely that meaningful (resilient, robust, scalable) Services can be identified and deployed without an underlying microservices strategy. Therefore, the role of a domain architect should be to enable a culture of microservice development, knowing that such a culture will directly support strategic enterprise goals.

Domains

A domain consists of a collection of applications, services and microservices, underpinning a business capability. It is an architectural partition of the organisation, which can be reflected in the firm’s organisational structure. It encapsulates processes, data and technology.

In particular, domains represent business capabilities and are clusters of business and technology – or, equivalently, business function and technology.

The organisational structure as it relates to domains must reflect the strategy of the firm; without this, any attempt at developing a technology strategy will likely fail to achieve the intended outcome.

So, the organisational structure is key, as without appropriate accountability and a forward-looking vision, the above concepts will not usefully survive ad-hoc organisational changes.

While organisational hierarchies may change, what a business actually does will not change very often (except as part of major transformation efforts). And while different individuals may take responsibility for different domains (or sets of domains), if an organisational change materially impacts the architecture, then architectural boundaries and roadmaps need to change accordingly so that they re-align with the new organisational goals.

Summary

Partitioning an organisation’s capabilities is a necessary prerequisite to developing a technology strategy. A technology strategy can be expressed in terms of domains, applications, services and (the presence of) microservices.

The key is not to be overly prescriptive: allow innovation and creativity within and across domains, while providing sufficient constraints to keep a lid on complexity. The presence of microservices and a viable microservice strategy within a domain can be used as a good proxy for alignment with the enterprise technology strategy.


The vexing question of the value of (micro)services

In an enterprise context, the benefit and value of developing distributed systems vs monolithic systems is, I believe, key to whether large organisations can thrive/survive in the disruptive world of nimble startups nipping at their heels.

The observed architecture of most mature enterprises is a rat’s nest of integrated systems and platforms, resulting in high total cost of ownership, reduced ability to respond to (business) change, and a significant inability to leverage the single most valuable asset of most modern firms: their data.

In this context, what exactly is a ‘rat’s nest’? In essence, it is a situation where dependencies between systems are largely unknown and/or ungoverned, the impact of changes from one system to another is unpredictable, and the cost of rigorously testing such impacts outweighs the economic benefit of the business need requiring the change in the first place.

Are microservices and distributed architectures the answer to addressing this?

Martin Fowler recently published a very insightful view of the challenges of building applications using a microservices-based architecture (essentially where more components are out-of-process than are in-process):

http://martinfowler.com/articles/distributed-objects-microservices.html

This led to a discussion on LinkedIn, prompted by that post, questioning the value/benefit of service-based architectures:

Fears for Tiers – Do You Need a Service Layer?

The upshot is that there is no agreed solution pattern addressing the challenge of managing the complexity of the inherently distributed nature of enterprise systems in a cost-effective, scalable way. And therein lies the opportunity for enterprise architects.

It seems evident that the existing point-to-point, use-case-based approach to developing enterprise systems cannot continue. But what is the alternative?

These blog posts will delve into this topic a lot more over the coming weeks, hopefully with some useful findings that enterprises can apply with success.

 
