Strategic Theme # 4/5: Systems Theory & Systems Thinking

[tl;dr Systems thinking is the ability to look at the whole as well as the parts of any dynamic system, and to understand the consequences/impacts of decisions on the whole or on those parts. Understanding and applying systems thinking in software-enabled businesses is in everyone’s interest.]

First things first: I am not an expert in systems theory or systems thinking. But I am learning. Many of the ideas resonate with my personal experience and work practices, so I guess I am a Systems Thinker by nature. The corollary, it seems, is that not everyone is a Systems Thinker by nature.

Enterprise Architecture is, in essence, a discipline within the general field of systems theory and systems thinking. It attempts to explain or communicate a system from many perspectives, explicitly looking at the whole as well as the parts.

As every enterprise architect knows, it is impossible to communicate all perspectives and viewpoints in a single view: the concept of a ‘thing’ in enterprise architecture can be very amorphous, a bit like Schrödinger’s Cat, only tangible when you look at it. So meta-models such as ArchiMate and the Zachman Framework have been devised to help architects manage this complexity.

But this post isn’t about enterprise architecture: the fact is, many complex problems faced by businesses today can only be solved by systems thinking across the board. It cannot be left to a solitary group of ‘enterprise architects’ to try to bring everything together. Rather, anybody involved in business change, continuous improvement (kaizen) or strategy setting must be versed in systems thinking. At the very least, it is this understanding that will allow people to know when and why to engage with enterprise architects.

An excellent example of where systems thinking would be particularly useful is in financial services regulatory reform: regulations impact the entire lifecycle, front-to-back, of every financial product. The sheer volume of regulatory change makes it a practical impossibility to address all regulations all the time, while still operating a profitable business in a competitive environment. The temptation to solve known, specific requirements one-by-one is high, but without systems thinking, complexity will creep in and sustainable change becomes a real challenge.

Although I have a lot to learn about systems thinking, I have found the book ‘Thinking in Systems’ by Donella H. Meadows to be a very accessible introduction to the subject (recommended by Adrian Cockcroft, and as such a strong influence within the DevOps community). I am also seeing some excellent views on the subject coming out of established architecture practitioners such as Avancier and Tetradian, where challenges and frustrations around the conventional IT-centric approach to enterprise architecture are vented and addressed in very informative ways.
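One of the book’s core ideas is that systems are built from stocks, flows and feedback loops. A minimal sketch of a balancing feedback loop on a stock follows; the function and numbers are illustrative, not taken from the book:

```python
def simulate(initial_stock, desired_stock, adjust_fraction, steps):
    """Balancing feedback loop: each step, the inflow closes a fixed
    fraction of the gap between the goal and the current stock level."""
    stock = initial_stock
    history = [stock]
    for _ in range(steps):
        gap = desired_stock - stock        # perceived discrepancy
        stock += adjust_fraction * gap     # corrective inflow
        history.append(stock)
    return history

# The gap shrinks geometrically, so the stock converges on the goal
# without overshooting (for adjust_fraction < 1).
levels = simulate(initial_stock=20.0, desired_stock=100.0,
                  adjust_fraction=0.3, steps=25)
```

The interesting property for decision-makers is that behaviour (smooth convergence, oscillation, collapse) emerges from the loop structure, not from any single component.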

The point of this post is to emphasise that systems thinking (which breaks through traditional organisational silos and boundaries) applied across all professions and disciplines is what will help drive success in the future: culturally, it complements the goals of the ‘lean enterprise’ (strategic theme #1).

Many businesses struggling to maintain profitability in the face of rapid technology innovation, changing markets, increased competition and ever-rising regulatory obligations would, in my view, do well to embark on a program of education on aspects of this topic across their entire workforce. Those that do can reasonably expect to see a significantly greater return on their (technology) investments in the longer term.


Strategic Theme #2/5: Enterprise Modularity

[tl;dr Enterprise modularity comprises the state-of-the-art technologies and techniques that enable agile, cost-effective enterprise-scale integration and reuse of capabilities and services. Standards for these are slowly gaining traction in the enterprise.]

Enterprise Modularity is, in essence, Service Oriented Architecture but ‘all the way down’; it has evolved a lot over the last 15-20 years since SOA was first promoted as a concept.

Specifically, SOA has historically been focused on ‘services’ – or, more explicitly, end-points, the contracts they expose, and service discovery. It has been less concerned with the architectures of the systems behind the end-points, which seems logically correct but has practical implications that have limited the widespread adoption of SOA solutions.

SOA is a great idea that has often suffered epic failures when implemented as part of major transformation initiatives (less ambitious implementations may have met with more success, albeit limited to specific domains).

The challenge has been the disconnect between what developers build, what users/customers need, and what the enterprise desires – not just for the duration of a programme, but over time.

Enterprise modularity needs several factors to be in place before it can succeed at scale. Namely:

  • An enterprise architecture (business context) linked to strategy
  • A focus on data governance
  • Agile cross-disciplinary/cross-team planning and delivery
  • Integration technologies and tools consistent with development practices

Modularity standards and abstractions like OSGi, Resource Oriented Computing and RESTful computing enable the development of loosely coupled, cohesive modules, potentially deployable as stand-alone services, which can be orchestrated in innovative ways using many technologies – especially when linked with canonical enterprise data standards.
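To make the loose coupling concrete, here is a minimal in-process sketch of the registry pattern that OSGi standardises: modules publish implementations against a contract name, and consumers resolve the contract without ever importing the implementing module. The contract name and pricing example are hypothetical; real OSGi registries also track service lifecycle and ranking properties.

```python
class ServiceRegistry:
    """In-process sketch of an OSGi-style service registry."""

    def __init__(self):
        self._services = {}

    def register(self, contract, provider):
        # A module contributes an implementation of a named contract.
        self._services.setdefault(contract, []).append(provider)

    def resolve(self, contract):
        # A consumer asks for the contract, not a concrete class.
        providers = self._services.get(contract, [])
        if not providers:
            raise LookupError(f"no provider for {contract!r}")
        return providers[0]  # real registries rank by service properties

registry = ServiceRegistry()
registry.register("pricing.Quote",
                  lambda symbol: {"symbol": symbol, "px": 101.5})
quote = registry.resolve("pricing.Quote")("ACME")
```

Because the consumer depends only on the contract name, the provider module can be replaced or redeployed without touching consuming code, which is the essence of the modularity being described.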

The upshot of all this is that traditional ESBs are transforming: technologies like Red Hat Fuse ESB and open-source solutions like Apache ServiceMix provide powerful building blocks that allow ESBs to become first-class applications rather than simply integration hubs. (MuleSoft has not, to my knowledge, adopted an open modularity standard such as OSGi, so I believe it is not appropriate to use as the basis for a general application architecture.)

This approach allows for the eventual construction of ‘domain applications’ which, rather than being mere integration hubs, actually host the applications (or, more accurately, application components) in ways that make dependencies explicit.

Vendors such as CA Layer7, Apigee and Axway are prioritising API management over more traditional, heavy-weight ESB functions – essentially, anyone can build an API internal to their application architecture, but once you want to expose it beyond the application itself, an API management tool will be required. These can be implemented top-down once the bottom-up API architecture has been developed and validated. Enterprise or integration architecture teams should be driving the adoption and implementation of API management tools, as these (done well) can be non-intrusive with respect to how application development is done, provided application developers take a microservices-led approach to application architecture and design. The concepts of ‘dumb pipes’ and ‘smart endpoints’ are key here, to avoid putting inappropriate complexity in the API management layer or in the (traditional) ESB.
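The ‘dumb pipes, smart endpoints’ split can be sketched as follows. The gateway, route and endpoint names are hypothetical illustrations, not any vendor’s API: the point is that the pipe only authenticates and routes, while all business logic lives in the endpoint.

```python
def make_gateway(routes, api_keys):
    """'Dumb pipe': the gateway authenticates and routes, nothing more."""
    def handle(path, api_key, payload):
        if api_key not in api_keys:
            return {"status": 401}
        endpoint = routes.get(path)
        if endpoint is None:
            return {"status": 404}
        return {"status": 200, "body": endpoint(payload)}
    return handle

def settle_trade(payload):
    """'Smart endpoint': owns validation and the business rule."""
    if payload["qty"] <= 0:
        return {"error": "quantity must be positive"}
    return {"trade_id": payload["id"], "state": "SETTLED"}

gateway = make_gateway(routes={"/trades/settle": settle_trade},
                       api_keys={"secret-key-1"})
resp = gateway("/trades/settle", "secret-key-1", {"id": 42, "qty": 100})
```

If the settlement rule changes, only the endpoint changes; the API management layer stays generic, which is exactly the complexity boundary the paragraph above argues for.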

Microservices is a complex topic, as this post describes. But modules (along the lines defined by OSGi) are a great stepping stone, as they embrace microservice thinking without necessarily introducing distributed-systems complexity. Innovative technologies like Paremus Service Fabric, building on OSGi standards, help make the evolution from monolithic modular architectures to distributed (agile, microservice-based) architectures as painless as can reasonably be expected.

Other solutions, such as Cloud Foundry, offer a means of taking much of the complexity out of building green-field microservices solutions by standardising many of the platform components – however, Cloud Foundry, unlike OSGi, is not a formal open standard, and so carries risks. (It may become a de-facto standard, however, if enough PaaS providers use it as the basis for their offerings.)

The decisions as to which ‘PaaS’ solutions to build/adopt enterprise-wide will be a key consideration for enterprise CTOs in the coming years. It is vital that such decisions are made so that the chosen PaaS solution(s) directly support enterprise modularity (integration) goals linked to development standards and practices.

For 2015, it will be interesting to see how nascent enterprise standards such as described above evolve in tandem with IaaS and PaaS innovations. Any strategic choices made in these domains must consider the cost/impact of changing those choices: defining architectures that are too heavily dependent on a specific PaaS or IaaS solution limits optionality and is contrary to the principles of (enterprise) modularity.

This suggests that Enterprise modularity standards which are agnostic to specific platform implementation choices must be put in place, and different parts of the enterprise can then choose implementations appropriate for their needs. Hence open standards and abstractions must dominate the conversation rather than specific products or solutions.


How ‘uncertainty’ can be used to create a strategic advantage

[TL;DR This post outlines a strategy for dealing with uncertainty in enterprise architecture planning, with specific reference to regulatory change in financial services.]

One of the biggest challenges anyone involved in technology faces is balancing the need to address the immediate requirement against the need to prepare for future change, at the risk of over-engineering a solution.

The wrong balance over time results in complex, expensive legacy technology that ends up being an inhibitor to change, rather than an enabler.

It is not unreasonable to expect that, over time, and with appropriate investment, any firm should have significant IT capability that can be brought to bear on a multitude of challenges or opportunities – even those not thought of at the time.

Unfortunately, most legacy systems are so optimised to solve the specific problem they were commissioned to solve, they often cannot be easily or cheaply adapted for new scenarios or problem domains.

In other words, as more functionality is added to a system, the ability to change it diminishes rapidly:

Agility vs Functionality

The net result is that the technology platform already in place is optimised to cope with existing business models and practices, but generally incapable of (cost-effectively) adapting to new business models or practices.

Addressing this requires some forward thinking: specifically, what capabilities need to be developed to support where the business needs to be, given the large number of known unknowns? (Accepting that everybody is in the same boat when it comes to dealing with unknown unknowns.)

These capabilities are generally determined by external factors – trends in the specific sector, technology, society, economics, etc, coupled with internal forward-looking strategies.

An excellent example of where a lack of focus on capabilities has caused structural challenges is the financial industry. A recent conference at the Bank for International Settlements (BIS) highlighted the following capability gaps in how banks do their IT – at least as it relates to regulators’ expectations:

  • Data governance and data architecture need to be optimised in order to enhance the quality, accuracy and integrity of data.
  • Analytical and reporting processes need to facilitate faster decision-making and direct availability of the relevant information.
  • Processes and databases for the areas of finance, control and risk need to be harmonised.
  • Increased automation of the data exchange processes with the supervisory authorities is required.
  • Fast and flexible implementation of supervisory requirements by business units and IT necessitates a modular and flexible architecture and appropriate project management methods.

The interesting aspect about the above capabilities is that they span multiple businesses, products and functional domains. Yet for the most part they do not fall into the traditional remit of typical IT organisations.

The current state of technology is capable of delivering these requirements from a purely technical perspective: these are challenging problems, but for the most part they have already been solved, or are being solved, in other industries or sectors – sometimes at an even larger scale than banks have to deal with. However, finding talent is, and remains, an issue.

The big challenge, rather, is in ‘business-technology’: that amorphous space that is not quite business but not quite (traditional) IT either. This is the capability that banks need to develop: the ability to interpret what outcomes a business requires, and map that not only to projects, but also to capabilities – both business capabilities and IT capabilities.

So, what core capabilities are being called out by the BIS? Here’s a rough initial pass (by no means complete, but hopefully indicative):

  • Data Governance: increased focus on Data Ontologies, Semantic Modelling, Linked/Open Data (RDF), Machine Learning, Self-Describing Systems, Integration
  • Analytics & Reporting: application of Big Data techniques for scaling timely analysis of large data sets, not only for reporting but also as part of feedback loops into automated processes; a data-science approach to analytics
  • Processes & Databases: use of meta-data in exposing capabilities that can be orchestrated by many business-aligned IT teams to support specific end-to-end business processes; databases only exposed via prescribed services; model-driven product development; business architecture
  • Automation of Data Exchange: automation of all report generation, approval, publishing and distribution (i.e., throwing people at the problem won’t fix this)
  • Fast & Flexible Implementation: adoption of modular-friendly practices such as portfolio planning, domain-driven design, enterprise architecture, agile project management, and microservice (distributed, cloud-ready, reusable, modular) architectures

It should be obvious from this list that it will not be possible or feasible to outsource these capabilities. Individual capabilities are not developed in isolation: they complement and support each other. Therefore they need to be developed and maintained in-house – although vendors will certainly have a role in successfully delivering these capabilities. And these skills are quite different from the skills existing business & IT folks have (although some are evolutionary).

Nobody can accurately predict what systems need to be built to meet the demands of any business in the next 6 months, let alone 3 years from now. But the capabilities that separate the winners from the losers in a given sector are easier to identify. Banks in particular are under massive pressure: regulatory change, major shifts in market dynamics, competition from smaller, more nimble alternative financial service providers, and rapidly diminishing technology infrastructure costs levelling the playing field for new contenders.

Regulators have, in fact, given banks a lifeline: those that heed the regulators and take appropriate action will actually be in a strong position to deal competitively with significant structural change to the financial services industry over the next 10+ years.

The changes (client-centricity, digital transformation, regulatory compliance) that all knowledge-based industries (especially finance) will go through will depend heavily on all of the above capabilities. So this is an opportunity for financial institutions to get 3 for the price of 1 in terms of strategic business-IT investment.


Microservice Principles and Enterprise IT Architecture

The idea that the benefits embodied in a microservices approach to solution architecture are relevant to enterprise architecture is a solid one.

In particular, it allows bottom-up, demand-driven solution architectures to evolve, while providing a useful benchmark to assess if those architectures are moving in a way that increases the organization’s ability to manage overall complexity (and hence business agility).

Microservices cannot be mandated top-down the same way Services were intended to be. But fostering a culture and developing an IT strategy that encourages the bottom-up development of microservices will have a significant sustainable positive impact on a business’s competitiveness in a highly digital environment.

I fully endorse all the points Gene has made in his blog post.

Form Follows Function

Julia Set Fractal

Ruth Malan is fond of noting that “design is fractal”. In a comment on her post “We Just Stopped Talking About Design”, she observed:

We need to get beyond thinking of design as just a do once, up-front sort of thing. If we re-orient to design as something we do at different levels (strategic, system-in-context, system, elements and mechanisms, algorithms, …), at different times (including early), and iteratively and throughout the development and evolution of systems, then we open up the option that we (can and should) design in different media.

This fractal nature is illustrated by the fact that software systems and systems of systems belonging to an organization exist within an ecosystem dominated by that organization which is itself a system of systems of the social kind operating within a larger ecosystem (i.e. the enterprise). Just as structure follows strategy then becomes a constraint on strategy…



The role of microservices in Enterprise Architectural planning

Microservices as an architectural construct is gaining a lot of popularity, and the term is slowly but surely finding its place among the lexicon of developers and software architects. The precise definition of ‘microservices’ varies (it is not a formal concept in computer science, as to my knowledge it is not grounded in any formal mathematical abstraction) but the basic concept of a microservice being a reusable lightweight architectural component doing one thing and doing it extremely well is a reasonable baseline.

What do ‘Microservices’ mean to organisations used to understanding their world in terms of Applications? What about all the money some organisations have already spent on Service Oriented Architecture (SOA)? Have all the Services now been supplanted by Microservices? And does all this stuff need to be managed in a traditional command-and-control way, or is it possible they could self-manage?

The answers to these questions are far from reaching consensus, but here follows an approach that may help architects and planners begin to rationalise all these concepts, taken from a pragmatic enterprise perspective.

Note that the definitions below do not focus on technologies so much as scenarios.

Applications

Applications are designed to meet the needs of a specific set of users for a specific set of related tasks or activities. Applications should not be shared in an enterprise sense. This means applications must not expose any re-usable services.

However, application functionality may depend on services exposed by many domains, and applications should use enterprise services to the fullest extent possible. To that extent, applications must focus on orchestrating rather than implementing services and microservices, even though it is generally through the act of building applications that services and microservices are identified and created.

Within a given domain, an Application should make extensive use of the microservices available in that domain.

Applications require a high degree of governance: i.e., for a given user base, the intent to build and maintain an application for a given purpose needs to be explicitly funded – and the domain that the application belongs to must be unambiguous. (Noting that the functionality provided by an application may leverage the capabilities of multiple domains.)

Services

Services represent the exposed interfaces (in a technical sense) of organizational units or functional domains. Services represent how that domain wants information or services to be consumed and/or published with respect to other domains.

Services represent a contract between that domain and other domains, and reflect the commitment the domain has to the efficient operation of the enterprise as a whole. Services may orchestrate other services (cross-domain) or microservices (intra-domain). The relationship between applications and services is asymmetric: if a service has a dependency on an application, then the architecture needs reviewing.
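The asymmetry rule (applications may depend on services, never the reverse) can be checked mechanically once dependencies are declared. A hedged sketch with made-up component names, not a real architecture tool:

```python
def find_asymmetry_violations(dependencies, kinds):
    """dependencies: {component: set of components it calls}
    kinds: {component: 'application' or 'service'}
    A service that depends on an application breaks the asymmetry rule."""
    violations = []
    for source, targets in dependencies.items():
        for target in targets:
            if kinds[source] == "service" and kinds[target] == "application":
                violations.append((source, target))
    return violations

kinds = {"trade-blotter": "application",
         "pricing-svc": "service",
         "settlement-svc": "service"}
deps = {"trade-blotter": {"pricing-svc"},      # app orchestrating: fine
        "pricing-svc": set(),
        "settlement-svc": {"trade-blotter"}}   # the smell described above

violations = find_asymmetry_violations(deps, kinds)
```

Running such a check as part of architecture review makes ‘the architecture needs reviewing’ an automated signal rather than an after-the-fact discovery.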

Services require a high degree of governance: i.e., the commitments/obligations that a domain has towards other domains (and vice-versa) need to be formalised and not left to individual projects to work out. This means some form of Service Design methodology, including data governance, is required – most likely in parallel with application development.

Microservices

Within a domain, many microservices may be created to support how that domain builds or deploys functionality. Microservices are optimised for the domain’s performance, and not necessarily for the enterprise, and do not depend upon microservices in other domains.

Applications that are local to a domain may use local microservices directly, but microservices should be invisible to applications or services in other domains.

However, some microservices may be exposed as Services by the domain where there is seen to be value in doing so. This imposes external obligations on the domain, which are enforced through planning and resourcing.
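The visibility rule (cross-domain calls may only target formally exposed Services) can also be expressed as a simple check over declared calls. Component and domain names below are invented for illustration:

```python
def cross_domain_violations(calls, domain_of, exported):
    """calls: iterable of (caller, callee) pairs.
    domain_of: component -> owning domain.
    exported: components a domain has formally exposed as Services.
    A cross-domain call is only legitimate if the callee is exported."""
    return [(caller, callee) for caller, callee in calls
            if domain_of[caller] != domain_of[callee]
            and callee not in exported]

domain_of = {"risk-ui": "risk", "var-calc": "risk",
             "trade-store": "trading", "trade-feed": "trading"}
exported = {"trade-feed"}  # trading's formal Service contract

calls = [("risk-ui", "var-calc"),      # intra-domain microservice: fine
         ("var-calc", "trade-feed"),   # cross-domain via a Service: fine
         ("var-calc", "trade-store")]  # reaching into another domain: flag

violations = cross_domain_violations(calls, domain_of, exported)
```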

Microservices require a low degree of governance: the architectural strategy for a domain will define the extent to which microservices form part of the solution architecture for a domain. Microservices developers choose to make their services available and discoverable in ways appropriate to their solution architecture.

In fact, evidence of a successful microservices based approach to application development is an indicator of a healthy technical strategy. Rather than require microservices to be formally planned, the existence of microservices in a given domain should be a hint as to what services could or should be formally exposed by the domain to support cross-divisional goals.

It is quite unlikely that meaningful (resilient, robust, scalable) Services can be identified and deployed without an underlying microservices strategy. Therefore, the role of a domain architect should be to enable a culture of microservice development, knowing that such a culture will directly support strategic enterprise goals.

Domains

A domain consists of a collection of applications, services and microservices, underpinning a business capability. It is an architectural partition of the organisation, which can be reflected in the firm’s organisational structure. It encapsulates processes, data and technology.

In particular, domains represent business capabilities and are clusters of business and technology – or, equivalently, business function and technology.

The organisational structure as it relates to domains must reflect the strategy of the firm; without this, any attempt at developing a technology strategy will likely fail to achieve the intended outcome.

So, the organisational structure is key, as without appropriate accountability and a forward-looking vision, the above concepts will not usefully survive ad-hoc organisational changes.

While organisational hierarchies may change, what a business actually does will not change very often (except as part of major transformation efforts). And while different individuals may take different responsibilities for different domains (or sets of domains), if an organisation change materially impacts the architecture, then architectural boundaries and roadmaps need to change accordingly so that they re-align with the new organisational goals.

Summary

Partitioning an organisation’s capabilities is a necessary prerequisite to developing a technology strategy. A technology strategy can be expressed in terms of domains, applications, services and (the presence of) microservices.

The key is not to be overly prescriptive: allow innovation and creativity within and across domains, while providing sufficient constraints to keep a lid on complexity. The presence of microservices and a viable microservice strategy within a domain can be used as a good proxy for alignment with enterprise technology strategy.


Mining the gold in the legacy application hills

One recurring challenge that CIOs & CTOs have is talent retention: in the face of continuing and accelerating innovation in technology, and continued lowering of barriers to entry for those who want to get involved in technology, finding and retaining people who are professionally satisfied with looking after so-called ‘legacy’ technology is increasingly difficult.

The most visible impact is the inexorably rising cost of change, which CIOs usually deal with in the traditional way: by outsourcing or offshoring the work to locations where similar talent can be found, but cheaper.

But this tends not to solve the problem: rather, the ‘hard’ bits of the legacy applications are ignored in favour of writing new code that the current maintainers can understand. Fear, uncertainty and doubt then creep in when any of the ‘legacy’ software needs to change. The end result is increased complexity and no sustainable reduction in the cost of change.

Only a few years ago, mainframes were considered to be the legacy applications. Today, applications written 7+ years ago are the new legacy in most organisations – and, effectively, any application whose original architects and maintainers have moved on is now ‘legacy’.

With all the focus on new, exciting technologies, what technical strategy should be adopted to maximise valuable existing functionality, retain and engage development talent, and allow legacy architectures to fit seamlessly and sustainably into new, innovative (cloud/mobile-based) architectures?

Legacy applications have a significant advantage over new applications: they reflect what is effective and useful from a business perspective (even if their implementations do not reflect what is efficient). With the benefit of hindsight, it is possible to look at these applications through a modular lens, and define architectural boundaries of change that can be enforced through technology.

Such an approach can then lead to potential opportunities for decomposing complex monolithic applications, optimising infrastructure usage, deploying components onto anti-fragile infrastructure (elastic fabrics), and finally identifying potentially reusable business services and components that can form building blocks for new applications using innovative technologies.
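One common mechanism for this kind of incremental decomposition (not named in the original post, but consistent with it) is a strangler-style facade: a single routing point that makes the boundary of change explicit, sending each operation to a newly extracted module once it exists and to the monolith otherwise. The operations below are hypothetical:

```python
class StranglerFacade:
    """Routes operations to extracted modules, falling back to the
    legacy monolith, so decomposition can proceed one piece at a time."""

    def __init__(self, legacy_handler):
        self._legacy = legacy_handler
        self._extracted = {}

    def extract(self, operation, new_handler):
        # Declare that this operation is now served by a new module.
        self._extracted[operation] = new_handler

    def invoke(self, operation, *args):
        if operation in self._extracted:
            return self._extracted[operation](*args)
        return self._legacy(operation, *args)

def legacy_monolith(operation, *args):
    return f"legacy:{operation}"

facade = StranglerFacade(legacy_monolith)
facade.extract("price", lambda symbol: f"new-pricing:{symbol}")

still_legacy = facade.invoke("settle", "T1")   # served by the monolith
now_modular = facade.invoke("price", "ACME")   # served by the new module
```

Callers never change as functionality migrates, which is what allows useful working software to be maximised while talent works on the new components.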

In particular, it allows complexity to be managed, useful working software to be maximised, and talent to be retained, through providing a route to working with new, in-demand technologies.

In defining a technical strategy supporting an enterprise-wide digital strategy, this is a feasible, cost-manageable way to evolve the large portfolio of legacy applications that many large organisations have, both from the bottom up and from the top down.

The broad approach is well captured in this presentation from a global financial services firm, and the overall architectural approach is well described in this book, from 2010.

In the last few years, there have been significant advances in certain enabling technologies (around modularity, APIs, IaaS/PaaS, containers, HTML, etc) and increased focus on enabling architectural disciplines (like business, data, and solution architecture) that are making such a strategy feasible as part of a medium/long-term plan to address IT complexity.

In short, if firms do not have some strategy for dealing with their legacy technology, they will be unable to compete in the increasingly digital economy that is impacting almost every industry out there.


The Reactive Development Movement

A new movement, called ‘Reactive Development’, is taking shape:

http://readwrite.com/2014/09/19/reactive-programming-jonas-boner-typesafe

The rationale for creating a new ‘movement’, with its own manifesto, principles and related tools, vendors and evangelists, seems quite reasonable. It leans heavily on the principles of Agile, but extends them into the domain of Architecture.

The core principles of reactive development are around building systems that are:

  • Responsive
  • Resilient
  • Elastic
  • Message Driven

This list may grow, but it seems a solid starting point.
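As an illustration of the ‘message driven’ and ‘resilient’ principles, here is a minimal sketch of a bounded mailbox that sheds load rather than letting senders block on a slow receiver. This is a toy, not the Reactive Streams or Akka APIs:

```python
from collections import deque

class Mailbox:
    """Message-driven sketch: senders never block on the receiver; a
    bounded mailbox sheds load instead of cascading failure upstream."""

    def __init__(self, capacity):
        self._queue = deque()
        self._capacity = capacity
        self.dropped = 0

    def send(self, message):
        if len(self._queue) >= self._capacity:
            self.dropped += 1      # back-pressure signal, not a crash
            return False
        self._queue.append(message)
        return True

    def drain(self, handler):
        results = []
        while self._queue:
            results.append(handler(self._queue.popleft()))
        return results

box = Mailbox(capacity=3)
accepted = [box.send(n) for n in range(5)]   # last two messages shed
processed = box.drain(lambda n: n * n)
```

The design choice worth noting: responsiveness is preserved by making overload an explicit, observable outcome (the `dropped` counter) rather than an unbounded queue or a blocked caller.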

What this movement seems to have (correctly) realised is that building such systems isn’t easy and doesn’t come for free – so most folks tend to ignore these principles *unless* they are specific requirements of the application to be built. Usually, however, the above characteristics are what you wish your system had when you see your original ‘coherent’ architecture slowly falling apart as you continually accommodate other systems that depend on it (resulting in ‘stovepipe’ architectures, etc.).

IMO, this is the right time for the Reactive Movement. Most mature organisations are already feeling the pain of maintaining and evolving their stovepipe architectures, and many do not have a coherent internal technical strategy to fix this situation (or, if they do, it is often not matched with a corresponding business strategy or enabling organisational design – a subject for another post). And the rapidly growing number of SaaS platforms out there means that these principles are a concern for even the smallest startups, participating as part of a connected set of services delivering value to many businesses.

Most efforts around modularity (or ‘service-oriented architectures’) ultimately try to deliver on the above principles, but do not bring together the myriad skills and capabilities needed to make them successful.

So, I think the Reactive Development effort has the opportunity to bring together many architecture skills – namely process/functional (business) architects, data architects, integration architects, application architects and engineers. Today, while the individual disciplines themselves are improving rapidly, the touch-points between them are still, in my view, very weak indeed – and often fall apart completely when it gets to the developers on the ground (especially when they are geographically removed from the architects).

I will be watching this movement with great interest.
