Why IoT is changing Enterprise Architecture

[tl;dr The discipline of enterprise architecture as an aid to business strategy execution has failed for most organizations, but is finding a new lease of life in the Internet of Things.]

The strategic benefits of having an enterprise-architecture based approach to organizational change – at least in terms of business models and shared capabilities needed to support those models – have been the subject of much discussion in recent years.

However, enterprise architecture as a practice (as espoused by The Open Group and others) has never managed to break beyond its role as an IT-focused endeavor.

In the meantime, less technology-minded folks are beginning to discuss business strategy using terms like ‘modularity’, which is a first step towards bridging the gap between the business folks and the technology folks. And technology-minded folks are looking at disruptive business strategy through the lens of the ‘Internet of Things’.

Business Model Capability Decomposition

Just like manufacturing-based industries decomposed their supply-chains over the past 30+ years (driving an increasingly modular view of manufacturing), knowledge-based industries are going through a similar transformation.

Fundamentally, knowledge-based industries are built on the transfer and management of human knowledge or understanding. For example, when you pay for something, there is an understanding on both sides that the payment has happened. Technology allows such information to be captured and managed at scale.

But the ‘units’ of knowledge have only slowly been standardized, and knowledge-based organizations are incentivized to ensure they are the only ones able to act on the information they have gathered – with often disastrous social and economic consequences (e.g., the financial crisis of 2008).

Hence, regulators are stepping in to ensure that at least some of this ‘knowledge’ is available in a form that allows governments to ensure such situations do not arise again.

In the FinTech world, every service provided by big banks is being attacked by nimble competitors who can take advantage of new, more meaningful technology-enabled ways of engaging with customers, and who are willing to make at least some of this information more accessible so that they can participate in a more flexible, dynamic ecosystem.

These upstart FinTech firms often face a stark choice in order to succeed. Assuming they have cleared the first hurdle of actually having a product people want, at some point they must decide whether they are competing directly with the big banks, or whether they are providing a key part of the electronic financial knowledge ecosystem that big banks must (eventually) be part of.

In the end, what matters is their approach to data: how they capture it (i.e., ‘UX’), what they do with it, how they manage it, and how it is leveraged for competitive and commercial advantage (without falling foul of privacy laws etc). Much of the rest is noise from businesses trying to get attention in an increasingly crowded space.

Historically, many ‘enterprise architecture’ or strategy departments have failed to have impact because firms do not treat data (or information, or knowledge) as an asset, but rather as something to be freely and easily created and shunted around, leaving a trail of complexity and opportunity cost wherever it goes. This attitude must change before ‘enterprise architecture’ as a concept has a role in boardroom discussions, and before firms change how they position IT in their business strategy. (Regulators are certainly driving this for certain sectors, such as finance and health.)

Internet of Things

Why does the Internet of Things (IoT) matter, and where does it fit into all this?

At one level, IoT presents a large opportunity for firms that see the potential of its underpinning technologies: they can bring a significant level of convenience and safety to many aspects of a modern, digitally enabled life.

But fundamentally, IoT is about having a large number of autonomous actors collaborating in some way to deliver a particular service, which is of value to some set of interested stakeholders.

But this sounds a lot like what a ‘company’ is. So IoT is, in effect, a company where the actors are technology actors rather than human actors. They need some level of orchestration. They need a common language for communication. And they need laws/protocols that govern what’s permitted and what is not.

If enterprise architecture is all about establishing the functional, data and protocol boundaries between discrete capabilities within an organization, then EA for IoT is the same thing but for technical components, such as sensors or driverless cars, etc.

So IoT seems a much more natural fit for EA thinking than traditional organizations, especially as, unlike departments in traditional ‘human’ companies, technical components thrive on standards: they like fixed protocols, fixed functional boundaries and well-defined data sets. And while the ‘things’ themselves may not be organic, their behavior in such an environment could exhibit ‘organic’ characteristics.
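To make the ‘fixed protocols, fixed functional boundaries and well-defined data sets’ point concrete, below is a minimal sketch (in Java, using entirely hypothetical names such as TemperatureSensor, Reading and Orchestrator) of what such a boundary for an IoT actor might look like, with an orchestrator that depends only on the contract rather than on any concrete device:

```java
import java.time.Instant;
import java.util.List;

// Well-defined data set: the shape of a reading is fixed by the contract.
record Reading(String deviceId, Instant takenAt, double celsius) {}

// Fixed functional boundary: what a sensor does, not how any given device does it.
interface TemperatureSensor {
    Reading read();
}

// The orchestrator plays the coordinating role that management plays in a 'human' company:
// it depends only on the contract above, never on a concrete device.
class Orchestrator {
    double averageCelsius(List<TemperatureSensor> sensors) {
        return sensors.stream()
                .map(TemperatureSensor::read)
                .mapToDouble(Reading::celsius)
                .average()
                .orElse(Double.NaN);
    }
}
```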

So IoT and the benefits of an enterprise architecture-oriented approach to business strategy do seem like a match made in heaven.

The Converged Enterprise

For information-based industries in particular, there appears to be an inevitable convergence: as IoT and the standards, protocols and governance underpinning it mature, so too will the ‘modular’ aspects of existing firms’ operating models, and the eco-system of technology-enabled platforms will mature along with them. Firms will be challenged to deliver value by picking the most capable components in the eco-system around which to deliver unique service propositions – and the most successful of those solutions will themselves become the basis for future eco-systems (a Darwinian view of software evolution, if you will).

The converged enterprise will consist of a combination of human and technical capabilities collaborating in well-defined ways. Some capabilities will be highly human, others highly technical, some will be in-house, some will be part of a wider platform eco-system.

In such an organization, enterprise architects will find a natural home. In the meantime, enterprise architects must choose their starting point, behavioral or structural: focusing first on decomposed business capabilities and finishing with IoT (behavioral->structural), or focusing first on IoT and finishing with business capabilities (structural->behavioral).

Technical Footnote

I am somewhat intrigued by how the OSGi Alliance has, over the years, shifted its focus from basic Java applications, to discrete embedded systems, to enterprise systems, and now to IoT. OSGi (disappointingly, IMO) has had a patchy record in changing how firms build enterprise software – much of this is down to a culture of undisciplined dependency management in the software industry, which is very, very hard to break.

IoT raises the bar on dependency management: you simply cannot comprehensively test software updates to potentially hundreds of thousands or millions of components running that software. The ability to reliably change modules without forcing a test of all dependent instantiated components is a necessity. As enterprises get more complex and digitally interdependent, standards such as OSGi will become more critical to the plumbing of enabling technologies. But as evidenced above, for folks who have tried and failed to change how enterprises treat their technology-enabled business strategy, it’s a case of FIETIOT – Failed in Enterprise, Trying IoT. And this, indeed, seems a far more rational use of an enterprise architect’s time, as things currently stand.
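To ground the dependency-management point, here is a hedged sketch of the kind of versioned module boundary OSGi encourages (package, bundle and version numbers are hypothetical): a consumer binds to a versioned package contract rather than to a concrete implementation, so a provider can be patched or swapped within a compatible version range without forcing every dependent component to be re-tested.

```java
// Hypothetical exported contract package; names are illustrative only.
package com.example.payments.api;

public interface PaymentService {
    /** Submits a payment and returns a provider-assigned reference. */
    String submit(String fromAccount, String toAccount, long amountMinorUnits);
}

// Illustrative OSGi manifest headers (not generated by this file):
//
//   Provider bundle:  Export-Package: com.example.payments.api;version="1.2.0"
//   Consumer bundle:  Import-Package: com.example.payments.api;version="[1.2,2.0)"
//
// Under OSGi semantic-versioning conventions, any compatible 1.x provider at or above 1.2
// satisfies the consumer's range; only a major (breaking) version change forces consumers
// to react and re-test.
```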


Achieving modularity: functional vs volatility decomposition

Enterprise architecture is all about managing complexity. Many EA initiatives tend to focus on managing IT complexity, but there is only so much that can be done there before it becomes obvious that IT complexity is, for the most part, a direct consequence of enterprise complexity. To recap, complexity needs to be managed in order to maintain agility – the ability for an organisation to respond (relatively) quickly and efficiently to changes in markets, regulations or innovation, and to continue to do this over time.

Enterprise complexity can be considered to be the activities performed and resources consumed by the organisation in order to deliver ‘value’, a metric usually measured through the ability to maintain (financial) income in excess of expenses over time.

Breaking down these activities and resources into appropriate partitions that allow holistic thinking and planning to occur is one of the key challenges of enterprise architecture, and there are various techniques to do this.

Top-Down Decomposition

The natural approach to decomposition is to first understand what an organisation does – i.e., what are the (business) functions that it performs. Simply put, a function is a collection of data and decision points that are closely related (e.g., ‘Payments’ is a function). Functions typically add little value in and of themselves – rather, they form part of an end-to-end process that delivers value for a person or legal entity in some context. For example, a payment on its own means nothing: it is usually performed in the context of a specific exchange of value or service.

So a first course of action is to create a taxonomy (or, more accurately, an ontology) to describe the functions performed consistently across an enterprise. Then, various processes, products or services can be described as a composition of those functions.
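As a minimal illustration of this idea (all names here are hypothetical), a function taxonomy and a process described as a composition of those functions might look like the following sketch:

```java
import java.util.List;

// A shared enterprise taxonomy of business functions (deliberately tiny).
enum BusinessFunction {
    CUSTOMER_ONBOARDING, CREDIT_ASSESSMENT, PAYMENTS, STATEMENT_GENERATION
}

// A process (or product, or service) described as a composition of functions.
record BusinessProcess(String name, List<BusinessFunction> composedOf) {}

class FunctionTaxonomyDemo {
    public static void main(String[] args) {
        var personalLoan = new BusinessProcess("Personal Loan Origination",
                List.of(BusinessFunction.CUSTOMER_ONBOARDING,
                        BusinessFunction.CREDIT_ASSESSMENT,
                        BusinessFunction.PAYMENTS));
        // The same function (e.g. PAYMENTS) recurs across many processes and products;
        // the shared taxonomy makes that reuse explicit and comparable across the enterprise.
        System.out.println(personalLoan);
    }
}
```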

If we accept (and this is far from accepted everywhere) that EA is focused on information systems complexity, then EA is not responsible for the complexity relating to the existence of processes, products or services. The creation or destruction of these are usually a direct consequence of business decisions. However, EA should be responsible for cataloging these, and ensuring these are incorporated into other enterprise processes (such as, for example, disaster recovery or business continuity processes). And EA should relate these to the functional taxonomy and the information systems architecture.

This can get very complex very quickly due to the sheer number of processes, products and services – including their various variations – most organisations have. So it is important to partition or decompose the complexity into manageable chunks to facilitate meaningful conversations.

Enterprise Equivalence Relations

One way to do this at enterprise level is to group functions into partitions (aka domains) according to synergy or autonomy (as described by Roger Sessions), for all products/services supporting a particular business. This approach is based on the mathematical concept of equivalence. Because different functions in different contexts may have differing equivalence relationships, functions may appear in multiple partitions. One role of EA is to assess and validate whether those functions are actually autonomous, or whether there is the potential to group apparently duplicate functions into a new partition.
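A toy illustration of partitioning-by-equivalence, assuming a deliberately simplistic synergy key (the product line a function supports) purely for demonstration, might look like this:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A function observed in a particular business context; names are hypothetical.
record FunctionInContext(String function, String productLine) {}

class PartitionDemo {
    public static void main(String[] args) {
        List<FunctionInContext> functions = List.of(
                new FunctionInContext("Pricing", "Mortgages"),
                new FunctionInContext("Payments", "Mortgages"),
                new FunctionInContext("Pricing", "Credit Cards"),
                new FunctionInContext("Fraud Detection", "Credit Cards"));

        // The synergy key acts as the equivalence relation: same key => same partition (domain).
        Map<String, List<FunctionInContext>> partitions = functions.stream()
                .collect(Collectors.groupingBy(FunctionInContext::productLine));

        // Note that 'Pricing' appears in two partitions - echoing the point above that
        // apparently duplicate functions may later be consolidated into a partition of their own.
        partitions.forEach((domain, fns) -> System.out.println(domain + " -> " + fns));
    }
}
```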

Once partitions are identified, it is possible to apply ‘traditional’ EA thinking to a particular partition, because that partition is of a manageable size. By ‘traditional’ EA, I mean applying Zachman, TOGAF, PEAF, or any of the myriad methodologies/frameworks that are out there. More specifically, at that level, it is possible to establish a meaningful information systems strategy or goal for a particular partition that is directly supporting business agility objectives.

The Fallacy of Functional Decomposition

Once you get down to the level of a partition, functional decomposition becomes a less useful guide to architecting solutions. The natural tendency for architects would be to build reusable components or services that realise the various functions that comprise the partition. In fact, this may be the wrong thing to do. As Juval Löwy demonstrates in his excellent webinar, it may result in more complexity, not less (and hence less agility).

When it comes to software architecture, the real reason to modularise your architecture is to manage volatility or uncertainty – and to ensure that volatility in one part of the architecture does not unnecessarily negatively impact another part of the architecture over time. Doing this allows agility to be maintained, so volatile parts of the application can, in fact, change frequently, at low impact to other parts of the application.

When looking at a software architecture through this lens, a quite different set of components/modules/services may become evident than those which would otherwise be obvious when using functional decomposition – the example in the webinar demonstrates this very well. A key argument used by Juval in his presentation is that (to paraphrase him somewhat) functions are, in general, highly dependent on the context in which they are used, so splitting them out into separate services may require making often impossible assumptions about all the possible contexts in which those functions could be invoked.
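A minimal sketch of volatility-based decomposition – not Löwy’s own worked example, and using hypothetical names – might isolate a volatile concern (how customers are notified) behind a stable seam, so that channel changes do not ripple through the rest of the system:

```java
// The volatile part: how a customer is notified can change frequently.
interface NotificationChannel {
    void send(String customerId, String message);
}

class EmailChannel implements NotificationChannel {
    public void send(String customerId, String message) { /* SMTP details would live here */ }
}

class PushChannel implements NotificationChannel {
    public void send(String customerId, String message) { /* push-provider details would live here */ }
}

// The stable part: callers depend on this class and the NotificationChannel seam only,
// so swapping or adding channels does not ripple through the rest of the system.
class CustomerNotifier {
    private final NotificationChannel channel;

    CustomerNotifier(NotificationChannel channel) {
        this.channel = channel;
    }

    void notifyPaymentReceived(String customerId) {
        channel.send(customerId, "Your payment has been received.");
    }
}
```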

In this sense, identified components, modules or services can be considered to be providing options in terms of what is done, or how it is done, within the context of a larger system with parts of variable volatility. (See my earlier post on real options in the context of agility to understand more about options in this context.)

Partitions as Enterprise Architecture

When each partition is considered with respect to its relationship with other partitions, there is a lot of uncertainty around how different partitions will evolve. To allow for maximum flexibility, every partition should assume each other partition is a volatile part of their architecture, and design accordingly for this. This allows each partition to evolve (reasonably) independently with minimum fixed co-ordination points, without compromising the enterprise architecture by having different partitions replicate the behaviours of partitions they depend on.

This then allows:

  • Investment to be expressed in terms of impact to one or more partitions
  • Partitions to establish their own implementation strategies
  • Agile principles to be agreed on a per partition basis
  • Architectural standards to be agreed on a per partition basis
  • Partitions to define internally reusable components relevant to that partition only
  • Partitions to expose partition behaviour to other partitions in an enterprise-consistent way

In generative organisation cultures, partitions do not need to be organisationally aligned. However, in other organisation cultures (pathological or bureaucratic), alignment of enterprise infrastructure functions such as IT or operations (at least) with partitions (domains) may help accelerate the architectural and cultural changes needed – especially if coupled with broader transformations around investment planning, agile adoption and enterprise architecture.


Achieving Agile at Scale

[tl;dr Scaling agile at the enterprise level will need rethinking how portfolio management and enterprise architecture are done to ensure success.]

Agility, as a concept, is gaining increasing attention within large organisations. The idea that business functions – and in particular IT – can respond quickly and iteratively to business needs is an appealing one.

The reasons why agility is getting attention are easy to spot: larger firms are getting more and more obviously unagile – i.e., the ability of business functions to respond to business needs in a timely and sustainable manner is getting progressively worse, even as a rapidly evolving competitive and technology-led commercial environment is demanding more agility.

Couple that with the heavy cost of failing to meet ever increasing regulatory compliance obligations, and ‘agile’ seems a very good idea indeed.

Agile is a great idea, but when implemented at scale (in large enterprise organisations), it can actually reduce enterprise agility, rather than increase it, unless great care is taken.

This is partly because Agile’s origins come from developing web applications: in these scenarios, there is usually a clear customer, a clear goal (to the extent that the team exists in the first place), and relatively tight timelines that favour short or non-existent analysis/design phases. Agile is perfect for these scenarios.

Let’s call this scenario ‘local agile’. It is quite easy to imagine a situation where every team, in response to the question ‘are you doing agile?’, says ‘Yes, we are!’. So if every team is doing ‘local agile’, does that mean your organisation is now ‘agile’?

The answer is No. Getting every team to adopt agile practices is a necessary but insufficient step towards achieving enterprise agility. In particular, two key factors need to be addressed before a firm can truly be said to be ‘agile’ at the enterprise level. These are:

  1. The process by which teams are created and funded, and
  2. Enterprise awareness

Creating & Funding (Agile) Teams

Historically, teams have usually been created as a result of projects being initiated: the project passes investment justification criteria, the project is initiated, and a team is put in place, led by the project manager. This process has typically been owned entirely by the IT organisation, irrespective of which other organisations were stakeholders in the project.

At this point, IT’s main consideration is, will the project be delivered on time and on budget? The business sponsor’s main consideration is, will it give us what we need when we need it? And the enterprise’s consideration (which is often ignored) is who is accountable for ensuring that the IT implementation delivers value to the enterprise. (In this sense, the ‘enterprise’ could be either a major business line with full P&L responsibility for all activities performed in support of their business, or the whole organisation, including shared enterprise functions).

Delivering ‘value’ is principally about ensuring that ongoing or operational processes, roles and responsibilities are adjusted to maximise the benefits of a new technology implementation – which could include organisational change, marketing, customer engagement, etc.

However, delivering ‘value’ is not always correlated with a single IT implementation; value can be derived from leveraging multiple IT capabilities in concert. Given the complexity of large organisations, it is often neither desirable nor feasible to have a single IT partner be responsible for all the IT elements that collectively deliver business value.

On this basis, it is evident that how businesses plan and structure their portfolio of IT investments needs to change dramatically. In particular,

  1. The business value agenda must be outcome-focused and explicit about which IT capabilities are required to enable it, and
  2. IT investment must be focused around the capability investment lifecycle that IT is responsible for stewarding.

In particular, ‘capabilities’ (or IT products or services) have a lifecycle: this affects the investment in, and expectations around, those capabilities. And some capabilities need to be more ‘agile’ than others – some must be agile to be useful, whereas for others stability may be the overriding priority, and their lack of agility must therefore be made explicit, so that agile teams can plan around it.
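One hedged way of making those lifecycle and agility expectations explicit is a simple capability catalogue; the sketch below uses hypothetical names, lifecycle stages and change cadences purely for illustration:

```java
import java.util.List;

// Hypothetical lifecycle stages and change-cadence expectations for a capability.
enum LifecycleStage { EMERGING, GROWING, MATURE, SUNSET }
enum ChangeCadence { CONTINUOUS, QUARTERLY, FROZEN }

record Capability(String name, LifecycleStage stage, ChangeCadence cadence) {}

class CapabilityCatalogue {
    public static void main(String[] args) {
        var catalogue = List.of(
                new Capability("Customer Onboarding API", LifecycleStage.GROWING, ChangeCadence.CONTINUOUS),
                new Capability("Core Ledger", LifecycleStage.MATURE, ChangeCadence.QUARTERLY),
                new Capability("Legacy Statement Batch", LifecycleStage.SUNSET, ChangeCadence.FROZEN));

        // Teams composing these capabilities can see up front which are expected to change
        // continuously and which are deliberately stable, and plan their own agility around that.
        catalogue.forEach(System.out::println);
    }
}
```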

Enterprise Awareness

‘Locally’ agile teams are a step in the right direction – particularly if the business stakeholders all agree they are seeing value from that agility. But often this comes at the expense of enterprise awareness. In short, agility in the strict business sense can often only deliver results by ignoring some stakeholders’ interests. So ‘locally’ agile teams may feel they must minimise their interactions with other teams – particularly if those teams are not themselves agile.

If we assume that teams have been created through a process as described in the previous section, it becomes more obvious where the team sits in relation to its obligations to other teams. Teams can then make appropriate compromises to their architecture, planning and agile SDLC to allow for those obligations.

If the team was created through ‘traditional’ planning processes, then it becomes a lot harder to figure out what ‘enterprise awareness’ is appropriate (except perhaps for IT-imposed standards or gates, which only contribute indirectly to business value).

Most public agile success stories describe very well how they achieved success up to – but not including – the point at which architecture becomes an issue. Architecture, in this sense, refers to either parts of the solution architecture which can no longer be delivered via one or two members of an agile team, or those parts of the business value chain that cannot be entirely delivered via the agile team on its own.

However, there are success stories (e.g., Spotify) that show how ‘enterprise awareness’ can be achieved without limiting agility. For many organisations, transitioning from existing organisation structures to new ‘agile-ready’ structures will be a major challenge, and far harder than simply having teams ‘adopt agile’.

Conclusions

With the increased attention on Agile, there is fortunately increased attention on scaling agile. Methodologies like Disciplined Agile Delivery (DAD) and Large-Scale Scrum (LeSS), coupled with portfolio concepts like the Scaled Agile Framework (SAFe), propose ways in which Agile can scale beyond the team and up to the enterprise level, without losing the key benefits of the agile approach.

All scaled agile methodologies call for changes in how Portfolio Management and Enterprise Architecture are typically done within an organisation, as doing these activities right is key to the success of adopting Agile at scale.


Strategic Theme # 4/5: Systems Theory & Systems Thinking

[tl;dr Systems thinking is the ability to look at the whole as well as the parts of any dynamic system, and understand the consequences/impacts of decisions to the whole or those parts. Understanding and applying systems thinking in software-enabled businesses is in everyone’s interest.]

First things first: I am not an expert in systems theory or systems thinking. But I am learning. Many of the ideas resonate with my personal experience and work practices, so I guess I am a Systems Thinker by nature. The corollary of this is that, it seems, not everyone is a Systems Thinker by nature.

Enterprise Architecture is, in essence, a discipline within the general field of systems theory and systems thinking. It attempts to explain or communicate a system from many perspectives, explicitly looking at the whole as well as the parts.

As every enterprise architect knows, it is impossible to communicate all perspectives and viewpoints in a single view…the concept of a ‘thing’ in enterprise architecture can be very amorphous, a bit like Schrödinger’s Cat, only tangible when you look at it. So meta-models such as ArchiMate and the Zachman Framework have been devised to help architects manage this complexity.

But this post isn’t about enterprise architecture: the fact is, many complex problems faced by businesses today can only be solved by systems thinking across the board. It cannot be left to a solitary group of ‘enterprise architects’ to try to bring everything together. Rather, anybody involved in business change, continuous improvement (kaizen) or strategy setting must be versed in systems thinking. At the very least, it is this understanding that will allow people to know when and why to engage with enterprise architects.

An excellent example of where systems thinking would be particularly useful is in financial services regulatory reform: regulations impact the entire lifecycle, front-to-back, of every financial product. The sheer volume of regulatory change makes it a practical impossibility to address all regulations all the time, while still operating a profitable business in a competitive environment. The temptation to solve known, specific requirements one-by-one is high, but without systems thinking complexity will creep in and sustainable change becomes a real challenge.

Although I have a lot to learn about systems thinking, I have found the book ‘Thinking in Systems’ by Donella H. Meadows to be a very accessible introduction to the subject (recommended by Adrian Cockcroft, and as such also a strong influence within the DevOps community). I am also seeing some excellent views on the subject coming out of established architecture practitioners such as Avancier and Tetradian – where challenges and frustrations around the conventional IT-centric approach to enterprise architecture are vented and addressed in very informative ways.

The point of this post is to emphasise that systems thinking (which breaks through traditional organisational silos and boundaries) applied across all professions and disciplines is what will help drive success in the future: culturally, it complements the goals of the ‘lean enterprise’ (strategic theme #1).

Many businesses struggling to maintain profitability in the face of rapid technology innovation, changing markets, increased competition and ever-rising regulatory obligations would, in my view, do well to embark on a program of education on aspects of this topic across their entire workforce. Those that do can reasonably expect to see a significantly greater return on their (technology) investments in the longer term.
