Achieving modularity: functional vs volatility decomposition

Enterprise architecture is all about managing complexity. Many EA initiatives tend to focus on managing IT complexity, but there is only so much that can be done there before it becomes obvious that IT complexity is, for the most part, a direct consequence of enterprise complexity. To recap, complexity needs to be managed in order to maintain agility – the ability of an organisation to respond (relatively) quickly and efficiently to changes in markets, regulations or innovation, and to continue to do so over time.

Enterprise complexity can be thought of as the sum of the activities performed and resources consumed by the organisation in order to deliver ‘value’ – usually measured as the ability to maintain (financial) income in excess of expenses over time.

Breaking down these activities and resources into appropriate partitions that allow holistic thinking and planning to occur is one of the key challenges of enterprise architecture, and there are various techniques to do this.

Top-Down Decomposition

The natural approach to decomposition is to first understand what an organisation does – i.e., what are the (business) functions that it performs. Simply put, a function is a collection of closely related data and decision points (e.g., ‘Payments’ is a function). Functions typically add little value in and of themselves – rather, they form part of an end-to-end process that delivers value for a person or legal entity in some context. For example, a payment on its own means nothing: it is usually performed in the context of a specific exchange of value or service.

So a first course of action is to create a taxonomy (or, more accurately, an ontology) to describe the functions performed consistently across an enterprise. Then, various processes, products or services can be described as a composition of those functions.

If we accept (and this is far from accepted everywhere) that EA is focused on information systems complexity, then EA is not responsible for the complexity arising from the existence of processes, products or services. The creation or destruction of these is usually a direct consequence of business decisions. However, EA should be responsible for cataloguing them, and for ensuring they are incorporated into other enterprise processes (such as, for example, disaster recovery or business continuity processes). And EA should relate them to the functional taxonomy and the information systems architecture.

This can get very complex very quickly due to the sheer number of processes, products and services – and their many variants – that most organisations have. So it is important to partition or decompose the complexity into manageable chunks to facilitate meaningful conversations.

Enterprise Equivalence Relations

One way to do this at the enterprise level is to group functions into partitions (aka domains) according to synergy or autonomy (as described by Roger Sessions), for all products/services supporting a particular business. This approach is based on the mathematical concept of equivalence relations. Because different functions in different contexts may have differing equivalence relationships, functions may appear in multiple partitions. One role of EA is to assess and validate whether those functions are actually autonomous, or whether there is the potential to group apparently duplicate functions into a new partition.
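
To make the idea a little more concrete, here is a minimal sketch (in Python, with entirely hypothetical function names and synergy judgements) of deriving partitions as the equivalence classes induced by a pairwise synergy relation for one business context; a real exercise would of course be driven by architects and business SMEs rather than code:

```python
# Illustrative sketch only: function names and synergy judgements are hypothetical.
functions = ["Payments", "Settlement", "Client Onboarding", "KYC", "Reporting"]

# Pairwise 'synergy' judgements for one product/service context: True means the
# two functions need to be understood and changed together.
synergy = {
    frozenset({"Payments", "Settlement"}): True,
    frozenset({"Client Onboarding", "KYC"}): True,
}

def synergistic(a, b):
    return synergy.get(frozenset({a, b}), False)

# Partitions (domains) are the equivalence classes induced by the synergy relation.
partitions = []
for f in functions:
    merged, rest = {f}, []
    for p in partitions:
        if any(synergistic(f, g) for g in p):
            merged |= p
        else:
            rest.append(p)
    partitions = rest + [merged]

print(partitions)
# e.g. [{'Payments', 'Settlement'}, {'Client Onboarding', 'KYC'}, {'Reporting'}]
```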

Once partitions are identified, it is possible to apply ‘traditional’ EA thinking to a particular partition, because that partition is of a manageable size. By ‘traditional’ EA, I mean applying Zachman, TOGAF, PEAF, or any of the myriad methodologies/frameworks that are out there. More specifically, at that level, it is possible to establish a meaningful information systems strategy or goal for a particular partition that is directly supporting business agility objectives.

The Fallacy of Functional Decomposition

Once you get down to the level of a partition, functional decomposition becomes less useful when it comes to architecting solutions. The natural tendency for architects would be to build reusable components or services that realise the various functions comprising the partition. In fact, this may be the wrong thing to do. As Juval Löwy demonstrates in his excellent webinar, it may result in more complexity, not less (and hence less agility).

When it comes to software architecture, the real reason to modularise your architecture is to manage volatility or uncertainty – and to ensure that volatility in one part of the architecture does not unnecessarily have a negative impact on another part of the architecture over time. Doing this allows agility to be maintained, so volatile parts of the application can, in fact, change frequently, with low impact on other parts of the application.

When a software architecture is viewed through this lens, a quite different set of components/modules/services may become evident than those which are obvious under functional decomposition – the example in the webinar demonstrates this very well. A key argument Juval makes in his presentation is that (to paraphrase him somewhat) functions are, in general, highly dependent on the context in which they are used, so splitting them out into separate services may require making assumptions – often impossible ones – about all the possible contexts in which those functions could be invoked.
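
As a purely illustrative sketch (the names below are hypothetical, not taken from the webinar), the difference shows up in what the module boundary hides: rather than exposing a reusable ‘Payments’ function to every caller, the volatile concern – how a payment is actually executed, which varies by scheme, provider and regulation – is hidden behind a stable contract so it can change without rippling through its consumers:

```python
# Hypothetical example of volatility-based decomposition: callers depend on a
# stable contract; the volatile 'how' of payment execution is encapsulated.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PaymentRequest:            # stable, context-free data contract
    payer: str
    payee: str
    amount: float
    currency: str

class PaymentExecution(ABC):     # boundary around the volatile concern
    @abstractmethod
    def execute(self, request: PaymentRequest) -> str: ...

class SepaExecution(PaymentExecution):
    def execute(self, request: PaymentRequest) -> str:
        return f"SEPA: {request.amount} {request.currency} to {request.payee}"

class OrderFulfilment:           # consumer is unaware of which scheme is used
    def __init__(self, payments: PaymentExecution):
        self._payments = payments

    def complete_order(self, request: PaymentRequest) -> str:
        return self._payments.execute(request)

print(OrderFulfilment(SepaExecution()).complete_order(
    PaymentRequest("acme", "supplier-x", 120.0, "EUR")))
```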

In this sense, identified components, modules or services can be considered to be providing options in terms of what is done, or how it is done, within the context of a larger system with parts of variable volatility. (See my earlier post on real options in the context of agility to understand more about options in this context.)

Partitions as Enterprise Architecture

When each partition is considered with respect to its relationships with other partitions, there is a lot of uncertainty around how the different partitions will evolve. To allow for maximum flexibility, every partition should assume every other partition is a volatile part of its architecture, and design accordingly. This allows each partition to evolve (reasonably) independently, with minimal fixed co-ordination points, without compromising the enterprise architecture by having partitions replicate the behaviours of the partitions they depend on.

This then allows:

  • Investment to be expressed in terms of impact to one or more partitions
  • Partitions to establish their own implementation strategies
  • Agile principles to be agreed on a per partition basis
  • Architectural standards to be agreed on a per partition basis
  • Partitions to define internally reusable components relevant to that partition only
  • Partitions to expose partition behaviour to other partitions in an enterprise-consistent way

In generative organisation cultures, partitions do not need to be organisationally aligned. However, in other organisation cultures (pathological or bureaucratic), alignment of enterprise infrastructure functions such as IT or operations (at least) with partitions (domains) may help accelerate the architectural and cultural changes needed – especially if coupled with broader transformations around investment planning, agile adoption and enterprise architecture.


The Meaning of Data

[tl;dr The Semantic Web may be a CDO’s best friend in their efforts to help the business realise the full value of enterprise data.]

Large organisations with complex legacy infrastructure are faced with a dilemma: the technology that allowed them to grow and capture market share has reached a turning point in terms of the value returned from additional spending. The only practical investments that can be made are to shore up (protect) existing investments by performing essential maintenance, focusing on security, and (perhaps) moving to lower-cost, cloud-based infrastructure. Mandatory (usually regulatory) enhancements also need to be made.

But in terms of protecting existing revenue or capturing new markets, legacy applications are often seen as a burden. Businesses are increasingly looking outside their core IT capability to address their needs – especially for technologies that can help pinpoint opportunities in the first place (i.e., analysing and processing the digital footprints left by customers and prospective customers, whether those footprints are internal to the firm or external).

Technology is both the problem and the solution. And herein lies the dilemma: internal IT organisations cannot possibly advise on all possible technology-enabled solutions for a business. Rather, businesses need to become more “tech-savvy” – a term which is bandied about a lot these days.

What does “tech-savvy” actually mean, and where is the line drawn between a “tech-savvy” business person, and the professional IT service provider? And what form should this ‘line’ take?

Arguably, most businesses have always been tech-savvy: they knew by-and-large where technology was necessary to grow their business. So for the most successful firms, there is a *lot* of technology. Does that mean those businesses are tech-savvy?

Yes, if those firms are able to manage the complexity of their IT – in other words, to be able to adapt their IT to shifting market needs, and to incorporate/absorb innovations into their architecture without significantly reducing that agility. In practice, few firms have such mastery of their IT (and, by implication, their processes and business information systems).

So a new form of “tech savvy” is needed, that allows business folks to leverage opportunities to both find customers and meet their needs (profitably), while preventing a build-up of unsustainable complexity of processes and systems.

In essence, tech-savvy business folk need to get better at understanding data. And IT needs to get much better at making meaningful business data available to their business – irrespective of where that business data may actually reside.

What does this mean in practice? It’s all about semantics – something previously in the domain of data geeks. But major initiatives like the semantic web (led by Tim Berners-Lee, of WWW fame) are making semantics useful to traditionally non-technical people.

The tools available to business folks for discovering what information (i.e., data + context) exists are very poor, and even when the required data is found (i.e., it is known where it is), it can be difficult to extract it and use it in a productive way.

A good example of this can be observed in the proliferation of Excel spreadsheets and Access databases supporting critical business functions. These tools are necessary because the data those functions need from various systems is not where they need it, when they need it – and that is often because business needs change faster than IT can capture, absorb and deliver requirements.

Over time (at least in principle), all meaningful business data will (must) end up being processed exclusively by controlled enterprise systems – but this doesn’t mean the business should wait until IT has made the necessary changes before governing that data effectively and making maximum use of it.

A key principle behind the semantic web is that data can be anywhere. In an enterprise scenario, that means the data could exist in:

  • an internal enterprise system
  • a file system (e.g., Excel sheet or Access database)
  • a website (file download, or website screen scraping)
  • a commercial information provider (Reuters, Bloomberg, etc)
  • a business partner/supplier
  • etc

To take advantage of this, meta-data (i.e., data about data) needs to be made available to businesses, and it needs to be business-relevant and agnostic to internal IT systems and processes.
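
A minimal sketch of what such business-relevant, source-agnostic metadata could look like, using the rdflib library and an invented vocabulary (the ‘ex:’ namespace and its properties below are assumptions for illustration, not a standard):

```python
# Sketch only: describes where a business dataset lives and what it means,
# independently of the system that holds it. The 'ex:' vocabulary is invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.com/enterprise-data#")
g = Graph()
g.bind("ex", EX)

dataset = URIRef("http://example.com/datasets/client-payments-2014")
g.add((dataset, RDF.type, EX.BusinessDataset))
g.add((dataset, RDFS.label, Literal("Client payments, 2014")))
g.add((dataset, EX.businessOwner, Literal("Head of Client Operations")))
g.add((dataset, EX.heldIn, Literal("Access database on shared drive")))  # or an enterprise system, website, external provider, etc.
g.add((dataset, EX.relatesToFunction, EX.Payments))

print(g.serialize(format="turtle"))
```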

More than this, the tools needed to discover useful information, and to retrieve and process that information, need to be available to tech-savvy business folk. The right set of tools – suitably agnostic to specific architectures and systems – can allow businesses to explore new opportunities and business models, while allowing the IT systems and platforms to catch up and evolve over time. Note that there is no presumption that the eventual source of useful business data originates from, or is stewarded by, internal systems managed by IT.

Such tools also need to enforce compliance, to ensure only the right people get access to the right data, and (if necessary) to limit the ability for people to pull data outside of the retrieval platform. In essence, they provide a controlled sandbox in which businesses can distill value from information, whatever its source.

Many organisations may approach this by implementing a ‘data lake’. This may (perhaps) be a workable solution for data assets managed by the IT organisation, but it is not feasible for the many sets of data which are not managed by the IT organisation yet still have business utility.

Emerging standards and technologies are evolving to meet this need for a data-centric view of information systems – in particular RDF, OWL and related ‘ontology’ languages, as well as standards like SPARQL, which enable the discovery and retrieval of data originating from multiple sources. The tools are still very primitive and generally require too much technical savviness, but efforts like the Callimachus Project give a hint of the potential of data-driven applications.

At some point, it will not be unreasonable to ask an IT organisation to expose all of its data assets semantically via SPARQL end-points (with appropriate access controls), and to provide tools that allow businesses to explore that data and (where permissible) incorporate it into spreadsheets, models and other tools in ways that let the business realise value from that data without requiring IT change requests. Developing the capabilities to understand semantic data, and to use it to commercial advantage, will take time – but it will be a worthwhile investment (and arguably one to make before folks start spending money on ‘big data’ projects that have no wider context).
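
As an indicative sketch of what that could look like from the consuming side (the endpoint URL and vocabulary below are invented for illustration), a tech-savvy analyst might pull data from such a SPARQL end-point into Python – or, with similar tooling, into a spreadsheet – without raising an IT change request:

```python
# Sketch only: the endpoint, graph and 'ex:' vocabulary are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://data.example.com/sparql")  # IT-provided, access-controlled end-point
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.com/enterprise-data#>
    SELECT ?dataset ?label ?owner
    WHERE {
        ?dataset a ex:BusinessDataset ;
                 rdfs:label ?label ;
                 ex:businessOwner ?owner ;
                 ex:relatesToFunction ex:Payments .
    }
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["label"]["value"], "-", row["owner"]["value"])
```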

In fact, I would go so far as to say that any provider (startup or established) delivering technology services to a business should provide SPARQL endpoints as a matter of course. Making data useful and available to the business will help the business realise the value of data and get more involved in how it is captured, processed and stored in the future – and make it easier to incorporate 3rd party solution providers into business operations.

In a nutshell, the Semantic Web may be the CDO’s best friend in their efforts to help the business realise the full value of enterprise data.


The CDO: Enterprise Architect’s best friend

[tl;dr The CDO’s agenda may be the best way to bring the Enterprise Architecture agenda to the business end of the C-suite table.]

Enterprise Architecture as a concept has for some time claimed intellectual ownership of a holistic, integrated view of architecture within an organisation. In that lofty position, enterprise architects have put a stake in the ground with respect to key architecture activities, and formalised them all within frameworks such as TOGAF.

In TOGAF, Data Architecture has been buried within the Information Systems Architecture phase of the Architecture Development Method, along with Application Architecture.

But the real world often gets in the way of (arguably) conceptually clean thinking: I note the rise of the CDO, or Chief Data Officer, usually as part of the COO function of an organisation. (This CDO is not to be confused with the Chief Digital Officer, whose primary focus could, somewhat uncharitably, be seen as building cool apps…)

So, why has Data gotten all this attention, while Enterprise Architecture still, for the most part, languishes within IT?

Well, as with most things, it’s happening for all the wrong reasons: mostly it’s because regulators are holding businesses accountable for how they manage regulated data, and businesses need, in turn, to hold people to account for doing that. Hence the need for a CDO.

But in the act of trying to understand what data they are liable for managing, and to what extent, CDOs necessarily have to do Data Architecture and Data Governance. And at this point the CDO’s activities start dipping into new areas such as business process management, business & IT strategy – and EA.

If CDOs choose to take an expansive view of their role (and I believe they should), then it would most definitely include all the domains of Enterprise Architecture – especially if one views data complexity and technology complexity as two sides of the same enterprise complexity coin: one as a consequence of business decisions, the other as a consequence of IT decisions.

The good news is that this would at long last put EAs into a role more aligned with business goals than with the goals of the IT organisation. This is not to say that the goals of the IT organisation are in some way wrong, but rather that, because the business had abdicated its responsibility for many decisions that IT needs in order to do “the right thing”, IT has had to fill the gaps itself – and in ways which those abdicating businesses could understand and appreciate.

For anyone in the position of CTO, this should help a lot: without prescribing which technologies should be used, businesses can then give real context to their demand, which the CTO can interpret strategically and, in collaboration with their CDO colleague, use to agree a technology strategy that can deliver on those needs.

In this way, the CDO and CTO have a very symbiotic relationship: the boundaries should be fuzzy, but the responsibilities clear – the CDO provides the context, the CTO delivers the technology.

So collaboration is key: while in principle one should be able to describe what a business needs strictly in terms of process and data flows etc, the reality is that business needs and practices change in *response* to technology. In other words, it is not a one-way street. What technology is *capable* of doing can shape what the business *should* be doing. (In many cases, for example, technology may eliminate the need for entire processes or activities.)

Folks on the edge of this CDO/CTO collaboration have a choice: do I become predominantly business facing (CDO), or do I become predominantly technology facing (CTO)? Folks may choose to specialise, or some, like myself, may choose to focus on different perspectives according to the needs of the organisation.

One interesting consequence of this approach is that it may address one of the biggest questions out there at the moment: how can the services and capabilities offered by innovative, efficient and nimble external businesses be absorbed into the architecture of the enterprise, while retaining control, compliance and agility over the data and processes? But more on this later.

So, where does all this sit with respect to the ‘3 pillars of a digital strategy’? The points raised there are still valid. With the right collaboration and priorities, it is possible to have different folks fill the roles of ‘client centricity’, ‘chief data officer’ and ‘head of enterprise complexity’. But for the most part, all these roles begin and end with data. A disciplined and thoughtful approach to how and why data is captured, processed, stored, retrieved and transmitted should properly address firm-wide themes such as improving client experience, integrating innovative services, keeping a lid on complexity and retaining an appropriate level of enterprise agility.

Some folks may be wondering where the CIO fits in all this: the CIO maintains responsibility for the management and operation of all IT processes (as defined in The Open Group’s IT4IT model). This is a big job. But it is not the job of the CTO, whose focus is to ensure the right technology is brought to bear to solve the right problem.
