Why IoT is changing Enterprise Architecture

[tl;dr The discipline of enterprise architecture as an aid to business strategy execution has failed for most organizations, but is finding a new lease of life in the Internet of Things.]

The strategic benefits of having an enterprise-architecture based approach to organizational change – at least in terms of business models and shared capabilities needed to support those models – have been the subject of much discussion in recent years.

However, enterprise architecture as a practice (as espoused by The Open Group and others) has never managed to break beyond its role as an IT-focused endeavor.

In the meantime, less technology-minded folks are beginning to discuss business strategy using terms like ‘modularity’, which is a first step towards bridging the gap between the business folks and the technology folks. And technology-minded folks are looking at disruptive business strategy through the lens of the ‘Internet of Things’.

Business Model Capability Decomposition

Just like manufacturing-based industries decomposed their supply-chains over the past 30+ years (driving an increasingly modular view of manufacturing), knowledge-based industries are going through a similar transformation.

Fundamentally, knowledge based industries are based on the transfer and management of human knowledge or understanding. So, for example, you pay for something, there is an understanding on both sides that that payment has happened. Technology allows such information to be captured and managed at scale.

But the ‘units’ of knowledge have only slowly been standardized, and knowledge-based organizations are incentivized to ensure they are the only ones able to act on the information they have gathered – with often disastrous social and economic consequences (e.g., the financial crisis of 2008).

Hence, regulators are stepping in to ensure that at least some of this ‘knowledge’ is available in a form that allows governments to ensure such situations do not arise again.

In the FinTech world, every service provided by big banks is being attacked by nimble competitors able to take advantage of new, more meaningful technology-enabled means of engaging with customers, and who are willing to make at least some of this information more accessible so that they can participate in a more flexible, dynamic ecosystem.

These upstart FinTech firms often face a stark choice in order to succeed. Assuming they have cleared the first hurdle of actually having a product people want, at some point they must decide whether they are competing directly with the big banks, or providing a key part of the electronic financial knowledge ecosystem that big banks must (eventually) be part of.

In the end, what matters is their approach to data: how they capture it (i.e., ‘UX’), what they do with it, how they manage it, and how it is leveraged for competitive and commercial advantage (without falling foul of privacy laws etc). Much of the rest is noise from businesses trying to get attention in an increasingly crowded space.

Historically, many ‘enterprise architecture’ or strategy departments fail to have impact because firms do not treat data (or information, or knowledge) as an asset, but rather as something to be freely and easily created and shunted around, leaving a trail of complexity and lost opportunity cost wherever it goes. So this attitude must change before ‘enterprise architecture’ as a concept will have a role in boardroom discussions, and firms change how they position IT in their business strategy. (Regulators are certainly driving this for certain sectors like finance and health.)

Internet of Things

Why does the Internet Of Things (IoT) matter, and where does IoT fit into all this?

At one level, IoT presents a large opportunity for firms which see the potential implied by the technologies underpinning IoT; the technology can provide a significant level of convenience and safety to many aspects of a modern, digitally enabled life.

But fundamentally, IoT is about having a large number of autonomous actors collaborating in some way to deliver a particular service, which is of value to some set of interested stakeholders.

But this sounds a lot like what a ‘company’ is. So IoT is, in effect, a company where the actors are technology actors rather than human actors. They need some level of orchestration. They need a common language for communication. And they need laws/protocols that govern what’s permitted and what is not.
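To make the analogy concrete, here is a minimal sketch – with hypothetical actor roles and message kinds – of autonomous technology actors communicating through a shared message format, with an orchestrator enforcing the ‘laws’ of what each actor may say:

```python
from dataclasses import dataclass

# A shared 'language': every actor exchanges messages of this shape.
@dataclass
class Message:
    sender: str
    kind: str      # message type, drawn from the agreed protocol
    payload: dict

# The 'laws': which message kinds each class of actor may emit.
# (Roles and kinds here are invented for illustration.)
PROTOCOL = {
    "sensor": {"reading"},
    "controller": {"command"},
}

class Orchestrator:
    """Routes messages between actors, rejecting anything outside the protocol."""
    def __init__(self):
        self.log = []

    def submit(self, actor_role: str, msg: Message) -> bool:
        if msg.kind not in PROTOCOL.get(actor_role, set()):
            return False           # not permitted by the governing 'laws'
        self.log.append(msg)
        return True

hub = Orchestrator()
ok = hub.submit("sensor", Message("thermostat-1", "reading", {"temp_c": 21.5}))
bad = hub.submit("sensor", Message("thermostat-1", "command", {"set": 25}))
print(ok, bad)  # True False
```

The point of the sketch is that the orchestration, common language and governing rules are exactly the kind of boundaries enterprise architects are used to defining.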

If enterprise architecture is all about establishing the functional, data and protocol boundaries between discrete capabilities within an organization, then EA for IoT is the same thing but for technical components, such as sensors or driverless cars, etc.

So IoT seems a much more natural fit for EA thinking than traditional organizations, especially as, unlike departments in traditional ‘human’ companies, technical components like standards: they like fixed protocols, fixed functional boundaries and well-defined data sets. And while the ‘things’ themselves may not be organic, their behavior in such an environment could exhibit ‘organic’ characteristics.

So, IoT and the benefits of an enterprise architecture-oriented approach to business strategy do seem like a match made in heaven.

The Converged Enterprise

For information-based industries in particular, there appears to be an inevitable convergence: as IoT and the standards, protocols and governance underpinning it mature, so too will the ‘modular’ aspects of existing firms’ operating models, and the eco-system of technology-enabled platforms will mature along with it. Firms will be challenged to deliver value by picking the most capable components in the eco-system around which to deliver unique service propositions – and the most successful of those solutions will themselves become the basis for future eco-systems (a Darwinian view of software evolution, if you will).

The converged enterprise will consist of a combination of human and technical capabilities collaborating in well-defined ways. Some capabilities will be highly human, others highly technical, some will be in-house, some will be part of a wider platform eco-system.

In such an organization, enterprise architects will find a natural home. In the meantime, enterprise architects must choose their starting point, behavioral or structural: focusing first on decomposed business capabilities and finishing with IoT (behavioral->structural), or focusing first on IoT and finishing with business capabilities (structural->behavioral).

Technical Footnote

I am somewhat intrigued at how the OSGi Alliance has over the years shifted its focus from basic Java applications, to discrete embedded systems, to enterprise systems and now to IoT. OSGi has (disappointingly, in my opinion) had a patchy record in changing how firms build enterprise software – much of this is down to a culture of undisciplined dependency management in the software industry, which is very, very hard to break.

IoT raises the bar on dependency management: you simply cannot comprehensively test a software update against the potentially hundreds of thousands, or millions, of deployed components that will run it. The ability to reliably change modules without forcing a test of all dependent instantiated components is a necessity. As enterprises get more complex and digitally interdependent, standards such as OSGi will become more critical to the plumbing of enabling technologies. But as evidenced above, for folks who have tried and failed to change how enterprises treat their technology-enabled business strategy, it’s a case of FIETIOT – Failed in Enterprise, Trying IoT. And this, indeed, seems a far more rational use of an enterprise architect’s time, as things currently stand.
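As a toy illustration of the dependency-management discipline this implies – a semantic-versioning rule of thumb of the kind OSGi version ranges encode, not the actual OSGi resolver:

```python
def compatible(installed: tuple, candidate: tuple) -> bool:
    """Rule of thumb: an update is safe to roll out without retesting
    every dependent component when only the minor/patch numbers change;
    a major-version bump signals a breaking change."""
    return installed[0] == candidate[0] and candidate[1:] >= installed[1:]

print(compatible((1, 4, 2), (1, 5, 0)))  # True  – minor bump, roll out
print(compatible((1, 4, 2), (2, 0, 0)))  # False – breaking change, retest
```

In an IoT fleet, a check like this is what lets a module change propagate to millions of devices without forcing a full regression of everything that depends on it.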

Scaled Agile needs Slack

[tl;dr In order to effectively scale agile, organisations need to ensure that a portion of team capacity is explicitly set aside for enterprise priorities. A re-imagined enterprise architecture capability is a key factor in enabling scaled agile success.]

What’s the problem?

From an architectural perspective, Agile methodologies are heavily dependent on business- (or function-) aligned product owners, which tend to be very focused on *their* priorities – and not the enterprise’s priorities (i.e., other functions or businesses that may benefit from the work the team is doing).

This results in very inward-focused development, and where dependencies on other parts of the organisation are identified, these (in the absence of formal architecture governance) tend to be minimised where possible, if necessary through duplicative development. And other teams requiring access to the team’s resources (e.g., databases, applications, etc) are served on a best-effort basis – often causing those teams to seek other solutions instead, or work without formal support from the team they depend on.

This, of course, leads to architectural complexity, leading to reduced agility all round.

The Solution?

If we accept the premise that, from an architectural perspective, teams are the main consideration (it is where domain and technical knowledge resides), then the question is how to get the right demand to the right teams, in as scalable, agile manner as possible?

In agile teams, the product backlog determines their work schedule. The backlog usually has a long list of items awaiting prioritisation, and part of the Agile processes is to be constantly prioritising this backlog to ensure high-value tasks are done first.

Well known management research such as The Mythical Man Month has influenced Agile’s goal to keep team sizes small (e.g., 5-9 people for scrum). So when new work comes, adding people is generally not a scalable option.

So, how to reconcile the enterprise’s needs with the Agile team’s needs?

One approach would be to ensure that every team pays an ‘enterprise’ tax – i.e., in prioritising backlog items, at least, say, 20% of work-in-progress items must be for the benefit of other teams. (Needless to say, such work should be done in such a way as to preserve product architectural integrity.)

20% may seem like a lot – especially when there is so much work to be done for immediate priorities – but it cuts both ways. If *every* team allows 20% of their backlog to be for other teams, then every team has the possibility of using capacity from other teams – in effect, increasing their capacity by much more than they could do on their own. And by doing so they are helping achieve enterprise goals, reducing overall complexity and maximising reuse – resulting in a reduction in project schedule over-runs, higher quality resulting architecture, and overall reduced cost of change.
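A rough sketch of how such an ‘enterprise tax’ could work in backlog selection – the item format, values and 20% quota here are illustrative assumptions, not a prescription:

```python
def select_sprint_items(backlog, capacity, enterprise_share=0.2):
    """Pick items for the next sprint, reserving a share of capacity for
    enterprise (cross-team) work. Items are (name, value, is_enterprise)."""
    reserved = max(1, round(capacity * enterprise_share))
    # Highest-value items first, within each category.
    enterprise = sorted((i for i in backlog if i[2]), key=lambda i: -i[1])
    local = sorted((i for i in backlog if not i[2]), key=lambda i: -i[1])
    picked = enterprise[:reserved]                 # the 'enterprise tax'
    picked += local[:capacity - len(picked)]       # fill the rest locally
    return picked

backlog = [
    ("local feature A", 8, False),
    ("local feature B", 5, False),
    ("shared API for team X", 6, True),
    ("local feature C", 3, False),
    ("data feed for team Y", 4, True),
]
sprint = select_sprint_items(backlog, capacity=5)
print([name for name, _, _ in sprint])
```

Note that without the reservation, pure value-ordering would let local items crowd out the cross-team work every single sprint – which is exactly the failure mode described above.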

Slack does not mean Under-utilisation

The concept of ‘Slack’ is well described in the book ‘Slack: Getting Past Burn-out, Busywork, and the Myth of Total Efficiency’. In effect, in an Agile sense, we are talking about organisational slack, and not team slack. Teams, in fact, will continue to be 100% utilised, as long as their backlog consists of more high-value items than they can deliver. The backlog owner – e.g., the product owner – can obviously embed local team slack into how a particular team’s backlog is managed.

Implications for Project & Financial Management

Project managers are used to getting funding to deliver a project, and then to be able to bring all those resources to bear to deliver that project. The problem is, that is neither agile, nor does it scale – in an enterprise environment, it leads to increasingly complex architectures, resulting in projects getting increasingly more expensive, increasingly late, or delivering increasingly poor quality.

It is difficult for a project manager to accept that 20% of their budget may actually be supporting other (as yet unknown) projects. So perhaps the solution here is to have Enterprise Architecture account for the effective allocation of that spending in an agile way (i.e., based on how teams are prioritising and delivering those enterprise items on their backlog)? An idea worth considering.

Note that the situation is a little different for planned cross-business initiatives, where product owners must actively prioritise the needs of those initiatives alongside their local needs. Such planned work does not count in the 20% enterprise allowance, but rather counts as part of how the team’s cost to the enterprise is formally funded. It may result in a temporary increase in resources on the team, but in this case discipline around ‘staff liquidity’ is required to ensure the team can still function optimally after the temporary resource boost has gone.

The challenge regarding project-oriented financial planning is that, once a project’s goals have been achieved, what’s left is the team and underlying architecture – both of which need to be managed over time. So some dissociation between transitory project goals and longer term team and architecture goals is necessary to manage complexity.

For smaller, non-strategic projects – i.e., no incoming enterprise dependencies – the technology can be maintained on a lights-on basis.

Enterprise architecture can be seen as a means to assess the relevance of a team’s work to the enterprise – i.e., managing both incoming and outgoing team dependencies. The higher the enterprise relevance of the team, the more critical it is that the team be managed well over time – i.e., team structure changes must be carefully managed, and not left entirely to the discretion of individual managers.

Conclusion

By ensuring that every project that purports to be Agile has a mandatory allowance for enterprise resource requirements, teams can have confidence that there is a route for them to get their dependencies addressed through other agile teams, in a manner that is independent of annual budget planning processes or short-term individual business priorities.

The effectiveness of this approach can be governed and evaluated by Enterprise Architecture, which would then allow enterprise complexity goals to be addressed without concentrating such spending within the central EA function.

In summary, to effectively scale agile, an effective (and possibly rethought) enterprise architecture capability is needed.

Making good architectural moves

[tl;dr Every change is an opportunity to make the ‘right’ architectural move to improve complexity management and to maintain an acceptable overall cost of change.]

Accompanying every new project, business requirement or product feature is an implicit or explicit ‘architectural move’ – i.e., a change to your overall architecture that moves it from a starting state to another (possibly interim) state.

The art of good architecture is making the ‘right’ architectural moves over time. The art of enterprise architecture is being able to effectively identify and communicate what a ‘right’ move actually looks like from an enterprise perspective, rather than leaving such decisions solely to the particular implementation team – who, it must be assumed, are best qualified to identify the right move from the perspective of the relevant domain.

The challenge is the limited inputs that enterprise architects have, namely:

  • Accumulated skill/knowledge/experience from past projects, including any architectural artefacts
  • A view of the current enterprise priorities based on the portfolio of projects underway
  • A corporate strategy and (ideally) individual business strategies, including a view of the environment the enterprise operates in (e.g., regulatory, commercial, technological, etc)

From these inputs, architects need to guide the overall architecture of the enterprise to ensure every project or deliverable results in a ‘good’ move – or at least not a ‘bad’ move.

In this situation, it is difficult if not impossible to measure the actual success of an architecture capability. This is because, in many respects, the beneficiaries of a ‘good’ enterprise architecture (at least initially) are the next deliverables (projects/requirements/features), and only rarely the current deliverables.

Since the next projects to be done are generally unknown (i.e., the business situation may change between the time the current projects complete and the time the next projects start), it is rational for people to focus exclusively on delivering the current projects. Which makes it even more important that the current projects are in some way delivering the ‘right’ architectural moves.

In many organisations, the typical engagement with enterprise architecture is often late in the architectural development process – i.e., at a ‘toll-gate’ or formal architectural review. And the focus is often on ‘compliance’ with enterprise standards, principles and guidelines. Given that such guidelines can get quite detailed, it can be quite difficult for anyone building a project start architecture (PSA) to come up with an architecture that fully complies: the first priority is to develop an architecture that will work, and is feasible to execute within the project constraints of time, budget and resources.

Only then does it make sense to formally apply architectural constraints from enterprise architecture – at least some of which may negatively impact the time, cost, resource or feasibility needs of the project – albeit to the presumed benefit of the overall landscape. Hence the need for board-level sponsorship for such reviews, as otherwise the project’s needs will almost always trump enterprise needs.

The approach espoused by an interesting new book, Chess and the Art of Enterprise Architecture, is that enterprise architects need to focus more on design and less on principles, guidelines, roadmaps, etc. Such an approach involves enterprise architects more closely in the creation and evolution of project (start) architectures, which represents the architectural basis for all the work the project does (although it does not necessarily lay out the detailed solution architecture).

This approach is also acceptable for planning processes which are more agile than waterfall. In particular, it acknowledges that not every architectural ‘move’ is necessarily independently ‘usable’ by end users of an agile process. In fact, some user stories may require several architectural moves to fully implement. The question is whether the user story is itself validated enough to merit doing the architectural moves necessary to enable it, as otherwise those architectural moves may be premature.

The alternative, then, is to ‘prototype’ the user story, so users can evaluate it – but at the cost of non-conformance with the project architecture. This is also known as ‘technical debt’, and where teams are mature and disciplined enough to pay down technical debt when needed, it is a good approach. But users (and sometimes product owners) struggle to tell the difference between an (apparently working) prototype and a production solution that is fully architecturally compliant, and it often happens that project teams move on to the next visible deliverable without removing the technical debt.

In applications where the end-user is a person or set of persons, this may be acceptable in the short term, but where the end-user could be another application (interacting via, for example, an API invoked by either a GUI or an automated process), then such technical debt will likely cause serious problems if not addressed. At the very least, it will make future changes harder (unmanaged dependencies, lack of automated testing), and may present significant scalability challenges.

So, what exactly constitutes a ‘good’ architectural move? In general, this is what the project start architecture should aim to capture. A good basic principle could be that architectural commitments should be postponed for as long as possible, by taking steps to minimise the impact of changed architectural decisions (this is a ‘real-option‘ approach to architectural change management). Over the long term, this reduces the cost of change.

In addition, project start architectures may need to make explicit where architectural commitments must be made (e.g., for a specific database, PaaS or integration solution, etc) – i.e., areas where change will be expensive.
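One way to sketch the ‘postpone commitments’ principle in code: the expensive decision (here, a hypothetical order store – the names are invented for illustration) sits behind a stable interface, so the concrete database or PaaS choice can be swapped later without touching the calling logic:

```python
from typing import Protocol

class OrderStore(Protocol):
    """The stable boundary: business logic depends only on this interface,
    postponing the commitment to any specific database or PaaS product."""
    def save(self, order_id: str, amount: float) -> None: ...
    def total(self) -> float: ...

class InMemoryOrderStore:
    """Cheap stand-in today; a SQL- or cloud-backed store can replace it
    later without changing any code that uses the OrderStore interface."""
    def __init__(self):
        self._orders = {}
    def save(self, order_id: str, amount: float) -> None:
        self._orders[order_id] = amount
    def total(self) -> float:
        return sum(self._orders.values())

def record_sale(store: OrderStore, order_id: str, amount: float) -> float:
    # Business logic is written against the boundary, not the implementation.
    store.save(order_id, amount)
    return store.total()

store = InMemoryOrderStore()
record_sale(store, "ord-1", 100.0)
print(record_sale(store, "ord-2", 50.0))  # 150.0
```

The option to change the storage decision later is exactly the ‘real option’ the PSA should be trying to keep open – and the interface marks the point where exercising it is cheap.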

Other things the project start architecture may wish to capture or at least address (as part of enterprise design) could include:

  • Cataloging data semantics and usage
    • to support data governance and big data initiatives
  • Management of business context & scope (business area, product, entity, processes, etc)
    • to minimize unnecessary redundancy and process duplication
  • Controlled exposure of data and behaviour to other domains
    • to better manage dependencies and relationships with other domains
  • Compliance with enterprise policies around security and data management
    • to address operational risk
  • Automated build, test & deploy processes
    • to ensure continued agility and responsiveness to change, and maximise operational stability
  • Minimal lock-in to specific solution architectures (minimise solution architecture impact on other domains)
    • to minimize vendor lock-in and maximize solution options

The Chess book mentioned above includes a good description of a PSA, in the context of a PRINCE2 project framework. Note that the approach also works for Agile, but the PSA should set the boundaries within which the agile team can operate: if those boundaries must be crossed to deliver a user story, then enterprise design architects should be brought back into the discussion to establish the best way forward.

In summary, every change is an opportunity to make the ‘right’ architectural move to improve complexity management and to maintain an acceptable overall cost of change.

Achieving modularity: functional vs volatility decomposition

Enterprise architecture is all about managing complexity. Many EA initiatives tend to focus on managing IT complexity, but there is only so much that can be done there before it becomes obvious that IT complexity is, for the most part, a direct consequence of enterprise complexity. To recap, complexity needs to be managed in order to maintain agility – the ability for an organisation to respond (relatively) quickly and efficiently to changes in markets, regulations or innovation, and to continue to do this over time.

Enterprise complexity can be considered to be the activities performed and resources consumed by the organisation in order to deliver ‘value’, a metric usually measured through the ability to maintain (financial) income in excess of expenses over time.

Breaking down these activities and resources into appropriate partitions that allow holistic thinking and planning to occur is one of the key challenges of enterprise architecture, and there are various techniques to do this.

Top-Down Decomposition

The natural approach to decomposition is to first understand what an organisation does – i.e., what are the (business) functions that it performs. Simply put, a function is a collection of data and decision points that are closely related (e.g., ‘Payments’ is a function). Functions typically add little value in and of themselves – rather they form part of an end-to-end process that delivers value for a person or legal entity in some context. For example, a payment on its own means nothing: it is usually performed in the context of a specific exchange of value or service.

So a first course of action is to create a taxonomy (or, more accurately, an ontology) to describe the functions performed consistently across an enterprise. Then, various processes, products or services can be described as a composition of those functions.

If we accept (and this is far from accepted everywhere) that EA is focused on information systems complexity, then EA is not responsible for the complexity relating to the existence of processes, products or services. The creation or destruction of these are usually a direct consequence of business decisions. However, EA should be responsible for cataloging these, and ensuring these are incorporated into other enterprise processes (such as, for example, disaster recovery or business continuity processes). And EA should relate these to the functional taxonomy and the information systems architecture.

This can get very complex very quickly due to the sheer number of processes, products and services – including their various variations – most organisations have. So it is important to partition or decompose the complexity into manageable chunks to facilitate meaningful conversations.

Enterprise Equivalence Relations

One way to do this at enterprise level is to group functions into partitions (aka domains) according to synergy or autonomy (as described by Roger Sessions), for all products/services supporting a particular business. This approach is based on the mathematical concept of equivalence relations. Because different functions in different contexts may have differing equivalence relationships, functions may appear in multiple partitions. One role of EA is to assess and validate if those functions are actually autonomous or if there is the potential to group apparently duplicate functions into a new partition.

Once partitions are identified, it is possible to apply ‘traditional’ EA thinking to a particular partition, because that partition is of a manageable size. By ‘traditional’ EA, I mean applying Zachman, TOGAF, PEAF, or any of the myriad methodologies/frameworks that are out there. More specifically, at that level, it is possible to establish a meaningful information systems strategy or goal for a particular partition that is directly supporting business agility objectives.
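The grouping step above can be sketched as computing the transitive closure of a synergy relation over a function catalogue – a standard union-find pass (the function names and synergy links here are invented for illustration):

```python
from collections import defaultdict

def partition_functions(functions, synergies):
    """Group functions into domains: two functions land in the same
    partition when a chain of 'synergy' relations connects them
    (i.e., the transitive closure of the relation)."""
    parent = {f: f for f in functions}

    def find(f):
        # Follow parent links to the partition representative (path-halving).
        while parent[f] != f:
            parent[f] = parent[parent[f]]
            f = parent[f]
        return f

    for a, b in synergies:
        parent[find(a)] = find(b)   # merge the two partitions

    groups = defaultdict(set)
    for f in functions:
        groups[find(f)].add(f)
    return sorted(sorted(g) for g in groups.values())

funcs = ["payments", "settlement", "onboarding", "kyc", "reporting"]
links = [("payments", "settlement"), ("onboarding", "kyc")]
print(partition_functions(funcs, links))
# [['kyc', 'onboarding'], ['payments', 'settlement'], ['reporting']]
```

The real work, of course, is in judging which synergy links actually hold – the mechanics of the grouping are the easy part.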

The Fallacy of Functional Decomposition

Once you get down to the level of a partition, functional decomposition becomes less useful when it comes to architecting solutions. The natural tendency for architects would be to build reusable components or services that realise the various functions comprising the partition. In fact, this may be the wrong thing to do. As Juval Löwy demonstrates in his excellent webinar, this may result in more complexity, not less (and hence less agility).

When it comes to software architecture, the real reason to modularise your architecture is to manage volatility or uncertainty – and to ensure that volatility in one part of the architecture does not unnecessarily negatively impact another part of the architecture over time. Doing this allows agility to be maintained, so volatile parts of the application can, in fact, change frequently, at low impact to other parts of the application.

When looking at a software architecture through this lens, a quite different set of components/modules/services may become evident than those which may otherwise be obvious when using functional decomposition – the example in the webinar demonstrates this very well. A key argument used by Löwy in his presentation is that (to paraphrase him somewhat) functions are, in general, highly dependent on the context in which they are used, so to split them out into separate services may require making often impossible assumptions about all possible contexts the functions could be invoked in.

In this sense, identified components, modules or services can be considered to be providing options in terms of what is done, or how it is done, within the context of a larger system with parts of variable volatility. (See my earlier post on real options in the context of agility to understand more about options in this context.)

Partitions as Enterprise Architecture

When each partition is considered with respect to its relationship with other partitions, there is a lot of uncertainty around how different partitions will evolve. To allow for maximum flexibility, every partition should assume each other partition is a volatile part of their architecture, and design accordingly for this. This allows each partition to evolve (reasonably) independently with minimum fixed co-ordination points, without compromising the enterprise architecture by having different partitions replicate the behaviours of partitions they depend on.

This then allows:

  • Investment to be expressed in terms of impact to one or more partitions
  • Partitions to establish their own implementation strategies
  • Agile principles to be agreed on a per partition basis
  • Architectural standards to be agreed on a per partition basis
  • Partitions to define internally reusable components relevant to that partition only
  • Partitions to expose partition behaviour to other partitions in an enterprise-consistent way

In generative organisation cultures, partitions do not need to be organisationally aligned. However, in other organisation cultures (pathological or bureaucratic), alignment of enterprise infrastructure functions such as IT or operations (at least) with partitions (domains) may help accelerate the architectural and cultural changes needed – especially if coupled with broader transformations around investment planning, agile adoption and enterprise architecture.

The Meaning of Data

[tl;dr The Semantic Web may be a CDO’s best friend in their efforts to help the business realise the full value of enterprise data.]

Large organisations with complex legacy infrastructure are faced with a dilemma: the technology that allowed them to grow and capture market share has reached a turning point in terms of the value returned from additional spending. The only practical investments that can be made are to shore up (protect) those investments by performing essential maintenance, focusing on security, and (perhaps) moving to lower-cost, cloud-based infrastructure. Mandatory (usually regulatory) enhancements also need to be done.

But in terms of protecting existing revenue or capturing new markets, legacy applications are often seen as a burden. Businesses are increasingly looking outside their core IT capability to address their needs – especially in technologies that can help pinpoint opportunities in the first place (i.e., analysing and processing the digital foot-prints left by customers and prospective customers, whether those foot-prints are internal to the firm or external).

Technology is both the problem and the solution. And herein lies the dilemma: internal IT organisations cannot possibly advise on all possible technology-enabled solutions for a business. Rather, businesses need to become more “tech-savvy” – a term which is bandied about a lot these days.

What does “tech-savvy” actually mean, and where is the line drawn between a “tech-savvy” business person, and the professional IT service provider? And what form should this ‘line’ take?

Arguably, most businesses have always been tech-savvy: they knew by-and-large where technology was necessary to grow their business. So for the most successful firms, there is a *lot* of technology. Does that mean those businesses are tech-savvy?

Yes, if those firms are able to manage the complexity of their IT – in other words, to be able to adapt their IT to shifting market needs, and to incorporate/absorb innovations into their architecture without significantly reducing that agility. In practice, few firms have such mastery of their IT (and, by implication, their processes and business information systems).

So a new form of “tech savvy” is needed, that allows business folks to leverage opportunities to both find customers and meet their needs (profitably), while preventing a build-up of unsustainable complexity of processes and systems.

In essence, tech-savvy business folk need to get better at understanding data. And IT needs to get much better at making meaningful business data available to their business – irrespective of where that business data may actually reside.

What does this mean in practice? It’s all about semantics – something previously in the domain of data geeks. But major initiatives like the semantic web (led by Tim Berners-Lee, of WWW fame) are making semantics useful to traditionally non-technical people.

The tools available to business folks for discovering what information (i.e., data + context) exists are very poor, and even when the required data is found (i.e., its location is known), it can be difficult to extract it and use it in a productive way.

A good example of this may be observed in the proliferation of Excel spreadsheets and Access databases supporting critical business functions. These tools fill the gap because the data those functions need from various systems is not where they need it, when they need it – and that is often because business needs change faster than IT can capture, absorb and deliver requirements.

Over time (at least in principle), all meaningful business data will (must) end up being processed exclusively by controlled enterprise systems – but this doesn’t mean waiting until IT has made the necessary changes before effectively governing that data and making maximum use of it.

A key principle behind the semantic web is that data can be anywhere. In an enterprise scenario, that means the data could exist in:

  • an internal enterprise system
  • a file system (e.g., Excel sheet or Access database)
  • a website (file download, or website screen scraping)
  • a commercial information provider (Reuters, Bloomberg, etc)
  • a business partner/supplier
  • etc

To take advantage of this, meta-data (i.e., data about data) needs to be made available to businesses, and it needs to be business-relevant and agnostic to internal IT systems and processes.
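To make this concrete, here is a minimal sketch (in Python, with entirely illustrative names, locations and stewards – none of them from a real catalogue) of what business-relevant, system-agnostic meta-data might look like: a business user searches by business term, not by system name.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataAsset:
    """Business-relevant metadata about a data set, agnostic of the system holding it."""
    business_term: str   # the name business folks actually use
    description: str     # the context that turns data into information
    source_kind: str     # e.g. "enterprise system", "spreadsheet", "vendor feed"
    location: str        # where to find it (illustrative values only)
    steward: str         # who is accountable for its quality

# A hypothetical catalogue; every entry below is made up for illustration.
catalogue = [
    DataAsset("Client Trade Volume",
              "Total notional traded per client per day",
              "enterprise system", "trades-db", "Trade Ops"),
    DataAsset("FX Reference Rates",
              "End-of-day FX rates used for reporting",
              "vendor feed", "vendor-fx-feed", "Market Data"),
]

def find_assets(term: str) -> list[DataAsset]:
    """Discover data by business term, regardless of which system holds it."""
    return [a for a in catalogue if term.lower() in a.business_term.lower()]

print([a.business_term for a in find_assets("trade")])  # → ['Client Trade Volume']
```

The point of the sketch is the shape of the catalogue, not its implementation: the business term and description come first, and the physical location is just one attribute among several.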

More than this, the tools needed to discover useful information, as well as to retrieve and process that information, need to be available to tech-savvy business folk. The right set of tools, suitably agnostic to specific architectures and systems, can allow businesses to explore new opportunities and business models while the IT systems and platforms catch up and evolve over time – noting that there is no presumption that the eventual source of useful business data originates from, or is stewarded by, internal systems managed by IT.

Such tools also need to enforce compliance, ensuring only the right people get access to the right data, and (if necessary) limiting the ability to pull data outside of the retrieval platform. In essence, they provide a controlled sandbox in which businesses can distill value from information, whatever its source.

Many organisations may approach this by implementing a ‘data lake’. This may (perhaps) be a workable solution for the data assets managed by the IT organisation, but it is not feasible for the many data sets which are not managed by IT yet still have business utility.

Emerging standards and technologies are evolving to meet this need for a data-centric view of information systems – in particular RDF, OWL and related ‘ontology’ languages, as well as standards like SPARQL, which enable the discovery and retrieval of data originating from multiple sources. The tools are still very primitive and generally require too much technical savviness, but efforts like the Callimachus Project give a hint of the potential of data-driven applications.
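As an illustration of the RDF model these standards build on, the following pure-Python sketch mimics the essence of a triple store: data is held as subject-predicate-object triples, and a SPARQL-style graph pattern (with `None` standing in for a variable) is matched against them. A real implementation would use a library such as rdflib or an actual triple store, and all the `ex:` identifiers here are made up:

```python
# Data as subject-predicate-object triples: the RDF model underlying SPARQL.
# All identifiers below are illustrative, not a real enterprise vocabulary.
triples = {
    ("ex:AcmeCorp", "ex:hasAccount", "ex:Acct1"),
    ("ex:Acct1",    "ex:balance",    "1000"),
    ("ex:Acct1",    "ex:currency",   "EUR"),
    ("ex:BetaLtd",  "ex:hasAccount", "ex:Acct2"),
    ("ex:Acct2",    "ex:balance",    "250"),
}

def match(pattern, graph):
    """Match one triple pattern; None behaves like a SPARQL variable."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "SELECT ?acct WHERE { ex:AcmeCorp ex:hasAccount ?acct }" in miniature:
accounts = [o for (_, _, o) in match(("ex:AcmeCorp", "ex:hasAccount", None), triples)]
print(accounts)  # → ['ex:Acct1']
```

Because every fact has the same three-part shape, data from an internal system, a spreadsheet extract and a vendor feed can all land in the same graph and be queried uniformly – which is precisely the property that makes the model attractive for the ‘data can be anywhere’ scenario above.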

At some point it will not be unreasonable to ask an IT organisation to expose all of its data assets semantically via SPARQL end-points (with appropriate access controls), and to provide tools that allow businesses to explore that data and (where permissible) incorporate it into spreadsheets, models and other tools – realising value from the data without requiring IT change requests. Developing the capability to understand semantic data, and to use it to commercial advantage, will take time, but it is a worthwhile investment – and arguably one to make before spending money on ‘big data’ projects that have no wider context.
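Part of the appeal of SPARQL end-points is how little machinery they demand of their clients. The sketch below builds a SPARQL 1.1 Protocol GET request using only the Python standard library; the endpoint URL and the query vocabulary are hypothetical, and a real deployment would also send authentication credentials to enforce the access controls mentioned above:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and vocabulary, for illustration only.
ENDPOINT = "https://example.org/sparql"

query = """
SELECT ?asset ?steward
WHERE { ?asset <http://example.org/vocab#steward> ?steward }
LIMIT 10
"""

def sparql_get_url(endpoint: str, query: str) -> str:
    """Encode a query as a GET request per the SPARQL 1.1 Protocol
    (a single 'query' parameter carrying the URL-encoded query text)."""
    return endpoint + "?" + urlencode({"query": query})

url = sparql_get_url(ENDPOINT, query)
print(url)
```

Anything that can issue an HTTP GET – a spreadsheet add-in, a BI tool, a script – can therefore consume the endpoint, which is what makes it plausible for business folk (rather than only IT) to pull this data into their own models.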

In fact, I would go so far as to say that any provider (startup or established) delivering technology services to a business should provide SPARQL endpoints as a matter of course. Making data useful and available to the business will help the business realise the value of data and get more involved in how it is captured, processed and stored in the future – and make it easier to incorporate 3rd party solution providers into business operations.

In a nutshell, the Semantic Web may be the CDO’s best friend in their efforts to help the business realise the full value of enterprise data.


The Importance of Principles in Managing Complexity

[tl;dr The first step to managing complexity is to define what is important. Principles are a great way to reach consensus on what is important, and establishing governance around adherence to those principles are the first step to getting on top of change-related complexity.]

Managing complexity is not easy, whether you are in a large organisation, or in a relatively green-field environment where growth is rapid and complexity is a far-off problem you hope to have some day…

The fact is, if your organisation suffers from a complexity problem (in technology, and implicitly, processes and data), and you are still in (a profitable) business, then it is that complexity which, for better or for worse, has been the instrument of that organisation’s success. As a consequence, it is quite hard to convince people to implement complexity management (read ‘enterprise architecture’) before complexity actually becomes a problem.

Sometimes, however, unmanaged complexity brings an organisation’s growth to a grinding halt, in the worst cases leading to eventual shutdown. (In this context, an organisation can be an independent business or a business unit within a larger company.)

Historically, investment in IT incurred large capital costs, so usually there was a large focus on realising value from that investment – i.e. to exploit it to the maximum. Exploiting technology usually means adding on more and more functionality to existing systems in ways that meet business imperatives.  But business imperatives tend to be fairly short-term in nature. Without guiding principles, the act of exploiting technology leads to ever more expensive change-related costs.

For any self-respecting IT organisation, a core set of principles, actively governed, is an absolute necessity. Principles transcend solution architectures and technologies, and enable project autonomy as decisions can be made locally without requiring ‘central’ approval, provided principles are being adhered to.

Such principles – call them ‘implementation principles’ – should guide how IT organisations do their work and manage change.

But many principles that stay only within the IT organisation may break in the face of overwhelming business imperatives: on the basis that (immediate) business needs come first, IT often takes it on the chin and accumulates technical debt.

So it is important to agree principles that business stakeholders endorse and support, and which they are willing to discuss openly in the context of delaying projects, increasing costs, prioritising change, or driving organisational change.

Let’s call these principles ‘value principles’ – they are focused on how the business can realise value from its investments, particularly in IT.

In order to understand what this means in practice, it may be worth returning to the ‘Zero CapEx IT’ concept introduced in my last post. What if all IT was sourced through (virtual or actual) 3rd parties, so that all of IT was delivered as software-as-a-service?

In this case, the ‘implementation principles’ would be whatever guidelines or standards were used by the SaaS provider to ensure they could deliver change reliably, and maintain service according to agreed levels. The SaaS provider has a vested interest in the client getting value out of their service, but the only levers they have to ensure that is their ability to react in a timely manner to change requests.

From the business’ perspective, adopting an IT (SaaS) service requires structural change to ensure the business can take advantage of the technology. This may mean assigning roles and responsibilities, defining new activities, maybe creating whole new departments, etc. The business may be paying for the service, but unless it pro-actively maximises what the service can offer in terms of what the business needs, it may not get the value. The SaaS provider (i.e., IT) can do little to address this situation, and may end up losing the client if the client does not take action.

On the other hand, if a client is pro-actively engaged, then they will likely want regular changes so they can continually optimise the value of the service to their operations. This puts the onus on the provider to remain agile.

In a corporate environment, where the accountability between value and implementation is often blurred, there is usually little optionality with respect to who your provider is – or, from a provider perspective, who your client is. In fact, the client/provider perspective in a corporate environment can be damaging, as it can reinforce an us-vs-them culture, leading to mistrust and an ‘order-taking’ mentality developing between the client and the provider.

In this environment, complexity will thrive because, after all, the client is tied in and has to keep paying for the service. There is little incentive to be agile: if the client wants more, quicker, they have to pay for it.

If, on the other hand, the client retains an active, supportive interest in how their IT is delivered, as well as what is delivered, then they will get the IT they want.

If the client is not willing to agree certain base-line principles with their (IT) provider, and to put in place structures to enforce them (including a formal waiver process), then one can expect little to improve.

On the other hand, if the client or business partner willingly participates in establishing principles and related governance forums, this will set the tone for how the business/IT relationship evolves going forward.

At the very least, ‘value principles’ (such as, for example, “The business will commit to implementing the necessary structural changes to maximise IT investment”) should be agreed, and the consequence/implications of these to project planning should be understood.

Note that value principles are increasingly including concepts around data governance, which historically have been an ‘implementation’ principle – for example, the principle that ‘information is an asset’.

‘Implementation principles’ are principles which determine how (IT) change is decided and implemented, and how complexity is managed, by the provider. Where the business and technology are intertwined (i.e., the business value proposition *is* the technology), these simply form part of the business ‘value principles’. But where the business is agnostic as to the technology, and is focused only on value, the ‘implementation principles’ are solely for the provider to define and enforce. An example of an ‘implementation principle’ may be ‘Control of technical diversity’, with consequent implications for solution architectures.

Adopting principles means that an organisation has some sense of what it wants to be, even if only aspirationally. It allows individuals or teams to feel comfortable making decisions on their own without feeling they have to get approval, which is key for agile cultures. Establishing formal business+IT forums to review principle waivers helps provide transparency with respect to project risk and platform risk. And it allows Technical Debt to be managed, as every waiver is a recognition of technical debt incurred.
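The waiver process described above can be made concrete with a simple register: each waiver records which principle was set aside, why, and when it must be reviewed, so that open waivers become a queryable technical-debt ledger. The sketch below is illustrative only – the field names and example entries are my own assumptions, not any standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Waiver:
    """One granted exception to a governed principle (illustrative schema)."""
    principle: str    # the principle being waived, e.g. "Control of technical diversity"
    project: str
    rationale: str
    granted: date
    review_by: date   # every waiver is technical debt with a review date

class WaiverRegister:
    def __init__(self) -> None:
        self._waivers: list[Waiver] = []

    def grant(self, waiver: Waiver) -> None:
        self._waivers.append(waiver)

    def technical_debt(self, as_of: date) -> list[Waiver]:
        """Waivers still awaiting review are the recognised technical debt."""
        return [w for w in self._waivers if w.review_by >= as_of]

# Hypothetical example entry:
register = WaiverRegister()
register.grant(Waiver("Control of technical diversity", "Project X",
                      "Vendor tool required to meet a regulatory deadline",
                      granted=date(2015, 1, 1), review_by=date(2016, 1, 1)))
print(len(register.technical_debt(date(2015, 6, 1))))  # → 1
```

The value is less in the code than in the discipline it represents: because every waiver is recorded with a review date, the governance forum always has an explicit, current list of the debt it has chosen to incur.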

In conclusion, managing complexity is hard. But Principles can make it manageable.


The CDO: Enterprise Architect’s best friend

[tl;dr The CDO’s agenda may be the best way to bring the Enterprise Architecture agenda to the business end of the C-suite table.]

Enterprise Architecture as a concept has for some time claimed intellectual ownership of a holistic, integrated view of architecture within an organisation. In that lofty position, enterprise architects have put a stake in the ground with respect to key architecture activities, and formalised them all within frameworks such as TOGAF.

In TOGAF, Data Architecture has been buried within the Information Systems Architecture phase of the Architecture Development Method, along with Application Architecture.

But the real world often gets in the way of (arguably) conceptually clean thinking: note the rise of the CDO, or Chief Data Officer, usually as part of the COO function of an organisation. (This CDO is not to be confused with the Chief Digital Officer, whose primary focus, viewed critically, is to build cool apps…)

So, why has Data gotten all this attention, while Enterprise Architecture still, for the most part, languishes within IT?

Well, as with most things, it’s happening for all the wrong reasons: mostly it’s because regulators are holding businesses accountable for how they manage regulated data, and businesses need, in turn, to hold people to account for doing that. Hence the need for a CDO.

But in the act of trying to understand what data they are liable for managing, and to what extent, CDOs necessarily have to do Data Architecture and Data Governance. And at this point the CDO’s activities start dipping into new areas such as business process management, business & IT strategy – and EA.

If CDOs choose to take an expansive view of their role (and I believe they should), then it would most definitely include all the domains of Enterprise Architecture – especially if one views data complexity and technology complexity as two sides of the same enterprise complexity coin: one as a consequence of business decisions, the other as a consequence of IT decisions.

The good news is that this would at long last put EAs into a role more aligned with business goals than with the goals of the IT organisation. This is not to say that the goals of the IT organisation are in some way wrong; rather, because the business has abdicated its responsibility for many decisions that IT needs in order to do “the right thing”, IT has had to fill the gaps itself – and in ways which the abdicating businesses could understand/appreciate.

For anyone in the position of CTO, this should help a lot: without prescribing which technologies should be used, businesses can give real context to their demand, which the CTO can interpret strategically and, in collaboration with their CDO colleague, turn into a technology strategy that can deliver on those needs.

In this way, the CDO and CTO have a very symbiotic relationship: the boundaries may be fuzzy, but the responsibilities are clear: the CDO provides the context, the CTO delivers the technology.

So collaboration is key: while in principle one should be able to describe what a business needs strictly in terms of process and data flows etc, the reality is that business needs and practices change in *response* to technology. In other words, it is not a one-way street. What technology is *capable* of doing can shape what the business *should* be doing. (In many cases, for example, technology may eliminate the need for entire processes or activities.)

Folks on the edge of this CDO/CTO collaboration have a choice: do I become predominantly business facing (CDO), or do I become predominantly technology facing (CTO)? Folks may choose to specialise, or some, like myself, may choose to focus on different perspectives according to the needs of the organisation.

One interesting consequence of this approach is that it may address one of the biggest questions out there at the moment: how to absorb the services and capabilities offered by innovative, efficient and nimble external businesses into the architecture of the enterprise, while retaining control, compliance and agility over the data and processes involved? But more on this later.

So, where does all this sit with respect to the ‘3 pillars of a digital strategy’? The points raised there are still valid. With the right collaboration and priorities, it is possible to have different folks fill the roles of ‘client centricity’, ‘chief data officer’ and ‘head of enterprise complexity’. But for the most part, all these roles begin and end with data. A proper, disciplined and thoughtful approach to how and why data is captured, processed, stored, retrieved and transmitted should properly address firm-wide themes such as improving client experience, integrating innovative services, keeping a lid on complexity and retaining an appropriate level of enterprise agility.

Some folks may be wondering where the CIO fits in all this: the CIO maintains responsibility for the management and operation of all IT processes (as defined in The Open Group’s IT4IT model). This is a big job. But it is not the job of the CTO, whose focus is to ensure the right technology is brought to bear to solve the right problem.
