The CDO: Enterprise Architect’s best friend

[tl;dr The CDO’s agenda may be the best way to bring the Enterprise Architecture agenda to the business end of the C-suite table.]

Enterprise Architecture as a concept has for some time claimed intellectual ownership of a holistic, integrated view of architecture within an organisation. In that lofty position, enterprise architects have put a stake in the ground with respect to key architecture activities, and formalised them all within frameworks such as TOGAF.

In TOGAF, Data Architecture has been buried within the Information Systems Architecture phase of the Architecture Development Method, along with Application Architecture.

But the real world often gets in the way of (arguably) conceptually clean thinking: I note the rise of the CDO, or Chief Data Officer, usually as part of the COO function of an organisation. (This CDO is not to be confused with the Chief Digital Officer, whose primary focus, viewed critically, is to build cool apps…)

So, why has Data gotten all this attention, while Enterprise Architecture still, for the most part, languishes within IT?

Well, as with most things, it's happening for all the wrong reasons: mostly it's because regulators are holding businesses accountable for how they manage regulated data, and businesses need, in turn, to hold people to account for doing that. Hence the need for a CDO.

But in the act of trying to understand what data they are liable for managing, and to what extent, CDOs necessarily have to do Data Architecture and Data Governance. And at this point the CDO's activities start dipping into new areas such as business process management, business & IT strategy – and EA.

If CDOs choose to take an expansive view of their role (and I believe they should), then it would most definitely include all the domains of Enterprise Architecture – especially if one views data complexity and technology complexity as two sides of the same enterprise complexity coin: one as a consequence of business decisions, the other as a consequence of IT decisions.

The good news is that this would at long last put EAs into a role more aligned with business goals than with the goals of the IT organisation. This is not to say that the goals of the IT organisation are in some way wrong, but rather that, because the business had abdicated its responsibility for many decisions that IT needs in order to do "the right thing", IT has had to fill the gaps itself – and in ways which said abdicating businesses could understand and appreciate.

For anyone in the position of CTO, this should help a lot: without prescribing which technologies should be used, the business can then give real context to its demand, which the CTO can interpret strategically and, in collaboration with their CDO colleague, use to agree a technology strategy that can deliver on those needs.

In this way, the CDO/CTO have a very symbiotic relationship: the boundaries should be fuzzy, although the responsibilities should be clear: the CDO provides the context, the CTO delivers the technology.

So collaboration is key: while in principle one should be able to describe what a business needs strictly in terms of process and data flows etc, the reality is that business needs and practices change in *response* to technology. In other words, it is not a one-way street. What technology is *capable* of doing can shape what the business *should* be doing. (In many cases, for example, technology may eliminate the need for entire processes or activities.)

Folks on the edge of this CDO/CTO collaboration have a choice: do I become predominantly business facing (CDO), or do I become predominantly technology facing (CTO)? Folks may choose to specialise, or some, like myself, may choose to focus on different perspectives according to the needs of the organisation.

One interesting consequence of this approach is that it may address one of the biggest questions out there at the moment: how to absorb the services and capabilities offered by innovative, efficient and nimble external businesses into the architecture of the enterprise, while retaining control, compliance and agility over the data and processes. But more on this later.

So, where does all this sit with respect to the ‘3 pillars of a digital strategy’? The points raised there are still valid. With the right collaboration and priorities, it is possible to have different folks fill the roles of ‘client centricity’, ‘chief data officer’ and ‘head of enterprise complexity’. But for the most part, all these roles begin and end with data. A proper, disciplined and thoughtful approach to how and why data is captured, processed, stored, retrieved and transmitted should properly address firm-wide themes such as improving client experience, integrating innovative services, keeping a lid on complexity and retaining an appropriate level of enterprise agility.

Some folks may be wondering where the CIO fits in all this: the CIO maintains responsibility for the management and operation of all IT processes (as defined in The Open Group’s IT4IT model). This is a big job. But it is not the job of the CTO, whose focus is to ensure the right technology is brought to bear to solve the right problem.

Strategic Theme #4/5: Systems Theory & Systems Thinking

[tl;dr Systems thinking is the ability to look at the whole as well as the parts of any dynamic system, and understand the consequences/impacts of decisions to the whole or those parts. Understanding and applying systems thinking in software-enabled businesses is in everyone’s interest.]

First things first: I am not an expert in systems theory or systems thinking. But I am learning. Many of the ideas resonate with my personal experience and work practices, so I guess I am a Systems Thinker by nature. The corollary of this is that, it seems, not everyone is a Systems Thinker by nature.

Enterprise Architecture is, in essence, a discipline within the general field of systems theory and systems thinking. It attempts to explain or communicate a system from many perspectives, explicitly looking at the whole as well as the parts.

As every enterprise architect knows, it is impossible to communicate all perspectives and viewpoints in a single view…the concept of a 'thing' in enterprise architecture can be very amorphous, a bit like Schrödinger's Cat, only tangible when you look at it. So meta-models and frameworks such as ArchiMate and the Zachman Framework have been devised to help architects manage this complexity.

But this post isn’t about enterprise architecture: the fact is, many complex problems faced by businesses today can only be solved by systems thinking across the board. It cannot be left to a solitary group of ‘enterprise architects’ to try to bring everything together. Rather, anybody involved in business change, continuous improvement (kaizen) or strategy setting must be versed in systems thinking. At the very least, it is this understanding that will allow people to know when and why to engage with enterprise architects.

An excellent example of where systems thinking would be particularly useful is in financial services regulatory reform: regulations impact the entire lifecycle, front-to-back, of every financial product. The sheer volume of regulatory change makes it a practical impossibility to address all regulations all the time, while still operating a profitable business in a competitive environment. The temptation to solve known, specific requirements one-by-one is high, but without systems thinking complexity will creep in and sustainable change becomes a real challenge.

Although I have a lot to learn on systems thinking, I have found the book 'Thinking in Systems' by Donella H. Meadows to be a very accessible introduction to the subject (recommended by Adrian Cockcroft, and as such also a strong influence within the DevOps community). I am also seeing some excellent views on the subject coming out of established architecture practitioners such as Avancier and Tetradian – where challenges and frustrations around the conventional IT-centric approach to enterprise architecture are vented and addressed in very informative ways.

The point of this post is to emphasise that systems thinking (which breaks through traditional organisational silos and boundaries) applied across all professions and disciplines is what will help drive success in the future: culturally, it complements the goals of the ‘lean enterprise’ (strategic theme #1).

Many businesses struggling to maintain profitability in the face of rapid technology innovation, changing markets, increased competition and ever-rising regulatory obligations would, in my view, do well to embark on a program of education on aspects of this topic across their entire workforce. Those that do can reasonably expect to see a significantly greater return on their (technology) investments in the longer term.

Strategic Theme #2/5: Enterprise Modularity

[tl;dr Enterprise modularity is the state-of-the-art technologies and techniques which enable agile, cost effective enterprise-scale integration and reuse of capabilities and services. Standards for these are slowly gaining traction in the enterprise.]

Enterprise Modularity is, in essence, Service Oriented Architecture but ‘all the way down’; it has evolved a lot over the last 15-20 years since SOA was first promoted as a concept.

Specifically, SOA historically has been focused on 'services' – or, more explicitly, end-points, the contracts they expose, and service discovery. It has been less concerned with the architectures of the systems behind the end-points, which seems logically correct but has practical implications which have limited the widespread adoption of SOA solutions.

SOA is a great idea that has often suffered from epic failures when implemented as part of major transformation initiatives (although less ambitious implementations, limited to specific domains, may have met with more success).

The challenge has been the disconnect between what developers build, what users/customers need, and what the enterprise desires – not just for the duration of a programme, but over time.

Enterprise modularity needs several factors to be in place before it can succeed at scale. Namely:

  • An enterprise architecture (business context) linked to strategy
  • A focus on data governance
  • Agile cross-disciplinary/cross-team planning and delivery
  • Integration technologies and tools consistent with development practices

Modularity standards and abstractions like OSGi, Resource Oriented Computing and RESTful computing enable the development of loosely coupled, cohesive modules potentially deployable as stand-alone services and which can be orchestrated in innovative ways using many technologies – especially when linked with canonical enterprise data standards.
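The contract-first decoupling behind this style of modularity can be sketched as a registry where provider modules publish implementations of named contracts and consumer modules bind only to the contract name (Python used purely for brevity – OSGi itself is Java-based – and all names below are illustrative):

```python
from typing import Callable, Dict

class ServiceRegistry:
    """A minimal service registry: modules publish capabilities under a
    contract name; consumers look them up without knowing the provider."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable] = {}

    def publish(self, contract: str, implementation: Callable) -> None:
        self._services[contract] = implementation

    def lookup(self, contract: str) -> Callable:
        return self._services[contract]

# Provider module: registers an implementation of a named contract.
registry = ServiceRegistry()
registry.publish("pricing.quote", lambda symbol: {"symbol": symbol, "price": 101.5})

# Consumer module: depends only on the contract name, not the provider,
# so implementations can be swapped or redeployed independently.
quote = registry.lookup("pricing.quote")("ACME")
print(quote["price"])  # 101.5
```

Because the consumer holds no reference to the providing module, the provider can be replaced (or moved behind a remote end-point) without touching consumer code – the essence of 'SOA all the way down'.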

The upshot of all this is that traditional ESBs are transforming: technologies like Red Hat Fuse ESB and open-source solutions like Apache ServiceMix provide powerful building blocks that allow ESBs to become first-class applications rather than simply integration hubs. (MuleSoft has not, to my knowledge, adopted an open modularity standard such as OSGi, so I believe it is not appropriate to use as the basis for a general application architecture.)

This approach allows for the eventual construction of 'domain applications' which, more than being integration hubs, actually host the applications (or, more accurately, application components) in ways that make dependencies explicit.

Vendors such as CA Layer7, Apigee and AxWay are prioritising API management over more traditional, heavy-weight ESB functions – essentially, anyone can build an API internal to their application architecture, but once you want to expose it beyond the application itself, an API management tool will be required. These can be implemented top-down once the bottom-up API architecture has been developed and validated. Enterprise or integration architecture teams should be driving the adoption and implementation of API management tools, as these (done well) can be non-intrusive with respect to how application development is done, provided application developers take a micro-services led approach to application architecture and design. The concepts of 'dumb pipes' and 'smart endpoints' are key here, to avoid putting inappropriate complexity in the API management layer or in the (traditional) ESB.
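A minimal sketch of 'smart endpoints, dumb pipes' – all routes, keys and handler names below are hypothetical – where the gateway handles only cross-cutting concerns (here, authentication and routing) and each endpoint owns its business logic:

```python
# Smart endpoints: each endpoint owns its business logic.
def get_account(account_id: str) -> dict:
    # Domain logic lives here, never in the pipe (illustrative only).
    return {"id": account_id, "status": "active"}

def get_balance(account_id: str) -> dict:
    return {"id": account_id, "balance": 250.0}

ROUTES = {
    "/accounts": get_account,
    "/balances": get_balance,
}

def gateway(path: str, account_id: str, api_key: str) -> dict:
    """The 'dumb pipe': cross-cutting concerns only (auth, routing),
    with no business rules of its own."""
    if api_key != "valid-key":
        return {"error": "unauthorised"}
    return ROUTES[path](account_id)

print(gateway("/balances", "A-1", "valid-key"))  # {'id': 'A-1', 'balance': 250.0}
```

The point of keeping the gateway this thin is that business change stays in the endpoints, so the API management layer never becomes a second ESB accreting orchestration logic.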

Microservices is a complex topic, as this post describes. But modules (along the lines defined by OSGi) are a great stepping stone, as they embrace microservice thinking without necessarily introducing distributed-systems complexity. Innovative technologies like Paremus Service Fabric, building on OSGi standards, help make the evolution from monolithic modular architectures to distributed (agile, microservice-based) architectures as painless as can reasonably be expected.

Other solutions, such as Cloud Foundry, offer a means of taking much of the complexity out of building green-field microservices solutions by standardising many of the platform components – however, Cloud Foundry, unlike OSGi, is not a formal open standard, and so carries risks. (It may become a de-facto standard, however, if enough PaaS providers use it as the basis for their offerings.)

The decisions as to which ‘PaaS’ solutions to build/adopt enterprise-wide will be a key consideration for enterprise CTOs in the coming years. It is vital that such decisions are made such that the chosen PaaS solution(s) will directly support enterprise modularity (integration) goals linked to development standards and practices.

For 2015, it will be interesting to see how nascent enterprise standards such as described above evolve in tandem with IaaS and PaaS innovations. Any strategic choices made in these domains must consider the cost/impact of changing those choices: defining architectures that are too heavily dependent on a specific PaaS or IaaS solution limits optionality and is contrary to the principles of (enterprise) modularity.

This suggests that Enterprise modularity standards which are agnostic to specific platform implementation choices must be put in place, and different parts of the enterprise can then choose implementations appropriate for their needs. Hence open standards and abstractions must dominate the conversation rather than specific products or solutions.

The Learning CTO’s Strategic themes for 2015

Like most folks, I cannot predict the future. I can only relate the themes that most interest me. On that basis, here is what is occupying my mind as we go into 2015.

  • The Lean Enterprise: the cultural and process transformations necessary to innovate and maintain agility in large enterprises – in technology, financial management, and risk, governance and compliance. Includes using real-option theory to manage risk.
  • Enterprise Modularity: the state-of-the-art technologies and techniques which enable agile, cost-effective enterprise-scale integration and reuse of capabilities and services. Or SOA Mark II.
  • Continuous Delivery: the state-of-the-art technologies and techniques which bring together agile software delivery with operational considerations. Or DevOps Mark I.
  • Systems Theory & Systems Thinking: the ability to look at the whole as well as the parts of any dynamic system, and understand the consequences/impacts of decisions on the whole or those parts.
  • Machine Learning: using business intelligence and advanced data semantics to dynamically improve automated or automatable processes.

[Blockchain technologies are another key strategic theme which are covered in a separate blog, dappsinfintech.com.]

Why these particular topics?

Specifically, over the next few years, large organisations need to be able to

  1. Out-innovate smaller firms in their field of expertise, and
  2. Embrace and leverage external innovative solutions and services that serve to enhance a firm’s core product offerings.

And firms need to be able to do this while keeping a lid on overall costs, which means managing complexity (or ‘cleaning up’ as you go along, also known as keeping technical debt under control).

It is also interesting to note that for many startups these themes are generally not an explicit concern: at small scale they tend to be 'obvious', and so need no additional management action. However, small firms may fail to scale successfully if they do not explicitly recognise where they are already doing these things, and so ensure they maintain focus on these capabilities as they grow.

Also, for many startups, their success may be predicated on larger organisations getting on top of these themes, as otherwise the startups may struggle to get large firms to adopt their innovations on a sustainable basis.

The next few blog posts will explain these themes in a bit more detail, with relevant references.

DevOps & the role of Architecture in Achieving Business Success

[tl;dr Creating a positive reinforcing effect between enterprise architecture and DevOps will provide firms with a future digital competitive advantage.]

I finished reading an interesting book recently:

The Phoenix Project: A Novel About IT, DevOps and Helping Your Business Win (Gene Kim, Kevin Behr, & George Spafford)

Apart from achieving the improbable outcome of turning a story about day-to-day IT into a page-turner thriller (relatively speaking…), it identified that many challenges in IT are of IT's own making – irrespective of the underlying technology or architecture. Successfully applying lean techniques, such as Kanban, can often dramatically improve IT throughput and productivity, and should be the first course of action to address a perceived lack of resources/capacity in the IT organisation, ahead of looking at tools or architecture.

This approach is likely to be particularly effective in organisations in which IT teams seem to be constantly reacting to unplanned change-related events.

As Kanban techniques are applied, constraints are better understood, and the benefit of specific enhancements to tools and architecture should become more evident to everyone. This forms part of the feedback loop into the software design/development process.
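The core Kanban mechanic that makes constraints visible – work is pulled into progress only when a work-in-progress (WIP) limit allows it – can be sketched as a toy model (not a real Kanban tool; task names are invented):

```python
from collections import deque

class KanbanBoard:
    """A minimal pull system: work moves from backlog to in-progress
    only while the WIP limit permits, making the constraint explicit."""

    def __init__(self, wip_limit: int) -> None:
        self.wip_limit = wip_limit
        self.backlog = deque()
        self.in_progress = []
        self.done = []

    def add(self, item: str) -> None:
        self.backlog.append(item)

    def pull(self) -> bool:
        # A full in-progress column blocks new work instead of hiding it.
        if self.backlog and len(self.in_progress) < self.wip_limit:
            self.in_progress.append(self.backlog.popleft())
            return True
        return False

    def finish(self, item: str) -> None:
        self.in_progress.remove(item)
        self.done.append(item)

board = KanbanBoard(wip_limit=2)
for task in ["fix build", "patch server", "upgrade DB"]:
    board.add(task)
board.pull(); board.pull()
print(board.pull())  # False: WIP limit reached until something finishes
```

When pulls start failing, the blocked column is the constraint to investigate – which is exactly the feedback into tools and architecture described above.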

This whole process can be supported by improving the process by which demand into IT (and specific teams within IT) is identified and prioritised: the feedback loop from DevOps continuous improvement activities forms part of the product/architecture roadmap which is shared with the business stakeholders.

The application of these techniques dovetails very nicely with advancements in how technology can be commissioned today: specifically, infrastructure-as-a-service (virtualised servers); elastic, anti-fragile cloud environments; specialised DevOps tools such as Puppet, Ansible and Chef; lightweight containers such as Docker, and cluster managers such as Mesos; microservices as a unit of deployment; automated testing tools; open modular architecture standards such as OSGi; orchestration technologies such as Resource Oriented Computing; and sophisticated monitoring tools using big data and machine learning techniques, such as Splunk.

This means there are many paths to improving DevOps productivity. Each presents its own technical challenges, but success is mostly down to finding the right talent and deploying it productively.

An equal challenge relates to enterprise architecture and scaled agile planning and delivery techniques, which serve to ensure that the right demand gets to the right teams, and that the scope of each team's work does not grow beyond an architecturally sustainable boundary – as well as ensuring that teams are conscious of, and directly support, domains that touch on their boundary.

This implies more sophisticated planning, and a conscious effort to continually be prepared to refactor teams and architectures in order to maintain an architecturally agile environment over time.

The identification and definition of appropriate boundaries must start at the business level (for delivering IT value), then move to the programme/project level (the agents of strategic change), and finally to the enterprise level (for managing complexity). There is an art to identifying the right boundaries for a given context, but the Roger Sessions approach to managing complexity (as described in this book) offers a sensible starting point.

The establishment of boundaries (domains) allows information architecture standards and processes to be readily mapped to systems and processes, and to have these formalised through flows exposed by those domains. This in the end makes data standards compliance much easier, as the (exposed) implementation architecture reflects the desired information architecture.
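As an illustrative sketch (the schema and field names are invented, not a real standard), a domain boundary that only publishes records conforming to a canonical data standard might look like:

```python
# A canonical data standard agreed at the enterprise level: records
# crossing the domain boundary must have exactly these typed fields.
CANONICAL_TRADE_SCHEMA = {"trade_id": str, "notional": float, "currency": str}

def conforms(record: dict, schema: dict) -> bool:
    """True if the record has exactly the schema's fields, correctly typed."""
    return set(record) == set(schema) and all(
        isinstance(record[field], ftype) for field, ftype in schema.items()
    )

def publish_trades(records: list) -> list:
    # Only schema-compliant records leave the domain boundary; the
    # exposed flow therefore reflects the information architecture.
    return [r for r in records if conforms(r, CANONICAL_TRADE_SCHEMA)]

raw = [
    {"trade_id": "T1", "notional": 1_000_000.0, "currency": "EUR"},
    {"trade_id": "T2", "notional": "1m", "currency": "EUR"},  # non-compliant
]
print(len(publish_trades(raw)))  # 1
```

Enforcing the standard at the flow, rather than inside every consuming system, is what lets the implementation architecture reflect the information architecture.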

How does this help the DevOps agenda? In general, Agile and DevOps methods work best when the business context is understood, and the scope of the team(s) work is understood in the context of the whole business, other projects/programs, and for the organisation as a whole. This allows the team to innovate freely within that context, and to know where to go if there is any doubt.

DevOps is principally about people collaborating, and (enterprise) architecture is in the end the same. Success in DevOps can only scale to provide firm-wide benefits with a practical approach to enterprise architecture that is sensitive to people’s needs, and which acts as a positive reinforcement of DevOps activities.

In the end, complexity management through enterprise architecture and change agility through DevOps go hand-in-hand. However, most folks haven't figured out exactly how this works in practice just yet. For now, these disciplines remain unconnected islands of activity, but firms that connect them successfully stand to benefit enormously.

How ‘uncertainty’ can be used to create a strategic advantage

[TL;DR This post outlines a strategy for dealing with uncertainty in enterprise architecture planning, with specific reference to regulatory change in financial services.]

One of the biggest challenges anyone involved in technology has is in balancing the need to address the immediate requirement with the need to prepare for future change at the risk of over-engineering a solution.

The wrong balance over time results in complex, expensive legacy technology that ends up being an inhibitor to change, rather than an enabler.

It should not be unreasonable to expect that, over time, and with appropriate investment, any firm should have significant IT capability that can be brought to bear for a multitude of challenges or opportunities – even those not thought of at the time.

Unfortunately, most legacy systems are so optimised to solve the specific problem they were commissioned to solve, they often cannot be easily or cheaply adapted for new scenarios or problem domains.

In other words, as more functionality is added to a system, the ability to change it diminishes rapidly:

[Figure: Agility vs. Functionality]

The net result is that the technology platform already in place is optimised to cope with existing business models and practices, but generally incapable of (cost-effectively) adapting to new business models or practices.

To address this needs some forward thinking: specifically, what capabilities need to be developed to support where the business needs to be, given the large number of known unknowns? (Accepting that everybody is in the same boat when it comes to dealing with unknown unknowns…)

These capabilities are generally determined by external factors – trends in the specific sector, technology, society, economics, etc, coupled with internal forward-looking strategies.

An excellent example of where a lack of focus on capabilities has caused structural challenges is the financial industry. A recent conference at the Bank for International Settlements (BIS) highlighted the following capability gaps in how banks do their IT – at least as it relates to regulators' expectations:

  • Data governance and data architecture need to be optimised in order to enhance the quality, accuracy and integrity of data.
  • Analytical and reporting processes need to facilitate faster decision-making and direct availability of the relevant information.
  • Processes and databases for the areas of finance, control and risk need to be harmonised.
  • Increased automation of the data exchange processes with the supervisory authorities is required.
  • Fast and flexible implementation of supervisory requirements by business units and IT necessitates a modular and flexible architecture and appropriate project management methods.

The interesting aspect about the above capabilities is that they span multiple businesses, products and functional domains. Yet for the most part they do not fall into the traditional remit of typical IT organisations.

The current state of technology today is capable of delivering these requirements from a purely technical perspective: these are challenging problems, but for the most part they have already been solved, or are being solved, in other industries or sectors – sometimes in a larger scale even than banks have to deal with. However, finding talent is, and remains, an issue.

The big challenge, rather, is in 'business-technology': that amorphous space that is not quite business, but not quite (traditional) IT either. This is the capability that banks need to develop: the ability to interpret what outcomes a business requires, and map that not only to projects, but also to capabilities – both business capabilities and IT capabilities.

So, what core capabilities are being called out by the BIS? Here’s a rough initial pass (by no means complete, but hopefully indicative):

  • Data Governance: increased focus on data ontologies, semantic modelling, linked/open data (RDF), machine learning, self-describing systems, integration.
  • Analytics & Reporting: application of Big Data techniques for scaling timely analysis of large data sets, not only for reporting but also as part of feedback loops into automated processes. A data-science approach to analytics.
  • Processes & Databases: use of meta-data in exposing capabilities that can be orchestrated by many business-aligned IT teams to support specific end-to-end business processes. Databases only exposed via prescribed services; model-driven product development; business architecture.
  • Automation of data exchange: automation of all report generation, approval, publishing and distribution (i.e., throwing people at the problem won't fix this).
  • Fast and flexible implementation: adoption of modular-friendly practices such as portfolio planning, domain-driven design, enterprise architecture, agile project management, and microservice (distributed, cloud-ready, reusable, modular) architectures.
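The 'databases only exposed via prescribed services' idea above can be sketched as a thin service layer that accepts only a fixed, reviewed set of named queries – consumers never see the underlying tables or write ad-hoc SQL (table, query names and data are all illustrative):

```python
import sqlite3

# An in-memory stand-in for a domain database (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (book TEXT, value REAL)")
conn.executemany("INSERT INTO positions VALUES (?, ?)",
                 [("rates", 120.0), ("fx", 80.0)])

# The prescribed service layer: a fixed, reviewed catalogue of queries.
PRESCRIBED_QUERIES = {
    "total_exposure": "SELECT SUM(value) FROM positions",
    "books": "SELECT book FROM positions ORDER BY book",
}

def query(name: str):
    """Run a named, pre-approved query; reject anything else at the boundary."""
    if name not in PRESCRIBED_QUERIES:
        raise PermissionError(f"query '{name}' is not prescribed")
    return conn.execute(PRESCRIBED_QUERIES[name]).fetchall()

print(query("total_exposure"))  # [(200.0,)]
```

Because every access path is named and catalogued, data governance and lineage questions ("who reads this, and how?") become answerable by inspection rather than forensics.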

It should be obvious from this list that it will not be possible or feasible to outsource these capabilities. Individual capabilities are not developed in isolation: they complement and support each other. Therefore they need to be developed and maintained in-house – although vendors will certainly have a role in successfully delivering them. And these skills are quite different from the skills existing business & IT folks have (although some are evolutionary).

Nobody can accurately predict what systems need to be built to meet the demands of any business in the next 6 months, let alone 3 years from now. But the capabilities that separate the winners from the losers in a given sector are easier to identify. Banks in particular are under massive pressure: regulatory demands, major shifts in market dynamics, competition from smaller, more nimble alternative financial service providers, and rapidly diminishing technology infrastructure costs levelling the playing field for new contenders.

Regulators have, in fact, given banks a lifeline: those that heed the regulators and take appropriate action will actually be in a strong position to deal competitively with significant structural change to the financial services industry over the next 10+ years.

The changes (client-centricity, digital transformation, regulatory compliance) that all knowledge-based industries (especially finance) will go through will depend heavily on all of the above capabilities. So this is an opportunity for financial institutions to get 3 for the price of 1 in terms of strategic business-IT investment.

The true scope of enterprise architecture

This article addresses what is, or should be, the true scope of enterprise architecture.

To date, most enterprise architecture efforts have been focused on supporting IT’s role as the ‘owner’ of all applications and infrastructure used by an organisation to deliver its core value proposition. (Note: in this article, ‘IT’ refers to the IT organisation within a firm, rather than ‘technology’ as a general theme.)

The view of the ‘enterprise’ was therefore heavily influenced by IT’s view of the enterprise.

There has been an increasing awareness in recent times that enterprise architects should look beyond IT and its role in the enterprise – notably led by Tom Graves and his view on concepts such as the business model canvas.

But while in principle this perspective is correct, in practice it is hard for most organisations to swallow. So most enterprise architects are still focused on IT-centric activities – which most corporate folks are comfortable for them to do, and for which there is still a considerable need.

But the world is changing, and this IT-centric view of the enterprise is becoming less useful, for two principal (ironically) technology-led reasons:

  • The rise of Everything-as-a-Service, and
  • Decentralised applications

Let’s cover the above two major technology trends in turn.

The rise of Everything-as-a-Service

This is a reflection that more and more technology and technology-enabled capability is being delivered as a service – i.e., on-tap, pay-as-you-go and there when you need it.

This starts with basic infrastructure (so-called IaaS), rises up to Platform-as-a-Service (PaaS), then Software-as-a-Service (SaaS), and finally Business-Process-as-a-Service (BPaaS) and its ilk.

The implications of this are far-reaching, as this research from Horses for Sources indicates.

This trend puts the provision and management of IT capability squarely outside the 'traditional' IT organisation. While IT as an organisation must still exist, its role will focus increasingly on the 'digital' agenda: client-centricity (a bespoke, optimised client experience), data as an asset (and related governance/compliance issues), and managing complexity (of service utilisation, service integration, and legacy application evolution/retirement).

Of particular note is that many firms may be willing to write down their legacy IT investments in the face of newer, more innovative, agile and cheaper solutions available via the 'as-a-Service' route.

Decentralised Applications

Decentralised applications are applications built on so-called 'blockchain' technology – i.e., applications where transactions between two or more mutually distrusting and legally distinct parties can be reliably and irrefutably recorded without the need for an intermediary third party.

Bitcoin is the first and most visible decentralised application: it acts as a distributed ledger. ‘bitcoin’ the currency is built upon the Bitcoin protocol, and its focus is on being a reliable store of value. But there are many other protocols and technologies (such as Ethereum, Ripple, MasterCoin, Namecoin, etc).

Decentralised applications will enable many new and innovative distributed processes and services. Where previously many processes were retained in-house for maximum control, processes can in future be performed by one or more external parties, with decentralised application technology ensuring all parties are meeting their obligations within the overall process.
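The mechanism that makes such inter-party records irrefutable can be sketched as a toy hash-chained ledger – each entry commits to the hash of the previous one, so retroactive tampering is detectable by any party. (Real blockchain platforms add consensus, digital signatures and distribution on top of this idea; the payloads below are invented.)

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Deterministic hash of an entry's content (sorted keys for stability).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, payload: dict) -> None:
    # Each entry commits to the previous entry's hash, forming a chain.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"payload": payload, "prev": prev}
    entry["hash"] = entry_hash({"payload": payload, "prev": prev})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    # Recompute every link: any edited payload or broken link fails.
    prev = "0" * 64
    for e in ledger:
        expected = entry_hash({"payload": e["payload"], "prev": e["prev"]})
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"from": "A", "to": "B", "amount": 10})
append(ledger, {"from": "B", "to": "C", "amount": 4})
print(verify(ledger))   # True
ledger[0]["payload"]["amount"] = 999  # retroactive tampering...
print(verify(ledger))   # False: ...is detectable by all parties
```

It is this tamper-evidence, shared among mutually distrusting parties, that lets processes move outside the firm while each party can still prove the others are meeting their obligations.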

Summary

In the short term, many EAs will be focused on helping optimise change within their respective organisations, and will remain accountable to IT while supporting a particular business area.

As IT's role changes to accommodate the new reality of 'everything-as-a-service' and decentralised applications, EAs should be given a broader remit to look beyond traditional IT-centric solutions.

In the meantime, businesses are investing heavily in innovation accelerators and incubators – most of which are dominated by ‘as-a-service’ solutions.

It will take some time for the ‘everything-as-a-service’ model to play itself out, as there are many issues concerning privacy, security, performance, etc that will need to be addressed in practice and in law. But the direction is clear, and firms that start to take a service-oriented view of the capabilities they need will have a competitive advantage over those that do not.
