The Learning CTO’s Strategic themes for 2015

Like most folks, I cannot predict the future. I can only relate the themes that most interest me. On that basis, here is what is occupying my mind as we go into 2015.

  • The Lean Enterprise: the cultural and process transformations necessary to innovate and maintain agility in large enterprises – in technology, financial management, and risk, governance and compliance. Includes using real-options theory to manage risk.
  • Enterprise Modularity: the state-of-the-art technologies and techniques that enable agile, cost-effective, enterprise-scale integration and reuse of capabilities and services. Or SOA Mark II.
  • Continuous Delivery: the state-of-the-art technologies and techniques that bring together agile software delivery with operational considerations. Or DevOps Mark I.
  • Systems Theory & Systems Thinking: the ability to look at the whole as well as the parts of any dynamic system, and to understand the consequences/impacts of decisions on the whole or on those parts.
  • Machine Learning: using business intelligence and advanced data semantics to dynamically improve automated or automatable processes.

[Blockchain technologies are another key strategic theme; they are covered in a separate blog, dappsinfintech.com.]

Why these particular topics?

Specifically, over the next few years, large organisations need to be able to:

  1. Out-innovate smaller firms in their field of expertise, and
  2. Embrace and leverage external innovative solutions and services that serve to enhance a firm’s core product offerings.

And firms need to be able to do this while keeping a lid on overall costs, which means managing complexity (or ‘cleaning up’ as you go along, also known as keeping technical debt under control).

It is also interesting to note that for many startups these themes are generally not an explicit concern: they tend to be ‘obvious’ and so need no additional management action. However, small firms may fail to scale successfully if they do not explicitly recognise where they are already doing these things, and ensure they maintain focus on these capabilities as they grow.

Also, for many startups, their success may be predicated on larger organisations getting on top of these themes, as otherwise the startups may struggle to get large firms to adopt their innovations on a sustainable basis.

The next few blog posts will explain these themes in a bit more detail, with relevant references.


DevOps & the role of Architecture in Achieving Business Success

[TL;DR] Creating a positive reinforcing effect between enterprise architecture and DevOps will provide firms with a future digital competitive advantage.

I finished reading an interesting book recently:

The Phoenix Project: A Novel About IT, DevOps and Helping Your Business Win (Gene Kim, Kevin Behr, & George Spafford)

Apart from achieving the improbable outcome of turning a story about day-to-day IT into a page-turner thriller (relatively speaking…), it identified that many challenges in IT are of IT’s own making – irrespective of the underlying technology or architecture. Successfully applying lean techniques, such as Kanban, can often dramatically improve IT throughput and productivity, and should be the first course of action to address a perceived lack of resources/capacity in the IT organisation, ahead of looking at tools or architecture.

This approach is likely to be particularly effective in organisations in which IT teams seem to be constantly reacting to unplanned change-related events.

As Kanban techniques are applied, constraints are better understood, and the benefit of specific enhancements to tools and architecture should become more evident to everyone. This forms part of the feedback loop into the software design/development process.

This whole process can be supported by improving how demand into IT (and into specific teams within IT) is identified and prioritised: the feedback from DevOps continuous-improvement activities forms part of the product/architecture roadmap that is shared with business stakeholders.
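To make the Kanban point concrete, here is a minimal sketch of a board with work-in-progress (WIP) limits – the column names and limits are invented for illustration. The key behaviour is that a full column refuses new work, which is exactly what makes the constraint visible:

```python
# A minimal sketch of a Kanban board with WIP limits (illustrative only).

class KanbanBoard:
    def __init__(self, wip_limits: dict[str, int]):
        self.wip_limits = wip_limits
        self.columns: dict[str, list[str]] = {c: [] for c in wip_limits}

    def pull(self, item: str, into: str) -> bool:
        # Enforcing the limit is what surfaces the constraint: work queues
        # up in front of the bottleneck instead of hiding inside it.
        if len(self.columns[into]) >= self.wip_limits[into]:
            print(f"BLOCKED: '{into}' is at its WIP limit - inspect this constraint")
            return False
        self.columns[into].append(item)
        return True

    def complete(self, item: str, column: str) -> None:
        self.columns[column].remove(item)

board = KanbanBoard({"dev": 2, "test": 1, "deploy": 1})
board.pull("change-1", "dev")
board.pull("change-2", "dev")
board.pull("change-3", "dev")   # blocked: 'dev' is where to look for improvements
```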

The application of these techniques dovetails very nicely with advances in how technology can be commissioned today: specifically, infrastructure-as-a-service (virtualised servers); elastic, anti-fragile cloud environments; specialised DevOps tools such as Puppet, Ansible and Chef; lightweight containers such as Docker and cluster managers such as Mesos; microservices as a unit of deployment; automated testing tools; open modular architecture standards such as OSGi; orchestration technologies such as Resource Oriented Computing; and sophisticated monitoring tools that use big data and machine learning techniques, such as Splunk.

This means there are many paths to improving DevOps productivity. This presents its own technical challenges, but is mostly down to finding the right talent and deploying them productively.

An equal challenge relates to enterprise architecture and scaled agile planning and delivery techniques, which serve to ensure that the right demand gets to the right teams, and that the scope of each team’s work does not grow beyond an architecturally sustainable boundary – as well as ensuring that teams are conscious of, and directly support, the domains that touch on their boundary.

This implies more sophisticated planning, and a conscious effort to continually be prepared to refactor teams and architectures in order to maintain an architecturally agile environment over time.

The identification and definition of appropriate boundaries must start at the business level (for delivering IT value), move to the programme/project level (the agents of strategic change), and finally reach the enterprise level (for managing complexity). There is an art to identifying the right boundaries (appropriate for a given context), but the Roger Sessions approach to managing complexity (as described in this book) is a sensible one.

The establishment of boundaries (domains) allows information architecture standards and processes to be readily mapped to systems and processes, and to be formalised through the flows exposed by those domains. This ultimately makes compliance with data standards much easier, as the (exposed) implementation architecture reflects the desired information architecture.

How does this help the DevOps agenda? In general, Agile and DevOps methods work best when the business context is understood, and when the scope of the teams’ work is understood in the context of the whole business, other projects/programmes, and the organisation as a whole. This allows a team to innovate freely within that context, and to know where to go if there is any doubt.

DevOps is principally about people collaborating, and (enterprise) architecture is in the end the same. Success in DevOps can only scale to provide firm-wide benefits with a practical approach to enterprise architecture that is sensitive to people’s needs, and which acts as a positive reinforcement of DevOps activities.

In the end, complexity management through enterprise architecture and change agility through DevOps go hand-in-hand. However, most folks haven’t figured out exactly how this works in practice just yet. For now, these disciplines remain unconnected islands of activity, but firms that connect them successfully stand to benefit enormously.


How ‘uncertainty’ can be used to create a strategic advantage

[TL;DR This post outlines a strategy for dealing with uncertainty in enterprise architecture planning, with specific reference to regulatory change in financial services.]

One of the biggest challenges for anyone involved in technology is balancing the need to address the immediate requirement against the need to prepare for future change, at the risk of over-engineering a solution.

The wrong balance over time results in complex, expensive legacy technology that ends up being an inhibitor to change, rather than an enabler.

It is not unreasonable to expect that, over time, and with appropriate investment, any firm should have significant IT capability that can be brought to bear on a multitude of challenges or opportunities – even those not thought of at the time.

Unfortunately, most legacy systems are so optimised to solve the specific problem they were commissioned to solve, they often cannot be easily or cheaply adapted for new scenarios or problem domains.

In other words, as more functionality is added to a system, the ability to change it diminishes rapidly:

[Figure: Agility vs Functionality]

The net result is that the technology platform already in place is optimised to cope with existing business models and practices, but generally incapable of (cost-effectively) adapting to new business models or practices.

Addressing this needs some forward thinking: specifically, what capabilities need to be developed to support where the business needs to be, given the large number of known unknowns? (Accepting that everybody is in the same boat when it comes to unknown unknowns.)

These capabilities are generally determined by external factors – trends in the specific sector, technology, society, economics, etc, coupled with internal forward-looking strategies.

An excellent example of where a lack of focus on capabilities has caused structural challenges is the financial industry. A recent conference at the Bank for International Settlements (BIS) highlighted the following capability gaps in how banks do their IT – at least as it relates to regulators’ expectations:

  • Data governance and data architecture need to be optimised in order to enhance the quality, accuracy and integrity of data.
  • Analytical and reporting processes need to facilitate faster decision-making and direct availability of the relevant information.
  • Processes and databases for the areas of finance, control and risk need to be harmonised.
  • Increased automation of the data exchange processes with the supervisory authorities is required.
  • Fast and flexible implementation of supervisory requirements by business units and IT necessitates a modular and flexible architecture and appropriate project management methods.

The interesting aspect about the above capabilities is that they span multiple businesses, products and functional domains. Yet for the most part they do not fall into the traditional remit of typical IT organisations.

The current state of technology is capable of delivering these requirements from a purely technical perspective: these are challenging problems, but for the most part they have already been solved, or are being solved, in other industries or sectors – sometimes at an even larger scale than banks have to deal with. However, finding talent is, and remains, an issue.

The big challenge, rather, is in ‘business-technology’: that amorphous space that is not quite business but not quite (traditional) IT either. This is the capability that banks need to develop: the ability to interpret what outcomes a business requires, and map that not only to projects, but also to capabilities – both business capabilities and IT capabilities.

So, what core capabilities are being called out by the BIS? Here’s a rough initial pass (by no means complete, but hopefully indicative):

  • Data Governance: increased focus on data ontologies, semantic modelling, linked/open data (RDF), machine learning, self-describing systems, and integration.
  • Analytics & Reporting: application of big-data techniques for scaling timely analysis of large data sets, not only for reporting but also as part of feedback loops into automated processes; a data-science approach to analytics.
  • Processes & Databases: use of meta-data in exposing capabilities that can be orchestrated by many business-aligned IT teams to support specific end-to-end business processes; databases exposed only via prescribed services; model-driven product development; business architecture.
  • Automation of data exchange: automation of all report generation, approval, publishing and distribution (i.e., throwing people at the problem won’t fix this).
  • Fast and flexible implementation: adoption of modular-friendly practices such as portfolio planning, domain-driven design, enterprise architecture, agile project management, and microservice (distributed, cloud-ready, reusable, modular) architectures.
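To make the first item a little more concrete, here is a minimal sketch of semantic modelling using Python’s rdflib library. The vocabulary (an example.org ‘finance’ namespace with a Trade class) is invented for illustration and is not any particular regulatory standard – the point is that data described this way is self-describing, and queryable without knowledge of the originating system’s schema:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

g = Graph()
EX = Namespace("http://example.org/finance#")  # hypothetical ontology
g.bind("ex", EX)

# Describe a trade as linked data: subject-predicate-object triples.
g.add((EX.trade123, RDF.type, EX.Trade))
g.add((EX.trade123, EX.notional, Literal(1_000_000)))
g.add((EX.trade123, EX.counterparty, EX.bankA))

# Generic tooling can now find all trades without a bespoke schema.
for s, _, _ in g.triples((None, RDF.type, EX.Trade)):
    print(f"{s} is a Trade")

print(g.serialize(format="turtle"))  # interoperable, standards-based output
```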

It should be obvious looking at this list that it will not be possible or feasible to outsource these capabilities. Individual capabilities are not developed in isolation: they complement and support each other. Therefore they need to be developed and maintained in-house – although vendors will certainly have a role in successfully delivering them. And these skills are quite different from the skills existing business and IT folks have (although some are evolutionary).

Nobody can accurately predict what systems need to be built to meet the demands of any business in the next 6 months, let alone 3 years from now. But the capabilities that separate the winners from the losers in a given sector are easier to identify. Banks in particular are under massive pressure: regulatory demands, major shifts in market dynamics, competition from smaller, more nimble alternative financial service providers, and rapidly diminishing technology infrastructure costs that are levelling the playing field for new contenders.

Regulators have, in fact, given banks a lifeline: those that heed the regulators and take appropriate action will actually be in a strong position to deal competitively with significant structural change to the financial services industry over the next 10+ years.

The changes (client-centricity, digital transformation, regulatory compliance) that all knowledge-based industries (especially finance) will go through will depend heavily on all of the above capabilities. So this is an opportunity for financial institutions to get 3 for the price of 1 in terms of strategic business-IT investment.


The true scope of enterprise architecture

This article addresses what is, or should be, the true scope of enterprise architecture.

To date, most enterprise architecture efforts have been focused on supporting IT’s role as the ‘owner’ of all applications and infrastructure used by an organisation to deliver its core value proposition. (Note: in this article, ‘IT’ refers to the IT organisation within a firm, rather than ‘technology’ as a general theme.)

The view of the ‘enterprise’ was therefore heavily influenced by IT’s view of the enterprise.

There has been an increasing awareness in recent times that enterprise architects should look beyond IT and its role in the enterprise – notably led by Tom Graves and his view on concepts such as the business model canvas.

But while in principle this perspective is correct, in practice it is hard for most organisations to swallow. So most enterprise architects are still focused on IT-centric activities, which most corporate folks are comfortable for them to do – and, indeed, for which there is still a considerable need.

But the world is changing, and this IT-centric view of the enterprise is becoming less useful, for two principal (and, ironically, technology-led) reasons:

  • The rise of Everything-as-a-Service, and
  • Decentralised applications

Let’s cover the above two major technology trends in turn.

The rise of Everything-as-a-Service

This reflects the fact that more and more technology and technology-enabled capability is being delivered as a service – i.e., on-tap, pay-as-you-go, and there when you need it.

This starts with basic infrastructure (so-called IaaS), rises up to Platform-as-a-Service (PaaS), then to Software-as-a-Service (SaaS), and finally to Business-Process-as-a-Service (BPaaS) and its ilk.

The implications of this are far-reaching, as this research from Horses for Sources indicates.

This trend puts the provision and management of IT capability squarely outside of the ‘traditional’ IT organisation. While IT as an organisation must still exist, its role will focus increasingly on the ‘digital’ agenda, namely: client-centricity (a bespoke, optimised client experience), data as an asset (and related governance/compliance issues), and managing complexity (of service utilisation, service integration, and legacy application evolution/retirement).

Of particular note is that many firms may be willing to write down their legacy IT investments in the face of newer, more innovative, agile and cheaper solutions available via the ‘as-a-Service’ route.

Decentralised Applications

Decentralised applications are applications built on so-called ‘blockchain’ technology – i.e., applications where transactions between two or more mutually distrusting and legally distinct parties can be reliably and irrefutably recorded without the need for an intermediary third party.

Bitcoin is the first and most visible decentralised application: it acts as a distributed ledger. ‘bitcoin’ the currency is built upon the Bitcoin protocol, and its focus is on being a reliable store of value. But there are many other protocols and technologies (such as Ethereum, Ripple, MasterCoin, Namecoin, etc).

Decentralised applications will enable many new and innovative distributed processes and services. Where previously many processes were retained in-house for maximum control, processes can in future be performed by one or more external parties, with decentralised application technology ensuring all parties are meeting their obligations within the overall process.
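The core recording mechanism is simple enough to sketch. Below is a toy hash-chained ledger in Python – deliberately minimal, and omitting everything that makes real blockchains hard (consensus, signatures, distribution) – showing why recorded history is tamper-evident:

```python
import hashlib
import json

# Each block embeds the hash of the previous block, so altering any past
# transaction invalidates every subsequent link in the chain.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], transaction: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def verify(chain: list[dict]) -> bool:
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list[dict] = []
append_block(ledger, "A pays B 10")
append_block(ledger, "B pays C 4")
assert verify(ledger)

ledger[0]["tx"] = "A pays B 1000"  # one party attempts to rewrite history...
assert not verify(ledger)          # ...and every other party can detect it
```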

Summary

In the short term, many EAs will be focused on helping optimise change within their respective organisations, and will remain accountable to IT while supporting a particular business area.

As IT’s role changes to accommodate the new reality of ‘everything-as-a-service’ and decentralised applications, EAs should be given a broader remit to look beyond traditional IT-centric solutions.

In the meantime, businesses are investing heavily in innovation accelerators and incubators – most of which are dominated by ‘as-a-service’ solutions.

It will take some time for the ‘everything-as-a-service’ model to play itself out, as there are many issues concerning privacy, security, performance, etc that will need to be addressed in practice and in law. But the direction is clear, and firms that start to take a service-oriented view of the capabilities they need will have a competitive advantage over those that do not.


The long journey for legacy application evolution

The CTO of Sungard recently authored this article on Sungard’s technical strategy for its large and diverse portfolio of financial applications:

http://www.bankingtech.com/247292/how-to-build-a-better-sungard/

The technical strategy is shockingly simple on the surface:

  • All products are to provide hosted solutions, and
  • All new products must use HTML5 technology

This (one assumes) reflects the changing nature of how Sungard would like to support its clients – i.e., moving from bespoke shrink-wrapped applications deployed and run in client environments, to providing hosted services that are more accessible and cheaper to deploy and operate.

Executing this strategy will have, I believe, some interesting side effects, as it is quite difficult to meet those two primary objectives without addressing a number of others along the way – for example, the development of key capabilities such as:

  • Elasticity
  • DevOps
  • Mobile
  • Shared Services
  • Messaging/Integration

Elasticity

Simply put, hosted services require elasticity if cost-effectiveness is to be achieved. At the very least, the capability to quickly add server capacity to a hosted service is needed; but the ability to reallocate capacity to different services at different times – especially when multiple hosted services are offered – is key.

For legacy applications, this can be a challenge: most legacy applications are designed to work with a configuration of servers that is known ahead of time, and significant configuration and/or coding work is needed when this changes. There is also a reliance on underlying platforms (e.g., application servers) to themselves be able to move to a truly elastic environment. In the end, some product architectures may be able to do no more than host a replica of the client’s installation in the provider’s data centres, shifting the operational cost from the client to the provider. I suspect it won’t be long before such products become commercially unviable.
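The contrast is easy to show in code. Here is a minimal sketch – the registry below is a plain in-memory stand-in for a real service registry or cloud API, invented for illustration – of the difference between a topology fixed at startup and one resolved at request time:

```python
# Legacy style: the server set is fixed when the application boots;
# adding capacity means reconfiguration and a restart.
LEGACY_SERVERS = ["app-host-1", "app-host-2"]

class ServiceRegistry:
    """In-memory stand-in for a dynamic registry (DNS, cloud API, etc.)."""
    def __init__(self) -> None:
        self._instances: list[str] = []
    def register(self, host: str) -> None:
        self._instances.append(host)
    def instances(self) -> list[str]:
        return list(self._instances)

def route(request_id: int, registry: ServiceRegistry) -> str:
    # Elastic style: capacity is resolved per request, so instances can
    # come and go without any change to the caller.
    hosts = registry.instances()
    return hosts[request_id % len(hosts)]

registry = ServiceRegistry()
registry.register("app-host-1")
registry.register("app-host-2")
print(route(0, registry))          # app-host-1
registry.register("app-host-3")    # scale out: immediately routable
print(route(2, registry))          # app-host-3
```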

DevOps

DevOps is a must-have in a hosted environment – i.e., effective, efficient processes that support robust change from programmer check-in to operational deployment. All SaaS-based firms understand the value of this; and while it has been talked about a lot in ‘traditional’ organisations, the cultural and organisational changes needed to make it happen often lag.

For traditional software vendors in particular, this is a new skill: why worry about DevOps when all you need to do is publish a new version of your software for your client to configure, test and deploy – probably no more than quarterly, and often less frequently? So a whole new set of capabilities needs to be built up to support a strategy like this.

Mobile

HTML5 is ready-made for mobile. By adopting HTML5 as a standard, application technology stacks will be mobile-ready, across the range of devices, enabling the creation of many products optimised for specific clients or client user groups.

In general, it should not be necessary to develop applications specifically for a particular mobile platform: the kinds of functionality that need to be provided will (in the case of financial services) likely be operational in nature – i.e., workflows or dashboards, etc.

Applications specifically designed to support trading activities may need to be custom-built, but even these should be easier to build on a technology stack that supports HTML5 than on more traditional 2-tier fat-client-and-database architectures.

Traditional 3-tier architectures are obviously easier to migrate to HTML5, but in a large portfolio of (legacy) applications, these may be in a relative minority.

Shared Services

In order to build many more innovative products, new products need to build upon the capabilities of older or existing products – but without creating ‘drag’ (i.e., software complexity) that can slow down and eventually kill the ability to innovate.

The way to do this is with shared services, which all teams recognise and evolve, and on top of which they build their products. Shared services demand better data governance and architectural oversight, to ensure that the most appropriate services are identified, and that good APIs and reusable software are designed. Shared services pave the way for applications to consolidate functionality at a portfolio level.

Messaging/Integration

On the face of it, the Sungard strategy doesn’t say anything about messaging or integration: after all, HTML5 is a client GUI technology, and hosted services are (in principle) primarily about infrastructure.

And one could certainly meet those stated objectives without addressing messaging/integration. However, once you start to have hosted services, you can start to interact with clients from a business development perspective on a ‘whole capability’ basis – i.e., offer solutions based on the range of hosted services offered. This will require a degree of interoperability between services predicated primarily on message passing of some sort.

On top of that, with hosted services, connection points into client environments need to be more formally managed: it is untenable (from a cost/efficiency perspective) to have custom interface points for every client, so some standards will be needed.

Messaging/integration is closely related to shared services (both are concerned with APIs, message content, etc.), but is more loosely coupled: the client should not (to the fullest extent possible) know or care about the technical implementation of the hosted services, and should instead focus on message characteristics and behaviour as they relate to the client’s internal platforms.
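One way to picture this decoupling is as an explicit, versioned message contract. Below is a minimal sketch in Python – the message type and its fields are invented for illustration – where the client depends only on the content and version of the message, never on the hosted service’s internals:

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class SettlementInstructionV1:
    """A hypothetical versioned message contract between client and host."""
    message_type: str
    version: int
    trade_id: str
    amount: str      # decimal-as-string: avoids float rounding across platforms
    currency: str

def encode(msg: SettlementInstructionV1) -> bytes:
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> SettlementInstructionV1:
    payload = json.loads(raw)
    # The client validates the contract version, not the implementation.
    assert payload["version"] == 1, "client only understands v1 of this contract"
    return SettlementInstructionV1(**payload)

msg = SettlementInstructionV1("settlement.instruction", 1, "T-1001", "250000.00", "EUR")
assert decode(encode(msg)) == msg  # round-trips on content alone
```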

Conclusion

The two objectives set by Sungard are a reflection of what is both meaningful and achievable for a firm with a portfolio of its size and complexity. Products that choose to adopt the directives merely to the letter will eventually find they have no strategic future.

Product owners, on the other hand, who understand where the CTO is really trying to get to will thrive and survive in an environment where legacy financial applications – feature-rich as they are – are at risk of being displaced by new internet-based firms able to leverage the best of the latest technologies and practices to scale and take market share very quickly from traditional vendors.

Portfolio owners on the buy-side (i.e., financial service firms) should take note. A similarly simple, clearly stated strategy may be all that is needed to get the ball rolling.


The Reactive Development Movement

A new movement, called ‘Reactive Development’, is taking shape:

http://readwrite.com/2014/09/19/reactive-programming-jonas-boner-typesafe

The rationale behind creating a new ‘movement’ – with its own manifesto, principles and related tools, vendors and evangelists – seems quite reasonable. It leans heavily on the principles of Agile, but extends them into the domain of architecture.

The core principles of reactive development are around building systems that are:

  • Responsive
  • Resilient
  • Elastic
  • Message Driven

This list may grow over time, but it is a solid starting point.
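As a tiny illustration of the ‘message driven’ and ‘resilient’ principles, here is a sketch using Python’s asyncio. The bounded queue, failure rate and restart policy are all invented for illustration – components communicate only via messages, and a supervisor contains failure rather than letting it cascade:

```python
import asyncio
import random

async def producer(queue: asyncio.Queue) -> None:
    for i in range(10):
        await queue.put(f"event-{i}")  # bounded queue = back-pressure
    await queue.put(None)              # sentinel: no more work

async def worker(queue: asyncio.Queue) -> None:
    while True:
        msg = await queue.get()
        if msg is None:
            return
        if random.random() < 0.2:      # simulate a transient failure
            raise RuntimeError(f"failed on {msg}")
        print("processed", msg)

async def supervisor(queue: asyncio.Queue) -> None:
    # Resilience: contain the failure and restart the worker.
    # (The failed message is dropped here - at-most-once delivery.)
    while True:
        try:
            await worker(queue)
            return
        except RuntimeError as err:
            print("worker crashed:", err, "- restarting")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=5)
    await asyncio.gather(producer(queue), supervisor(queue))

asyncio.run(main())
```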

What this movement seems to have (correctly) realised is that building such systems isn’t easy and doesn’t come for free, so most folks tend to ignore these principles *unless* they are specific requirements of the application to be built. Usually, however, the above characteristics are what you wish your system had when you watch your original ‘coherent’ architecture slowly fall apart as you continually accommodate other systems that have a dependency on it (resulting in ‘stovepipe’ architectures, etc.).

IMO, this is the right time for the Reactive Movement. Most mature organisations are already feeling the pain of maintaining and evolving their stovepipe architectures, and many do not have a coherent internal technical strategy to fix this situation (or, if they do, it is often not matched with a corresponding business strategy or enabling organisational design – a subject for another post). And the rapidly growing number of SaaS platforms out there means that these principles are a concern for even the smallest startups, participating as part of a connected set of services delivering value to many businesses.

Most efforts around modularity (or ‘service-oriented architectures’) try, in the end, to deliver on the above principles, but do not bring together the myriad skills and capabilities needed to make them successful.

So I think the Reactive Development effort has the opportunity to bring together many architecture skills – namely process/functional (business) architects, data architects, integration architects, application architects and engineers. Today, while the individual disciplines themselves are improving rapidly, the touch-points between them are still, in my view, very weak indeed – and often fall apart completely by the time work reaches the developers on the ground (especially when they are geographically removed from the architects).

I will be watching this movement with great interest.


Why modularity?

Modularity isn’t new: like a number of techniques developed in the 60’s and 70’s, modularity is seeing a renaissance with the advent of cloud computing (i.e., software-defined infrastructure).

Every developer knows the value of modularity – whether it is implemented through objects, packages, services or APIs. But only the most disciplined development teams actually maintain modular integrity over time. This is because it is not obvious initially what modules are needed: usually code evolves, and only when recurring patterns are identified is the need for modules identified. And then usually some refactoring is required, which takes time and costs money.

For project teams intent on meeting a project deadline, refactoring is often seen as a luxury, and so stovepipe development continues, and continues. And, of course, for outsourced bespoke solutions, it’s not necessarily in the interest of the service provider to take the architectural initiative.

In the ‘old-school’ way of enterprise software development, this results in an obvious scenario: a ‘rat’s nest’ of system integrations and stovepipe architectures, with a high cost of change and a correspondingly high total cost of ownership – and all the while with diminishing business value (related to the shrinking ability of businesses to adapt quickly to changing market needs).

Enter cloud computing (and social, big data, etc.). How does this affect the importance of modularity?

Cloud computing encourages horizontal scaling of technology and infrastructure. Stovepipe architectures and large numbers of point-to-point integrations do not lend themselves to scalable architectures: not every component of a large monolithic application can, or should, be scaled horizontally, and maintenance costs are high because each component has to be maintained and tested in lock-step with other changes in the application (since the components weren’t designed specifically to work together).

In addition, the demands/requirements of vertical integration encourage monolithic application development – which usually implies architectures which are not horizontally scalable and which are therefore expensive to share in adaptable, agile ways. This gets progressively worse as businesses mature and the business desire for vertical integration conflicts with the horizontal needs of business processes/services which are shared by many business units.


The only way, IMHO, to resolve this apparent conflict in needs is via modular architectures – modularity in this sense representing the modular view of the enterprise, which can then be used to define boundaries around which appropriate strategies for resolving the vertical/horizontal conflict can be applied.

Modules defined at the enterprise level which map to cloud infrastructures provide the maximum flexibility: the ability to compose modules in a multitude of ways vertically, and to scale them out horizontally as demand/utilisation increases.
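As a sketch of what this composition looks like in code – with invented module names standing in for real enterprise capabilities – vertical products can be assembled from horizontal modules that sit behind explicit interfaces, so each module can be replaced or scaled out independently:

```python
from typing import Protocol

class PricingModule(Protocol):
    def price(self, instrument: str) -> float: ...

class SettlementModule(Protocol):
    def settle(self, instrument: str, qty: int) -> str: ...

class SimplePricing:
    def price(self, instrument: str) -> float:
        return {"BOND-A": 101.5, "BOND-B": 99.2}.get(instrument, 100.0)

class SimpleSettlement:
    def settle(self, instrument: str, qty: int) -> str:
        return f"settled {qty} x {instrument}"

class TradingProduct:
    """A 'vertical' product composed from shared 'horizontal' modules."""
    def __init__(self, pricing: PricingModule, settlement: SettlementModule):
        self.pricing, self.settlement = pricing, settlement
    def execute(self, instrument: str, qty: int) -> str:
        cost = self.pricing.price(instrument) * qty
        return f"{self.settlement.settle(instrument, qty)} at {cost:.2f}"

# Composition happens at the boundary; either module can be swapped or
# scaled out (e.g., behind a load balancer) without touching the product.
product = TradingProduct(SimplePricing(), SimpleSettlement())
print(product.execute("BOND-A", 10))
```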

Current approaches to enterprise architecture create such maps (often called ‘business component models’), but to date it has been notoriously difficult to translate this into a coherent technical strategy – previous attempts such as SOA provide mixed results, especially where it matters: at enterprise scale.

There are many disciplines that must come together to solve this vertical/horizontal integration paradox, starting with enterprise architecture, but leaning heavily on data architecture, business architecture, and solution architecture – not to mention project management, business analysis, software development, testing, etc.

I don’t (yet!) have the answers to resolving this paradox, but technologies like infrastructure-as-a-service, OSGi, APIs and API management, big data, and the impact mobile has on how IT is commissioned and delivered all have a part to play in how this unfolds in the enterprise.

Enterprises that grasp the opportunity will have a huge advantage: right now, the small players have the one thing the big players don’t: agility. But the small players don’t have what the big players have: lots and lots of data. The commercial survivors will be the big players that figure out how to leverage their data while rediscovering agility and innovation. And modularity will be the way to do it.
