The Learning CTO’s Strategic Themes (2015) Revisited

Way back in late 2014, I published what I considered at the time to be key themes that would dominate the technology landscape in 2015. Four years on, I’m revisiting these themes. Strategic themes for 2019 will be covered in a separate blog post.

2015 Strategic Themes Recap

The Lean Enterprise: The cultural and process transformations necessary to innovate and maintain agility in large enterprises – in technology, financial management, and risk, governance and compliance.
Enterprise Modularity: The state-of-the-art technologies and techniques which enable agile, cost-effective enterprise-scale integration and reuse of capabilities and services. Aka SOA Mark II.
Continuous Delivery: The state-of-the-art technologies and techniques which bring together agile software delivery with operational considerations. Aka DevOps Mark I.
Systems Theory & Systems Thinking: The ability to look at the whole as well as the parts of any dynamic system, and to understand the consequences/impacts of decisions on the whole or on those parts.
Machine Learning: Using business intelligence and advanced data semantics to dynamically improve automated or automatable processes.

The Lean Enterprise

Over the past few years, the most obvious themes related to the ‘lean enterprise’ have been the focus on ‘digital transformation’ and, simultaneously, scepticism of the ‘agile-industrial’ machine (see Fowler (2018)).

Firms have finally realized that digital transformation extends well beyond having a mobile application and a big-data analytics solution (see Mastering the Mix). However, this understanding is far from universal, and many firms still believe digital transformation means funding a number of ‘digital projects’ without fundamentally changing how the business plans, prioritises and funds both change initiatives and on-going operations.

The implications of this are nicely captured in Mark Schwartz’s ‘Digital CFO’ blog post. In essence, if the CFO function hasn’t materially changed how it works, it’s hard to see how the rest of the organization can truly be ‘digital’, irrespective of how much it is spending on ‘digital transformation’.

Other notable material on this subject is Barry O’Reilly’s “Unlearn” book – more on this when I have read it, but essentially it recommends folks (and corporations) unlearn mindsets and behaviours in order to excel in the fast-changing, technology-enabled environments of today.

Related to digital transformation is the so-called ‘agile industrial’ machine, which aims to sell agile to enterprises. Every firm wants to ‘be’ agile, but many firms end up ‘doing’ agile (often imposing ceremony on teams) – and are then surprised when overall business agility does not change. If ‘agile’ isn’t led and implemented by appropriately structured cross-functional value-stream aligned teams, then it is not going to move the dial.

The latest and best thinking on end-to-end value-stream delivery improvement and optimization is from Mik Kersten, as captured in his book Project to Product. This is a significant development in digital transformation execution, and merits further understanding – in particular, the potential implications for teams adopting cloud and other self-service-based capabilities as part of delivering value to customers.

Enterprise Modularity

These days, ‘enterprise modularity’ is dominated principally by microservices, and in particular the engineering challenges associated with delivering microservices-based architectures.

While there is no easy (cheap) solution to unwinding the complexity of non-modular ‘ball-of-mud’ legacy applications, numerous patterns and solutions exist to enable such applications to participate in an enterprise’s microservices ecosystem (the work by Chris Richardson in this space is especially notable). Indeed, if these ‘legacy’ applications can be packaged up in containers and orchestrated via automation-friendly, framework-neutral tools like Kubernetes, it would go some way towards extending the operable longevity and potential re-usability of these platforms. But for many legacy systems, the business case for such a migration is not yet obvious, especially when viewed in the context of overall end-to-end value-stream delivery (vs the horizontal view of central IT optimizing for itself).
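
As a rough illustration of what that containerised path can look like, here is a minimal sketch that registers a containerised legacy application as a Kubernetes Deployment using the official Python client; the image name, labels and replica count are illustrative assumptions rather than anything from a specific migration.

```python
# Minimal sketch: running a containerised legacy application as a Kubernetes
# Deployment via the official Python client. Image name, labels and replica
# count are hypothetical.
from kubernetes import client, config

def deploy_legacy_service():
    config.load_kube_config()  # uses local ~/.kube/config credentials

    container = client.V1Container(
        name="legacy-billing",
        image="registry.example.com/legacy-billing:1.0",
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "legacy-billing"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "legacy-billing"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="legacy-billing"),
        spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    deploy_legacy_service()
```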

It is interesting to note that (synchronous) RESTful APIs are still the dominant communication mechanism for distributed modular applications – most likely because no good decentralized alternative to enterprise-service buses has been identified. And while event-driven architectures are getting more popular, IT organizations will generally avoid having centrally managed messaging infrastructure that is not completely self-service (and self-governed), as otherwise the central bus team becomes a significant bottleneck to every other team.

But RESTful APIs are complex when used at scale – hence the need for software patterns like ‘Circuit Breaker’, Service Discovery and Observability, among others. Service mesh technologies like Istio and Envoy can be introduced to fill these gaps – they need their own management, but in principle do not impair team-level progress.
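
To make the pattern concrete, below is a minimal, application-level sketch of a circuit breaker wrapped around a synchronous REST call; the endpoint URL and thresholds are hypothetical, and in practice a service mesh such as Istio/Envoy would typically provide this behaviour at the platform level instead.

```python
# Minimal circuit-breaker sketch for a synchronous REST call.
# Endpoint URL and thresholds are illustrative assumptions.
import time
import requests

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failure_count = 0
        self.opened_at = None  # timestamp when the circuit was opened

    def call(self, func, *args, **kwargs):
        # While open, fail fast until the reset timeout has elapsed.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except requests.RequestException:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.time()
            raise
        self.failure_count = 0  # a success closes the circuit again
        return result

breaker = CircuitBreaker()

def get_customer(customer_id):
    # Hypothetical downstream REST endpoint.
    return breaker.call(
        requests.get,
        f"http://customer-service/customers/{customer_id}",
        timeout=2,
    )
```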

Asynchronous messaging-based technologies such as RabbitMQ address many of these issues, but come with their own complexities. Messaging technologies, especially those providing any level of delivery guarantee, require their own management. But, for the most part, these can be delivered as managed services used on a self-service basis.
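
For a sense of what the self-service messaging model looks like to a development team, here is a minimal RabbitMQ sketch using the pika client; the queue name, broker host and message payload are illustrative assumptions.

```python
# Minimal sketch of asynchronous messaging with RabbitMQ via the pika client.
# Queue name, broker host and payload are hypothetical.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so messages survive a broker restart (a basic delivery guarantee).
channel.queue_declare(queue="orders", durable=True)

# Producer side: publish a persistent message.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
)

# Consumer side: acknowledge only after successful processing.
def handle_order(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle_order)
channel.start_consuming()
```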

For cloud-based systems, the cloud provider will usually have messaging services available to use. Messaging systems are also available packaged as Docker containers and can be readily configured and launched on cloud infrastructure.

Related to distributed IPC is network configuration and security: with the advent of cloud, security is no longer confined to the ‘perimeter’ of the datacenter. This means network configuration, authentication/encryption and security monitoring must be implemented and configured at a per-service level. This is not scalable without a significant level of infrastructure configuration automation (i.e., infrastructure-as-code), enabled through mesh or messaging technologies.
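
A minimal sketch of per-service security configured as code is shown below, using boto3 to create a narrowly scoped AWS security group; the VPC id, port and CIDR range are hypothetical, and a real setup would more likely drive this from Terraform, CloudFormation or a CDK held in version control.

```python
# Minimal infrastructure-as-code sketch: a per-service security group created
# programmatically with boto3. VPC id, port and CIDR range are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def create_service_security_group(service_name, vpc_id, port):
    # One narrowly scoped security group per service, rather than relying on a
    # single datacenter-style perimeter.
    response = ec2.create_security_group(
        GroupName=f"{service_name}-sg",
        Description=f"Ingress rules for {service_name}",
        VpcId=vpc_id,
    )
    group_id = response["GroupId"]
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Only the internal service network may call this service.
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
        }],
    )
    return group_id

# Example usage (hypothetical identifiers):
# group_id = create_service_security_group("customer-api", "vpc-0abc1234", 8443)
```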

Finally, it is worth noting that most of the effort today is going into framework- and language-neutral solutions. Language-specific frameworks such as OSGi and Cloud Foundry (for Java) are generally assumed to be built on top of these capabilities, but are not necessary to take advantage of them. Indeed, software written in any language should be able to leverage these technologies.

So, should IT departments mandate the use of frameworks? In the end, to avoid developers needing to know too much about the specifics of networking, orchestration, packaging, deployment, etc., some level of abstraction is required. The question is who provides this abstraction – the cloud provider (e.g., serverless), an in-house SRE engineering team, 3rd-party software vendors (such as Pivotal), or application-development teams themselves?
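
As an example of the ‘cloud provider as abstraction’ option, here is a minimal serverless-style sketch: an AWS Lambda handler in which packaging, scaling and networking are the platform’s problem. The event shape assumes an API Gateway proxy integration, and the field names are illustrative.

```python
# Minimal AWS Lambda handler sketch: the platform handles packaging, scaling
# and networking. Assumes an API Gateway proxy integration; field names are
# hypothetical.
import json

def handler(event, context):
    # API Gateway passes path parameters through the event.
    customer_id = (event.get("pathParameters") or {}).get("customerId")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"customerId": customer_id, "status": "active"}),
    }
```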

Continuous Delivery

The state of DevOps has advanced a lot since 2015, as described by the research published by DORA (recently acquired by Google, in a sign that it is getting serious about listening to the needs of its cloud customers).

The findings have been compiled into a series of very solid principles and practices published in the book Accelerate, which is well worth reading.

Continuous delivery itself is still an aspiration for most firms – many are not yet even capable of doing it in principle, let alone actually doing it. But it seems evident that not having a continuous delivery capability will severely limit the extent to which a firm can succeed at digital transformation.

Whether a firm needs to digitally transform at all (and implement all the findings from the DevOps survey) depends largely on whether its business model is natively digital. The popular view these days, however, is that every industry and business model must eventually go digital to compete.

A key aspect of ‘continuous delivery’ is that a system never reaches ‘stability’ – i.e., a state in which no further changes are needed (or are needed only very rarely). That model worked in the days of packaged software, but for software delivered as a service it is impractical: change is always needed, even if it is principally for security/operational purposes rather than for features. To prevent the cost of supporting and maintaining existing software from dragging down the resources available for new software, software maintenance must be highly automated – including automated builds, testing, configuration, deployment and monitoring – of both infrastructure and applications.
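
As a small illustration of the ‘automated testing’ piece, below is a minimal post-deployment smoke test of the kind that would run on every pipeline execution rather than as a manual release step; the SERVICE_URL environment variable and /health endpoint are assumptions.

```python
# Minimal post-deployment smoke test, intended to run automatically in a
# delivery pipeline (e.g. via pytest). SERVICE_URL and the /health endpoint
# are hypothetical.
import os
import requests

SERVICE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080")

def test_health_endpoint_is_up():
    response = requests.get(f"{SERVICE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_version_is_reported():
    response = requests.get(f"{SERVICE_URL}/health", timeout=5)
    assert "version" in response.json()
```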

If devops practices are not adopted and invested in, expect a high and increasing proportion of IT costs to go towards (high-value) maintenance work and less towards new (uncertain-value) development. With devops practices, there should be a much more consistent balance over time, even as new features are continually deployed.

Systems Theory & Systems Thinking

Systems Theory & Systems Thinking is still a fairly niche topic area. Aspects of it are being addressed by the Value Stream Architecture work mentioned above, as well as by continuing work on frameworks like Cynefin, business model canvas content from Tom Graves, insightful content on architecture from Graham Berrisford, ground-breaking work from Simon Wardley on strategy maps, and some interesting capabilities and approaches from a number of EA tool vendors.

Generally, technical architects tend to focus on system behavior from the perspective of applications and infrastructure. ‘Human activity’ systems are rarely given sufficient thought by technical architects (beyond the act of actually developing the required software). In fact, much of the ‘devops’ movement is driven by human activity systems as they relate to the desired behavior of applications and infrastructure.

On the flip side, technical architects and implementation teams tend to rely on product owners to address the human activity systems relating to the use of applications. The balance between appropriate reliance/engagement of users vs technology (and supporting technologists) in the definition of application behaviour is where product owners have an opportunity to make a significant impact, and where IT implementation teams need to be demanding more from their product owners.

With respect to product roadmaps, product managers should encourage the use of techniques such as the Wardley Maps mentioned above to better position where investment should lie, and ensure IT teams are using 3rd party solutions appropriately rather than trying to engineer commodity/utility solutions in-house.

Machine Learning

Machine learning has had a massive resurgence over the past few years, driven principally by simple but popular use cases (such as recommendation engines) as well as more advanced use cases such as self-driving cars. A significant factor is the sheer volume of data available to train machine-learning algorithms.

As ever, the appropriate use of machine learning is still an area in need of development: machine learning is often seen by corporations as a means of reducing costs through eliminating the need to rely on humans, leading to fear and scepticism around the adoption of machine learning technologies.

For the most part, however, machine learning will be used to augment or support human activity in the face of ever-increasing complexity and data. For example, in the realm of devops, the explosion in the number of interacting components in applications will make supporting and operating the complex distributed systems of the future orders of magnitude harder than it is today – and we don’t do a particularly good job of managing such systems even today! Without some form of augmentation, humans would simply not be up to the task.
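
A minimal sketch of this kind of augmentation is shown below: an anomaly detector (scikit-learn’s IsolationForest) that flags unusual latency samples for a human to triage rather than acting on its own; the synthetic data stands in for real monitoring metrics.

```python
# Minimal sketch of ML augmenting operations: flag anomalous request-latency
# samples for human review. Synthetic data stands in for real metrics.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal latencies (ms) plus a handful of outliers, e.g. a degraded dependency.
normal = rng.normal(loc=120, scale=15, size=(980, 1))
outliers = rng.normal(loc=900, scale=50, size=(20, 1))
latencies = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(latencies)  # -1 marks suspected anomalies

flagged = latencies[labels == -1]
print(f"{len(flagged)} samples flagged for human review")
```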

The arrival of pay-as-you-go machine learning services such as AWS SageMaker and Rekognition heralds a new era in which machine learning capabilities are within reach of ‘average’ development teams, without necessarily requiring AI experts or PhD-level statisticians to be part of those teams.
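
To illustrate how low the barrier has become, here is a minimal sketch calling Amazon Rekognition’s label-detection API via boto3; the bucket and object names are illustrative assumptions.

```python
# Minimal sketch of a pay-as-you-go ML service: label detection with Amazon
# Rekognition via boto3. Bucket and object names are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-media-bucket", "Name": "storefront.jpg"}},
    MaxLabels=5,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```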

In reality, machine learning can only be used for mature processes for which much data is available: humans will, for the foreseeable future at least, be much better at addressing new or immature situations.

An interesting side-effect of the focus on machine learning is the increased interest in semantic data: general machine learning is impossible without learning to describe data semantics. However, most firms would benefit from this practice even without machine learning. General and deep machine learning appears to be creating an increase in interest in semantic data standards such as RDF and Linked Open Data, but hopefully this interest will trickle down to more mundane but critical tasks such as system integration efforts and data lake implementations.
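
As a small example of describing data semantics explicitly, here is a minimal RDF sketch using rdflib; the namespace, subject and predicates are hypothetical, but the same kind of triples could feed integration mappings or a data lake catalogue.

```python
# Minimal sketch of describing data semantics with RDF using rdflib.
# Namespace, subject and predicates are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.com/schema/")
g = Graph()

customer = URIRef("http://example.com/customer/42")
g.add((customer, RDF.type, EX.Customer))
g.add((customer, EX.name, Literal("Acme Ltd")))
g.add((customer, EX.accountStatus, Literal("active")))

# Serialize the triples as Turtle for inspection or downstream tooling.
print(g.serialize(format="turtle"))
```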
