Culture, Collaboration & Capabilities vs People, Process & Technology

[TL;DR The term ‘people, process and technology’ is widely understood to represent the main dimensions through which organisations can differentiate themselves in a fast-changing, technology-enabled world. This article argues that the expression is easily misinterpreted, even with the best of intentions, leading to undesirable or unintended outcomes. The alternative, ‘culture, collaboration and capabilities’, is proposed.]

People, process & technology

When teams, functions or organisations are under-performing, the underlying issues can usually be narrowed down to one or more of the dimensions of people, process and technology.

Unfortunately, these terms can lead to an incorrect focus. Specifically,

  • ‘People’ can be understood to mean individuals who are under-performing or somehow disruptive to overall performance
  • ‘Process’ can be understood to mean formal business processes, leading to a focus on business process design
  • ‘Technology’ can be understood to mean engineering or legacy technology challenges which are resolvable only by replacing or updating existing technology

In many cases, this may in fact be the approach needed: fire the disruptive individual, redesign business processes using Six Sigma experts, or find another vendor selling technology that will solve all your engineering challenges.

In general, however, these approaches are neither practical nor desirable. Removing people can be fraught with challenges, and should only be used as a last resort. Firms using this as a way to solve problems will rapidly build up a culture of distrust and self-preservation.

Redesigning business processes using Six Sigma or other techniques may work well in very mature, well understood, highly automatable situations. However, in most dynamic business situations, no sooner has the process been optimised than it requires changing again. In addition, highly optimised processes may cause the so-called ‘local optimisation’ problem, where the sum of the optimised parts yields something far from an optimised whole.

Technology is generally not easy to replace: some technologies are deeply embedded in an organisation, with a large investment in the people and skills needed to support them. But technologies change faster than people can adapt, and business environments change faster still. So replacing a technology comes at a massive cost (and risk): functionally rich existing systems are swapped for relatively immature new technology, and existing people for people who may be familiar with the new technology but less so with your organisation. And what to do with the folks who have invested so much of their careers in the ‘old’ technology? (Back to the ‘people’ problem.)

A new meme

To effect change within a team, department or organisation, the focus on ‘people, process and technology’ needs to shift to ‘culture, collaboration and capabilities’. The following sections lay out the subtle difference, and how it could change the way one approaches solving certain types of performance challenge.

Culture

When we talk about ‘people’, we are really not talking about individuals, but about cultures. The culture of a team, department or organisation has a more significant impact on how people collectively perform than anything else.

Changing culture is hard: for a ‘bad’ culture caught early enough, it may be possible to simply replace the most senior person who is (consciously or not) leading the creation of the undesirable cultural characteristics. But once a culture is established, even replacing senior leadership does not guarantee it will change.

Changing culture requires the willing participation of most of the people involved. For this to happen, people need to realise that there is a problem, and they need to be open to new cultural leadership. Then, it is mainly a case of finding the right leadership to establish the new norms and carry them forward – something which can be difficult for some senior managers to do, particularly when they have an arms-length approach to management.

Typical ‘bad’ cultures in a (technology) organisation include poor practices such as a lack of testing discipline, poor collaboration with groups focused on different concerns (such as stability, infrastructure, etc), a lack of transparency into how work is done, or even a lack of collaboration among members of the same team (i.e., a ‘hero’-based approach to development).

Changing these can be notoriously difficult, especially if the firm is highly dependent on this team and what it does.

Collaboration

Processes are, ultimately, a way to formalise how different people collaborate at different times. Often, though, the collaboration happens before the process is formalised – especially in high-performance teams that are aware of their environment and open to collaboration.

Many challenges in teams, departments or organisations boil down to collaboration challenges: people not understanding how (or even whether) they should be collaborating, how often, how closely, and so on.

In most organisations, ‘cooperation’ is a necessity: there are many different functions, most of which depend on each other. So there is a minimum level of cooperation in order to get certain things done. But this cooperation does not necessarily extend to collaboration, which is cooperation based on trust and a deeper understanding of the reasons why a collaboration is important.

Collaboration ultimately serves to strengthen relationships and improve the overall performance of the team, department or organisation.

Collaborations can be captured formally using business process design notations (such as BPMN), but these often treat roles as machines, not people, and can lead to forgetting the underlying goal: people need to collaborate in order to meet the goals of the organisation. Process design often defines people’s roles so narrowly that the individuals may as well be machines – and as technology advances, this is exactly what is happening in many cases.

People will naturally resist this; defining processes in terms of collaborations will change the perspective and result in a more sustainable and productive outcome.

Capabilities

Much has been written here about ‘capabilities’, particularly when it comes to architecture. In this article, I am narrowing my definition to anything that allows an individual (or group of individuals) to perform better than they otherwise would.

From a technology perspective, particular technologies give developers capabilities they would not otherwise have. Those capabilities allow developers to deliver value to the people who need software to help them do their jobs – people who, in turn, provide capabilities to others who depend on that job being done.

When a given capability is ‘broken’ (for example, where people do not understand a particular technology well, so it limits their capabilities rather than expands them), the effects ripple up to everybody who depends, directly or indirectly, on that capability: systems become unstable, change takes a long time to implement, users of those systems become frustrated and unable to do their jobs, and the clients of those users become frustrated that people being paid to do a job are unable to do it.

In the worst case, this can bring a firm to its knees, unable to survive in an increasingly dynamic, fast-changing world where the weakest firms do not survive long.

Technology should *always* provide a capability: the capability to deliver value in the right hands. When it is no longer able to achieve that role in the eyes of the people who depend on it (or when the ‘right hands’ simply cannot be found), then it is time to move on quickly.

Conclusion

Many of today’s innovations in technology revolve around culture, collaboration and capabilities. An agile, disciplined culture, where collaboration between systems reflects collaboration between departments (and vice versa), and where technologies provide people with the capabilities they need to do their jobs, is what every firm strives for (or should be striving for).

For new startups, much of this is a given – this is, after all, how they differentiate themselves against more established players. For larger organisations that have been around for a while, the challenge has been, and continues to be, how to drive continuous improvement and change along these three axes, while remaining sensitive to the capacity of the firm to absorb disruption to the people and technologies that those firms have relied on to get them to the (presumably) successful state they are in today.

Get it wrong, and firms could rapidly lose market share and be overtaken by their upstart competitors. Get it right, and those upstart competitors will be bought out by the newly agile established players.


The hidden costs of PaaS & microservice engineering innovation

[tl;dr The leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.]

The pace of innovation in the PaaS and microservice space is increasing rapidly. This, coupled with increasing pressure on ‘traditional’ organisations to deliver more value more quickly from IT investments, is causing a flurry of interest in PaaS enabling technologies such as Cloud Foundry (favoured by the likes of IBM and Pivotal), OpenShift (favoured by RedHat), Azure (Microsoft), Heroku (SalesForce), AWS, Google Application Engine, etc.

A key characteristic of all these PaaS solutions is that they are ‘devops’-enabled – i.e., it is possible to automate both code and infrastructure deployment, paving the way for highly automated operational processes for applications built on these platforms.

For large organisations, or organisations that prefer to control their infrastructure (because of, for example, regulatory constraints), PaaS solutions that can be run in a private datacenter rather than the public cloud are preferable, as this preserves the option to deploy to external clouds later if needed/appropriate.

These PaaS environments are feature-rich and aim to provide many of the building blocks needed to build enterprise applications. Meanwhile, framework initiatives such as Spring Boot, Dropwizard and Vert.x aim to make it easier to build PaaS-based applications.

Combined, all of these promise to provide a dramatic increase in developer productivity: the marginal cost of developing, deploying and operating a complete application will drop significantly.

Due to the low capital investment required to build new applications, it becomes ever more feasible to move from a heavy-weight, planning intensive approach to IT investment to a more agile approach where a complete application can be built, iterated and validated (or not) in the time it takes to create a traditional requirements document.

However, this also has massive implications, as – left unchecked – the drift towards entropy will increase over time, and organisations could be severely challenged to effectively manage and generate value from the sheer number of applications and services that can be created on such platforms. So an eye on managing complexity should be in place from the very beginning.

Many of the above platforms aim to make it as easy as possible for developers to get going quickly: this is a laudable goal, and if more of the complexity can be pushed into the PaaS, then that can only be good. The consequence of this approach is that developers have less control over the evolution of key aspects of the PaaS, and this could cause unexpected issues as PaaS upgrades conflict with application lifecycles, etc. In essence, it could be quite difficult to isolate applications from some PaaS changes. How these frameworks help developers cope with such changes is something to closely monitor, as these platforms are not yet mature enough to have gone through a major upgrade with a significant number of deployed applications.

The relative benefit/complexity trade-off between established microservice frameworks such as OSGi and the easier-to-use solutions described above needs to be tested in practice. Specifically, OSGi’s more robust dependency model may prove more useful in enterprise environments than in environments with a ‘move fast and break things’ approach to application development, especially if OSGi-based PaaS solutions such as JBoss Fuse on OpenShift and Paremus ServiceFabric gain more popular use.

So: all well and good from the technology side. But even if the pros and cons of the different engineering approaches are evaluated and a perfect PaaS solution emerges, that doesn’t mean Microservice Nirvana can be achieved.

A recent article on the challenges of building successful microservice applications, coupled with a presentation by Lisa van Gelder at a recent Agile meetup in New York City, emphasised that even with the right enabling technologies, deploying microservices is a major challenge – but if done right, the rewards are well worth it.

Specifically, there are a number of factors that impact the success of a large scale or enterprise microservice based strategy, including but not limited to:

  • Shared ownership of services
  • Setting cross-team goals
  • Performing scrum of scrums
  • Identifying swim lanes – containing failures & eventually consistent data
  • Provision of Circuit breakers & Timeouts (anti-fragile)
  • Service discoverability & clear ownership
  • Testing against stubs; customer driven contracts
  • Running fake transactions in production
  • SLOs and priorities
  • Shared understanding of what happens when something goes wrong
  • Focus on Mean time to repair (recover) rather than mean-time-to-failure
  • Use of common interfaces: deployment, health check, logging, monitoring
  • Tracing a user’s journey through the application
  • Collecting logs
  • Providing monitoring dashboards
  • Standardising common metric names
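Some of the resilience factors above (circuit breakers, timeouts, failing fast) can be sketched in a few lines. The following is a minimal illustration only – the class name and thresholds are my own, and a production system would normally use an established resilience library rather than hand-rolling this:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive
    failures the circuit 'opens' and calls fail fast until
    `reset_timeout` seconds have elapsed, at which point a single
    trial call is allowed through ('half-open')."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

Wrapping every remote service call in such a breaker (with a timeout on the call itself) means a failing downstream service degrades quickly and visibly, rather than tying up threads across the whole application.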

Some of these can be technically provided by the chosen PaaS, but much of it comes down to best practices consistently applied within and across development teams. In fact, it is quite hard to capture these key success factors in traditional architectural views – something that needs to be considered when architecting large-scale microservice solutions.
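As an example of the ‘common interfaces’ point, a standardised health-check endpoint might look like the sketch below (the service name and payload shape are illustrative assumptions, using only the Python standard library):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Standardised health-check endpoint: if every service exposes
    the same path and payload shape, platform monitoring can treat
    all services uniformly."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "UP", "service": "orders"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging in this sketch
```

A platform can then probe any service with a plain HTTP GET on the same path; the `"orders"` name and `"UP"` payload here are assumptions for illustration, not an established standard.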

In summary, the leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.

Scaled Agile needs Slack

[tl;dr In order to effectively scale agile, organisations need to ensure that a portion of team capacity is explicitly set aside for enterprise priorities. A re-imagined enterprise architecture capability is a key factor in enabling scaled agile success.]

What’s the problem?

From an architectural perspective, Agile methodologies are heavily dependent on business- (or function-) aligned product owners, who tend to be very focused on *their* priorities – not the enterprise’s priorities (i.e., other functions or businesses that may benefit from the work the team is doing).

This results in very inward-focused development. Where dependencies on other parts of the organisation are identified, these (in the absence of formal architecture governance) tend to be minimised where possible – if necessary through duplicative development. And other teams requiring access to the team’s resources (e.g., databases, applications, etc) are served on a best-effort basis – often causing those teams to seek other solutions instead, or to work without formal support from the team they depend on.

This, of course, leads to architectural complexity, and hence to reduced agility all round.

The Solution?

If we accept the premise that, from an architectural perspective, teams are the main consideration (it is where domain and technical knowledge resides), then the question is how to get the right demand to the right teams in as scalable and agile a manner as possible.

In agile teams, the product backlog determines the work schedule. The backlog usually has a long list of items awaiting prioritisation, and part of the Agile process is to continually re-prioritise the backlog to ensure high-value tasks are done first.

Well-known management research such as The Mythical Man-Month has influenced Agile’s goal of keeping team sizes small (e.g., 5–9 people for Scrum). So when new work comes in, adding people is generally not a scalable option.

So, how to reconcile the enterprise’s needs with the Agile team’s needs?

One approach would be to ensure that every team pays an ‘enterprise’ tax – i.e., in prioritising backlog items, at least, say, 20% of work-in-progress items must be for the benefit of other teams. (Needless to say, such work should be done in such a way as to preserve product architectural integrity.)

20% may seem like a lot – especially when there is so much work to be done for immediate priorities – but it cuts both ways. If *every* team allows 20% of their backlog to be for other teams, then every team has the possibility of using capacity from other teams – in effect, increasing their capacity by much more than they could do on their own. And by doing so they are helping achieve enterprise goals, reducing overall complexity and maximising reuse – resulting in a reduction in project schedule over-runs, higher quality resulting architecture, and overall reduced cost of change.
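As a rough illustration of how such an allowance could work mechanically, the sketch below selects sprint items from a priority-ordered backlog while reserving a fixed share of capacity for items that benefit other teams (the field names and selection policy are hypothetical):

```python
def select_sprint_items(backlog, capacity, enterprise_share=0.2):
    """Pick items from a priority-ordered backlog, reserving a fixed
    share of capacity for enterprise items (work benefiting other
    teams). Each item is a dict with 'points' (effort estimate) and
    'enterprise' (bool)."""
    reserved = capacity * enterprise_share
    local_budget = capacity - reserved
    selected, local_used, ent_used = [], 0.0, 0.0
    for item in backlog:
        if item["enterprise"] and ent_used + item["points"] <= reserved:
            selected.append(item)
            ent_used += item["points"]
        elif not item["enterprise"] and local_used + item["points"] <= local_budget:
            selected.append(item)
            local_used += item["points"]
    return selected
```

In practice, an unused enterprise reserve would presumably be released back to local work during sprint planning; the sketch deliberately keeps the policy simple to show the core idea.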

Slack does not mean Under-utilisation

The concept of ‘slack’ is well described in the book ‘Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency’. In an Agile sense, we are talking about organisational slack, not team slack. Teams will, in fact, continue to be 100% utilised, as long as their backlog contains more high-value items than they can deliver. The backlog owner – e.g., the scrum master – can obviously embed local team slack into how a particular team’s backlog is managed.

Implications for Project & Financial Management

Project managers are used to getting funding to deliver a project, and then being able to bring all those resources to bear on delivering it. The problem is, that approach is neither agile nor scalable – in an enterprise environment, it leads to increasingly complex architectures, and hence to projects that are increasingly expensive, increasingly late, or of increasingly poor quality.

It is difficult for a project manager to accept that 20% of their budget may actually be supporting other (as yet unknown) projects. So perhaps the solution here is to have enterprise architecture account for the effective allocation of that spending in an agile way (i.e., based on how teams are prioritising and delivering the enterprise items on their backlogs)? An idea worth considering.

Note that the situation is a little different for planned cross-business initiatives, where product owners must actively prioritise the needs of those initiatives alongside their local needs. Such planned work does not count in the 20% enterprise allowance, but rather counts as part of how the team’s cost to the enterprise is formally funded. It may result in a temporary increase in resources on the team, but in this case discipline around ‘staff liquidity’ is required to ensure the team can still function optimally after the temporary resource boost has gone.

The challenge regarding project-oriented financial planning is that, once a project’s goals have been achieved, what’s left is the team and underlying architecture – both of which need to be managed over time. So some dissociation between transitory project goals and longer term team and architecture goals is necessary to manage complexity.

For smaller, non-strategic projects – i.e., no incoming enterprise dependencies – the technology can be maintained on a lights-on basis.

Enterprise architecture can be seen as a means to assess the relevance of a team’s work to the enterprise – i.e., managing both incoming and outgoing team dependencies. The higher the enterprise relevance of the team, the more critical it is that the team is managed well over time – i.e., team structure changes must be carefully managed, and not left entirely to the discretion of individual managers.

Conclusion

By ensuring that every project that purports to be Agile has a mandatory allowance for enterprise resource requirements, teams can have confidence that there is a route for them to get their dependencies addressed through other agile teams, in a manner that is independent of annual budget planning processes or short-term individual business priorities.

The effectiveness of this approach can be governed and evaluated by Enterprise Architecture, which would then allow enterprise complexity goals to be addressed without concentrating such spending within the central EA function.

In summary, to effectively scale agile, an effective (and possibly rethought) enterprise architecture capability is needed.


Making good architectural moves

[tl;dr Every change is an opportunity to make the ‘right’ architectural move, to improve complexity management and to maintain an acceptable overall cost of change.]

Accompanying every new project, business requirement or product feature is an implicit or explicit ‘architectural move’ – i.e., a change to your overall architecture that moves it from a starting state to another (possibly interim) state.

The art of good architecture is making the ‘right’ architectural moves over time. The art of enterprise architecture is being able to effectively identify and communicate what a ‘right’ move actually looks like from an enterprise perspective, rather than leaving such decisions solely to the particular implementation team – who, it must be assumed, are best qualified to identify the right move from the perspective of the relevant domain.

The challenge is the limited inputs that enterprise architects have, namely:

  • Accumulated skill/knowledge/experience from past projects, including any architectural artefacts
  • A view of the current enterprise priorities based on the portfolio of projects underway
  • A corporate strategy and (ideally) individual business strategies, including a view of the environment the enterprise operates in (e.g., regulatory, commercial, technological, etc)

From these inputs, architects need to guide the overall architecture of the enterprise to ensure every project or deliverable results in a ‘good’ move – or at least not a ‘bad’ move.

In this situation, it is difficult if not impossible to measure the actual success of an architecture capability. This is because, in many respects, the beneficiaries of a ‘good’ enterprise architecture (at least initially) are the next deliverables (projects/requirements/features), and only rarely the current deliverables.

Since the next projects to be done are generally unknown (i.e., the business situation may change between the time the current projects complete and the time the next projects start), it is rational for people to focus exclusively on delivering the current projects. This makes it all the more important that the current projects are in some way delivering the ‘right’ architectural moves.

In many organisations, the typical engagement with enterprise architecture comes late in the architectural development process – i.e., at a ‘toll-gate’ or formal architectural review – and the focus is often on ‘compliance’ with enterprise standards, principles and guidelines. Given that such guidelines can get quite detailed, it can be difficult for anyone building a project start architecture (PSA) to come up with an architecture that fully complies: the first priority is to develop an architecture that will work, and that is feasible to execute within the project constraints of time, budget and resources.

Only then does it make sense to formally apply architectural constraints from enterprise architecture – at least some of which may negatively impact the time, cost, resource or feasibility needs of the project – albeit to the presumed benefit of the overall landscape. Hence the need for board-level sponsorship for such reviews, as otherwise the project’s needs will almost always trump enterprise needs.

The approach espoused by an interesting new book, Chess and the Art of Enterprise Architecture, is that enterprise architects need to focus more on design and less on principles, guidelines, roadmaps, etc. Such an approach involves enterprise architects more closely in the creation and evolution of project (start) architectures, which represents the architectural basis for all the work the project does (although it does not necessarily lay out the detailed solution architecture).

This approach is also compatible with planning processes that are more agile than waterfall. In particular, it acknowledges that not every architectural ‘move’ is necessarily independently ‘usable’ by end users of an agile process. In fact, some user stories may require several architectural moves to fully implement. The question is whether the user story is itself validated enough to merit the architectural moves necessary to enable it – otherwise those moves may be premature.

The alternative, then, is to ‘prototype’ the user story so users can evaluate it – but at the cost of non-conformance with the project architecture. This is also known as ‘technical debt’, and where teams are mature and disciplined enough to pay down technical debt when needed, it is a good approach. But users (and sometimes product owners) struggle to tell the difference between an (apparently working) prototype and a production solution that is fully architecturally compliant, and it often happens that project teams move on to the next visible deliverable without removing the technical debt.

In applications where the end-user is a person or set of persons, this may be acceptable in the short term; but where the end-user could be another application (interacting via, for example, an API invoked by either a GUI or an automated process), such technical debt will likely cause serious problems if not addressed. At the very least, it will make future changes harder (unmanaged dependencies, lack of automated testing), and it may present significant scalability challenges.

So, what exactly constitutes a ‘good’ architectural move? In general, this is what the project start architecture should aim to capture. A good basic principle is that architectural commitments should be postponed for as long as possible, by taking steps to minimise the impact of changed architectural decisions (a ‘real option’ approach to architectural change management). Over the long term, this reduces the cost of change.

In addition, project start architectures may need to make explicit where architectural commitments must be made (e.g., for a specific database, PaaS or integration solution, etc) – i.e., areas where change will be expensive.
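One common way to postpone such a commitment is to hide it behind a narrow interface, so the rest of the application never depends on the specific database or vendor, and the expensive decision can be revisited later at a lower cost of change. A minimal sketch (the `OrderStore` names are illustrative, not from any particular project):

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Narrow persistence interface: the rest of the application
    depends only on this, not on any specific database."""

    @abstractmethod
    def save(self, order_id, order):
        ...

    @abstractmethod
    def load(self, order_id):
        ...

class InMemoryOrderStore(OrderStore):
    """Cheap initial implementation. Swapping in a relational or
    document store later only means adding another OrderStore
    subclass; callers are untouched."""

    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def load(self, order_id):
        return self._orders[order_id]
```

The architectural commitment (which database, which vendor) is thereby deferred until it genuinely must be made, while development proceeds against the interface.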

Other things the project start architecture may wish to capture or at least address (as part of enterprise design) could include:

  • Cataloging data semantics and usage
    • to support data governance and big data initiatives
  • Management of business context & scope (business area, product, entity, processes, etc)
    • to minimize unnecessary redundancy and process duplication
  • Controlled exposure of data and behaviour to other domains
    • to better manage dependencies and relationships with other domains
  • Compliance with enterprise policies around security and data management
    • to address operational risk
  • Automated build, test & deploy processes
    • to ensure continued agility and responsiveness to change, and maximise operational stability
  • Minimal lock-in to specific solution architectures (minimise solution architecture impact on other domains)
    • to minimize vendor lock-in and maximize solution options

The Chess book mentioned above includes a good description of a PSA, in the context of a PRINCE2 project framework. Note that the approach also works for Agile, but the PSA should set the boundaries within which the agile team can operate: if those boundaries must be crossed to deliver a user story, then enterprise design architects should be brought back into the discussion to establish the best way forward.

In summary, every change is an opportunity to make the ‘right’ architectural move to improve complexity management and to maintain an acceptable overall cost of change.


Achieving modularity: functional vs volatility decomposition

Enterprise architecture is all about managing complexity. Many EA initiatives tend to focus on managing IT complexity, but there is only so much that can be done there before it becomes obvious that IT complexity is, for the most part, a direct consequence of enterprise complexity. To recap, complexity needs to be managed in order to maintain agility – the ability for an organisation to respond (relatively) quickly and efficiently to changes in markets, regulations or innovation, and to continue to do this over time.

Enterprise complexity can be considered to be the activities performed and resources consumed by the organisation in order to deliver ‘value’, a metric usually measured through the ability to maintain (financial) income in excess of expenses over time.

Breaking down these activities and resources into appropriate partitions that allow holistic thinking and planning to occur is one of the key challenges of enterprise architecture, and there are various techniques to do this.

Top-Down Decomposition

The natural approach to decomposition is to first understand what an organisation does – i.e., what are the (business) functions that it performs. Simply put, a function is a collection of data and decision points that are closely related (e.g., ‘Payments’ is a function). Functions typically add little value in and of themselves – rather they form part of an end-to-end process that delivers value for a person or legal entity in some context. For example, a payment on its own means nothing: it is usually performed in the context of a specific exchange of value or service.

So a first course of action is to create a taxonomy (or, more accurately, an ontology) to describe the functions performed consistently across an enterprise. Then, various processes, products or services can be described as a composition of those functions.

If we accept (and this is far from accepted everywhere) that EA is focused on information systems complexity, then EA is not responsible for the complexity relating to the existence of processes, products or services. The creation or destruction of these are usually a direct consequence of business decisions. However, EA should be responsible for cataloging these, and ensuring these are incorporated into other enterprise processes (such as, for example, disaster recovery or business continuity processes). And EA should relate these to the functional taxonomy and the information systems architecture.

This can get very complex very quickly due to the sheer number of processes, products and services – including their various variations – most organisations have. So it is important to partition or decompose the complexity into manageable chunks to facilitate meaningful conversations.

Enterprise Equivalence Relations

One way to do this at the enterprise level is to group functions into partitions (aka domains) according to synergy or autonomy (as described by Roger Sessions), for all products/services supporting a particular business. This approach is based on the mathematical concept of equivalence. Because different functions in different contexts may have different equivalence relationships, functions may appear in multiple partitions. One role of EA is to assess and validate whether those functions are actually autonomous, or whether there is potential to group apparently duplicate functions into a new partition.
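Mechanically, grouping by an equivalence relation can be sketched as follows (the function names and ‘synergy’ key below are hypothetical; in practice the equivalence judgement is made by architects, not code):

```python
from collections import defaultdict

def partition_functions(functions, equivalence_key):
    """Group business functions into partitions (domains): two
    functions land in the same partition iff they share the same
    equivalence key. Any key function induces a relation that is
    reflexive, symmetric and transitive, so the result is a true
    partition of the input."""
    partitions = defaultdict(list)
    for fn in functions:
        partitions[equivalence_key(fn)].append(fn)
    return dict(partitions)
```

Because the relation is context-dependent, the same function list can be run through several context-specific keys, which is how a function ends up appearing in more than one partition.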

Once partitions are identified, it is possible to apply ‘traditional’ EA thinking to a particular partition, because that partition is of a manageable size. By ‘traditional’ EA, I mean applying Zachman, TOGAF, PEAF, or any of the myriad methodologies/frameworks that are out there. More specifically, at that level, it is possible to establish a meaningful information systems strategy or goal for a particular partition that is directly supporting business agility objectives.

The Fallacy of Functional Decomposition

Once you get down to the level of a partition, functional decomposition becomes less useful for architecting solutions. The natural tendency for architects is to build reusable components or services that realise the various functions comprising the partition. In fact, this may be the wrong thing to do: as Juval Löwy demonstrates in his excellent webinar, it may result in more complexity, not less (and hence less agility).

When it comes to software architecture, the real reason to modularise your architecture is to manage volatility or uncertainty – and to ensure that volatility in one part of the architecture does not unnecessarily negatively impact another part of the architecture over time. Doing this allows agility to be maintained, so volatile parts of the application can, in fact, change frequently, at low impact to other parts of the application.

When looking at a software architecture through this lens, a quite different set of components/modules/services may emerge than those which seem obvious under functional decomposition – the example in the webinar demonstrates this very well. A key argument Juval makes in his presentation is that (to paraphrase him somewhat) functions are, in general, highly dependent on the context in which they are used, so splitting them out into separate services may require making often impossible assumptions about all the contexts in which those functions could be invoked.

In this sense, identified components, modules or services can be considered to be providing options in terms of what is done, or how it is done, within the context of a larger system with parts of variable volatility. (See my earlier post on real options in the context of agility to understand more about options in this context.)
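
A minimal sketch of volatility-based modularisation: callers depend on a stable contract, so a volatile part can change (or be swapped) without rippling through the rest of the system. The `PricingEngine` example is invented here for illustration, not taken from the webinar:

```python
from abc import ABC, abstractmethod

class PricingEngine(ABC):
    """Stable contract: callers depend only on this abstraction.
    Pricing is assumed (for this sketch) to be a volatile part of the system."""
    @abstractmethod
    def price(self, order: dict) -> float: ...

class SimplePricing(PricingEngine):
    def price(self, order: dict) -> float:
        return order["qty"] * order["unit_price"]

class PromoPricing(PricingEngine):
    """A later, volatile change: swapped in without touching any caller."""
    def price(self, order: dict) -> float:
        base = order["qty"] * order["unit_price"]
        return base * 0.9 if order["qty"] >= 10 else base

def checkout(order: dict, engine: PricingEngine) -> float:
    # The 'option': the choice of engine is deferred to the caller,
    # so volatile pricing rules never force changes to checkout itself.
    return engine.price(order)

order = {"qty": 10, "unit_price": 2.0}
checkout(order, SimplePricing())  # 20.0
checkout(order, PromoPricing())   # 18.0
```

The module boundary here is drawn around what is likely to change, not around a business function – which is the essential difference between volatility decomposition and functional decomposition.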

Partitions as Enterprise Architecture

When each partition is considered with respect to its relationship with other partitions, there is a lot of uncertainty around how different partitions will evolve. To allow for maximum flexibility, every partition should assume each other partition is a volatile part of their architecture, and design accordingly for this. This allows each partition to evolve (reasonably) independently with minimum fixed co-ordination points, without compromising the enterprise architecture by having different partitions replicate the behaviours of partitions they depend on.

This then allows:

  • Investment to be expressed in terms of impact to one or more partitions
  • Partitions to establish their own implementation strategies
  • Agile principles to be agreed on a per partition basis
  • Architectural standards to be agreed on a per partition basis
  • Partitions to define internally reusable components relevant to that partition only
  • Partitions to expose partition behaviour to other partitions in an enterprise-consistent way
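
As an illustration of the last two points, here is a sketch in which each partition exposes a uniform, enterprise-consistent entry point and treats its peers as volatile by depending only on that contract. The partition names, payloads and the `handle` convention are all hypothetical:

```python
from typing import Protocol

class PartitionAPI(Protocol):
    """Enterprise-consistent contract every partition exposes to its peers.
    Internals (including internally reusable components) stay private."""
    def handle(self, request: dict) -> dict: ...

class PaymentsPartition:
    def handle(self, request: dict) -> dict:
        # Internal components are free to change without impacting peers
        return {"status": "accepted", "ref": f"PAY-{request['id']}"}

class SettlementPartition:
    def __init__(self, payments: PartitionAPI):
        # Depends only on the contract: the payments partition is assumed
        # volatile, so no internal behaviour of it is replicated here.
        self._payments = payments

    def handle(self, request: dict) -> dict:
        receipt = self._payments.handle({"id": request["trade_id"]})
        return {"settled": receipt["status"] == "accepted"}

settlement = SettlementPartition(PaymentsPartition())
settlement.handle({"trade_id": 42})  # {'settled': True}
```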

In generative organisation cultures, partitions do not need to be organisationally aligned. However, in other organisation cultures (pathological or bureaucratic), alignment of enterprise infrastructure functions such as IT or operations (at least) with partitions (domains) may help accelerate the architectural and cultural changes needed – especially if coupled with broader transformations around investment planning, agile adoption and enterprise architecture.

Achieving modularity: functional vs volatility decomposition

Achieving Agile at Scale

[tl;dr Scaling agile at the enterprise level will need rethinking how portfolio management and enterprise architecture are done to ensure success.]

Agility, as a concept, is gaining increasing attention within large organisations. The idea that business functions – and in particular IT – can respond quickly and iteratively to business needs is an appealing one.

The reasons why agility is getting attention are easy to spot: larger firms are becoming ever more obviously unagile – i.e., the ability of business functions to respond to business needs in a timely and sustainable manner is getting progressively worse, even as a rapidly evolving, technology-led competitive environment demands more agility.

Couple that with the heavy cost of failing to meet ever increasing regulatory compliance obligations, and ‘agile’ seems a very good idea indeed.

Agile is a great idea, but when implemented at scale (in large enterprise organisations), it can actually reduce enterprise agility, rather than increase it, unless great care is taken.

This is partly because Agile’s origins lie in developing web applications: in these scenarios, there is usually a clear customer, a clear goal (to the extent that the team exists in the first place), and relatively tight timelines that favour short or non-existent analysis/design phases. Agile is perfect for these scenarios.

Let’s call this scenario ‘local agile’. It is quite easy to imagine a situation where every team, in response to the question ‘are you doing agile?’, answers ‘Yes, we do!’. So if every team is doing ‘local agile’, does that mean your organisation is now ‘agile’?

The answer is No. Getting every team to adopt agile practices is a necessary but insufficient step towards achieving enterprise agility. In particular, two key factors need to be addressed before a firm can truly be said to be ‘agile’ at the enterprise level. These are:

  1. The process by which teams are created and funded, and
  2. Enterprise awareness

Creating & Funding (Agile) Teams

Historically, teams have usually been created as a result of projects being initiated: the project passes the investment justification criteria, the project is initiated, and a team is put in place, led by the project manager. This process has typically been owned entirely by the IT organisation, irrespective of which other organisations were stakeholders in the project.

At this point, IT’s main consideration is: will the project be delivered on time and on budget? The business sponsor’s main consideration is: will it give us what we need, when we need it? And the enterprise’s consideration (which is often ignored) is: who is accountable for ensuring that the IT implementation delivers value to the enterprise? (In this sense, the ‘enterprise’ could be either a major business line with full P&L responsibility for all activities performed in support of its business, or the whole organisation, including shared enterprise functions.)

Delivering ‘value’ is principally about ensuring that ongoing or operational processes, roles and responsibilities are adjusted to maximise the benefits of a new technology implementation – which could include organisational change, marketing, customer engagement, etc.

However, delivering ‘value’ is not always correlated to one IT implementation; value can be derived from leveraging multiple IT capabilities in concert. Given the complexity of large organisations, it is often neither desirable nor feasible to have a single IT partner be responsible for all the IT elements that collectively deliver business value.

On this basis, it is evident that how businesses plan and structure their portfolio of IT investments needs to change dramatically. In particular:

  1. The business value agenda must be outcome-focused and explicit about which IT capabilities are required to enable it, and
  2. IT investment must be focused around the capability investment lifecycle that IT is responsible for stewarding.

In particular, ‘capabilities’ (or IT products or services) have a lifecycle: this affects the investment in, and expectations around, those capabilities. And some capabilities need to be more ‘agile’ than others – some must be agile to be useful, whereas for others, stability may be the overriding priority, in which case their lack of agility must be made explicit, so that agile teams can plan around it.
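
One way to make this explicit is to record lifecycle stage and release cadence as first-class attributes of each capability, so teams can plan around declared (rather than assumed) agility. A sketch, with invented stages, names and figures:

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    EMERGING = "emerging"  # must be agile to be useful
    MATURE = "mature"      # stability is the overriding priority
    SUNSET = "sunset"      # investment winding down

@dataclass
class Capability:
    name: str
    stage: Stage
    release_cadence_days: int  # an explicit statement of (lack of) agility

# Illustrative capability catalogue
catalogue = [
    Capability("Customer Analytics", Stage.EMERGING, 14),
    Capability("Core Ledger", Stage.MATURE, 180),
    Capability("Legacy Reporting", Stage.SUNSET, 365),
]

def plannable(cap: Capability, needed_within_days: int) -> bool:
    """Can an agile team rely on a change to this capability in time?"""
    return cap.release_cadence_days <= needed_within_days

assert plannable(catalogue[0], 30)       # analytics can move quickly
assert not plannable(catalogue[1], 30)   # the ledger's stability must be planned around
```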

Enterprise Awareness

‘Locally’ agile teams are a step in the right direction – particularly if the business stakeholders all agree they are seeing the value from that agility. But often this comes at the expense of enterprise awareness. In short, agility in the strict business sense can often only deliver results by ignoring some stakeholders’ interests. So ‘locally’ agile teams may feel they must minimise their interactions with other teams – particularly if those teams are not themselves agile.

If we assume that teams have been created through a process as described in the previous section, it becomes more obvious where the team sits in relation to its obligations to other teams. Teams can then make appropriate compromises to their architecture, planning and agile SDLC to allow for those obligations.

If the team was created through ‘traditional’ planning processes, then it becomes a lot harder to figure out what ‘enterprise awareness’ is appropriate (except perhaps for IT-imposed standards or gates, which contribute only indirectly to business value).

Most public agile success stories describe very well how they achieved success up to – but not including – the point at which architecture becomes an issue. Architecture, in this sense, refers to either parts of the solution architecture which can no longer be delivered via one or two members of an agile team, or those parts of the business value chain that cannot be entirely delivered via the agile team on its own.

However, there are success stories (e.g., Spotify) that show how ‘enterprise awareness’ can be achieved without limiting agility. For many organisations, transitioning from existing organisation structures to new ‘agile-ready’ structures will be a major challenge, and far harder than simply having teams ‘adopt agile’.

Conclusions

With the increased attention on Agile, there is fortunately increased attention on scaling agile. Methodologies like Disciplined Agile Delivery (DAD) and Large-Scale Scrum (LeSS), coupled with portfolio concepts like the Scaled Agile Framework (SAFe), propose ways in which Agile can scale beyond the team and up to the enterprise level, without losing the key benefits of the agile approach.

All scaled agile methodologies call for changes in how Portfolio Management and Enterprise Architecture are typically done within an organisation, as doing these activities right is key to the success of adopting Agile at scale.


What if IT had no CapEx? A gedanken experiment.

[tl;dr Imagining IT with no CapEx opens up possibilities that may not otherwise be considered. This is a thought experiment and it may never be practical or desirable to realise, but seeking to minimise direct CapEx expenditure in portfolio planning may help the firm’s overall Agile agenda.]

The ‘no CapEx’ thought experiment is related to the ‘Lean Enterprise’ theme – i.e., how enterprises can be more agile and responsive to business change, and how this could fundamentally change organisations’ approach to IT.

CapEx (or Capital Expenditure) usually relates to large up-front costs that are depreciated over a number of years. Investments such as infrastructure and software licenses are generally treated as CapEx. OpEx, on the other hand, refers to operational expenses, incurred on a recurring basis.

The interesting thing about CapEx is that it is a commitment – i.e., there is no optionality behind it. This means if business needs change, the cost of breaking that commitment could be prohibitively high. So businesses may be forced to keep pushing square pegs through round holes in their technology, reducing the potential value of the investment.
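
A toy calculation illustrates the commitment cost. The figures are entirely invented: a three-year CapEx commitment looks cheaper if fully used, but becomes the expensive option when business needs change early – which is precisely the optionality point.

```python
# Illustrative comparison of a 3-year CapEx commitment vs pay-as-you-go OpEx.
capex_upfront = 300_000   # e.g., licences + hardware, depreciated over 3 years
opex_per_month = 10_000   # the equivalent as-a-service cost

def total_cost(months_used: int) -> tuple:
    """Return (capex, opex) cost for actually using the capability this long."""
    capex = capex_upfront                # sunk regardless of usage
    opex = opex_per_month * months_used  # stops when you stop using it
    return capex, opex

# Over the full 36-month horizon, the commitment wins: 300k vs 360k
print(total_cost(36))  # (300000, 360000)

# Needs change after 12 months: the commitment is now the expensive option
print(total_cost(12))  # (300000, 120000)
```

The break-even point (here, 30 months) is the real decision variable: the less certain you are of using the capability that long, the more the OpEx option is worth.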

Agility is all about optionality – the deferring of (expensive) commitments to the latest possible moment, so that the commercial benefits of that commitment can be productively exploited for the life of that commitment.

Increasingly, as technology rapidly advances, it is becoming less and less clear to organisations where to make commitments that do not unnecessarily limit future options.

So, as a thought experiment, how would a CapEx-less IT organisation actually work, and what would it look like?

The first observation is that somebody, somewhere, has to make a CapEx investment for IT. For example, Amazon has made a significant CapEx investment in its datacentres, and it is this investment that allows other businesses to avoid making CapEx investments of their own. Is this profitable for Amazon? It’s hard to tell: prices keep going down, and efficiency is improving all the time. In the long run, some profitable players will emerge. For now, organisations choose to use Amazon only because it is cheaper than the capital-investment alternative over the length of the investment cycle.

Software licensing as a revenue stream is under pressure: businesses may be happier to pay for services to support the software they are using rather than for some license that gives them the right to use a specific version of a software package for commercial purposes. In the days when software was distributed to consumers and deployed onto proprietary infrastructure, this made sense. But more and more generally useful software is being open-sourced, and more and more ‘proprietary’ software is being delivered as a service. So CapEx related to software licenses will (eventually) not be as profitable as it once was (at least for the major enterprise software players).

So, in a CapEx-free IT environment, the IT landscape looks like an interacting set of software services that collectively deliver business value (i.e., give the businesses what they need in a timely manner and at a manageable cost).

The obligation of the enterprise is also clear: either derive value from the service, or stop using the service. In other words, exercise your option to terminate use of the service if the value is not there.

In this environment, do organisations actually need an IT strategy in the conventional/traditional sense? Isn’t it all about sourcing solutions and ensuring the solutions providers interact appropriately to collectively provide value to the enterprise, and that the enterprise interacts with the providers in such a way as to optimise change for maximum value? In the extreme case, yes.

For this model to work, providers need to be agile. They will likely have a number of clients, all requiring different solutions (albeit in the same domain). They need to be responsive, and they need to be able to scale. It is these characteristics that will differentiate among different providers.

All of this has massive implications with respect to data management, security and compliance. A strong culture of data governance coupled with sensible security and compliance policies could (eventually) allow these hurdles to be overcome.

The question for firms, then, is where does it make sense to have an internal IT capability? And, apart from some R&D capabilities, is it worth it – especially as individual businesses could innovate with a number of providers at relatively low risk? Perhaps the model should be to treat each internal capability as a potential future third-party service provider: give the team the ability to innovate on agility, responsiveness and scalability, and retain the option to let the ‘internal’ service provider serve other businesses in future, or to shut it down if it is not providing value.

It is worth bearing in mind that the concept of the traditional ‘end user’ is changing. End users are now often quite adept at making technology do what they want, without being considered traditional developers, and providing them with innovative ways to connect and orchestrate various (enterprise) services is expected. This sharpens the difference between ‘local’ and ‘enterprise’ development: local development is reactive, using enterprise services to ensure compliance etc. while still meeting localised needs.

Such an approach would maximise enterprise agility and also provide skilled technologists and even end users the ability to flex their technical chops in ways that businesses can exploit to their fullest advantage. The cost of this agility is that other organisations can use the same ‘building blocks’ to greater competitive advantage. As has always been the case, it is not so much the technology, as what you do with it, that really matters.

To summarise: while not always possible or even desirable, minimising (direct) CapEx for IT is becoming a feasible way to achieve IT agility.
