Why IoT is changing Enterprise Architecture

[tl;dr The discipline of enterprise architecture as an aid to business strategy execution has failed for most organizations, but is finding a new lease of life in the Internet of Things.]

The strategic benefits of having an enterprise-architecture based approach to organizational change – at least in terms of business models and shared capabilities needed to support those models – have been the subject of much discussion in recent years.

However, enterprise architecture as a practice (as espoused by The Open Group and others) has never managed to break beyond its role as an IT-focused endeavor.

In the meantime, less technology-minded folks are beginning to discuss business strategy using terms like ‘modularity’, which is a first step towards bridging the gap between the business folks and the technology folks. And technology-minded folks are looking at disruptive business strategy through the lens of the ‘Internet of Things’.

Business Model Capability Decomposition

Just like manufacturing-based industries decomposed their supply-chains over the past 30+ years (driving an increasingly modular view of manufacturing), knowledge-based industries are going through a similar transformation.

Fundamentally, knowledge-based industries are built on the transfer and management of human knowledge or understanding. For example, when you pay for something, there is an understanding on both sides that the payment has happened. Technology allows such information to be captured and managed at scale.

But the ‘units’ of knowledge have only slowly been standardized, and knowledge-based organizations are incentivized to ensure they are the only ones able to act on the information they have gathered – with often disastrous social and economic consequences (e.g., the financial crisis of 2008).

Hence, regulators are stepping in to ensure that at least some of this ‘knowledge’ is available in a form that allows governments to ensure such situations do not arise again.

In the FinTech world, every service provided by big banks is being attacked by nimble competitors able to take advantage of new, more meaningful technology-enabled means of engaging with customers, and who are willing to make at least some of this information more accessible so that they can participate in a more flexible, dynamic ecosystem.

These upstart FinTech firms often face a stark choice in order to succeed. Assuming they have cleared the first hurdle of actually having a product people want, at some point they must decide whether they are competing directly with the big banks, or whether they are providing a key part of the electronic financial knowledge ecosystem that big banks must (eventually) be part of.

In the end, what matters is their approach to data: how they capture it (i.e., ‘UX’), what they do with it, how they manage it, and how it is leveraged for competitive and commercial advantage (without falling foul of privacy laws etc). Much of the rest is noise from businesses trying to get attention in an increasingly crowded space.

Historically, many ‘enterprise architecture’ or strategy departments have failed to have impact because firms do not treat data (or information, or knowledge) as an asset, but rather as something to be freely and easily created and shunted around, leaving a trail of complexity and lost opportunity cost wherever it goes. This attitude must change before ‘enterprise architecture’ as a concept can have a role in boardroom discussions, and before firms change how they position IT in their business strategy. (Regulators are certainly driving this for certain sectors like finance and health.)

Internet of Things

Why does the Internet Of Things (IoT) matter, and where does IoT fit into all this?

At one level, IoT presents a large opportunity for firms which see the potential implied by its underpinning technologies: they can bring a significant level of convenience and safety to many aspects of a modern, digitally enabled life.

But fundamentally, IoT is about having a large number of autonomous actors collaborating in some way to deliver a particular service, which is of value to some set of interested stakeholders.

But this sounds a lot like what a ‘company’ is. So IoT is, in effect, a company where the actors are technology actors rather than human actors. They need some level of orchestration. They need a common language for communication. And they need laws/protocols that govern what’s permitted and what is not.

If enterprise architecture is all about establishing the functional, data and protocol boundaries between discrete capabilities within an organization, then EA for IoT is the same thing but for technical components, such as sensors or driverless cars, etc.

So IoT seems a much more natural fit for EA thinking than traditional organizations, especially as, unlike departments in traditional ‘human’ companies, technical components thrive on standards: they favor fixed protocols, fixed functional boundaries and well-defined data sets. And while the ‘things’ themselves may not be organic, their behavior in such an environment could exhibit ‘organic’ characteristics.
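
To make this concrete, here is a minimal sketch (in Python, with hypothetical names) of the kind of contract an architecture function might define for one such technical component – a fixed functional boundary plus a well-defined data set:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class SensorReading:
    """A well-defined data set: every field is explicit, typed and versioned."""
    sensor_id: str
    timestamp_utc: float    # seconds since the epoch
    value: float
    unit: str               # e.g. "celsius", drawn from an agreed vocabulary
    schema_version: str = "1.0"


class Sensor(Protocol):
    """A fixed functional boundary: any component honoring this protocol can
    participate in the ecosystem without bespoke integration work."""

    def read(self) -> SensorReading: ...

    def health_ok(self) -> bool: ...
```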

So, IoT and the benefits of an enterprise architecture-oriented approach to business strategy do seem like a match made in heaven.

The Converged Enterprise

For information-based industries in particular, there appears to be an inevitable convergence: as IoT and the standards, protocols and governance underpinning it mature, so too will the ‘modular’ aspects of existing firms’ operating models, and the eco-system of technology-enabled platforms will mature along with it. Firms will be challenged to deliver value by picking the most capable components in the eco-system around which to deliver unique service propositions – and the most successful of those solutions will themselves become the basis for future eco-systems (a Darwinian view of software evolution, if you will).

The converged enterprise will consist of a combination of human and technical capabilities collaborating in well-defined ways. Some capabilities will be highly human, others highly technical, some will be in-house, some will be part of a wider platform eco-system.

In such an organization, enterprise architects will find a natural home. In the meantime, enterprise architects must choose their starting point, behavioral or structural: focusing first on decomposed business capabilities and finishing with IoT (behavioral->structural), or focusing first on IoT and finishing with business capabilities (structural->behavioral).

Technical Footnote

I am somewhat intrigued at how the OSGi Alliance has over the years shifted its focus from basic Java applications, to discrete embedded systems, to enterprise systems and now to IoT. OSGi has (disappointingly, IMO) had a patchy record of changing how firms build enterprise software – much of this is down to a culture of undisciplined dependency management in the software industry, which is very, very hard to break.

IoT raises the bar on dependency management: you simply cannot comprehensively test software updates to potentially hundreds of thousands or millions of components running that software. The ability to reliably change modules without forcing a test of all dependent instantiated components is a necessity. As enterprises get more complex and digitally interdependent, standards such as OSGi will become more critical to the plumbing of enabling technologies. But as evidenced above, for folks who have tried and failed to change how enterprises treat their technology-enabled business strategy, it’s a case of FIETIOT – Failed in Enterprise, Trying IoT. And this, indeed, seems a far more rational use of an enterprise architect’s time, as things currently stand.


Transforming IT: From a solution-driven model to a capability-driven model

[tl;dr Moving from a solution-oriented to a capability-oriented model for software development is necessary to enable enterprises to achieve agility, but has substantial impacts on how enterprises organise themselves to support this transition.]

Most organisations which manage software change as part of their overall change portfolio take a project-oriented approach to delivery: the project goals are set up front, and a solution architecture and delivery plan are created in order to achieve the project goals.

Most organisations also fix project portfolios on a yearly basis, and deviating from this plan can be very difficult for organisations to cope with – at least partly because such plans are intrinsically tied into financial planning and cost-saving techniques such as capitalisation of expenses, etc, which reduce the bottom-line cost of the investment to the firm (even if they say nothing about the value added).

As the portfolio of change projects grows every year, driven by many factors (business opportunities, revenue protection, regulatory demand, maintenance, exploration, digital initiatives, etc), cross-project dependency management becomes increasingly difficult. Managing solution architecture dependencies within that overall dependency framework becomes even more complex.

The result is a massive set of compromises: solutions that are sub-optimal for pretty much every project, and an investment in technology so enterprise-specific that no other organisation could derive any significant value from it.

While it is possible that even that sub-optimal technology can yield significant value to the organisation as a whole, this benefit may be short lived, as the cost-effective ability to change the architecture must inevitably decrease over time, reducing agility and therefore the ability to compete.

So a balance needs to be struck, between delivering enterprise value (even at the expense of individual projects) while maintaining relative technical and business agility. By relative I mean relative to peers in the same competitive sector…sectors which are themselves being disrupted by innovative technology firms which are very specialist and agile within their domain.

The concept of ‘capabilities’ realised through technology ‘products’, in addition to the traditional project/program management approach, is key to this. In particular, it recognises the following key trends:

  • Infrastructure- and platform-as-a-service
  • Increasingly tech-savvy work-force
  • Increasing controls on IT by regulators, auditors, etc
  • Closer integration of business functions led by ‘digital’ initiatives
  • The replacement of the desktop by mobile & IoT (Internet of Things)
  • The tension between innovation and standards in large organisations

Enterprises are adapting to all the above by recognising that the IT function cannot be responsible for both technical delivery and ensuring that all technology-dependent initiatives realise the value they were intended to realise.

As a result, many aspects of IT project and programme management are no longer driven out of the ‘core’ IT function, but by domain-specific change management functions. IT itself must consolidate its activities to focus on those activities that can only be performed by highly qualified and expert technologists.

The inevitable consequence of this transformation is that IT becomes more product driven, where a given product may support many projects. As such, IT needs to be clear on how to govern change for that product, to lead it in a direction that is most appropriate for the enterprise as a whole, and not just for any particular project or business line.

A product must provide capabilities to the stakeholders or users of that product. In the past, those capabilities were entirely decided by whatever IT built and delivered: if IT delivered something that in practice wasn’t entirely fit for purpose, then business functions had no alternative but to find ways to work around the system deficiencies – usually creating more complexity (through end-user-developed applications in tools like Excel etc) and more expense (through having to hire more people).

By taking a capability-based approach to product development, however, IT can give business functions more options and ways to work around inevitable IT shortfalls without compromising controls or data integrity – e.g., through controlled APIs and services, etc.
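
As a minimal sketch of what such a controlled API might look like – assuming a hypothetical payments capability, with Flask used purely for illustration – the controls live in the product, not in each consumer:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

ALLOWED_FIELDS = {"account_id", "amount", "currency"}


def audit(event: str, payload: dict) -> None:
    # Stand-in for a real audit trail: every change is recorded centrally.
    print(f"AUDIT {event}: {payload}")


@app.route("/payments", methods=["POST"])
def create_payment():
    payload = request.get_json(force=True)
    # Data-integrity controls are enforced by the product itself, so business
    # functions can build on the capability without compromising controls.
    if set(payload) != ALLOWED_FIELDS:
        return jsonify(error="unexpected or missing fields"), 400
    if payload["amount"] <= 0:
        return jsonify(error="amount must be positive"), 400
    audit("payment.created", payload)
    return jsonify(status="accepted"), 201


if __name__ == "__main__":
    app.run()
```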

So, while solutions may explode in number and complexity, the number of products can be controlled – with individual businesses being more directly accountable for the complexity they create, rather than ‘IT’.

This approach requires a step-change in how traditional IT organisations manage change. Techniques from enterprise architecture, scaled agile, and DevOps are all key enablers for this new model of structuring the IT organisation.

In particular, except for product-strategy (where IT must be the leader), IT must get out of the business of deciding the relative value/importance of individual product changes requested by projects, which historically IT has been required to do. By imposing a governance structure to control the ‘epics’ and ‘stories’ that drive product evolution, projects and stakeholders have some transparency into when the work they need will be done, and demand can be balanced fairly across stakeholders in accordance with their ability to pay.

If changes implemented by IT do not end up delivering value, it should not be because IT delivered the wrong thing, but rather because the right thing was delivered for the wrong reason. As long as IT maintains its product roadmap and vision, such mis-steps can be tolerated. But they cannot be tolerated if every change weakens the ability of the product platform to change.

Firms which successfully balance between the project and product view of their technology landscape will find that productivity increases, complexity is reduced and agility increases massively. This model also lends itself nicely to bounded domain development, microservices, use of container technologies and automated build/deployment – all of which will likely feature strongly in the enterprise technology platform of the future.

The changes required to support this are significant – in terms of financial governance, delivery oversight, team collaborations, and the roles of senior managers and leaders. But organisations must be prepared to undergo this transition, as historical approaches to enterprise IT software development are clearly unsustainable.


Culture, Collaboration & Capabilities vs People, Process & Technology

[TL;DR The term ‘people, process and technology’ has been widely understood to represent the main dimensions impacting how organisations can differentiate themselves in a fast-changing technology-enabled world. This article argues that this expression may be misinterpreted with the best of intentions, leading to undesirable/unintended outcomes. The alternative, ‘culture, collaboration and capability’ is proposed.]

People, process & technology

When teams, functions or organisations are under-performing, the underlying issues can usually be narrowed down to one or more of the dimensions of people, process and technology.

Unfortunately, these terms can lead to an incorrect focus. Specifically,

  • ‘People’ can be understood to mean individuals who are under-performing or somehow disruptive to overall performance
  • ‘Process’ can be understood to mean formal business processes, leading to a focus on business process design
  • ‘Technology’ can be understood to mean engineering or legacy technology challenges which are resolvable only by replacing or updating existing technology

In many cases, this may in fact be the approach needed: fire the disruptive individual, redesign business processes using Six Sigma experts, or find another vendor selling technology that will solve all your engineering challenges.

In general, however, these approaches are neither practical nor desirable. Removing people can be fraught with challenges, and should only be used as a last resort. Firms using this as a way to solve problems will rapidly build up a culture of distrust and self-preservation.

Redesigning business processes using Six Sigma or other techniques may work well in very mature, well understood, highly automatable situations. However, in most dynamic business situations, no sooner has the process been optimised than it requires changing again. In addition, highly optimised processes may cause the so-called ‘local optimisation’ problem, where the sum of the optimised parts yields something far from an optimised whole.

Technology is generally not easy to replace: some technologies are significantly embedded in an organisation, with a large investment in people and skills to support the technology. But technologies change faster than people can adapt, and business environments change faster still. So replacing technologies comes at a massive cost (and risk) of replacing functionally rich existing systems with relatively immature new technology, and replacing existing people with people who may be familiar with the technology, but less so with your organisation. And what to do with the folks who have invested so much of their careers in the ‘old’ technology? (Back to the ‘people’ problem.)

A new meme

In order to effect change within a team, department or organisation, the focus on ‘people, process and technology’ needs to be adapted to ‘culture, collaboration and capabilities’. The following sections lay out the subtle difference, and how it could change how one approaches solving certain types of performance challenge.

Culture

When we talk about ‘people’, we are really not talking about individuals, but about cultures. The culture of a team, department or organisation has a more significant impact on how people collectively perform than anything else.

Changing culture is hard: for a ‘bad’ culture caught early enough, it may be possible to simply replace the most senior person who is (consciously or not) leading the creation of the undesirable cultural characteristics. But once a culture is established, even replacing senior leadership does not guarantee it will change.

Changing culture requires the willing participation of most of the people involved. For this to happen, people need to realise that there is a problem, and they need to be open to new cultural leadership. Then, it is mainly a case of finding the right leadership to establish the new norms and carry them forward – something which can be difficult for some senior managers to do, particularly when they have an arms-length approach to management.

Typical ‘bad’ cultures in a (technology) organisation include poor practices such as lack of testing discipline, poor collaboration with other groups focused on different concerns (such as stability, infrastructure, etc), a lack of transparency into how work is done, or even a lack of collaboration within members of the same team (i.e., a ‘hero’ based approach to development).

Changing these can be notoriously difficult, especially if the firm is highly dependent on this team and what it does.

Collaboration

Processes are, ultimately, a way to formalise how different people collaborate at different times. Processes formalise collaborations, but often collaborations happen before the process is formalised – especially in high-performance teams who are aware of their environment and are open to collaboration.

Many challenges in teams, departments or organisations can be boiled down to collaboration challenges. People not understanding how (or even if) they should be collaborating, how often, how closely, etc.

In most organisations, ‘cooperation’ is a necessity: there are many different functions, most of which depend on each other. So there is a minimum level of cooperation in order to get certain things done. But this cooperation does not necessarily extend to collaboration, which is cooperation based on trust and a deeper understanding of the reasons why a collaboration is important.

Collaboration ultimately serves to strengthen relationships and improve the overall performance of the team, department or organisation.

Collaborations can be captured formally using business process design notation (such as BPMN) but often these treat roles as machines, not people, and can lead to forgetting the underlying goal: people need to collaborate in order to meet the goals of the organisation. Process design often aims to define people’s roles so narrowly that the individuals may as well be a machine – and as technology advances, this is exactly what is happening in many cases.

People will naturally resist this; defining processes in terms of collaborations will change the perspective and result in a more sustainable and productive outcome.

Capabilities

Much has been written here about ‘capabilities’, particularly when it comes to architecture. In this article, I am narrowing my definition to anything that allows an individual (or group of individuals) to perform better than they otherwise would.

From a technology perspective, particular technologies provide developers with capabilities they would not otherwise have. These capabilities allow developers to offer value to people who need software built to help them do their jobs; those people, in turn, offer capabilities to others who depend on that job being done.

When a given capability is ‘broken’ (for example, where people do not understand a particular technology very well, and so it limits their capabilities rather than expands them), the effect ripples up to everybody who depends directly or indirectly on that capability: systems become unstable, change takes a long time to implement, users of systems become frustrated and unable to do their jobs, and the clients of those users become frustrated that people being paid to do a job are unable to do it.

In the worst case, this can bring a firm to its knees, unable to survive in an increasingly dynamic, fast-changing world where the weakest firms do not survive long.

Technology should *always* provide a capability: the capability to deliver value in the right hands. When it is no longer able to achieve that role in the eyes of the people who depend on it (or when the ‘right hands’ simply cannot be found), then it is time to move on quickly.

Conclusion

Many of today’s innovations in technology revolve around culture, collaboration and capabilities. An agile, disciplined culture, where collaboration between systems reflects collaborations between departments and vice-versa, and where technologies provide people with the capabilities they need to do their jobs, is what every firm strives for (or should be striving for).

For new startups, much of this is a given – this is, after all, how they differentiate themselves against more established players. For larger organisations that have been around for a while, the challenge has been, and continues to be, how to drive continuous improvement and change along these three axes, while remaining sensitive to the capacity of the firm to absorb disruption to the people and technologies that those firms have relied on to get them to the (presumably) successful state they are in today.

Get it wrong and firms could rapidly lose market share and become over-taken by their upstart competitors. Get it right, and those upstart competitors will be bought out by the newly agile established players.


The hidden costs of PaaS & microservice engineering innovation

[tl;dr The leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.]

The pace of innovation in the PaaS and microservice space is increasing rapidly. This, coupled with increasing pressure on ‘traditional’ organisations to deliver more value more quickly from IT investments, is causing a flurry of interest in PaaS enabling technologies such as Cloud Foundry (favoured by the likes of IBM and Pivotal), OpenShift (favoured by RedHat), Azure (Microsoft), Heroku (SalesForce), AWS, Google Application Engine, etc.

A key characteristic of all these PaaS solutions is that they are ‘devops’ enabled – i.e., it is possible to automate both code and infrastructure deployment, opening the way to highly automated operational processes for applications built on these platforms.

For large organisations, or organisations that prefer to control their infrastructure (because of, for example, regulatory constraints), PaaS solutions that can be run in a private datacenter rather than the public cloud are preferable, as this preserves a future option to deploy to external clouds if needed/appropriate.

These PaaS environments are feature-rich and aim to provide a lot of the building blocks needed to build enterprise applications. Meanwhile, framework initiatives such as Spring Boot, DropWizard and Vert.X aim to make it easier to build PaaS-based applications.

Combined, all of these promise to provide a dramatic increase in developer productivity: the marginal cost of developing, deploying and operating a complete application will drop significantly.

Due to the low capital investment required to build new applications, it becomes ever more feasible to move from a heavy-weight, planning intensive approach to IT investment to a more agile approach where a complete application can be built, iterated and validated (or not) in the time it takes to create a traditional requirements document.

However, this also has massive implications, as – left unchecked – the drift towards entropy will increase over time, and organisations could be severely challenged to effectively manage and generate value from the sheer number of applications and services that can be created on such platforms. So an eye on managing complexity should be in place from the very beginning.

Many of the above platforms aim to make it as easy as possible for developers to get going quickly: this is a laudable goal, and if more of the complexity can be pushed into the PaaS, then that can only be good. The consequence of this approach is that developers have less control over the evolution of key aspects of the PaaS, and this could cause unexpected issues as PaaS upgrades conflict with application lifecycles, etc. In essence, it could be quite difficult to isolate applications from some PaaS changes. How these frameworks help developers cope with such changes is something to closely monitor, as these platforms are not yet mature enough to have gone through a major upgrade with a significant number of deployed applications.

The relative benefit/complexity trade-off between established microservice frameworks such as OSGi and easier to use solutions such as described above needs to be tested in practice. Specifically, OSGi’s more robust dependency model may prove more useful in enterprise environments than environments which have a ‘move fast and break things’ approach to application development, especially if OSGi-based PaaS solutions such as JBoss Fuse on OpenShift and Paremus ServiceFabric gain more popular use.
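
To illustrate why a version-range-based dependency model matters, here is a toy sketch (plain Python, not OSGi itself) of checking a provider version against an OSGi-style range such as "[1.2,2.0)" – compatibility is decided mechanically, rather than by re-testing every dependent:

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


def in_range(version: str, version_range: str) -> bool:
    """Check a version against an OSGi-style range, e.g. '[1.2,2.0)'."""
    lo_inclusive = version_range[0] == "["
    hi_inclusive = version_range[-1] == "]"
    lo, hi = version_range[1:-1].split(",")
    v = parse_version(version)
    lower_ok = v >= parse_version(lo) if lo_inclusive else v > parse_version(lo)
    upper_ok = v <= parse_version(hi) if hi_inclusive else v < parse_version(hi)
    return lower_ok and upper_ok


assert in_range("1.4.2", "[1.2,2.0)")      # API-compatible minor upgrade
assert not in_range("2.0.0", "[1.2,2.0)")  # breaking major version excluded
```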

So: all well and good from the technology side. But even if the pros and cons of the different engineering approaches are evaluated and a perfect PaaS solution emerges, that doesn’t mean Microservice Nirvana can be achieved.

A recent article on the challenges of building successful micro-service applications, coupled with a presentation by Lisa van Gelder at a recent Agile meetup in New York City, has emphasised that even given the right enabling technologies, deploying microservices is a major challenge – but if done right, the rewards are well worth it.

Specifically, there are a number of factors that impact the success of a large scale or enterprise microservice based strategy, including but not limited to:

  • Shared ownership of services
  • Setting cross-team goals
  • Performing scrum of scrums
  • Identifying swim lanes – isolating content failure & eventually consistent data
  • Provision of Circuit breakers & Timeouts (anti-fragile) – see the sketch after this list
  • Service discoverability & clear ownership
  • Testing against stubs; customer driven contracts
  • Running fake transactions in production
  • SLOs and priorities
  • Shared understanding of what happens when something goes wrong
  • Focus on mean-time-to-repair (recover) rather than mean-time-to-failure
  • Use of common interfaces: deployment, health check, logging, monitoring
  • Tracing a user’s journey through the application
  • Collecting logs
  • Providing monitoring dashboards
  • Standardising common metric names
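
Several of these factors are patterns rather than products. As a flavour of the circuit-breaker item above, here is a minimal sketch (thresholds and names are illustrative, not a production implementation):

```python
import time


class CircuitBreaker:
    """Trip after `max_failures` consecutive failures; while open, fail fast
    rather than calling the (possibly struggling) downstream service."""

    def __init__(self, max_failures: int = 3, reset_after_s: float = 30.0):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```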

Some of these can be technically provided by the chosen PaaS, but a lot is based around the best practices consistently applied within and across development teams. In fact, it is quite hard to capture these key success factors in traditional architectural views – something that needs to be considered when architecting large-scale microservice solutions.

In summary, the leap from monolithic application development into the world of PaaS and microservices highlights the need for consistent collaboration, disciplined development and a strong vision in order to ensure sustainable business value.

Scaled Agile needs Slack

[tl;dr In order to effectively scale agile, organisations need to ensure that a portion of team capacity is explicitly set aside for enterprise priorities. A re-imagined enterprise architecture capability is a key factor in enabling scaled agile success.]

What’s the problem?

From an architectural perspective, Agile methodologies are heavily dependent on business- (or function-) aligned product owners, who tend to be very focused on *their* priorities – and not the enterprise’s priorities (i.e., other functions or businesses that may benefit from the work the team is doing).

This results in very inward-focused development, and where dependencies on other parts of the organisation are identified, these (in the absence of formal architecture governance) tend to be minimised where possible, if necessary through duplicative development. And other teams requiring access to the team’s resources (e.g., databases, applications, etc) are served on a best-effort basis – often causing those teams to seek other solutions instead, or work without formal support from the team they depend on.

This, of course, leads to architectural complexity, leading to reduced agility all round.

The Solution?

If we accept the premise that, from an architectural perspective, teams are the main consideration (it is where domain and technical knowledge resides), then the question is how to get the right demand to the right teams, in as scalable and agile a manner as possible.

In agile teams, the product backlog determines the work schedule. The backlog usually has a long list of items awaiting prioritisation, and part of the Agile process is to constantly prioritise this backlog to ensure high-value tasks are done first.

Well-known management research such as The Mythical Man Month has influenced Agile’s goal of keeping team sizes small (e.g., 5-9 people for scrum). So when new work comes, adding people is generally not a scalable option.

So, how to reconcile the enterprise’s needs with the Agile team’s needs?

One approach would be to ensure that every team pays an ‘enterprise’ tax – i.e., in prioritising backlog items, at least, say, 20% of work-in-progress items must be for the benefit of other teams. (Needless to say, such work should be done in such a way as to preserve product architectural integrity.)

20% may seem like a lot – especially when there is so much work to be done for immediate priorities – but it cuts both ways. If *every* team allows 20% of their backlog to be for other teams, then every team has the possibility of using capacity from other teams – in effect, increasing their capacity by much more than they could do on their own. And by doing so they are helping achieve enterprise goals, reducing overall complexity and maximising reuse – resulting in a reduction in project schedule over-runs, higher quality resulting architecture, and overall reduced cost of change.
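
As a purely illustrative sketch of that arithmetic (hypothetical item names, assuming a 20% allowance), sprint planning could reserve the enterprise share mechanically:

```python
# Backlog items are (name, is_enterprise, value) tuples.

def plan_sprint(backlog, capacity: int, enterprise_share: float = 0.20):
    """Fill the sprint, reserving a fixed share of capacity for items that
    benefit other teams (the 'enterprise tax')."""
    enterprise_slots = round(capacity * enterprise_share)
    by_value = sorted(backlog, key=lambda item: item[2], reverse=True)
    enterprise = [i for i in by_value if i[1]][:enterprise_slots]
    local = [i for i in by_value if not i[1]][:capacity - enterprise_slots]
    return enterprise + local


backlog = [
    ("our-new-feature", False, 9),
    ("shared-auth-api", True, 8),
    ("fix-our-report", False, 7),
    ("partner-data-feed", True, 6),
    ("ui-polish", False, 3),
]
print(plan_sprint(backlog, capacity=5))
# one slot goes to the highest-value enterprise item, the rest to local items
```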

Slack does not mean Under-utilisation

The concept of ‘Slack’ is well described in the book ‘Slack: Getting Past Burn-out, Busywork, and the Myth of Total Efficiency’. In effect, in an Agile sense, we are talking about organisational slack, and not team slack. Teams, in fact, will continue to be 100% utilised, as long as their backlog consists of more high-value items than they can deliver. The backlog owner – e.g., scrum master – can obviously embed local team slack into how a particular team’s backlog is managed.

Implications for Project & Financial Management

Project managers are used to getting funding to deliver a project, and then to be able to bring all those resources to bear to deliver that project. The problem is, that is neither agile, nor does it scale – in an enterprise environment, it leads to increasingly complex architectures, resulting in projects getting increasingly more expensive, increasingly late, or delivering increasingly poor quality.

It is difficult for a project manager to accept that 20% of their budget may actually be supporting other (as yet unknown) projects. So perhaps the solution here is to have Enterprise Architecture account for the effective allocation of that spending in an agile way (i.e., based on how teams are prioritising and delivering those enterprise items on their backlog)? An idea worth considering.

Note that the situation is a little different for planned cross-business initiatives, where product owners must actively prioritise the needs of those initiatives alongside their local needs. Such planned work does not count in the 20% enterprise allowance, but rather counts as part of how the team’s cost to the enterprise is formally funded. It may result in a temporary increase in resources on the team, but in this case discipline around ‘staff liquidity’ is required to ensure the team can still function optimally after the temporary resource boost has gone.

The challenge regarding project-oriented financial planning is that, once a project’s goals have been achieved, what’s left is the team and underlying architecture – both of which need to be managed over time. So some dissociation between transitory project goals and longer term team and architecture goals is necessary to manage complexity.

For smaller, non-strategic projects – i.e., no incoming enterprise dependencies – the technology can be maintained on a lights-on basis.

Enterprise architecture can be seen as a means to assess the relevance of a team’s work to the enterprise – i.e., managing both incoming and outgoing team dependencies. The higher the enterprise relevance of the team, the more critical it is that the team is managed well over time – i.e., team structure changes must be carefully managed, and not left entirely to the discretion of individual managers.

Conclusion

By ensuring that every project that purports to be Agile has a mandatory allowance for enterprise resource requirements, teams can have confidence that there is a route for them to get their dependencies addressed through other agile teams, in a manner that is independent of annual budget planning processes or short-term individual business priorities.

The effectiveness of this approach can be governed and evaluated by Enterprise Architecture, which would then allow enterprise complexity goals to be addressed without concentrating such spending within the central EA function.

In summary, to effectively scale agile, an effective (and possibly rethought) enterprise architecture capability is needed.


Making good architectural moves

[tl;dr Every change is an opportunity to make the ‘right’ architectural move to improve complexity management and to maintain an acceptable overall cost of change.]

Accompanying every new project, business requirement or product feature is an implicit or explicit ‘architectural move’ – i.e., a change to your overall architecture that moves it from a starting state to another (possibly interim) state.

The art of good architecture is making the ‘right’ architectural moves over time. The art of enterprise architecture is being able to effectively identify and communicate what a ‘right’ move actually looks like from an enterprise perspective, rather than leaving such decisions solely to the particular implementation team – who, it must be assumed, are best qualified to identify the right move from the perspective of the relevant domain.

The challenge is the limited inputs that enterprise architects have, namely:

  • Accumulated skill/knowledge/experience from past projects, including any architectural artefacts
  • A view of the current enterprise priorities based on the portfolio of projects underway
  • A corporate strategy and (ideally) individual business strategies, including a view of the environment the enterprise operates in (e.g., regulatory, commercial, technological, etc)

From these inputs, architects need to guide the overall architecture of the enterprise to ensure every project or deliverable results in a ‘good’ move – or at least not a ‘bad’ move.

In this situation, it is difficult if not impossible to measure the actual success of an architecture capability. This is because, in many respects, the beneficiaries of a ‘good’ enterprise architecture (at least initially) are the next deliverables (projects/requirements/features), and only rarely the current deliverables.

Since the next projects to be done are generally unknown (i.e., the business situation may change between the time the current projects complete and the time the next projects start), it is rational for people to focus exclusively on delivering the current projects. Which makes it even more important that the current projects are in some way delivering the ‘right’ architectural moves.

In many organisations, the typical engagement with enterprise architecture is often late in the architectural development process – i.e., at a ‘toll-gate’ or formal architectural review. And the focus is often on ‘compliance’ with enterprise standards, principles and guidelines. Given that such guidelines can be quite detailed, it can be difficult for anyone building a project start architecture (PSA) to come up with an architecture that will fully comply: the first priority is to develop an architecture that will work, and is feasible to execute within the project constraints of time, budget and resources.

Only then does it make sense to formally apply architectural constraints from enterprise architecture – at least some of which may negatively impact the time, cost, resource or feasibility needs of the project – albeit to the presumed benefit of the overall landscape. Hence the need for board-level sponsorship for such reviews, as otherwise the project’s needs will almost always trump enterprise needs.

The approach espoused by an interesting new book, Chess and the Art of Enterprise Architecture, is that enterprise architects need to focus more on design and less on principles, guidelines, roadmaps, etc. Such an approach involves enterprise architects more closely in the creation and evolution of project (start) architectures, which represents the architectural basis for all the work the project does (although it does not necessarily lay out the detailed solution architecture).

This approach is also acceptable for planning processes which are more agile than waterfall. In particular, it acknowledges that not every architectural ‘move’ is necessarily independently ‘usable’ by end users of an agile process. In fact, some user stories may require several architectural moves to fully implement. The question is whether the user story is itself validated enough to merit doing the architectural moves necessary to enable it, as otherwise those architectural moves may be premature.

The alternative, then, is to ‘prototype’ the user story, so users can evaluate it – but at the cost of non-conformance with the project architecture. This is also known as ‘technical debt’, and where teams are mature and disciplined enough to pay down technical debt when needed, it is a good approach. But users (and sometimes product owners) struggle to tell the difference between an (apparently working) prototype and a production solution that is fully architecturally compliant, and it often happens that project teams move on to the next visible deliverable without removing the technical debt.

In applications where the end-user is a person or set of persons, this may be acceptable in the short term, but where the end-user could be another application (interacting via, for example, an API invoked by either a GUI or an automated process), then such technical debt will likely cause serious problems if not addressed. At the very least, it will make future changes harder (unmanaged dependencies, lack of automated testing), and may present significant scalability challenges.

So, what exactly constitutes a ‘good’ architectural move? In general, this is what the project start architecture should aim to capture. A good basic principle could be that architectural commitments should be postponed for as long as possible, by taking steps to minimise the impact of changed architectural decisions (this is a ‘real-option’ approach to architectural change management). Over the long term, this reduces the cost of change.
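
One common way to express this deferral in code – a sketch with hypothetical names, not a prescribed pattern – is to commit only to an abstraction and postpone the concrete choice:

```python
from abc import ABC, abstractmethod


class TradeStore(ABC):
    """The architectural commitment is to this interface only; the choice of
    database is deferred, keeping the cost of changing it low (a real option)."""

    @abstractmethod
    def save(self, trade_id: str, payload: dict) -> None: ...

    @abstractmethod
    def load(self, trade_id: str) -> dict: ...


class InMemoryTradeStore(TradeStore):
    """Cheap interim implementation; swapped for a specific database only when
    that (expensive) commitment genuinely has to be made."""

    def __init__(self):
        self._trades = {}

    def save(self, trade_id: str, payload: dict) -> None:
        self._trades[trade_id] = payload

    def load(self, trade_id: str) -> dict:
        return self._trades[trade_id]
```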

In addition, project start architectures may need to make explicit where architectural commitments must be made (e.g., for a specific database, PaaS or integration solution, etc) – i.e., areas where change will be expensive.

Other things the project start architecture may wish to capture or at least address (as part of enterprise design) could include:

  • Cataloging data semantics and usage
    • to support data governance and big data initiatives
  • Management of business context & scope (business area, product, entity, processes, etc)
    • to minimize unnecessary redundancy and process duplication
  • Controlled exposure of data and behaviour to other domains
    • to better manage dependencies and relationships with other domains
  • Compliance with enterprise policies around security and data management
    • to address operational risk
  • Automated build, test & deploy processes
    • to ensure continued agility and responsiveness to change, and maximise operational stability
  • Minimal lock-in to specific solution architectures (minimise solution architecture impact on other domains)
    • to minimize vendor lock-in and maximize solution options

The Chess book mentioned above includes a good description of a PSA, in the context of a PRINCE2 project framework. Note that the approach also works for Agile, but the PSA should set the boundaries within which the agile team can operate: if those boundaries must be crossed to deliver a user story, then enterprise design architects should be brought back into the discussion to establish the best way forward.

In summary, every change is an opportunity to make the ‘right’ architectural move to improve complexity management and to maintain an acceptable overall cost of change.


Agile quantitative analytics in Financial Services

[tl;dr Technology is empowering quantitative analysts to do much more on their own. IT organisations will need to think about how to make these capabilities available to their users, and how to incorporate them into IT strategies around big data, cloud computing, security and data governance.]

The quest for positive (financial) returns in investments is helping drive considerable innovation in the space of quantitative analytics. This, coupled with the ever-decreasing capital investment required to do number-crunching, has created demand for ‘social’ analytics – where algorithms are shared and discussed amongst practitioners rather than kept sealed behind the closed doors of corporate research & trading departments.

I am not a quant, but have in the past built systems that provided an alternative (to Excel) vehicle for quantitative research analysts to capture and publish their models. While Excel is hard to beat for experimenting with new ideas, from a quantitative analyst perspective it suffers from many deficiencies, including:

  • Spreadsheets get complex very quickly and are hard to maintain
  • They are not very efficient for back-end (server-side) use
  • They cannot be efficiently incorporated into scalable automated workflows
  • Models cannot be distributed or shared without losing control of the model
  • Integrating spreadsheets with multiple large data sets can be cumbersome and memory inefficient at best, and impossible at worst (constrained by machine memory limits)
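
By contrast, a model captured as code – a trivial sketch below, not a real trading model – can be version-controlled, unit-tested, shared, and run server-side in automated workflows:

```python
def moving_average(series, window: int):
    """Return the trailing moving average once the window has filled."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]


prices = [100, 102, 101, 105, 107, 106]
print(moving_average(prices, window=3))
# [101.0, 102.666..., 104.333..., 106.0]
```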

QuantCon was created to provide a forum for quantitative analysts to discuss and share tools and techniques for quantitative research, with a particular focus on the sharing and distribution of models (either outcomes or logic, or both).

Some key themes from QuantCon which I found interesting were:

  • The emergence of social analytics platforms that can execute strategies on your venue of choice (e.g. quantopian.com)
  • The search for uncorrelated returns & innovation in (algorithmic) investment strategies
  • Back-testing as a means of validating algorithms – and the perils of assuming backtests would execute at the same prices in real life (illustrated in the sketch after this list)
  • The rise of freely available interactive model distribution tools such as the Jupyter project (similar to Mathematica Notebooks)
  • The evolution of probabilistic programming and machine learning – in particular the PyMC3 extensions to Python
  • The rise in the number of free and commercial data sources (APIs) of data points (signals) that can be included in models
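
As a toy illustration of the back-testing peril noted above – all numbers invented – the same signal evaluated with and without slippage gives materially different results:

```python
prices = [100.0, 101.0, 103.0, 102.0, 105.0]
signals = [1, 1, 0, 1, 0]  # 1 = hold the asset over the next period


def backtest(prices, signals, slippage=0.0):
    """Sum per-period P&L, paying `slippage` on every entry price."""
    pnl = 0.0
    for i in range(len(prices) - 1):
        if signals[i]:
            pnl += prices[i + 1] - (prices[i] + slippage)
    return pnl


print(backtest(prices, signals))                 # ideal fills: 6.0
print(backtest(prices, signals, slippage=0.5))   # realistic fills: 4.5
```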

From an architectural perspective, there are some interesting implications. Specifically:

  • There really is no limit to what a single quant with access to multiple data sources (either internal or external) and access to platform- or infrastructure-as-a-service capabilities can do.
  • Big data technologies make it very easy to ingest, transform and process multiple data sources (static or real-time) at very low cost (but raise governance concerns).
  • It has never been cheaper or easier to efficiently and safely distribute, publish or share analytics models (although the tools for this are still evolving).
  • The line between the ‘IT developer’ and the user has never been more blurred.

Over the coming years, we can expect analytics (and business intelligence in general) capabilities to become key functions within every functional domain in an organisation, integrated both into the function itself through feedback loops, as well as for more conventional MIS reporting. Some of the basic building blocks will be the same as they are today, but the key characteristic is that users of these tools will be technologists specifically supporting business needs – i.e., part of ‘the business’ and not part of ‘IT’.

In the past, businesses have been supported by vendors providing expensive but easy-to-use tools to allow non-technical people to work with large datasets. The IT folks were very specifically supporting the core data warehouse & business intelligence infrastructure, or provided technical support for the development of particular reports. In these cases, the clients were typically non-technical, and the platform could only evolve as quickly as the software vendors evolved.

The emerging (low-cost) tools for quantitative analytics will give rise to a post-Excel world of innovation, scale and distribution that will empower users, give rise to whole new business models, and be itself a big driver into how enterprise IT defines its role in the agile, unbundled, decentralised and as-a-service technology landscape of the near future.

Traditional IT organisations often saw the ‘application development’ functions as business aligned, and were comfortable with the client-oriented nature of providing technical infrastructure services to development teams. However, internal development teams supporting other (business-aligned) development teams is a fairly new concept, and will likely best be done by external specialist providers. This is a good example of where IT’s biggest role (apart from governance) in the future is in sourcing relevant providers, and ensuring business technologists are able to do their job effectively and efficiently in that environment.

In summary, technology is empowering quantitative analysts to do much more on their own. IT organisations will need to think about how to make these capabilities available to their users, and how to incorporate them into IT strategies around big data, cloud computing, security and data governance.
