Becoming a financial ‘super-power’ through emerging technologies

Recently, the Tabb Forum published an article (login needed) proposing four key emerging-technology strategies that would enable market participants to keep pace with a constantly changing trading environment.

Specifically, the emerging technologies are:

  • AI for Risk Management
  • Increased focus on data and analytics
  • Accepting public cloud
  • Being open to industry standardization

It is worth noting that the author sees these emerging technologies as table stakes rather than differentiators – i.e., the market as a whole will pivot around growing use of these technologies, rather than individual firms gaining a (temporary) advantage from them. In the long term this seems plausible, but it does not (IMO) preclude opportunistic use cases in the short/medium term, and firms which can use these technologies effectively will essentially acquire banking ‘super-powers’.

AI for Risk Management

The key idea here is that AI could augment current risk management processes, offering new insights into where market participants may be carrying risk, as well as into whom to trade with and what to trade.

Current risk management processes are brute-force, involving complex calculations with multiple inputs and many outputs for different scenarios. In addition, human judgement is needed to apply various adjustments (known as XVA) to model-computed valuations, to account for trade-specific context.
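As a minimal sketch of the adjustment step described above – the component names (CVA, FVA, KVA) are standard XVA terminology, but the numbers and the deliberately simplified additive treatment are illustrative assumptions only:

```python
# Illustrative sketch: adjusting a model-computed valuation with XVA
# components. Values are invented; a simple additive treatment is assumed.

def adjusted_valuation(model_value: float, xva: dict[str, float]) -> float:
    """Apply valuation adjustments (XVA) to a raw model price.

    Each adjustment is a (typically negative) charge reflecting
    trade-specific context such as counterparty credit or funding costs.
    """
    return model_value + sum(xva.values())

raw_value = 1_000_000.0  # model-computed PV of a trade (illustrative)
adjustments = {
    "CVA": -12_000.0,  # credit valuation adjustment
    "FVA": -4_500.0,   # funding valuation adjustment
    "KVA": -2_300.0,   # capital valuation adjustment
}
print(adjusted_valuation(raw_value, adjustments))  # 981200.0
```

In practice each component is itself the output of a complex calculation; the point is simply that the final valuation combines a model price with judgement-driven, trade-specific adjustments.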

For AI to be used effectively for risk management, certain key technical capabilities need to be in place – specifically:

  • Data lineage, semantics and quality management
  • Feedback loops between pre-trade, trade and post-trade analytics

Many financial firms are already addressing data lineage, semantics and quality management through compliance with regulations such as BCBS 239. However, these capabilities need to be infused into a firm’s architecture (processes as well as technology) to be useful for AI use cases. Currently, the available tools are generally not highly integrated with each other or with the systems that depend on them, and the human processes around these tools are still maturing.

With respect to developing machine learning models, an AI system needs to understand what outcomes happened in the past in order to make predictions or suggestions. For most traders today, such knowledge is encapsulated in complex spreadsheets, amended over time as the trader discovers new insights. These spreadsheets are often proprietary to the trader and become increasingly unmaintainable as calculations grow interdependent: changing one calculation has a high chance of breaking another. This impedes a trader’s ability to keep their models aligned with their understanding of the markets.
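One illustration of why explicit structure helps: if spreadsheet-style calculations are modelled as a dependency graph rather than opaque cell formulas, the downstream impact of changing one calculation becomes visible and testable. The cell names and formulas below are invented for the example:

```python
# Illustrative sketch (not from the article): interdependent "cells"
# modelled as an explicit dependency graph, evaluated in topological order.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each "cell" declares the cells it depends on and a formula over them.
cells = {
    "spot":  ((), lambda: 100.0),
    "vol":   ((), lambda: 0.2),
    "hedge": (("spot", "vol"), lambda spot, vol: spot * vol),
    "pnl":   (("hedge",), lambda hedge: -hedge * 0.5),
}

def evaluate(cells):
    """Evaluate all cells, dependencies first."""
    graph = {name: set(deps) for name, (deps, _) in cells.items()}
    values = {}
    for name in TopologicalSorter(graph).static_order():
        deps, formula = cells[name]
        values[name] = formula(*(values[d] for d in deps))
    return values

print(evaluate(cells)["pnl"])  # -10.0
```

With the graph explicit, a change to `spot` can be traced mechanically to `hedge` and `pnl` – exactly the visibility that ad hoc spreadsheet formulas lack.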

Clearly another approach is needed. A key challenge is how to augment a trader’s capabilities with AI-enabled tooling, without at the same time suggesting the trader is himself/herself surplus to requirements. (This is a challenge all AI solutions face.)

One approach would be to bias AI algorithms, at some level, towards individual traders’ decision-making and learning processes, and to tie the use of such algorithms to the continued employment of the trader.

Brute-force AI learning based on all data passing through pre-trade (including market data), trade and post-trade systems is possible, but the skills in selecting critical data points for different contexts are at least as valuable as basic trading skills, and the infrastructure cost of doing this is still considerable.

Increased Focus on Data and Analytics

The key point being made by the author is that managing data should be a key strategic function, rather than being left to individual areas to manage on a per-application basis.

Again, this is tied to efforts relating to data lineage, semantics and quality. Efforts in this space can be directed at specific areas (such as risk management), but every function has growing needs for analytics that traditional warehouse- and analytics-based solutions cannot keep pace with – especially if, as is increasingly the case, every functional domain pursues its own agenda to introduce AI/machine learning into its processes to improve customer experience and/or regulatory compliance.

As Risk Management consumes more and more data points from across the firm to refine its predictions, it is not unreasonable for the Risk Management function to lead the requirements and/or technology for data management across the firm’s broader needs. However, this requires significant investment to make data management tools and infrastructure available as an easy-to-use service across multiple domains. Hence, the third key emerging technology…

Accept the public cloud

The traditional way of developing technology has been to

  1. identify the requirements
  2. propose an architecture
  3. procure and provision the infrastructure
  4. develop and/or procure the software
  5. test and deploy
  6. iterate

Each of these processes takes a considerable amount of time in traditional organizations. Anything learned after deploying the software and getting user feedback necessarily has to be addressed within the previously defined architecture and infrastructure – often incurring technical debt in the process.

Particularly for data and analytics use cases, this process is unsustainable. Rather than adapting applications to the infrastructure (as this approach requires), the infrastructure should adapt to the application – and if the infrastructure proves inappropriate, it must be disposable without any concern for ‘sunk costs’ or ‘amortization’.

The public cloud is the only viable way to evolve complex data and analytics architectures: by constantly reviewing and realigning infrastructure with application needs, firms can keep the two in step and minimize technical debt.

The discovery of which data management and analytics ‘services’ need to be built and made available to users across a firm is, today, a process of learning and iteration (in the true ‘agile’ sense). Traditional solutions preclude such agility, but embracing public cloud enables it.

One point not raised in the article is the impact of ‘serverless’ technologies, which could be a game-changer for some use cases. While the cloud in general can be taken to represent the virtualization of infrastructure, serverless specifically addresses solutions where development teams do not need to manage *any* infrastructure (virtual or otherwise) – i.e., they just consume a service, and costs are directly related to usage, such that zero usage = zero cost.
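A toy cost model makes the economic difference concrete. The prices below are made-up illustrative numbers, not any provider’s actual tariff:

```python
# Toy cost model contrasting provisioned infrastructure with a serverless,
# pay-per-use service. All prices are invented for illustration.

def provisioned_cost(monthly_requests: int, servers: int,
                     server_cost: float = 500.0) -> float:
    # Fixed cost: the servers are paid for whether or not they are used.
    return servers * server_cost

def serverless_cost(monthly_requests: int,
                    per_million: float = 0.20) -> float:
    # Usage-based cost: zero usage really does mean zero cost.
    return (monthly_requests / 1_000_000) * per_million

print(provisioned_cost(0, servers=2))  # 1000.0
print(serverless_cost(0))              # 0.0
```

The asymmetry at zero usage is the whole point: idle provisioned capacity still costs money, while an unused serverless service costs nothing.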

‘Serverless’ is not necessarily restricted to public cloud providers – a firm’s internal IT could provide this capability. As standards mature in this space, firms should start to think about how client needs could be better met through adoption of serverless technologies, which would require a substantial rethink of the traditional software development lifecycle.

Be open-minded about industry standardization

Conversation around industry standards today is driven by blockchain/distributed ledger technology and/or the efficient processing of digital assets. Industry standardization efforts have always been a feature in capital markets, mostly driven by the need to integrate with clearing houses and exchanges – i.e., centralized services. Firms generally view their data and processing models as proprietary, and fundamentally are resistant to commoditization of these. After all, if all firms ran the same software, where would their differentiation be?

The ISDA Common Domain Model (CDM) seems quite different, in that it is not driven by the need to integrate with a specific 3rd party, but rather by the need for every party to integrate with every other party – e.g., in an over-the-counter model.

Historically, the easiest way to regulate a market has been to introduce a clearing house or regulated intermediary that ensures regulators have transparency into all market activity. However, this is undesirable for products negotiated directly between trading parties, and for many digital products, so some way is needed to square this circle. Irrespective of the underlying technology, only a common data model can give both market participants and regulators a shared view of transactions. Blockchain/DLT may well be an enabling technology for digitally cleared assets, but other crypto-based solutions may arise which ensure that only the participants and the regulators have access to commonly defined critical data.
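To make the ‘shared view’ idea concrete, here is a toy common trade representation. The field names are invented for this example and are not the actual ISDA CDM:

```python
# Illustrative only: a toy "common" trade model shared by counterparties
# and regulators. Field names are invented, NOT the actual ISDA CDM.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class CommonTrade:
    trade_id: str
    product: str        # e.g. "InterestRateSwap"
    buyer: str
    seller: str
    notional: float
    currency: str
    trade_date: date

trade = CommonTrade("T-001", "InterestRateSwap", "FirmA", "FirmB",
                    10_000_000.0, "USD", date(2024, 1, 15))

# Both counterparties and the regulator serialize from the same model.
print(asdict(trade)["notional"])  # 10000000.0
```

Because everyone serializes from the same model, the participants’ and regulators’ views of a transaction agree by construction – the property a common domain model is meant to guarantee.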

Initially, integrations and adaptors for the CDM will need to be built with existing applications, but eventually most systems will need to natively adopt the CDM. This should eventually give rise to ‘derivatives-as-a-service’ platforms, with firms differentiating based on product innovations, and potentially major participants offering their platforms to other firms as a service.

Conclusion

Firms which can thread all four of these themes together into a common platform will indeed, in my view, have a major advantage over firms which do not. Judicious use of the public cloud, strong common data models to drive platform evolution, shared services for data management and analytics across business functions, and AI/machine learning to augment users’ work in different domains (starting with trading) combine to yield extraordinary benefits. The big question is how quickly this will happen, and how to balance investment in existing application portfolios against new initiatives that can properly leverage these technologies, even as they continue to evolve.

What I realized from studying AWS Services & APIs

[tl;dr The weakest link for firms wishing to achieve business agility lies principally in the financial and physical constraints imposed by managing datacenters and infrastructure. The business goals of agile, devops and enterprise architecture are fundamentally unachievable unless these constraints can be fully abstracted through software services.]

Background

Anybody who grew up with technology during the PC generation (1985-2005) will have developed software with a fairly deep understanding of how it worked from an OS/CPU, network and storage perspective. Much of that generation will have had some formal education in the basics of computer science.

Initially, the PC generation did not have to worry about servers and infrastructure: software ran on PCs. As PCs became more networked, dedicated machines running ‘server’ software needed to be connected to the desktop PCs. Folks tasked with building server-side software would buy higher-spec PCs, install (network) operating systems, connect them to desktop PCs via LAN cables, install disk drives and databases, and so on. This would all form part of the ‘waterfall’ project plan to deliver working software, with fairly predictable timeframes.

As organizations added more and more business-critical, network-based software to their portfolios, organization structures were created for datacenter management, networking, infrastructure/server management, storage and database provisioning and operation, middleware management, etc, etc. A bit like the mainframe structures that preceded the PC generation, in fact.

Introducing Agile

And so we come to Agile. While Agile was principally motivated by the flexibility HTML offered for GUI design (vs traditional GUI toolkits) – basically allowing development teams to rapidly iterate over, and improve on, different implementations of a UI – ‘Agile’ quickly became more ‘enterprise’ oriented, as planning and coordinating demand across multiple teams, both infrastructure and application development, was rapidly becoming a massive bottleneck.

It was, and is, widely recognized that these challenges are largely cultural – i.e., that if only teams understood how to collaborate and communicate, everything would be much better for everyone – all the way from the top down. And so a thriving industry exists in coaching firms how to ‘improve’ their culture – aka the ‘agile industrial machine’.

Unfortunately, it turns out there is no silver bullet: the real goal – organizational or business agility – has been elusive. Big organizations still expend vast amounts of time and resources doing small incremental change, most activity is involved in maintaining/supporting existing operations, and truly transformational activities which bring an organization’s full capabilities together for the benefit of the customer still do not succeed.

The Reality of Agile

The basic tenet behind Agile is the idea of cross-functional teams. However, most teams in organizations are unable to align themselves perfectly with the demand they receive (i.e., the equivalent of providing a customer account manager), and even if they could, the number of participants in a typical agile ‘scrum’ or ‘scrum of scrums’ meeting would quickly exceed the consensus maximum of about nine participants for a scrum to be successful.

So most agile teams resort to the only agile they know – i.e., developers, QA and maybe product owner and/or scrum-master participating in daily scrums. Every other dependency is managed as part of an overall program of work (with communication handled by a project/program manager), or through on-demand ‘tickets’ whereby teams can request a service from other teams.

The basic impact of this is that pre-planned work (resources) gets prioritized ahead of on-demand ‘tickets’ (excluding tickets relating to urgent operational issues), and so agile teams are forced to compromise the quality of their work (if they can proceed at all).

DevOps – Managing Infrastructure Dependencies

DevOps is a response to the widening communications/collaboration chasm between application development teams and infrastructure/operations teams in organizations. It recognizes that operational and infrastructural concerns are inherent characteristics of software, and software should not be designed without these concerns being first-class requirements along with product features/business requirements.

On the other hand, infrastructure/operations providers, being primarily concerned with stability, seek to offer a small number of efficient standardized services that they know they can support. Historically, infrastructure providers could only innovate and adapt as fast as hardware infrastructure could be procured, installed, supported and amortized – which is to say, innovation cycles measured in years.

In the meantime, application development teams are constantly pushing the boundaries of infrastructure – principally because most business needs can be realized in software, with sufficiently talented engineers, and those tasked with building software often assume that infrastructure can adapt as quickly.

Microservices – Managing AppDev Team to AppDev Team Dependencies

While DevOps is a response to friction in the engagement between application development and infrastructure/operations, microservices can usefully be seen as a response to how application development teams manage dependencies on each other.

In an ideal organization, an application development team can leverage/reuse capabilities provided by another team through their APIs, with minimum pre-planning and up-front communication. Teams would expose formal APIs with relevant documentation, and most engagement could be confined to service change requests from other teams and/or major business initiatives. Teams would not be required to test/deploy in lock-step with each other.
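A sketch of this contract-first idea, with hypothetical names (`AccountsV1`, `get_balance`): the consuming team codes against a published interface, and a stub implementation is enough for its tests, so neither team needs to test or deploy in lock-step:

```python
# Sketch of a published, versioned API contract between teams.
# All names here are hypothetical, invented for the example.

from typing import Protocol

class AccountsV1(Protocol):
    """Published contract of the (hypothetical) Accounts team's service."""
    def get_balance(self, account_id: str) -> float: ...

# The consuming team depends only on the contract; the Accounts team can
# redeploy its implementation independently as long as V1 is honoured.
def total_exposure(api: AccountsV1, account_ids: list[str]) -> float:
    return sum(api.get_balance(a) for a in account_ids)

# A stub satisfying the contract is enough for the consumer's tests.
class StubAccounts:
    def get_balance(self, account_id: str) -> float:
        return {"A1": 100.0, "A2": 50.0}.get(account_id, 0.0)

print(total_exposure(StubAccounts(), ["A1", "A2"]))  # 150.0
```

The design choice is that the contract, not the implementation, is the unit of coordination – which is what removes the need for lock-step testing and deployment.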

Such collaboration between teams would need to be formally recognized by business/product owners as part of the architecture of the platform – i.e., a degree of ‘mechanical sympathy’ is needed by those envisioning new business initiatives to know how best to leverage, and extend, software building blocks in the organization. This is best done by Product Management, who must steward the end-to-end business and data architecture of the organization or value-stream in partnership with business development and engineering.

Putting it all together

To date, most organizations have been fighting a losing battle. The desire to do agile and devops is strong, but the weakest link in the chain is the inability of internal infrastructure providers and operators to move as fast as software development teams need them to – an issue as much related to financial management as to managing physical buildings, hardware, etc.

What cloud providers are doing is creating software-level abstractions of infrastructure services, allowing the potential of agile, devops and microservices to begin to be realized in practice.

Understanding these services and abstractions is like re-learning the basic principles of computer science and engineering – but through a ‘service’ lens. The same issues need to be addressed; the same technical challenges exist. Except now some aspects of those challenges no longer need to be solved by organizations themselves (e.g., how to efficiently abstract infrastructure services at scale), and businesses can focus on designing infrastructure services that match the needs of application developers (rather than a compromise).

Conclusion

The AWS Service Catalog and APIs are an extraordinary achievement (as is similar work by other cloud providers, although they have yet to match the breadth of the AWS catalog). Architects need to know and understand these service abstractions, focus on matching application needs with business needs, and worry less about the traditional constraints infrastructure organizations have had to work with.

In many respects, these abstractions will differ across providers only in syntax and features. Ultimately (probably at least 10 years from now) all commodity services will converge, or be available through efficient ‘cross-plane’ solutions which abstract away the provider. That is why I am choosing to ‘go deep’ on the AWS APIs: it is, in my opinion, the most concrete starting point for helping firms achieve ‘agile’ nirvana.
