Why IoT is changing Enterprise Architecture

[tl;dr The discipline of enterprise architecture as an aid to business strategy execution has failed for most organizations, but is finding a new lease of life in the Internet of Things.]

The strategic benefits of having an enterprise-architecture based approach to organizational change – at least in terms of business models and shared capabilities needed to support those models – have been the subject of much discussion in recent years.

However, enterprise architecture as a practice (as espoused by The Open Group and others) has never managed to break beyond its role as an IT-focused endeavor.

In the meantime, less technology-minded folks are beginning to discuss business strategy using terms like ‘modularity’, which is a first step towards bridging the gap between the business folks and the technology folks. And technology-minded folks are looking at disruptive business strategy through the lens of the ‘Internet of Things’.

Business Model Capability Decomposition

Just like manufacturing-based industries decomposed their supply-chains over the past 30+ years (driving an increasingly modular view of manufacturing), knowledge-based industries are going through a similar transformation.

Fundamentally, knowledge-based industries are built on the transfer and management of human knowledge or understanding. So, for example, when you pay for something, both sides share an understanding that the payment has happened. Technology allows such information to be captured and managed at scale.

But the ‘units’ of knowledge have only slowly been standardized, and knowledge-based organizations are incentivized to ensure they are the only ones able to act on the information they have gathered – with often disastrous social and economic consequences (e.g., the financial crisis of 2008).

Hence, regulators are stepping in to ensure that at least some of this ‘knowledge’ is available in a form that allows governments to ensure such situations do not arise again.

In the FinTech world, every service provided by the big banks is under attack from nimble competitors that engage customers through new, more meaningful, technology-enabled channels, and that are willing to make at least some of this information more accessible so they can participate in a more flexible, dynamic ecosystem.

These upstart FinTech firms often face a stark choice in order to succeed. Assuming they have cleared the first hurdle of actually having a product people want, at some point they must decide whether they are competing directly with the big banks, or whether they are providing a key part of the electronic financial knowledge ecosystem that the big banks must (eventually) be part of.

In the end, what matters is their approach to data: how they capture it (i.e., ‘UX’), what they do with it, how they manage it, and how it is leveraged for competitive and commercial advantage (without falling foul of privacy laws, etc.). Much of the rest is noise from businesses trying to get attention in an increasingly crowded space.

Historically, many ‘enterprise architecture’ or strategy departments have failed to have impact because firms do not treat data (or information, or knowledge) as an asset, but rather as something to be freely and easily created and shunted around, leaving a trail of complexity and lost opportunity cost wherever it goes. This attitude must change before ‘enterprise architecture’ as a concept can have a role in boardroom discussions, and before firms change how they position IT in their business strategy. (Regulators are certainly driving this in sectors like finance and health.)

Internet of Things

Why does the Internet Of Things (IoT) matter, and where does IoT fit into all this?

At one level, IoT presents a large opportunity for firms which see the potential implied by the technologies underpinning IoT; the technology can provide a significant level of convenience and safety to many aspects of a modern, digitally enabled life.

But fundamentally, IoT is about having a large number of autonomous actors collaborating in some way to deliver a particular service, which is of value to some set of interested stakeholders.

But this sounds a lot like what a ‘company’ is. So IoT is, in effect, a company where the actors are technology actors rather than human actors. They need some level of orchestration. They need a common language for communication. And they need laws/protocols that govern what’s permitted and what is not.

If enterprise architecture is all about establishing the functional, data and protocol boundaries between discrete capabilities within an organization, then EA for IoT is the same thing but for technical components, such as sensors or driverless cars, etc.

So IoT seems a much more natural fit for EA thinking than traditional organizations, especially since, unlike departments in traditional ‘human’ companies, technical components actually like standards: they like fixed protocols, fixed functional boundaries and well-defined data sets. And while the ‘things’ themselves may not be organic, their behavior in such an environment could exhibit ‘organic’ characteristics.
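
To make that concrete, here is a minimal, purely illustrative sketch of what a fixed functional boundary and a well-defined data set for one such technical component might look like. The names (TemperatureReading, TemperatureSensor) are hypothetical, not drawn from any particular standard:

```java
// Purely illustrative sketch: a fixed functional boundary and well-defined
// data set for a hypothetical IoT component. All names are made up.
import java.time.Instant;

// The data that crosses the boundary: small, immutable and well defined.
final class TemperatureReading {
    final String sensorId;
    final Instant timestamp;
    final double celsius;

    TemperatureReading(String sensorId, Instant timestamp, double celsius) {
        this.sensorId = sensorId;
        this.timestamp = timestamp;
        this.celsius = celsius;
    }
}

// The fixed functional boundary: an orchestrator depends on this contract,
// never on how a particular device happens to implement it.
interface TemperatureSensor {
    String id();
    TemperatureReading read();
}
```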

So IoT and the benefits of an enterprise architecture-oriented approach to business strategy do seem like a match made in heaven.

The Converged Enterprise

For information-based industries in particular, there appears to be an inevitable convergence: as IoT and the standards, protocols and governance underpinning it mature, so too will the ‘modular’ aspects of existing firms’ operating models, and the eco-system of technology-enabled platforms will mature along with them. Firms will be challenged to deliver value by picking the most capable components in the eco-system around which to build unique service propositions – and the most successful of those solutions will themselves become the basis for future eco-systems (a Darwinian view of software evolution, if you will).

The converged enterprise will consist of a combination of human and technical capabilities collaborating in well-defined ways. Some capabilities will be highly human, others highly technical, some will be in-house, some will be part of a wider platform eco-system.

In such an organization, enterprise architects will find a natural home. In the meantime, enterprise architects must choose their starting point, behavioral or structural: focusing first on decomposed business capabilities and finishing with IoT (behavioral->structural), or focusing first on IoT and finishing with business capabilities (structural->behavioral).

Technical Footnote

I am somewhat intrigued by how the OSGi Alliance has, over the years, shifted its focus from basic Java applications, to discrete embedded systems, to enterprise systems, and now to IoT. OSGi has (disappointingly, IMO) had a patchy record of changing how firms build enterprise software – much of this is down to a culture of undisciplined dependency management in the software industry which is very, very hard to break.

IoT raises the bar on dependency management: you simply cannot comprehensively test a software update against the potentially hundreds of thousands or millions of deployed components running that software. The ability to reliably change modules without forcing a re-test of every dependent instantiated component is a necessity. As enterprises get more complex and digitally interdependent, standards such as OSGi will become more critical to the plumbing of enabling technologies. But as evidenced above, for folks who have tried and failed to change how enterprises treat their technology-enabled business strategy, it’s a case of FIETIOT – Failed in Enterprise, Trying IoT. And this, indeed, seems a far more rational use of an enterprise architect’s time, as things currently stand.
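
As an illustration of the kind of module boundary OSGi encourages, here is a minimal bundle activator sketch. GreetingService and its implementation are hypothetical names; the manifest mechanism mentioned in the comments is standard OSGi rather than anything specific to this example:

```java
// Minimal OSGi bundle activator sketch (GreetingService is a hypothetical name).
// The point: consumers bind to the service interface, and the bundle manifest's
// Import-Package version ranges (e.g. com.example.api;version="[1.0,2.0)")
// express which provider changes are compatible without re-testing every consumer.
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class GreetingActivator implements BundleActivator {

    private ServiceRegistration<GreetingService> registration;

    @Override
    public void start(BundleContext context) {
        // Publish the implementation behind its interface; callers never see the class.
        registration = context.registerService(GreetingService.class, new GreetingServiceImpl(), null);
    }

    @Override
    public void stop(BundleContext context) {
        // Cleanly withdraw the service when the module is updated or removed.
        registration.unregister();
    }
}

interface GreetingService {
    String greet(String name);
}

class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```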

APIs, more APIs, and .. crypto-currencies

I recently attended #APIconUK, a conference on APIs organised by the folks at Programmable Web. This event was aimed at developers (bottom-up API development) rather than more traditional enterprise customers (top-down SOA types), and as such enterprise-minded folks were very poorly represented. Which, IMO, is a bit short-sighted.

The conference was good – a nice mix of hands-on, technical and strategic presentations with plenty of time to chat to people between sessions.

Sadly I couldn’t make all the sessions, but key themes I picked up were:

  • Emphasis on treating APIs as a product
  • Computational APIs using Wolfram’s Mathematica
  • The rise of Linked Data
  • API Security
  • Payment APIs (Stripe)
  • Monetisation of APIs using crypto-currencies
  • Mobile developers & APIs

Adding all these together, what does it really mean?

  • APIs must be treated as first-class citizens with respect to technology management: inventoried/marketed, governed and supported in much the same way applications have been in the past.
  • It is possible to rapidly build useful services using sophisticated but usually non-enterprise-standard tools like Mathematica, and to expose the end result as a RESTful API which enterprise IT teams can then re-implement in standard technologies. This allows consumers of the API to gain early benefit, and allows enterprise IT time to implement it in a sustainable, medium-term cost-effective way.
  • Linked Data – the idea of attaching semantics to API information – is key for Machine Learning. APIs have the opportunity to become dynamic and self-forming when coupled with semantic information via Linked Data standards. Machine Learning is, in turn, key to automating or significantly improving processes that have relied on traditional BI reporting methods.
  • API security is a huge concern, but lots of vendors and solutions are beginning to appear in this space. In essence, it is quick and easy for anyone to create an API (just build a RESTful interface), but API management tools can help control who accesses that API, etc. It also imposes a level of governance to avoid proliferation of dubiously useful APIs.
  • Payment APIs are the next big thing. Stripe demonstrated a live build-it-from-scratch example of how to use their payment APIs, and it is remarkably simple. It doesn’t even need a client-side SDK, as you can invoke the API via tools like curl. An excellent example of how a simple API can hide the complexity of the back-end processing that must go on to make it appear simple (a rough sketch of this style of call follows this list).
  • Monetisation of APIs could be huge as a way to incentivise the development of reusable services in direct response to demand. This is a significant topic, but essentially the current API model relies on many (internet-based) API providers ultimately making money from advertising, and not from the API itself. Introducing an API currency which is exchanged whenever external APIs are used, or whenever external services use your API, is an intriguing concept. It may be useful even in a ‘traditional’ corporate environment, and would allow service providers to monetise their services directly without having to rely on ‘eyeball’-oriented revenue.
  • Mobile developers today still tend to build the back-end in a way that is highly optimised for the mobile application – in other words, any APIs built (RESTful or otherwise) tend to be designed for the specific needs of the mobile app, and not provided as a general service. Mobile developers have the opportunity to take the lead in ensuring that any mobile app results in the publishing of one or more enterprise-relevant services that other (mobile) app developers could use. Clearly this has an impact on how systems are designed and built, which is likely why the focus on mobile hasn’t produced good enterprise-relevant APIs.
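
As a rough illustration of the Stripe-style simplicity mentioned above, the sketch below makes a ‘charge’ call against a hypothetical payments endpoint. The URL, field names and key are placeholders rather than any provider’s documented contract; the point is simply how little client code a well-designed payment API requires:

```java
// Illustrative sketch only: a generic "charge" call against a hypothetical payments
// endpoint (payments.example.com is a placeholder, not a real provider URL).
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ChargeExample {
    public static void main(String[] args) throws Exception {
        String apiKey = "sk_test_placeholder"; // secret key, placeholder value
        String form = "amount=1500&currency=gbp&source=tok_example"; // form-encoded body

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://payments.example.com/v1/charges").openConnection();
        conn.setRequestMethod("POST");
        // HTTP Basic auth with the API key as the username, as many payment APIs do.
        String basic = Base64.getEncoder()
                .encodeToString((apiKey + ":").getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(form.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```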

Where does this leave the enterprise CTO? Here’s my take:

1) Mobile apps are where a lot of attention is these days. Make sure every mobile project catalogs its data and makes an effort to publish reusable APIs. Do not let every mobile app have its own back-end server implementation, unless it is essentially just an orchestration layer.

2) Encourage developers to build and catalog APIs. Even creating a simple, human-discoverable internal catalog like the one Programmable Web has would be a good start.

3) Understand your data from a business perspective, and document it using semantic web standards. This not only helps with IT planning, but is also directly usable in API and machine-learning initiatives, through standards like RDF and JSON-LD (a small sketch follows this list). Data architects are the best folks to tackle this.

4) Use API Management tools to help manage and control the creation and use of APIs.

5) Adopt principles like those proposed by the Reactive Development Movement to ensure that architectures behind APIs meet basic architectural standards, so they have some hope of coping with unplanned growth and utilisation.

6) Any app that does payments should use a well-defined Payments API. It is past time to develop internal Payment API standards for apps that have payment-processing requirements. The increasing use of alternative currencies will expedite the need for this: many traditional or legacy systems will not be able to cope with these currencies, so the API should hide this. Plus, it will help manage the transition from the current state to the target application landscape for payments-related applications. A model-driven approach to payments is key to this, especially as international payments standards take time to evolve (a sketch of such a boundary also follows this list).

7) APIs can enable innovative (read: non-standard) use of IT solutions, as long as the innovation can be encapsulated behind standard APIs. This gives the enterprise IT folks time to plan how to support the API, but still allow (limited) business adoption of the API for immediate business value.
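
As a small illustration of the semantic annotation mentioned in point 3, the sketch below assembles a JSON-LD-style record using the standard @context/@type/@id keywords, with schema.org as an example vocabulary. It deliberately uses plain maps and hand-rolled output so no particular library is assumed; the entity and URLs are hypothetical:

```java
// Minimal illustration of annotating an API payload with JSON-LD semantics.
// Plain Java maps and hand-rolled JSON output, so no specific library is assumed;
// schema.org is used as an example vocabulary.
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonLdExample {
    public static void main(String[] args) {
        Map<String, Object> person = new LinkedHashMap<>();
        person.put("@context", "https://schema.org");               // where the terms are defined
        person.put("@type", "Person");                              // the semantic type of this record
        person.put("@id", "https://api.example.com/customers/42");  // stable identifier
        person.put("name", "Jane Doe");
        person.put("email", "jane.doe@example.com");

        // Serialise the flat map as a simple JSON object.
        StringBuilder json = new StringBuilder("{");
        int i = 0;
        for (Map.Entry<String, Object> e : person.entrySet()) {
            if (i++ > 0) json.append(", ");
            json.append('"').append(e.getKey()).append("\": \"").append(e.getValue()).append('"');
        }
        json.append("}");
        System.out.println(json);
    }
}
```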
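
And to illustrate what an internal Payments API boundary (point 6) might look like, here is a hypothetical sketch: currency is carried as plain data so that alternative currencies can be supported behind the interface, and callers never see how (or through which provider) a payment is actually routed. All names are illustrative:

```java
// Hypothetical internal payments boundary: callers depend on this contract only,
// so new currencies or back-end providers can be added without changing them.
import java.math.BigDecimal;

// Currency is plain data, so "GBP", "USD" or an alternative currency code are all valid.
final class Money {
    final BigDecimal amount;
    final String currencyCode;

    Money(BigDecimal amount, String currencyCode) {
        this.amount = amount;
        this.currencyCode = currencyCode;
    }
}

final class PaymentRequest {
    final String fromAccount;
    final String toAccount;
    final Money money;

    PaymentRequest(String fromAccount, String toAccount, Money money) {
        this.fromAccount = fromAccount;
        this.toAccount = toAccount;
        this.money = money;
    }
}

// The internal API contract; how a payment is actually routed or settled is hidden.
interface PaymentService {
    String submit(PaymentRequest request); // returns a payment reference
}
```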

With respect to how this all relates to SOA-type enterprise integration strategies, I’m still forming a view. Specifically, the internet itself does not have the concept of a ‘bus’. Someday it possibly will. But this is one space where the innovation still needs to come from enterprises, which still struggle with implementing successful SOA architectures that extend much beyond departmental boundaries.

Finally, the relationship between APIs and crypto-currencies is potentially so significant that I will cover it another day.

Why modularity?

Modularity isn’t new: like a number of techniques developed in the 1960s and 70s, modularity is seeing a renaissance with the advent of cloud computing (i.e., software-defined infrastructure).

Every developer knows the value of modularity – whether it is implemented through objects, packages, services or APIs. But only the most disciplined development teams actually maintain modular integrity over time. This is because it is not obvious at the outset which modules are needed: usually code evolves, and only when recurring patterns are identified does the need for modules become clear. And then some refactoring is usually required, which takes time and costs money.
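
As a trivial sketch of that refactoring step (all names are illustrative), the recurring logic below has been pulled out of two stovepipe applications into a shared module with a narrow API, so each application now depends on the module rather than carrying its own copy:

```java
// Illustrative only: a recurring rule extracted into a shared module.
final class ReferenceDataModule {
    // Logic that previously lived, duplicated, inside each application.
    String normaliseCountryCode(String raw) {
        return raw == null ? null : raw.trim().toUpperCase();
    }
}

class OnboardingApp {
    private final ReferenceDataModule refData = new ReferenceDataModule();

    String registerCustomer(String name, String countryCode) {
        // Delegates to the shared module instead of re-implementing the rule.
        return name + "/" + refData.normaliseCountryCode(countryCode);
    }
}

class PaymentsApp {
    private final ReferenceDataModule refData = new ReferenceDataModule();

    String routePayment(String iban, String countryCode) {
        return refData.normaliseCountryCode(countryCode) + ":" + iban;
    }
}
```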

For project teams intent on meeting a project deadline, refactoring is often seen as a luxury, and so stovepipe development continues, and continues. And, of course, for outsourced bespoke solutions, it’s not necessarily in the interest of the service provider to take the architectural initiative.

In the ‘old-school’ way of enterprise software development, this results in an obvious scenario: a ‘rat’s nest’ of system integrations and stovepipe architectures, with a high cost of change and a correspondingly high total cost of ownership – and all the while with diminishing business value (related to the shrinking ability of businesses to adapt quickly to changing market needs).

Enter cloud computing (and social, big data, etc.). How does this affect the importance of modularity?

Cloud computing encourages horizontal scaling of technology and infrastructure. Stovepipe architectures and large numbers of point-to-point integrations do not lend themselves to scalable architectures: not every component of a large monolithic application can or should be scaled horizontally, and maintenance costs are high because each component has to be maintained/tested in lock-step with other changes in the application (since the components weren’t designed specifically to work together).

In addition, the demands of vertical integration encourage monolithic application development – which usually implies architectures that are not horizontally scalable and are therefore expensive to share in adaptable, agile ways. This gets progressively worse as businesses mature and the desire for vertical integration conflicts with the horizontal needs of business processes/services shared by many business units.

The only way, IMHO, to resolve this apparent conflict in needs is via modular architectures – modularity in this sense representing a modular view of the enterprise, which can then be used to define boundaries around which appropriate strategies for resolving the vertical/horizontal conflicts can be applied.

Modules defined at the enterprise level which map to cloud infrastructures provide the maximum flexibility: the ability to compose modules in a multitude of ways vertically, and to scale them out horizontally as demand/utilisation increases.

Current approaches to enterprise architecture create such maps (often called ‘business component models’), but to date it has been notoriously difficult to translate these into a coherent technical strategy – previous attempts such as SOA have produced mixed results, especially where it matters: at enterprise scale.

There are many disciplines that must come together to solve this vertical/horizontal integration paradox, starting with enterprise architecture, but leaning heavily on data architecture, business architecture, and solution architecture – not to mention project management, business analysis, software development, testing, etc.

I don’t (yet!) have the answers to resolving this paradox, but technologies like infrastructure-as-a-service, OSGi, APIs and API management, big data, and the impact mobile has on how IT is commissioned and delivered all have a part to play in how this unfolds in the enterprise.

Enterprises that realise the opportunity will have a huge advantage: right now, the small players have the one thing the big players don’t: agility. But the small players don’t have what the big players have: lots and lots of data. The commercial survivors will be the big players that figure out how to leverage their data while rediscovering agility and innovation. And modularity will be the way to do it.

The vexing question of the value of (micro)services

In an enterprise context, the benefit and value of developing distributed systems vs monolithic systems is, I believe, key to whether large organisations can thrive/survive in the disruptive world of nimble startups nipping at their heels.

The observed architecture of most mature enterprises is a rat’s nest of integrated systems and platforms, resulting in high total cost of ownership, reduced ability to respond to (business) change, and a significant inability to leverage the single most valuable asset of most modern firms: their data.

In this context, what exactly is a ‘rat’s nest’? In essence, it’s where dependencies between systems are largely unknown and/or ungoverned, the impact of changes from one system to another is unpredictable, and where the cost of rigorously testing such impacts outweighs the economic benefit of the business need requiring the change in the first place.

Are microservices and distributed architectures the answer to addressing this?

Martin Fowler recently published a very insightful view of the challenges of building applications using a microservices-based architecture (essentially where more components are out-of-process than are in-process):

http://martinfowler.com/articles/distributed-objects-microservices.html

This led to a discussion on LinkedIn, prompted by that post, questioning the value/benefit of service-based architectures:

Fears for Tiers – Do You Need a Service Layer?

The upshot is that there is no agreed solution pattern that addresses the challenge of managing the complexity of the inherently distributed nature of enterprise systems in a cost-effective, scalable way. And therein lies the opportunity for enterprise architects.

It seems evident that the existing point-to-point, use-case-based approach to developing enterprise systems cannot continue. But what is the alternative?

These blog posts will delve into this topic a lot more over the coming weeks, hopefully with some useful findings that enterprises can apply with some success.
