Why IoT is changing Enterprise Architecture

[tl;dr The discipline of enterprise architecture as an aid to business strategy execution has failed for most organizations, but is finding a new lease of life in the Internet of Things.]

The strategic benefits of having an enterprise-architecture based approach to organizational change – at least in terms of business models and shared capabilities needed to support those models – have been the subject of much discussion in recent years.

However, enterprise architecture as a practice (as espoused by The Open Group and others) has never managed to break beyond its role as an IT-focused endeavor.

In the meantime, less technology-minded folks are beginning to discuss business strategy using terms like ‘modularity’, which is a first step towards bridging the gap between the business folks and the technology folks. And technology-minded folks are looking at disruptive business strategy through the lens of the ‘Internet of Things’.

Business Model Capability Decomposition

Just like manufacturing-based industries decomposed their supply-chains over the past 30+ years (driving an increasingly modular view of manufacturing), knowledge-based industries are going through a similar transformation.

Fundamentally, knowledge-based industries are based on the transfer and management of human knowledge or understanding. So, for example, when you pay for something, there is an understanding on both sides that the payment has happened. Technology allows such information to be captured and managed at scale.

But the ‘units’ of knowledge have only slowly been standardized, and knowledge-based organizations are incentivized to ensure they are the only ones able to act on the information they have gathered – often with disastrous social and economic consequences (e.g., the financial crisis of 2008).

Hence, regulators are stepping in to ensure that at least some of this ‘knowledge’ is available in a form that allows governments to ensure such situations do not arise again.

In the FinTech world, every service provided by big banks is being attacked by nimble competitors able to take advantage of new, more meaningful technology-enabled means of engaging with customers, and who are willing to make at least some of this information more accessible so that they can participate in a more flexible, dynamic ecosystem.

These upstart FinTech firms often face a stark choice in order to succeed. Assuming they have cleared the first hurdle of actually having a product people want, at some point they must decide whether they are competing directly with the big banks, or whether they are providing a key part of the electronic financial knowledge ecosystem that the big banks must (eventually) be part of.

In the end, what matters is their approach to data: how they capture it (i.e., ‘UX’), what they do with it, how they manage it, and how it is leveraged for competitive and commercial advantage (without falling foul of privacy laws etc). Much of the rest is noise from businesses trying to get attention in an increasingly crowded space.

Historically, many ‘enterprise architecture’ or strategy departments have failed to have impact because firms do not treat data (or information, or knowledge) as an asset, but rather as something to be freely and easily created and shunted around, leaving a trail of complexity and lost opportunity cost wherever it goes. This attitude must change before ‘enterprise architecture’ as a concept can have a role in boardroom discussions, and before firms change how they position IT in their business strategy. (Regulators are certainly driving this for certain sectors like finance and health.)

Internet of Things

Why does the Internet of Things (IoT) matter, and where does it fit into all this?

At one level, IoT presents a large opportunity for firms which see the potential implied by the technologies underpinning IoT; the technology can provide a significant level of convenience and safety to many aspects of a modern, digitally enabled life.

But fundamentally, IoT is about having a large number of autonomous actors collaborating in some way to deliver a particular service, which is of value to some set of interested stakeholders.

But this sounds a lot like a description of a ‘company’. So IoT is, in effect, a company where the actors are technology actors rather than human actors. They need some level of orchestration. They need a common language for communication. And they need laws/protocols that govern what is permitted and what is not.

If enterprise architecture is all about establishing the functional, data and protocol boundaries between discrete capabilities within an organization, then EA for IoT is the same thing but for technical components, such as sensors or driverless cars, etc.

So IoT seems a much more natural fit for EA thinking than traditional organizations, especially as, unlike departments in traditional ‘human’ companies, technical components thrive on standards: they like fixed protocols, fixed functional boundaries and well-defined data sets. And while the ‘things’ themselves may not be organic, their behavior in such an environment could exhibit ‘organic’ characteristics.

So, IoT and the benefits of an enterprise architecture-oriented approach to business strategy do seem like a match made in heaven.

The Converged Enterprise

For information-based industries in particular, there appears to be an inevitable convergence: as IoT and the standards, protocols and governance underpinning it mature, so too will the ‘modular’ aspects of existing firms’ operating models, and the eco-system of technology-enabled platforms will mature along with it. Firms will be challenged to deliver value by picking the most capable components in the eco-system around which to deliver unique service propositions – and the most successful of those solutions will themselves become the basis for future eco-systems (a Darwinian view of software evolution, if you will).

The converged enterprise will consist of a combination of human and technical capabilities collaborating in well-defined ways. Some capabilities will be highly human, others highly technical, some will be in-house, some will be part of a wider platform eco-system.

In such an organization, enterprise architects will find a natural home. In the meantime, enterprise architects must choose their starting point, behavioral or structural: focusing first on decomposed business capabilities and finishing with IoT (behavioral->structural), or focusing first on IoT and finishing with business capabilities (structural->behavioral).

Technical Footnote

I am somewhat intrigued by how the OSGi Alliance has over the years shifted its focus from basic Java applications, to discrete embedded systems, to enterprise systems and now to IoT. OSGi (disappointingly, IMO) has had a patchy record in changing how firms build enterprise software – much of this is down to a culture of undisciplined dependency management in the software industry, which is very, very hard to break.

IoT raises the bar on dependency management: you simply cannot comprehensively test software updates to potentially hundreds of thousands or millions of components running that software. The ability to reliably change modules without forcing a test of all dependent instantiated components is a necessity. As enterprises get more complex and digitally interdependent, standards such as OSGi will become more critical to the plumbing of enabling technologies. But as evidenced above, for folks who have tried and failed to change how enterprises treat their technology-enabled business strategy, it’s a case of FIETIOT – Failed in Enterprise, Trying IoT. And this, indeed, seems a far more rational use of an enterprise architect’s time, as things currently stand.
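
To make that dependency-management point concrete, the sketch below (in Java, with a hypothetical TemperatureSource contract and ConsumerActivator class) shows the OSGi idiom of binding to a service contract rather than to an implementation, so that the providing bundle can be upgraded or swapped without forcing a re-test of every consumer:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.util.tracker.ServiceTracker;

    // Hypothetical service contract: consumers compile against this interface only.
    interface TemperatureSource {
        double readCelsius();
    }

    public class ConsumerActivator implements BundleActivator {
        private ServiceTracker<TemperatureSource, TemperatureSource> tracker;

        public void start(BundleContext context) {
            // Track whichever TemperatureSource implementation is currently installed;
            // the providing bundle can be updated independently of this one.
            tracker = new ServiceTracker<>(context, TemperatureSource.class, null);
            tracker.open();
            TemperatureSource source = tracker.getService();
            if (source != null) {
                System.out.println("Current reading: " + source.readCelsius() + " C");
            }
        }

        public void stop(BundleContext context) {
            tracker.close();
        }
    }

The point is not the tracker API itself but the boundary: as long as the contract (and its semantic version) stays stable, the fleet of deployed consumers never needs to be re-tested against a new provider.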



Why modularity?

Modularity isn’t new: like a number of techniques developed in the ’60s and ’70s, modularity is seeing a renaissance with the advent of cloud computing (i.e., software-defined infrastructure).

Every developer knows the value of modularity – whether it is implemented through objects, packages, services or APIs. But only the most disciplined development teams actually maintain modular integrity over time. This is because it is not obvious at the outset which modules are needed: usually code evolves, and only when recurring patterns emerge does the need for a module become apparent. And then some refactoring is usually required, which takes time and costs money.
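
As a minimal illustration of what modular integrity means in code (hypothetical Java names; the same idea applies whether the boundary is a package, a bundle or an API), callers depend on a published contract, and the implementation behind it is free to be refactored:

    // A sketch of a module boundary. Consumers are written against the
    // PaymentGateway contract; the implementation behind it can be refactored
    // or replaced (even by a remote service) without touching callers.
    public class ModularityExample {

        // The published contract – in effect, the module's API.
        public interface PaymentGateway {
            boolean charge(String accountId, long amountInCents);
        }

        // An implementation detail, free to change behind the interface.
        static class InMemoryPaymentGateway implements PaymentGateway {
            public boolean charge(String accountId, long amountInCents) {
                System.out.println("Charging " + accountId + ": " + amountInCents);
                return true;
            }
        }

        public static void main(String[] args) {
            PaymentGateway gateway = new InMemoryPaymentGateway();
            gateway.charge("acct-42", 1999);
        }
    }

Maintaining modular integrity is essentially the discipline of keeping callers on the contract side of that line as the code evolves.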

For project teams intent on meeting a project deadline, refactoring is often seen as a luxury, and so stovepipe development continues, and continues. And, of course, for outsourced bespoke solutions, it’s not necessarily in the interest of the service provider to take the architectural initiative.

In the ‘old-school’ way of enterprise software development, this results in an obvious scenario: a ‘rat’s nest’ of system integrations and stove-pipe architectures, with a high cost of change and a correspondingly high total cost of ownership – and all the while with diminishing business value (related to the shrinking ability of businesses to adapt quickly to changing market needs).

Enter cloud-computing (and social, big-data, etc, etc). How does this affect the importance of modularity?

Cloud-computing encourages horizontal scaling of technology and infrastructure. Stove-pipe architectures and large numbers of point-to-point integrations do not lend themselves to scalable architectures, as not every component of a large monolithic application can be or should be scaled horizontally, and maintenance costs are high as each component has to be maintained/tested in lock-step with other changes in the application (since the components weren’t designed specifically to work together).

In addition, the demands/requirements of vertical integration encourage monolithic application development – which usually implies architectures which are not horizontally scalable and which are therefore expensive to share in adaptable, agile ways. This gets progressively worse as businesses mature and the business desire for vertical integration conflicts with the horizontal needs of business processes/services which are shared by many business units.

 

The only way, IMHO, to resolve this apparent conflict in needs is via modular architectures – modularity in this sense meaning a modular view of the enterprise, which can then be used to define the boundaries around which appropriate strategies for resolving the vertical/horizontal conflicts can be applied.

Modules defined at the enterprise level which map to cloud infrastructures provide the maximum flexibility: the ability to compose modules in a multitude of ways vertically, and to scale them out horizontally as demand/utilisation increases.

Current approaches to enterprise architecture create such maps (often called ‘business component models’), but to date it has been notoriously difficult to translate these into a coherent technical strategy – previous attempts such as SOA have produced mixed results, especially where it matters: at enterprise scale.

There are many disciplines that must come together to solve this vertical/horizontal integration paradox, starting with enterprise architecture, but leaning heavily on data architecture, business architecture, and solution architecture – not to mention project management, business analysis, software development, testing, etc.

I don’t (yet!) have the answers to resolving this paradox, but technologies like infrastructure-as-a-service, OSGi, APIs and API management, big data, and the impact mobile has on how IT is commissioned and delivered all have a part to play in how this plays out in the enterprise.

Enterprises that realise the opportunity will have a huge advantage: right now, the small players have the one thing the big players don’t: agility. But the small players don’t have what the big players have: lots and lots of data. The commercial survivors will be the big players that have figured out how to leverage their data while rediscovering agility and innovation. And modularity will be the way to do it.

 


The high cost of productivity in Java OSGi

How hard should it be to build a simple web page that says ‘Hello World’?

If you are using Java and OSGi, it seems that this is a non-trivial exercise – at least if you want to use tools to make your life easier. But once you are through the pain, the end result seems to be quite good, although the key decider will be whether there is some longer-term benefit to be had from going through such a relatively complex process.

My goal was to build an OSGi-enabled ‘Hello World’ web application that allowed me to conduct the whole dev lifecycle within my Eclipse environment. It should have been simple, but it turned out that persuading Eclipse to put the right (non-Java) resource files in the right place in the JAR files, and getting the right bundles included to use them, was a challenge. Also, when things didn’t work as expected, trying to resolve apparently relevant error messages was a huge waste of time.

In particular, the only tools I am using at this point in time are:

  • Eclipse with Bndtools plugin
  • Git
  • An empty bundle repository, apart from the one provided by default with bndtools

So, the specific challenges were:

Web Application Bundles (WABs) + Servlets

  • To use the embedded jetty bundle (i.e., the Java HTTP server) to serve servlets, you need to include a web.xml in the JAR file, in the WEB-INF folder. It turns out that to do this, you need to put WEB-INF/web.xml in the same src directory as your servlet’s Java file, and manually add ‘-wab: src/<servlet src dir>’ to the bnd.bnd source. This tells bndtools to generate a JAR structure compatible with web applications (see the sketch after this list).
  • The key pointer to this was in the bndtools docs:
  • Once that was done, Eclipse generated the correct JAR structure.
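
For reference, a minimal sketch of the servlet side of this setup (class and package layout are hypothetical; the -wab: instruction is the one described above):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // This class lives alongside a WEB-INF/web.xml in the same source directory,
    // and bnd.bnd carries a line such as:  -wab: src/<servlet src dir>
    // so that bndtools produces a WAB-shaped JAR with WEB-INF/web.xml in place.
    public class HelloWorldServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            resp.getWriter().println("<h1>Hello World</h1>");
        }
    }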

Including the right bundle dependencies

  • There are a number of resources to describe how to build a servlet with OSGi. Having started with the example in the ‘Enterprise OSGi in Action’ book (which precedes bndtools, and saves time by using an Apache Aries build but which includes a lot of unnecessary bundles), I decided to go back to basics and start from the beginning. I found the link below:
  • Basically, just use the ‘Apache Felix 4 with Web Console and Gogo’ option (provided within bndtools, and included as part of the default bndtools bundle repository), and add Apache Felix Whiteboard. It took a bit of effort to get this working: in the end I had to download the latest Felix JAR download for all its components, and add them to my local bndtools repo.
  • The standard bndtools Felix components did not seem to work, and I think some of this may be due to old versions of the Felix servlet and jetty implementation in the bndtools repo (javax.servlet v2.5 vs 3.0, jetty 2.2 vs 2.3) conflicting with some newer bundles which support the Felix whiteboard functionality. So to use the whiteboard functionality, ensure you have the latest Felix bundles in your local bndtools repo. (Download them from the Felix website and add them via the Eclipse Repositories view to the ‘Local’ repository.) A minimal registration sketch follows this list.
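
As promised above, here is a minimal sketch of whiteboard-style registration, reusing the HelloWorldServlet from the earlier sketch (class names are hypothetical, and the ‘alias’ service property follows the Felix HTTP whiteboard convention of that era – check the property name against the Felix bundles you actually install):

    import java.util.Hashtable;
    import javax.servlet.Servlet;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Instead of registering with the HttpService directly, the servlet is
    // published as an OSGi service and the Felix whiteboard bundle picks it up.
    public class WhiteboardActivator implements BundleActivator {
        private ServiceRegistration<Servlet> registration;

        public void start(BundleContext context) {
            Hashtable<String, Object> props = new Hashtable<>();
            props.put("alias", "/hello"); // URL path the servlet is served under
            registration = context.registerService(
                    Servlet.class, new HelloWorldServlet(), props);
        }

        public void stop(BundleContext context) {
            registration.unregister();
        }
    }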

Error messages which (again) did not describe the actual problem

  • When I first built the application, the jetty server said ‘not found’ for the servlet URL I was building, and this message appeared in the log:
    • NO JSP Support for /fancyfoods.web, did not find org.apache.jasper.servlet.JspServlet
  • This was followed by a message suggesting that the server *had* processed the web.xml file.
  • After exploring what this error was (through judicious googling), it turned out that Felix does not include JSP support by default, but this is not relevant to implementing the servlet example. In fact, this error was still displayed even after I got the application working correctly via the methods described above.

In conclusion: the learning curve has been steep, and there’s a lot of feeling around in the dark, simply because of the sheer volume of material out there one needs to know to become expert in this. But once environment/configuration issues have been sorted out, in principle, productivity should improve a lot.

However, I am not done yet: I suspect there will be many more challenges ahead in getting Blueprint and JPA working in my Eclipse environment.

 


The vexing question of the value of (micro)services

In an enterprise context, the benefit and value of developing distributed systems vs monolithic systems are, I believe, key to whether large organisations can thrive/survive in the disruptive world of nimble startups nipping at their heels.

The observed architecture of most mature enterprises is a rat’s nest of integrated systems and platforms, resulting in a high total cost of ownership, a reduced ability to respond to (business) change, and a significant inability to leverage the single most valuable asset of most modern firms: their data.

In this context, what exactly is a ‘rat’s nest’? In essence, it’s where dependencies between systems are largely unknown and/or ungoverned, the impact of changes from one system to another is unpredictable, and the cost of rigorously testing such impacts outweighs the economic benefit of the business need requiring the change in the first place.

Are microservices and distributed architectures the answer to addressing this?

Martin Fowler recently published a very insightful view of the challenges of building applications using a microservices-based architecture (essentially where more components are out-of-process than are in-process):

http://martinfowler.com/articles/distributed-objects-microservices.html

This prompted a discussion on LinkedIn questioning the value/benefit of service-based architectures:

Fears for Tiers – Do You Need a Service Layer?

The upshot is, there is no agreed solution pattern that addresses the challenge of managing the complexity of the inherently distributed nature of enterprise systems in a cost-effective, scalable way. And therein lies the opportunity for enterprise architects.

It seems evident that the existing point-to-point, use-case based approach to developing enterprise systems cannot continue. But what is the alternative? 

These blog posts will delve into this topic a lot more over the coming weeks, with hopefully some useful findings that enterprises can apply with some success.

 


BndTools – a tool for building enterprise OSGi applications

Enterprise OSGi is complex, but the potential value is enormous, especially when the OSGi standards can be extended to cloud-based architectures. More on this later.

In the meantime, building OSGi applications is a challenge, as it is all about managing *and isolating* dependencies. Enter bndtools, an Eclipse plug-in designed to make developing and testing OSGi components (bundles) easy.

The tutorial for bndtools, which I recently completed, can be found here:

http://bndtools.org/tutorial.html

It is quite short, and gets you up and going very quickly with an OSGi application.

Now that I’ve got the basics of bndtools sorted, I’m going to go back and see if I can rebuild my “Enterprise OSGi in Action” project using it, so I can stop using my rather poor shell scripts. That should make the code/build/test cycle much easier and quicker as I work through the remaining sections of the book.

bndtools makes use of a remote repository which includes the packages needed to build OSGi applications. I need to understand more about this, so I can figure out how and where the various packages and bundles needed to build and run OSGi applications should come from, and how to manage changes to them over time. The ‘Enterprise OSGi in Action’ project leans heavily on the Apache Aries Blog Assembly set of components, which is great to get things going, but in practice many of the components in the Apache Aries distribution are not needed to do the examples in the book.

Since I don’t like waste (it leads to unnecessary complexity down the road), I want to see if I can build the examples with only the specific components needed.


Enterprise OSGi in Action – a tutorial in enterprise OSGi

Having developed an active interest in Enterprise OSGi, I decided to leap in and work my way through the ‘Enterprise OSGi in Action’ book by Tim Ward and Holly Cummins, published by Manning Publications. (manning.com, btw, is a great resource for technical hands-on books delivered electronically; many of them will feature in this blog.)

More details about *why* I am interested in Enterprise OSGi are the subject of another blog post…for now, onto my progress so far.

I am already some weeks into this project, and have been using it to refresh my Java development skills, which, I must admit, are somewhat rusty after some years in the technology project management space. This blog will touch upon some topics which may be obvious to everyday developers, but for folks who cut their teeth in C/C++ and Java in the ’90s and didn’t keep up programming into the 2000s, some topics can be quite a revelation.

So, here’s what I’ve got so far:

  • I’ve completed up to Chapter 3 of the book (covering a simple web-based application with persistence)
  • I’ve created a repository on GitHub:
  •  I’ve loaded the project into Eclipse for compilation error checking etc, but not for building
  • I’ve written some simple shell scripts to do a *very* crude compile/build/deploy process – this is what folks were doing prior to ‘make’, never mind Nexus/Maven/etc!

What to do next:

  • Properly understand what I did in the first 3 chapters..! 🙂
  • Find out why I keep getting Derby/Blueprint errors when my application starts up – this appears to be a timing thing, but seems to be very sensitive and not particularly robust, as I have had it working previously.
  • Improve the toolchain process – use Eclipse, bndtools, Jenkins and Maven/Nexus to automate the compile/build/deploy process.
  • Complete the next few chapters – delving deeper into the ‘enterprise’ aspects of OSGi
  • Build something ‘real’ with the technology.

Note that the ‘Enterprise OSGi in Action’ book is now a couple of years old; however, its concepts are still correct and apply to the latest OSGi specs. I believe that with the correct application of the bndtools toolset, building this example project would be a whole lot easier, hence a goal of this project is to do just that. Without bndtools, some deep technical understanding of what is going on is needed, which for many devs will be too much: they just want to get productive already…

If anybody else is exploring or experimenting with OSGi, feel free to comment. This is a learning exercise, so there are no stupid questions. (After having searched a few times in stackoverflow.com for problems which I attributed to my lack of recent hands-on experience, I’ve found that most of my stupid questions have already been asked by others and invariably answered.)

For those interested, I am doing most of my work on a MacBook Pro running Mac OS X Mavericks. I’ve installed Homebrew for some basic tools, but otherwise I’m developing in a standard Mac environment. 
