Why modularity?

Modularity isn’t new: like a number of techniques developed in the ’60s and ’70s, it is seeing a renaissance with the advent of cloud computing (i.e., software-defined infrastructure).

Every developer knows the value of modularity – whether it is implemented through objects, packages, services or APIs. But only the most disciplined development teams actually maintain modular integrity over time. This is because it is not obvious at the outset which modules are needed: usually code evolves, and only when recurring patterns emerge does the need for modules become apparent. And then some refactoring is usually required, which takes time and costs money.

For project teams intent on meeting a project deadline, refactoring is often seen as a luxury, and so stovepipe development continues, and continues. And, of course, for outsourced bespoke solutions, it’s not necessarily in the interest of the service provider to take the architectural initiative.

In the ‘old-school’ way of enterprise software development, this results in an obvious scenario: a ‘rat’s nest’ of system integrations and stove-pipe architectures, with a high cost of change and a correspondingly high total cost of ownership – and all the while with diminishing business value (reflecting the shrinking ability of the business to adapt quickly to changing market needs).

Enter cloud computing (and social, big data, etc, etc). How does this affect the importance of modularity?

Cloud computing encourages horizontal scaling of technology and infrastructure. Stove-pipe architectures and large numbers of point-to-point integrations do not lend themselves to this: not every component of a large monolithic application can or should be scaled horizontally, and maintenance costs are high because each component has to be maintained and tested in lock-step with every other change in the application (since the components weren’t designed specifically to work together).

In addition, the demands of vertical integration encourage monolithic application development – which usually implies architectures that are not horizontally scalable and are therefore expensive to share in adaptable, agile ways. This gets progressively worse as businesses mature and the business desire for vertical integration conflicts with the horizontal needs of business processes/services which are shared by many business units.


The only way, IMHO, to resolve this apparent conflict in needs is via modular architectures – modularity in this sense representing the modular view of the enterprise, which can then be used to define the boundaries around which appropriate strategies for resolving the vertical/horizontal conflicts can be built.

Modules defined at the enterprise level which map to cloud infrastructures provide the maximum flexibility: the ability to compose modules in a multitude of ways vertically, and to scale them out horizontally as demand/utilisation increases.

Current approaches to enterprise architecture create such maps (often called ‘business component models’), but to date it has been notoriously difficult to translate this into a coherent technical strategy – previous attempts such as SOA provide mixed results, especially where it matters: at enterprise scale.

There are many disciplines that must come together to solve this vertical/horizontal integration paradox, starting with enterprise architecture, but leaning heavily on data architecture, business architecture, and solution architecture – not to mention project management, business analysis, software development, testing, etc.

I don’t (yet!) have the answers to resolving this paradox, but technologies like infrastructure-as-a-service, OSGi, API and API management, big data and the impact mobile has on how IT is commissioned and delivered all have a part to play in how this will play out in the enterprise.

Enterprises that realise the opportunity will have a huge advantage: right now, the small players have the one thing the big players don’t: agility. But the small players don’t have what the big players have: lots and lots of data. The commercial survivors will be the big players that figure out how to leverage their data while rediscovering agility and innovation. And modularity will be the way to do it.



The high cost of productivity in Java OSGi

How hard should it be to build a simple web page that says ‘Hello World’?

If you are using Java and OSGi, it seems that this is a non-trivial exercise – at least if you want to use tools to make your life easier. But once you are through the pain, the end result seems quite good, although the key decider will be whether there is some longer-term benefit to going through such a relatively complex process.

My goal was to build an OSGi-enabled ‘Hello World’ web application that allowed me to conduct the whole dev lifecycle within my Eclipse environment. It should have been simple, but it turned out that persuading Eclipse to put the right (non-Java) resource files in the right place in the JAR, and including the right bundles to use them, was a challenge. Also, when things didn’t work as expected, trying to resolve apparently relevant error messages was a huge waste of time.

In particular, the only tools I am using at this point in time are:

  • Eclipse with Bndtools plugin
  • Git
  • An empty bundle repository, apart from the one provided by default with bndtools

So, the specific challenges were:

Web Application Bundles (WABs) + Servlets

  • To use the embedded Jetty bundle (i.e., the Java HTTP server) to serve servlets, you need to include a web.xml in the JAR file, in the WEB-INF folder. It turns out that to do this, you need to put WEB-INF/web.xml in the same src directory as your servlet Java file, and manually add ‘-wab: src/<servlet src dir>’ to the bnd.bnd source. This tells bndtools to generate a JAR structure compatible with web applications.
  • The key pointer to this was in the bndtools docs:
  • Once that was done, Eclipse generated the correct JAR structure.
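To illustrate, a bnd.bnd along those lines might look like the fragment below. This is a sketch, not my actual file: the bundle name and source path are placeholders, and bndtools derives some of these headers automatically.

```
# bnd.bnd (fragment) – illustrative names only
Bundle-Version: 1.0.0
Private-Package: fancyfoods.web

# Tell bnd to build a Web Application Bundle (WAB): the contents of
# this directory (including WEB-INF/web.xml) are copied into the JAR
# in web-app layout, alongside the compiled servlet classes.
-wab: src/fancyfoods/web
```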

Including the right bundle dependencies

  • There are a number of resources describing how to build a servlet with OSGi. Having started with the example in the ‘Enterprise OSGi in Action’ book (which precedes bndtools and saves time by using an Apache Aries build, but includes a lot of unnecessary bundles), I decided to go back to basics and start from the beginning. I found the link below:
  • Basically, just use the ‘Apache Felix 4 with Web Console and Gogo’ option (provided within bndtools, and included as part of the default bndtools bundle repository), and add Apache Felix Whiteboard. It took a bit of effort to get this working: in the end I had to download the latest Felix JARs for all its components, and add them to my local bndtools repo.
  • The standard bndtools Felix components did not seem to work, and I think some of this may be due to old versions of the Felix servlet and Jetty implementations in the bndtools repo (javax.servlet v2.5 vs 3.0, Jetty 2.2 vs 2.3) conflicting with some newer bundles which support the Felix whiteboard functionality. So to use the whiteboard functionality, ensure you have the latest Felix bundles in your local bndtools repo. (Download from the Felix website and add them via the Eclipse Repositories view to the ‘Local’ repository.)
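For reference, the classic Felix whiteboard approach is to publish the servlet as an OSGi service under javax.servlet.Servlet with an ‘alias’ property; the whiteboard then mounts it at that URL path. The class name and alias below are made-up examples, and this sketch assumes a Blueprint extender is running (you could equally register the service from a BundleActivator):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- Hypothetical servlet class; substitute your own -->
  <bean id="helloServlet" class="fancyfoods.web.SayHello"/>

  <!-- The Felix whiteboard watches for Servlet services and
       registers each one against its 'alias' URL path -->
  <service ref="helloServlet" interface="javax.servlet.Servlet">
    <service-properties>
      <entry key="alias" value="/hello"/>
    </service-properties>
  </service>

</blueprint>
```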

Error messages which (again) did not reflect the actual problem

  • When I first built the application, the jetty server said ‘not found’ for the servlet URL I was building, and this message appeared in the log:
    • NO JSP Support for /fancyfoods.web, did not find org.apache.jasper.servlet.JspServlet
  • This was followed by a message suggesting that the server *had* processed the web.xml file.
  • After exploring what this error was (through judicious googling) it turns out Felix does not include JSP support by default, but this is not relevant to implementing the servlet example. In fact, this error was still displayed even after I got the application working correctly via the methods described above.

In conclusion: the learning curve has been high, and there’s a lot of feeling in the dark just because of the sheer volume of material out there one needs to know to become expert in this. But once environment/configuration issues have been sorted out, in principle, productivity should improve a lot.

However, I am not done yet: I suspect there will be many more challenges ahead in getting Blueprint and JPA working in my Eclipse environment.



The vexing question of the value of (micro)services

In an enterprise context, the benefit and value of developing distributed systems vs monolithic systems is, I believe, key to whether large organisations can thrive/survive in the disruptive world of nimble startups nipping at their heels.

The observed architecture of most mature enterprises is a rat’s nest of integrated systems and platforms, resulting in a high total cost of ownership, a reduced ability to respond to (business) change, and a significant inability to leverage the single most valuable asset of most modern firms: their data.

In this context, what exactly is a ‘rat’s nest’? In essence, it’s where dependencies between systems are largely unknown and/or ungoverned, the impact of changes from one system to another is unpredictable, and the cost of rigorously testing such impacts outweighs the economic benefit of the business need requiring the change in the first place.

Are microservices and distributed architectures the answer to addressing this?

Martin Fowler recently published a very insightful view of the challenges of building applications using a microservices-based architecture (essentially where more components are out-of-process than are in-process):


This led to a discussion on LinkedIn prompted by that post, questioning the value/benefit of service-based architectures:

Fears for Tiers – Do You Need a Service Layer?

The upshot is that there is no agreed solution pattern that addresses the challenge of managing the complexity of the inherently distributed nature of enterprise systems in a cost-effective, scalable way. And therein lies the opportunity for enterprise architects.

It seems evident that the existing point-to-point, use-case based approach to developing enterprise systems cannot continue. But what is the alternative? 

These blog posts will delve into this topic a lot more over the coming weeks, with hopefully some useful findings that enterprises can apply with some success.



BndTools – a tool for building enterprise OSGi applications

Enterprise OSGi is complex, but the potential value is enormous, especially when the OSGi standards can be extended to cloud-based architectures. More on this later.

In the meantime, building OSGi applications is a challenge, as it is all about managing *and isolating* dependencies. Enter bndtools, an Eclipse plug-in designed to make developing and testing OSGi components (bundles) easy.

The tutorial for bndtools, which I recently completed, can be found here:


It is quite short, and gets you up and going very quickly with an OSGi application.

Now that I’ve got the basics of bndtools sorted, I’m going to go back and see if I can rebuild my “Enterprise OSGi in Action” project using it, so I can stop using my rather poor shell scripts. That should make it much easier/quicker to get through the remaining sections of the book, in terms of the code/build/test cycle.

bndtools makes use of a remote repository which includes the packages needed to build OSGi applications. I need to understand more about this, so I can figure out how and where the various packages and bundles needed to build and run OSGi applications should come from, and how to manage changes to these over time. The ‘Enterprise OSGi in Action’ project leans heavily on the Apache Aries Blog Assembly set of components, which is great to get things going, but in practice many of the components in the Apache Aries distribution are not needed for the examples in the book.

Since I don’t like waste (it leads to unnecessary complexity down the road..), I want to see if I can build the examples with only the specific components needed.


Private vs Public Repositories in the Enterprise (Enterprise Open Source Part 2)

Why would anyone want to have a private repository (à la Bitbucket), and not make their code available for anyone to see/enhance/critique (à la GitHub)?

In the public internet, the reasons are legion: plenty of folks are learning and need somewhere safer to put the output of their mediocre, laughable scratchings than on their local disk. Or they foolishly believe that keeping their code secret is a sustainable commercial advantage.

In enterprises, the reasons are more nuanced: you are hired as an (expensive) professional developer, and if you publish your code for others to see (*internally*), folks may question your expertise if they think the code is shoddy or poorly structured. Or you may personally be unhappy with the code, but nevertheless the code is being used to support a valid business need or purpose. etc, etc.

In an enterprise, private repositories actually represent a risk to the firm: if you are maintaining code that is personal to you (or to a small team), then you are providing a service, but not delivering a product. In other words, the value is in you as a person (or the team) and not in the technology you produce as such. There are many examples of this in organisations with a key-man dependency that cannot be broken without paying a king’s ransom. The same theory applies to vendors who maintain proprietary code: make no mistake, they are providing a service, not delivering a product. And so you have a fixed dependency on the service provider, which is an on-going cost and risk, depending on the ‘stability’ of the service provider.

For many organisations, this works fine: banks have been using this model for years, both for traders and for technologists. But it is unsustainable: when people feel their efforts to maintain their service is not being rewarded, they will leave, and then you have an impaired service which could take some time to return to previous levels (if ever).

It should also be noted that in a service-based technology environment, in the event of the demise of a firm (i.e., the people are all fired), the value of the remaining (software) assets is significantly diminished if all the code is tied up in private repositories that have never seen the light of day. So from a ‘living will’ perspective, there is a lot of value in open source.

I believe an enterprise can only move to a product-based environment, where there is sustainable value in the technology itself, when source code is made open (internally), with all the cultural implications of this.

So when would private repositories in an enterprise environment be acceptable? Well, in principle anyone can keep a private repository on a local protected shared drive on a corporate network. But ignoring that, (and apart from code that is genuinely commercially sensitive) firms may explicitly state that private repositories can be used for learning or otherwise developing skills that are not intended for commercial use by the firm. So if you leave, the firm is not exposed.

The rule of thumb should be that all productionised code that is not classified as Confidential should be made (internally) open. And a strict governance process should be in place to ensure that Confidential code is really confidential – i.e., that there is a legitimate business reason why the business users would not want anyone but their own team to see the code, and that only the sensitive code is classified as such, and not (for example) the entire application on the basis that it contains a single module with sensitive code. This will be a business decision, and will reflect how much any given business unit in an organisation feels it is part of a wider concern, or is just out for its own PnL.

My bet is that in successful enterprises in the future, business units will be comfortable for other business units to see and leverage their code. But for many organisations today, such collaboration is a major cultural challenge, which IT organisations should help change.


Enterprise Open Source

I recently came across this article, advocating the use of open source within organisations:

Why Your Company Needs To Write More Open Source Software

There are many reasons why going open source is good for organisations, and almost as many reasons why it’s not likely to happen any time soon. However, an interim ‘enterprise open source’ approach may be useful – enterprise software written using open-source governance and change-control principles, but where the openness is restricted to enterprise use.

Why would enterprises want to do this? 

Simply put, technology reuse across enterprises is going to be very, very difficult (if not impossible) to develop in an economically viable way without it. And there are many other benefits, such as giving developers stewardship of business software, encouraging developers to build software that they know will have other eyes reviewing it, improving build/compile automation processes, improving deployment processes, reducing key man dependencies, reducing vendor lock-in, reducing the need for formal code reviews, providing a longer-term ROI for investment in software, etc.

So while not everything in an Enterprise Open Source effort could (or should) be reused, the mere act of building software in this way will improve how software development is perceived and treated in firms that are technology enabled, but are not technology vendors.

Many developers are reluctant to build software that can be exposed to others for critique – this is natural when developers are forced in many circumstances to cut corners to meet project deliverables, even at the expense of technical debt. It’s time to put the technical debt in the spotlight: maybe you don’t have the time to fix it, because it doesn’t need to be fixed for your project. But someone else may think it’s worth fixing for their project, versus the cost of developing something entirely new.

From an audit and control perspective, I suspect that in well written code, there should be very little sensitive information in code that cannot be shared at the enterprise level: most likely, any such sensitive information should be captured in its own repository, with appropriate controls to restrict access – i.e., a separation of concerns, architecturally.

I believe a top-down sponsored approach to enterprise open source is necessary to allow the development of truly reusable services (encapsulating data and function in one or more modules), modules (encapsulating function only across multiple packages) and packages (encapsulating functional building blocks for modules in the form of JARs etc).

Without it, effective technology reuse will rely only on top-down governance – a level of control and oversight that is expensive to provide and mostly ineffective, especially in larger organisations. Rather, developers need to be recognised for the code they steward and the code they reuse/contribute to; this, combined with a top-down reuse strategy, should yield the best outcome.

Would any CTO stick their neck out and advocate this approach?

Note – the above is separate from the issue of whether firms that use open source should contribute back to the community. In particular, Amazon has drawn some attention regarding its official policy on employees’ contributions to the open source community. This may be the subject of a future blog post.


OSGi Blueprint travails

So, OSGi is supposed to make it easier to isolate problems and therefore quicker to resolve. Alas, that is not the case yet – although this may yet change as I start to apply all the tools that are rapidly maturing in this space.

The problem is that my application (the ‘fancyfoods’ website, from the ‘Enterprise OSGi in Action’ tutorial) is failing to start up correctly, to wit:

  1. The servlet that displays the special offers – which I know are available from the ‘fancyfoods’ store – is saying there are none available.
  2. A number of exception errors in the console window I use to launch the OSGi application.

To date, the book has been accurate in terms of the key source code and configurations it requires. So in all probability the error was due to my very novice development/test environment configuration.

So, nothing to do but pick through the exceptions and try to decipher them.

It starts with this beauty, which basically says the first operation which attempts to access the persistent store is failing (it does a count to see if any rows exist in the database):

[Blueprint Extender: 2] ERROR org.apache.aries.blueprint.container.BlueprintContainerImpl – Unable to start blueprint container for bundle fancyfoods.persistence
org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to intialize bean populator
Caused by: java.lang.VerifyError: Expecting a stackmap frame at branch target 41
Exception Details:
    fancyfoods/persistence/FoodImpl.<clinit>()V @32: ifnull
    Expected stackmap frame at this location.

[Blueprint Extender: 1] WARN Transaction – Error ending association for XAResource org.apache.derby.jdbc.EmbedXAResource@e5a209e; transaction will roll back. XA error code: 100

As some background, Derby is a lightweight SQL database written in Java, accessed here via JPA (the Java Persistence API). It is used as the persistence service for the ‘fancyfoods’ application, as a simple substitute for MySQL or Oracle, etc.

After spending *way* too much time trying to figure out this error and verifying the various configuration files etc were correct (and in the process understanding a whole lot more about OSGi Blueprint), it turns out that the error was quite mundane, and indeed related to my dev/test environment, which uses the Mac OS X version of Java 1.7.
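For context, the kind of Blueprint configuration I was busy verifying looks roughly like the sketch below: the persistence bundle declares a bean, injects a JPA persistence context into it, and wraps its methods in a transaction. This is reconstructed from memory of the book’s fancyfoods example, so treat the class, interface and unit names as approximate rather than exact.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.0.0"
           xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.0.0">

  <bean id="inventory" class="fancyfoods.persistence.InventoryImpl">
    <!-- Aries injects a managed EntityManager for the named
         persistence unit into the 'entityManager' property -->
    <jpa:context property="entityManager" unitname="fancyfoods"/>
    <!-- All methods run inside a container-managed transaction -->
    <tx:transaction method="*" value="Required"/>
  </bean>

  <service ref="inventory" interface="fancyfoods.food.Inventory"/>

</blueprint>
```

The declarative nature of this is exactly what makes tracing cause and effect hard: nothing in the XML is wrong, yet the container can still fail at runtime for unrelated reasons, as it did here.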

The turning point was this post from stackoverflow.com:


Adding the ‘-XX:-UseSplitVerifier’ to my OSGi run script fixed the problem.
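For anyone hitting the same VerifyError on the Java 7 JVM, the change is a one-liner in the launch script. The framework JAR name below is illustrative – use whatever you launch Felix (or Equinox) with:

```shell
# Run script (fragment): disable the Java 7 split verifier, which
# rejects woven/enhanced bytecode (e.g. from JPA entity enhancement)
# that lacks the stackmap frames the new verifier expects.
java -XX:-UseSplitVerifier -jar org.apache.felix.main.jar
```

Note this flag is a workaround rather than a fix – the underlying issue is bytecode-weaving tooling that predates the Java 7 verifier.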

Along the way, I investigated and eliminated all kinds of debug messages that were suggestive of problems but seemed to be relatively normal, or were just symptoms of the original error – for example, mysterious unbinding of services seemed to precede the exception being thrown. I suspect this was due to the multi-threaded nature of OSGi: once the exception was thrown, various activities kicked in that displayed messages before the thread with the exception displayed its own.

Trying to solve this highlighted a few things:

  1. OSGi is powerful, but when services don’t work as expected, it is devilishly difficult to trace through a potentially complex web of dependencies to figure out where things have gone wrong. Blueprint in particular is a challenge: being declarative, tracing cause and effect can be non-trivial.
  2. Getting your dev/build environment right is so key: spend time on this; it is worth it, and shouldn’t change much once the basic architecture is in place.
  3. The ‘services’ mindshift in Java coding is quite significant: I haven’t got there yet, and I think this is at least partially because Java is basically an imperative language, while services are more declarative in nature, so the two concepts have to co-exist. More sophisticated tools are needed to support this, especially in an enterprise setting.

Next up for OSGi: finish chapter 3 (implementing distributed and multipart transactions). Hopefully there won’t be any showstoppers to slow things down in the meantime.


The Learning CTO – Topics of Interest

What are the subjects/topics that are worthy of some discussion/observations in this blog?

The criteria for what is deemed ‘interesting’ will likely evolve, but right now, the key drivers are:

  • Technologies being used by kids straight out of college (or not)
  • Technologies that can be used in place of traditional enterprise vendor supplier solutions
  • Technologies that require a fundamental shift in thinking about technology problems (e.g., functional vs imperative programming, etc)
  • Technologies that improve agility, productivity and complexity management – and hence the cost of ownership of technology
  • Technologies that empower users (even as users become more technically savvy)
  • Technologies that will build the (business) platforms of the future

So, some topics of particular interest to me at the moment:

  • Enterprise-scale Service Architectures
  • Microservices & modularity
  • Containers
  • Big Data & NoSQL
  • Streaming data
  • Machine Learning
  • DevOps
  • Domain-specific languages
  • Functional languages of all forms
  • Mobile technologies
  • Crypto-currencies/ledgers
  • Technology governance

The challenge with technology today is that new technologies span many traditional organisational boundaries: in large enterprises, it is not obvious which part of the organisation should govern, drive or innovate in these topics. They are not traditional infrastructure, operating system, middleware or database domains. In fact, I intensely dislike the term ‘middleware’ – it pushes a whole bunch of innovative technology into a commodity space, whereas in fact it is the space that provides competitive advantage in a world where business ideas are a dime a dozen.

So I will be posting observations on all these topics. These posts will be the result of my own hands-on experience in these domains, or of working directly with folks who have direct experience applying these technologies to problem domains.

As developing code is not my day job, I cannot claim to be an expert on any of these topics. Rather, I hope to provide a perspective that may be interesting to folks for whom these topics may be of particular interest, whether from a hands-on perspective or from a CTO/management perspective.

And, crucially, as these topics overlap to a large extent, understanding how they may all come together (or not) to deliver truly innovative and game-changing technology solutions is a useful side benefit!


Enterprise OSGi in Action – a tutorial in enterprise OSGi

Having developed an active interest in Enterprise OSGi, I decided to leap in and work my way through the ‘Enterprise OSGi in Action’ book by Tim Ward and Holly Cummins, published by Manning Publications. (manning.com, btw, is a great resource for hands-on technical books delivered electronically – many of them will feature in this blog.)

More details about *why* I am interested in Enterprise OSGi are the subject of another blog post…for now, onto my progress so far.

I am already some weeks through this project, and have been using it to refresh my java development skills, which, I must admit, are somewhat rusty after some years in the technology project management space. This blog will touch upon some topics which may be obvious to every-day developers, but for folks who cut their teeth in C/C++ and Java in the 90’s and didn’t keep up programming into the 2000’s, some topics can be quite a revelation.

So, here’s what I’ve got so far:

  • I’ve completed up to Chapter 3 of the book (covering a simple web-based application with persistence)
  • I’ve created a repository on GitHub:
  • I’ve loaded the project into Eclipse for compilation error checking etc, but not for building
  • I’ve written some simple shell scripts to do a *very* crude compile/build/deploy process – this is what folks were doing prior to ‘make’, never mind Nexus/Maven/etc!

What to do next:

  • Properly understand what I did in the first 3 chapters..! 🙂
  • Find out why I keep getting Derby/Blueprint errors when my application starts up – this appears to be a timing thing, but seems very sensitive and not particularly robust, as I have had it working previously.
  • Improve the toolchain process – use Eclipse, bndtools, Jenkins and Maven/Nexus to automate the compile/build/deploy process.
  • Complete the next few chapters – delving deeper into the ‘enterprise’ aspects of OSGi
  • Build something ‘real’ with the technology.

Note that the ‘Enterprise OSGi in Action’ book is now a couple of years old; however, its concepts are still correct and apply to the latest OSGi specs. I believe that with the correct application of the bndtools toolset, building this example project would be a whole lot easier – hence a goal of this project is to do just that. Without bndtools, some deep technical understanding of what is going on is needed, which for many devs will be too much: they just want to get productive already…

If anybody else is exploring or experimenting with OSGi, feel free to comment. This is a learning exercise, so there are no stupid questions. (After having searched a few times in stackoverflow.com for problems which I attributed to my lack of recent hands-on experience, I’ve found that most of my stupid questions have already been asked by others and invariably answered.)

For those interested, I am doing most of my work on a MacBook Pro running Mac OS X Mavericks. I’ve installed Homebrew for some basic tools, but otherwise I’m developing in a standard Mac environment. 
