Friday, May 27, 2016

Microservices Granularity for the Internet of Things


In a 2014 blog posting, Microservices and the First Law of Distributed Objects, Martin Fowler discussed issues around creating networked services. One of his points is that networked services should generally be more coarse-grained than local (in-process) services, because distribution always has costs (bandwidth, performance), and these costs can easily become large with fine-grained remote calls.

But as Fowler points out, there are also good reasons (e.g. reducing complexity) to make a networked API as fine-grained as possible. So how granular/coarse should IoT microservices be? In his blog posting, Fowler suggested that granularity is an open question, and that experience with systems at different levels of microservice granularity would eventually provide insight.

I agree with Fowler's view that experience is necessary to decide on 'appropriate' granularity. I think this is particularly true for the Internet of Things, where multiple people and organizations are attempting to create consistent abstractions for the relatively limited input and output capabilities exposed by newly networked devices...aka 'things'.

But when actually defining remote services, often the first thing done is to bind the service to a particular transport+protocol+impl framework (e.g. https+json+jersey). Once bound to a transport, the service API may become very difficult to refactor and version. This is especially true once a service has been deployed, but frequently deployment is the only way to get enough real experience to know whether the service should be more (or less) granular!
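To make that coupling concrete, here is a minimal sketch (the TemperatureResource name is hypothetical) of a service defined directly against JAX-RS annotations, as used with Jersey. Note how HTTP paths, verbs, and media types become part of the API itself, so changing granularity later means changing every annotation and every deployed client:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// A service bound to a specific transport at definition time: the HTTP
// resource layout is baked into the Java API via annotations.
@Path("/temperature")
public interface TemperatureResource {

    @GET
    @Path("/{sensorId}")
    @Produces("application/json")
    double getCurrentTemperature(@PathParam("sensorId") String sensorId);
}
```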

One way to provide flexibility...and allow future change to a service...is to remain as transport-independent as possible. As described by this article, new standards such as OSGi Remote Services/RSA and ECF's modular implementation make it possible to design and refactor services independent of the transport. Such independence will make it easier to update the granularity of a microservice when necessary.
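By contrast, with OSGi Remote Services the same service can be defined as a plain Java interface and exported by setting standard service properties. The sketch below is illustrative: TemperatureService and its stub implementation are hypothetical, and the "ecf.generic.server" config id is just one example of a distribution provider choice.

```java
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// A plain Java service interface: no transport, protocol, or framework types
// leak into the API, so granularity can be refactored without touching wire code.
interface TemperatureService {
    double getCurrentTemperature(String sensorId);
}

public class TemperatureActivator implements BundleActivator {

    private ServiceRegistration<TemperatureService> registration;

    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        // Standard OSGi Remote Services property: export this interface remotely.
        props.put("service.exported.interfaces", "*");
        // The distribution provider (and therefore the transport) is chosen by
        // configuration, not by the API. "ecf.generic.server" is used here for
        // illustration; swapping transports means changing this value only.
        props.put("service.exported.configs", "ecf.generic.server");
        registration = context.registerService(TemperatureService.class,
                new TemperatureService() {
                    public double getCurrentTemperature(String sensorId) {
                        return 21.5; // stub reading for the sketch
                    }
                }, props);
    }

    public void stop(BundleContext context) {
        registration.unregister();
    }
}
```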

Monday, May 16, 2016

Network Dynamics and Microservices

One of the most challenging aspects of building networked applications is dealing with network dynamics. Networks and endpoints go down and sometimes come back up, which means that consumers of networked services have to respond as these changes occur.

This will be even more true for the Internet of Things (IoT), where a wide variety of devices and networks will be involved in supporting any given microservice. Through no fault of design or programming, IoT services and the applications that depend upon them will simply be less reliable.

How should microservice consumers respond to failure? That's a good question, as the answer clearly depends upon application-level needs and requirements.

For example, once loaded, an HTML web page does not need to know about or respond to the failure of the web server, or to the dropping (or changing, due to mobility) of the network connection between the browser and the web server. If the user clicks on a link to load another page, that load will simply fail; for browsing web pages, though, that's a completely acceptable strategy for handling network failure.

On the other hand, consider an IoT application that collects a real-time data stream from a sensor device. In such a case it might make more sense to have a strategy for responding to network and/or device failure, such as switching to a backup sensor, or notifying a user or admin that the data stream is temporarily unavailable. The larger point is that consumers of a microservice will differ in their requirements for responding to network failures.
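As one illustration of such a strategy, here is a minimal failover sketch using OSGi Declarative Services; the SensorStream interface and all names are hypothetical:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

// Hypothetical sensor stream service, assumed for illustration.
interface SensorStream {
    double read();
}

// Consumer that tracks all discovered SensorStream instances and fails over
// to the next one when the currently used stream goes away.
@Component
public class SensorStreamConsumer {

    private final List<SensorStream> streams = new CopyOnWriteArrayList<>();

    // Dynamic, multiple-cardinality reference: the component is notified as
    // sensor services appear and disappear on the network.
    @Reference(cardinality = ReferenceCardinality.MULTIPLE,
               policy = ReferencePolicy.DYNAMIC)
    void addStream(SensorStream stream) {
        streams.add(stream);
    }

    void removeStream(SensorStream stream) {
        // Application-level failure strategy: drop the failed stream; if no
        // backup remains, report that data is temporarily unavailable.
        streams.remove(stream);
        if (streams.isEmpty()) {
            System.out.println("Sensor data temporarily unavailable");
        }
    }

    double latestReading() {
        if (streams.isEmpty()) {
            throw new IllegalStateException("no sensor stream available");
        }
        return streams.get(0).read(); // first available stream acts as primary
    }
}
```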

What does any of this have to do with microservices? Frequently it falls to the application not only to define a strategy for application-level failure handling, but also to implement the networking code that detects failures and surfaces them so that the strategy can be applied. This networking code can be very difficult to create, especially if it has to meet multiple service- and application-level requirements.

The OSGi Service Registry was designed to support dynamic within-process services, allowing applications to respond to services that come and go without having to create all of the software infrastructure to do so reliably. There are now OSGi specifications for Remote Services, and these allow the same dynamics support to be used to respond to network dynamics. Since the OSGi service registry is standardized, applications can also use (rather than build) convenient frameworks like Declarative Services/SCR or Spring/Blueprint to respond to network-induced service changes.
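For example, here is a minimal Declarative Services sketch (the ITimeService interface is hypothetical) of a component reacting as a possibly-remote service comes and goes:

```java
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

// Hypothetical remote service interface, assumed for illustration.
interface ITimeService {
    long getCurrentTime();
}

// With Remote Services, a discovered remote endpoint appears in the local
// OSGi service registry as a proxy, so this component reacts to network
// dynamics with exactly the same callbacks it would use for local services.
@Component
public class TimeServiceWatcher {

    @Reference(cardinality = ReferenceCardinality.OPTIONAL,
               policy = ReferencePolicy.DYNAMIC)
    void bindTimeService(ITimeService service) {
        // Called when the (possibly remote) service is discovered/imported.
        System.out.println("time service available: " + service.getCurrentTime());
    }

    void unbindTimeService(ITimeService service) {
        // Called when the endpoint goes away (e.g. network or device failure).
        System.out.println("time service unavailable");
    }
}
```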

In short, the OSGi service registry and Remote Services provide standardized support for microservice dynamics without binding the implementation to a specific protocol/transport, or even language.

Wednesday, May 11, 2016

ECF Remote Services using Google RPC and Protocol Buffers

ECF's implementation of OSGi Remote Services/Remote Service Admin (RS/RSA) has a modular architecture, allowing the easy creation and use of new distribution providers. Having multiple distribution providers enables transport-independent remote services.

A new ECF distribution provider is now available, based upon Google RPC (gRPC) and Protocol Buffers 3. Protocol Buffers is a popular serialization approach for remote services because it is high-performance, open, lightweight, and usable across multiple languages.

This new tutorial shows how gRPC/Protocol Buffers and OSGi Remote Services can be used together to define, implement, discover, and consume dynamic, transport-independent remote services.
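As a rough sketch of how the pieces fit together (all names below are hypothetical, and the "ecf.grpc.server" config id is an assumption; consult the tutorial for the actual values), a protobuf-defined service is implemented in Java and then exported with the standard OSGi Remote Services properties:

```java
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Illustrative proto3 definition (shown as a comment; names are hypothetical):
//
//   syntax = "proto3";
//   message TempRequest { string sensor_id = 1; }
//   message TempReply   { double celsius = 1; }
//   service TempService { rpc GetTemp (TempRequest) returns (TempReply); }
//
// protoc generates the message classes; the Java service interface and its
// implementation are then exported like any other OSGi Remote Service.

interface TempService {
    double getTemp(String sensorId);
}

public class TempServiceExporter {

    public ServiceRegistration<TempService> export(BundleContext context,
            TempService impl) {
        Hashtable<String, Object> props = new Hashtable<>();
        // Standard OSGi RSA property: export this service's interfaces remotely.
        props.put("service.exported.interfaces", "*");
        // Select the gRPC distribution provider by config id. "ecf.grpc.server"
        // is assumed here for illustration only; see the tutorial for the id
        // actually used by the new provider.
        props.put("service.exported.configs", "ecf.grpc.server");
        return context.registerService(TempService.class, impl, props);
    }
}
```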