This week, I thought we would discuss Microservices. Microservices are an architectural style that structures an application as a collection of loosely coupled services, each implementing a business capability. The microservice architecture enables the continuous delivery/deployment of large, complex applications, and it lets an organization evolve its technology stack over time. Each service is generally contained within its own complete, self-sustaining ecosystem, like the cool EcoSphere I have on my home office desk. The EcoSphere is a completely sealed world that gets everything it needs from a single energy source: sunlight. A microservice just needs compute power to live.

To understand the shift to microservices, let’s first talk about what would be considered a “monolithic” service architecture pattern. Web services have been around for quite some time, as have the principles of a Service Oriented Architecture (SOA), where services are developed as access points to consume persistent or cached data across the application. This method extracted the data access layer from within the application into an exposed layer that could be called by multiple, self-defined external sources. These services required an underlying architecture to run: compute power and an application platform. If compute power and application platform sound unfamiliar, I recommend checking out the Building Systems with LEGOs session as a refresher. All services were hosted on a single (or load-balanced) compute and application processing platform. This was a significant step toward decoupled applications, because applications that needed to integrate no longer needed a common data persistence layer to share; they could just talk to each other through the services. All was good in the forest until platform upgrades and maintenance were required. Now the maintenance affected not only the web service platform but also every connecting platform.
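To make that concrete, here is a minimal sketch of what a consuming application looks like once the data access layer has been exposed as a service. The service host, the /customers/42 endpoint, and the JSON response are all hypothetical; the point is simply that the consumer calls an HTTP endpoint rather than reaching into a shared database (Java 11+ and its built-in HttpClient assumed).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomerServiceClient {
    public static void main(String[] args) throws Exception {
        // Instead of querying a shared database directly, the consuming
        // application calls an exposed service endpoint over HTTP.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://services.example.com/customers/42")) // hypothetical endpoint
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        System.out.println("Body:   " + response.body());
    }
}
```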

Enter the microservice approach. If you could create a separate ecosystem for each of your services that is self-sustaining and lives off nothing but compute power, then you could tinker with each individual service without risking the entire platform. It’s like Thanksgiving dinner: the turkey is on a serving plate, the green bean casserole is in a dish, the mashed potatoes are in another. You take a little of each, throw them on your plate, and you have a dinner solution that you stuff your face with before getting really sleepy and passing out watching football. Microservices are generally enabled through the principles of “containerization,” using tools like Docker and orchestration platforms like Kubernetes or Mesos, which we will discuss in detail in a future blog. For now, think of containerization as the EcoSphere. The container holds everything it needs to operate: an operating system, an application platform, logging, monitoring, and so on, all within the sphere. It can be passed around compute platforms; it can be taken out of inventory, worked on, and redeployed without impacting any of its friends. That becomes very important in the world of rapid development and continuous integration/continuous deployment. To really dig in, Martin Fowler (as usual) has a lot to say about them…
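As a rough illustration of the “everything it needs lives inside the sphere” idea, here is a minimal, self-contained service sketch that uses only the JDK’s built-in com.sun.net.httpserver package. The InventoryService name, port, and /health endpoint are made up for the example; the takeaway is that the whole service, listener and all, runs as one process that only needs a JVM (compute power) underneath it, and that process is what you would package into a container image.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class InventoryService {
    public static void main(String[] args) throws Exception {
        // Everything the service needs -- the HTTP listener, the handler,
        // the "business capability" -- lives inside this single process.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Hypothetical health-check endpoint so the platform can see the sphere is alive.
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Inventory service listening on port 8080");
    }
}
```

Because the service carries its own listener and dependencies, it can be redeployed, upgraded, or taken out of inventory for maintenance without touching any of its friends.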

That said, the microservice architecture is not a silver bullet. It has several drawbacks, and when you use this architecture there are numerous issues that you must address. Microservices can increase memory consumption on the compute platform: the microservice architecture replaces N monolithic application instances with N×M service instances. If each service runs in its own JVM (or equivalent), which is usually necessary to isolate the instances, then there is the overhead of M times as many JVM runtimes. In production, there is also the operational complexity of deploying and managing a system composed of many different service types. These are just a couple of examples, but there can be many more.
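To put some back-of-the-envelope numbers on that memory point, here is a tiny sketch with purely hypothetical figures (real footprints vary wildly with the runtime, heap settings, and the services themselves):

```java
public class MemoryFootprintSketch {
    public static void main(String[] args) {
        // Hypothetical numbers purely for illustration.
        int monolithInstances = 4;      // N
        int servicesPerInstance = 8;    // M
        int monolithJvmMb = 512;        // one big JVM per monolith instance
        int microserviceJvmMb = 256;    // one smaller JVM per service instance

        int monolithTotal = monolithInstances * monolithJvmMb;
        int microserviceTotal = monolithInstances * servicesPerInstance * microserviceJvmMb;

        System.out.println("Monolith:      " + monolithTotal + " MB across "
                + monolithInstances + " JVMs");
        System.out.println("Microservices: " + microserviceTotal + " MB across "
                + (monolithInstances * servicesPerInstance) + " JVMs");
    }
}
```

Even though each service JVM is smaller, the total footprint grows because you are paying the runtime overhead N×M times instead of N times.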