Architecting for speed: How agile innovators accelerate growth through microservices

Strategy & Architecture


Cutthroat competition requires enterprises to transform how they build and deliver software

In a world where software has become the key differentiator, enterprises are forced to transform the way they build, ship and run software in order to stay in the game. Competitive pressure requires applications to be rapidly and continuously upgraded with nonstop availability, and companies that lack the capacity to experiment, innovate and get new features out quickly will be at a disadvantage.

This is driving many IT organisations to adopt the software design model known as microservices architecture, which is quickly gaining traction as a new way to think about structuring applications and is changing the fundamentals of enterprise application management.

“Many enterprises still sit on large, monolithic applications with huge unmanageable codebases that have become increasingly difficult to maintain.”

Managing complex enterprise systems is a challenge, and many enterprises still sit on large, monolithic applications with huge, unmanageable codebases that have become increasingly difficult to maintain. Adopting a microservices architecture, in which applications are instead built as suites of services, enables them not only to become more agile but also to cut costs and increase stability.

The problem with traditional software architecture and monolithic applications

Enterprise applications have traditionally been built as monoliths with all functionality in a single process. These monoliths cause increasing frustration, especially as more applications are deployed to the cloud. Monolithic applications are fine for small-scale teams and projects but become problematic at larger scale, when many teams are involved, because they don’t allow organisations to improve functionality rapidly enough.

  • As developers write code to add new features, the codebase grows, and over time it becomes hard to know where a change should be made because the codebase has become so large. Code related to similar functions begins to proliferate, complicating bug fixes and new implementations. Continuous updates require an exponentially growing amount of coordination.
  • New developers must often spend months learning the codebase before they can begin to work, because any change requires an understanding of how it will affect the entire codebase, and a broken update can cause the entire application to crash. Even seasoned teams worry about making changes or adding code that might unexpectedly disrupt operation—which is why even simple changes can be debated endlessly.
  • Change cycles are tied together and a change made to only a small part of the application requires the entire monolith to be rebuilt and deployed. Updates therefore become more painful and less frequent, contributing further to the fragility of the application. This is why many enterprise IT organisations are spending weeks carefully coding and testing before any updates can be rolled out to end users.

When things go wrong (which they often do), the blame game starts, and the business starts to search for outsourcing alternatives because they’re losing trust in IT’s ability.

The rise of microservices

Microservices has emerged as a solution to many of the problems associated with large, monolithic applications. It’s the first architectural style of the DevOps era and embraces the engineering practices of Continuous Delivery. It’s also an example of evolutionary architecture, which supports incremental change as a first principle.

“Microservices has emerged as a solution to many of the problems associated with large, monolithic applications.”

Successfully implemented, microservices can deliver substantial increases in the speed and agility with which companies build and deploy software. The cost of delivering an application is typically much lower and systems become more resilient. Development time can go from months to weeks. Businesses such as Dropbox, GE and Goldman Sachs have seen development lead times cut by 75% after transitioning to microservices.

Microservices as an evolutionary architecture

Conventional wisdom asserts that architectural elements are difficult to change. However, the world of software isn’t static but evolves continuously, which is why a fixed view of architecture will be short-lived. Architecture is abstract until operationalised, and it’s hard to determine the long-term viability of any architecture until it has been implemented and upgraded.

Evolutionary architecture designs for incremental change as a first principle. Change is difficult to anticipate and has traditionally been expensive to accommodate later on, but when evolutionary change is built into the architecture, change becomes easier and less costly—allowing changes to development practices, release practices and general agility.

A key benefit of evolutionary architectures is the ability to experiment. Several versions of a service can exist within the ecosystem, which allows for experimentation and gradual replacement of existing functionality. This means organisations can spend less time speculating about story backlogs and instead engage in hypothesis driven development.

What are microservices?

The microservices architectural style is not an agreed architecture with a defined set of frameworks. In short, it can be described as a conceptual approach to developing applications as suites of small, loosely coupled and composable services working together, where each microservice is a complete, autonomous process that can be programmed in any language and managed separately.

Each microservice is designed to perform a single function representing a small business capability and talk to other microservices via language-agnostic APIs. The services are independently deployable and scalable. In a conventional architecture, a single database supports one monolithic application. In microservices architecture, each service has its own database. Together, these services and their databases deliver the user experience.
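As a concrete illustration of these ideas, here is a minimal sketch of a single-function service with its own private data store, exposed over a language-agnostic JSON/HTTP API. It uses only the Python standard library; the "pricing" capability, SKUs and URL layout are hypothetical, not part of the original text.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "pricing" microservice: one business capability,
# its own private data store, exposed via a JSON-over-HTTP API
# that any language can consume.
PRICES = {"sku-1": 9.99, "sku-2": 24.50}  # this service's own "database"

class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /prices/sku-1 -> {"sku": "sku-1", "price": 9.99}
        sku = self.path.rstrip("/").split("/")[-1]
        if sku in PRICES:
            body = json.dumps({"sku": sku, "price": PRICES[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet for this sketch

def start_pricing_service(port=0):
    """Start the service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PricingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_port holds the bound port
```

Because the contract is plain HTTP and JSON, a consumer written in Java, Go or any other language could call this service the same way a Python client would.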

Let’s look at some of the characteristics of microservices in more detail:

Microservices are autonomous

The core idea of the microservice architecture is that any one service is autonomous and can be changed and deployed independently, without needing to change any other part of the system. Therefore, microservices must have minimal dependency on one another (loose coupling). To prevent the applications that consume their data from binding too tightly to them, and to make it easier to roll out future changes, services should expose only necessary information. A loosely coupled service knows as little as needed about the services with which it collaborates. Integration styles that tightly bind one service to another, so that changes inside the service force a change to its consumers (the “calling” programs), should therefore be avoided. The number of different types of calls from one service to another should also be limited.
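One way to "expose only necessary information" is to project internal records onto a small published contract before sending them to consumers. A minimal sketch, with hypothetical field names:

```python
# Hypothetical internal record vs. the minimal representation the service
# publishes: consumers bind only to the public fields, so internal columns
# can change without breaking them.
INTERNAL_CUSTOMER = {
    "id": 42,
    "name": "Ada",
    "password_hash": "not-exposed",  # internal detail, never published
    "billing_db_shard": 7,           # internal detail, never published
}

PUBLIC_FIELDS = ("id", "name")

def to_public(record):
    """Project an internal record onto the published contract."""
    return {field: record[field] for field in PUBLIC_FIELDS}
```

Adding or renaming internal columns now requires no change to any consumer, since only the fields listed in `PUBLIC_FIELDS` ever cross the service boundary.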

Microservices communicate via language-agnostic APIs

Microservices communicate with each other through language and platform-agnostic application programming interfaces (APIs), which means that they can be developed in any programming language that is best suited for the task at hand.

Microservices perform a single business function

A key characteristic of microservices is that they focus on doing one thing well and have clearly defined interfaces (bounded context). Keeping the service focused on an explicit boundary, namely a specific product feature, makes it clear where the code for a given piece of functionality lives, while preventing the service from growing too large. Every component of a microservice should be focused on the same task. This makes it easier to modify the behaviour of a part of the system, since it reduces the number of places where code needs to be updated to a single service, and the change can be released promptly. When a behaviour requires changes in several places, many services must be released at once to deliver that change, which takes more time and involves more risk.

Microservices are small

In terms of code, microservices are usually around 100 lines but can be up to 1,000. However, rather than counting lines of code when identifying the scope of microservices, it’s better to look at the functionality the service needs to provide and how it relates to other services. Microservices should be “minimal but complete”. A general rule is that a microservice should be small enough to be managed by a small team, and its conceptual model should be able to “fit in your head”. If the codebase feels too big, breaking it down makes sense. A microservice should be the responsibility of exactly one team, but one team can be responsible for several services.

Microservices are managed by decentralised, cross-functional product teams

The ideal organisation for microservices has small, empowered teams where each team is responsible for a business function composed of a number of microservices which can be independently deployed. Teams handle all aspects of software development for their microservices, from conception through deployment, and therefore feature all roles necessary to deliver these services, such as product owners, architects, developers, quality engineers and operational engineers. This organisation design is consistent with Conway’s law, which says that the interface structure of a software system will reflect the social structure of the organisation that produced it.

Monolithic vs microservices architecture

The building blocks of microservices

The idea of microservices isn’t new. Google, Amazon and Facebook have been using microservices for more than ten years. Each time a search is performed on Google, about 70 microservices are called before results are returned. Enterprises tried to achieve the benefits of microservices through an approach called service-oriented architecture (SOA), which largely failed due to poor governance and design and the lack of critical building blocks required for large-scale adoption. However, with the rise of DevOps and the development of the three key building blocks required, the benefits of microservices are now accessible to everyone. These building blocks are:

  • Containers
  • Scalable cloud infrastructure
  • APIs

The building blocks of microservices

Containers: the launchpad for microservices

Just like containers revolutionised the shipping business, software containers have formed a standardised frame for all services, transforming how developers build and deploy applications. Containers reduce complexity and enable developers to scale applications both faster and cheaper. Docker has today become synonymous with the container technology. In the future, every large application will be a distributed system comprising dozens of microservices running in their own containers.

Containers package a piece of software in a complete filesystem comprising everything it requires to run, which ensures it will run the same every time, independent of the environment it is running in. By containerising the application platform and its dependencies, variations in OS distributions and underlying infrastructure are abstracted away. This makes it easier to move applications from the workstation where they are coded to the clusters of computers that serve those applications to users. Compared with virtual machines, containers have a very small footprint, which means that a single server can host far more containers than VMs. They can also be launched in mere seconds, compared to several minutes for virtual machines.
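To make the packaging idea concrete, here is a minimal, illustrative Dockerfile for a containerised Python service. The base image, file names and port are hypothetical, a sketch rather than a recommended production setup:

```dockerfile
# Minimal illustrative Dockerfile (assumes a Python service whose entry
# point is app.py and whose dependencies are listed in requirements.txt;
# all names here are hypothetical).
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the service code into the image.
COPY . .

# The port the service listens on inside the container.
EXPOSE 8000

CMD ["python", "app.py"]
```

The resulting image bundles the runtime, dependencies and code into one artefact, so the service runs identically on a developer workstation and in the production cluster.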

Scalable cloud infrastructure

Virtualisation as we have known it can no longer keep pace with the demands of microservices and next-generation applications, which need resources delivered on demand to scale and operate services effectively. With DevOps emerging as a new way of managing development and operations, virtualisation is also losing one of its principal benefits: running different guest operating systems on the same physical server.

The fundamental principle of virtualisation, dividing a large and costly server into numerous virtual machines, is making way for new cloud architectures that are reimagining how data centres work. Rather than splitting the resources of individual servers, a software layer aggregates all the servers in a data centre into one massively scaled virtual computer to run highly distributed applications.

These emerging data centres are made up of inexpensive commodity parts with small, inexpensive servers. Individually these servers lack the computing power of traditional state-of-the-art data centre hardware, but their sheer number makes the aggregate far more powerful. The new level of abstraction makes an entire data centre seem like one huge supercomputer, but the system really consists of millions of microservices inside their own Linux-based containers—bringing the advantages of multitenancy, isolation and resource control to every container.

As scaling requirements grow and applications increase in complexity each day, every developer and IT organisation will soon be affected by these changes. Every data centre—public, private or hybrid—will soon be adopting hyperscale cloud architectures. This is bringing new cloud economics and cloud scale to enterprise computing, and enabling new kinds of businesses previously not possible. The data centre operating system allows developers to build distributed applications more easily and safely, without being constrained by the plumbing or limitations of individual machines, and without being obliged to desert their preferred tools.

This new intelligent software layer will relieve IT organisations, often seen as innovation bottlenecks, from manually configuring and managing individual applications and machines, and instead enable them to spend more of their effort on agility and efficiency—becoming less maintainers and operators and more strategic users.

Traditional server virtualisation vs hyperscale cloud infrastructure

APIs: from technical need to business model driver

An application programming interface (API) is a toolset of protocols and routines exposing the functionality of a service or application to others, allowing them to communicate. APIs make it much simpler for developers to combine data from different sources to build applications.

APIs are far from new, but they have in recent years grown significantly in both number and sophistication, as more companies see how APIs can help them innovate faster and lead to new products and new customers. This fast adoption of APIs has established a standardised format for communication, which has enabled microservices.

APIs started as an enabler for things companies wanted to do, but have today expanded from a technical need to become a business priority. They are now a critical ingredient of running a digital business and a key driver of business strategies, enabling businesses to accelerate new service development and offerings. The number of public APIs is exploding as businesses across industries embrace APIs and open them up to outside developers.

APIs can expand the reach of a company’s key assets, letting them be shared, reused or resold as a new revenue stream. From a business perspective, few software companies can today grow without publishing an API of their service. This means that developers all over the world can build new, interconnected technologies and services that augment the software and take it in novel directions.

The benefits of microservices

As organisations scale both technology and staff it becomes much more difficult to manage monolithic code bases. By decomposing applications into microservices, organisations will see benefits such as faster development cycles, improved productivity, superior scalability and supreme system availability.

“By decomposing applications into microservices, organisations will see benefits such as faster development cycles, improved productivity, superior scalability and supreme system availability.”

Ease of deployment

The traditional software development approach of releasing an application every couple of months is not fast enough for digital businesses that need to deploy new releases many times per week or even per day. Due to their high change costs, monolithic applications are not well suited to rapid release cycles. Releasing even a small change to a million-line monolithic application requires the whole application to be deployed, which means releases happen infrequently. As a result, changes usually build up between releases, leading to more comprehensive releases. And the longer the time between releases, the higher the risk that something goes wrong.

With microservices, developers can change the code in one service and deploy it separately from the other parts of the system. This allows them to deploy code faster and get new functionality out to customers sooner. If a problem arises, it can swiftly be isolated to a single service so a fast rollback can easily be made. This speed of deployment with reduced risk is a main reason why organisations like Amazon and Netflix use microservices architectures, ensuring they remove as many barriers as possible to getting software out the door.

Superior scalability

Enterprises that want to move fast and stay innovative in hypercompetitive markets require an architecture that can scale. The scaling demands for various components within an application normally differ. Scaling monolithic applications is highly inefficient as the only way of doing this is often to clone the entire application, even though the various parts are not equally load intensive. As this doesn’t represent the actual demands on the system, it leads to wasted computing power and resources.

Microservices, on the other hand, allow efficient selective scaling of application components, which can scale up or down independently without affecting the rest of the system. Rather than scaling up with bigger machines or more copies of the entire application, it’s possible to scale out with duplicate copies of the heaviest-used parts. Consequently, a microservices architecture means more efficient use of code and underlying infrastructure, resulting in cost savings by reducing the amount of infrastructure required to run a given application.

Scaling monolithic applications and microservices (Adapted from image by Martin Fowler)

Technology Diversity

Trying out new technologies always involves some risk. With a monolithic application, if developers wish to test a new programming language, database or framework, every change will affect a significant part of the entire system. Microservices allow organisations to take advantage of new technology more quickly as it’s possible to embrace different technologies; mixing multiple languages, development frameworks and data-storage technologies (polyglot).

As the system consists of multiple collaborating services that communicate across language-agnostic protocols, an application can consist of microservices running on different platforms (Java, Ruby, Node, Go etc.), taking advantage of the specific strengths of each. This means organisations can use different technologies inside each service and pick the right tool for each job, rather than having to select a standardised one-size-fits-all approach. That said, some organisations still choose to place constraints on language choices. Netflix and Twitter, for example, mostly use the Java Virtual Machine as a platform because they know the reliability and performance of that system.

Moreover, developers have many places where they can try out a new piece of technology. They can pick a service that is lower risk and use the technology there, to limit potential negative impact.

Resilience

When a monolithic application fails, everything stops working. But since microservices are autonomous, there is much less risk that they cause a system failure if they break down. Loose coupling with other services, together with a bounded context, limits a service’s failure domain, making the system more resilient to disruptions. The failure of a single instance of a microservice has minimal impact on the application. And if all instances of a microservice should go down, it won’t cause the entire application to crash but will only impact the function of that particular microservice, so users can continue to use the rest of the application. Apart from a more stable system, increased resilience also facilitates experimentation and innovation as it makes failing less risky.
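A common pattern for containing a failing dependency is the circuit breaker: after repeated failures, callers stop waiting on the broken service and fall back to degraded behaviour instead. The sketch below is a deliberately minimal illustration of the idea, not a production implementation (libraries such as Netflix's Hystrix on the JVM offer battle-tested versions):

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after `max_failures` consecutive
    failures the circuit opens, and calls fail fast with a fallback
    instead of letting one broken service drag down its callers."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()      # circuit open: fail fast
            self.opened_at = None      # timeout elapsed: try again
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()
        self.failures = 0              # success resets the counter
        return result
```

With a breaker in front of each remote dependency, a crashed service degrades one feature rather than cascading through the whole application.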

“When a monolithic application fails, everything stops working. But since microservices are autonomous, there is much less risk that they cause a system failure if they break down.”

Organisational agility

Almost every IT organisation with legacy systems is experiencing the problems associated with large teams and codebases. The problems are often exacerbated when the team is distributed. With conventional application architecture and monolithic code, one large team of engineers works on one big piece of code, grouped according to technical expertise. They often step on each other’s toes, and the speed of development slows exponentially with the growth of the code monolith.

With microservices architecture, applications are built by small, decentralised and autonomous development teams. Every team is trusted to make their own decisions about the software they produce and is dedicated to a single service that can be updated independently. This encourages ownership of particular functions and makes changes both easier and faster as there are fewer bottlenecks and less resistance to change so decisions can be made quicker.

Moreover, small teams working on small codebases have a better overview of the code, which makes it easier for new developers to understand the functionality of a service. And because each piece of the application is independent of the rest of the stack, programmers don’t have to understand how everything else works so there is no need for lengthy integration work across development teams.

Monolith vs Microservice development lifecycle

Replaceability

The barriers to replacing a microservice or removing it are very low as the codebase seldom is longer than a few hundred lines and the team managing the service has a good understanding of the code.

Reusability

As businesses shift from thinking in terms of narrow channels to holistic concepts of customer engagement, they have to consider the multitude of ways they want to integrate capabilities for the web and native applications on desktop, mobile and wearables—and they need the right architectures for this. Microservices open opportunities for functionality to be reused and consumed in different ways for different purposes, which is important with regard to how consumers use the software. Each microservice acts like a Lego block that can be plugged into an application stack so organisations can assemble them to build applications catering to a variety of use cases. And as circumstances change, it’s possible to rebuild things in new ways.

Challenges with microservices

While microservices provide many benefits, they also come with costs. Microservices are no silver bullet and carry all the associated complexities of distributed systems. While the services themselves are simple, there is a great deal of complexity at a higher level with regard to managing these services and orchestrating business processes. Every organisation and system is different, and a number of factors will determine whether or not microservices are right for yours. However, with the right automation and tools, most drawbacks can be addressed.

Distribution

A microservices architecture necessitates a distributed system, which can be complex and hard to manage. Services interact using an inter-process communication mechanism. Remote calls are slow and can fail at any moment, which is why developers should design for failure. To avoid losing performance, most microservice systems use asynchronous communication, which is difficult to get right and harder to debug. And with more components exchanging information, network communication increases, raising the risk of network congestion and reinforcing the need for reliable, fast network connections.
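"Designing for failure" often starts with treating remote-call failure as the normal case. A minimal sketch of one common tactic, retrying a transient failure with exponential backoff before giving up (the helper name and parameters are illustrative):

```python
import time

def call_with_retries(remote_call, attempts=3, base_delay=0.1):
    """Design-for-failure sketch: retry a flaky remote call with
    exponential backoff, re-raising only after the final attempt."""
    for attempt in range(attempts):
        try:
            return remote_call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off: 0.1s, 0.2s, ...
```

Retries only help with transient faults; for a dependency that is down for longer, they should be combined with timeouts and fail-fast mechanisms so callers do not pile up waiting.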

Operational complexity

Microservices introduce additional operational complexity as there are many more moving parts that have to be managed and monitored. You have a multitude of individual services to build, test, deploy and run, possibly in polyglot languages and environments. This overhead will increase exponentially as you add more microservices to your application.

While monitoring each individual service instance is easier (because they are doing less), the cost to monitor the entire application is higher and it’s difficult to debug behaviour that spans many services. Well defined service boundaries will improve this issue, while ill-chosen boundaries will make it worse.

Testing within each service is straightforward as the test surface is smaller and developers can test services locally, but testing the entire application as a whole and how services function with one another can be complicated.

Substantial DevOps Skills Required

Handling the complexity involved in maintaining a microservices-based application with lots of services that are continuously being redeployed requires many new skills and tools. It reinforces the necessity of using Continuous Delivery and DevOps. Without the automation and collaboration that Continuous Delivery promotes, controlling a swarm of services is simply not possible. You’ll also need a DevOps culture with deep collaboration between developers, operations and all others involved in software delivery. Experienced DevOps teams with a high level of expertise that can keep the microservices up and available are hard to find.

“Without the automation and collaboration that Continuous Delivery promotes, controlling a swarm of services is simply not possible.”

Eventual Consistency

Microservices introduce eventual-consistency issues due to decentralised data management. As updates in a microservices system span multiple resources, developers need to be aware of this and work out how to identify when things become out of sync. Business logic risks acting on inconsistent information, and when this occurs it can be difficult to identify where the problem is, as the inconsistency window will have closed by the time you try to diagnose the issue.

API mismatch

Services rely on the interfaces between them to communicate, and for this to work flawlessly, messages must have a format and semantics that every interface can read. As all services are interrelated, modifying one service’s interface can introduce errors such as message-format differences between API versions. To avoid this, you need to adjust the consuming services as well so they can understand the change. This requires many changes that need to be released in a coordinated fashion, although some of them can be avoided with backwards-compatibility approaches.
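One widely used backwards-compatibility approach is the "tolerant reader": a consumer reads only the fields it needs, supplies defaults for fields that older producers omit, and ignores fields it does not recognise. A minimal sketch with hypothetical field names:

```python
def read_order_event(message):
    """Tolerant-reader sketch: consume only the fields this service needs,
    default the ones an older producer may omit, and silently ignore any
    unknown fields a newer producer may add.
    (All field names here are hypothetical.)"""
    return {
        "order_id": message["order_id"],             # required in every version
        "currency": message.get("currency", "EUR"),  # added in a later version
    }
```

With this style, a producer can add new fields or roll out a new API version without forcing every consumer to be redeployed in lockstep.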

Security

Security has been named one of the biggest barriers to container adoption and cloud-hosted microservices present new security challenges that must be addressed to avoid vulnerabilities, including:

  • More attack surfaces: With more moving parts, there are more potential vulnerabilities to exploit for attackers.
  • Less internal consistency: Microservices allow developers to shift programming languages and frameworks. However, any time something changes, new vulnerabilities may appear.
  • Existing tools don’t address securing microservices: Most existing security tools were created before microservices made their way to the enterprise. New alternatives are emerging, but off-the-shelf security tools are still limited, which means companies have to be particularly careful when securing their microservices-based applications.
  • New trust relationships: With containerised infrastructure, you can easily download and deploy container images from public repositories at no cost. However, this means you’re incorporating third-party software into your stack and you can never guarantee the security of code that you’re not in control of.

These challenges mean architects have to rethink how to secure applications. But with the proper strategies, security risks can be mitigated:

  • Secure the internal environment: Make sure your hosting environment is as secure as it can be. For a Docker cloud environment, this implies limiting access to the cloud host and configuring Docker to prevent public internet connections except where needed.
  • Use security scanners: Tools such as Docker Security Scanning help find and fix security vulnerabilities inside containers.
  • Use access control: Apply access-control limitations at multiple levels of the software stack to mitigate security risks.
  • Ensure communication: Make sure everyone involved in building and deploying software doesn’t operate in silos but communicates continuously, so they’re aware of the potential security implications of changes.

As more and more enterprises start using microservices, security will eventually get easier to manage. Microservices can actually also improve security through isolation between application components—mitigating the risk that a breach in one component enables attackers to compromise the whole stack. They also provide resilience against DDoS attacks as containers allow superior flexibility and scalability.

Should your business use microservices?

While many teams have found microservices to be a superior approach, others have found them to be a burden that reduce productivity. Like any architectural style, microservices have both advantages and downsides. To make a correct decision and determine if microservices is right for them, companies have to understand the trade-offs and apply them to their specific context. Not every application is complicated enough to warrant being broken into microservices and in many use cases the complexity of microservices may hamper the team’s productivity.

“Not every application is complicated enough to warrant being broken into microservices.”

The microservices premium is the cost organisations pay in reduced productivity to learn and manage microservices, and this cost will only be justified for complex applications. As a software system increases in complexity, the premium is outweighed by the fact that microservices alleviate the productivity loss caused by mounting complexity.

There comes a point, when an application becomes complex enough or the number of engineers working on it grows past 50-75, at which the benefits of a microservices architecture begin to take off. Adopters like GE, Goldman Sachs and Airbnb say it has been well worth the tradeoff and that the benefits far outweigh the costs. As more organisations adopt the microservices model, improved tools will also emerge to manage the increasing complexity.

The microservices premium (Image adapted from martinfowler.com)

Adopting microservices

It can be dispiriting for DevOps advocates who observe how companies like Spotify and SoundCloud have successfully adopted microservices and then try to implement them in their own enterprise setting. If you’re not a startup or a born-digital company, you won’t be designing a microservice-based system from scratch and likely have a legacy of culture, organisation, processes, tools, services and architecture standing in the way. Applications are likely too tightly coupled to the rest of the architecture to be independently developed, deployed and qualified.

Embracing microservices requires significant planning, discipline and coordination with new toolsets and a change in team dynamics that will affect all aspects of the organisation. To execute successfully, the approach requires considerable organisational and cultural adjustment.

Before you take a microservices system into production, you need to make sure you have certain key prerequisites in place. If your organisation doesn’t have these capabilities, you need to obtain them before you put a microservice application into production. The capabilities are highly relevant for monolithic systems too. In fact, one can argue that DevOps, Continuous Delivery and automation are more important than microservices.

Prerequisites

Rapid Provisioning: You need the capability to spin up a new server in a few hours. This fits in with cloud computing, but can be accomplished without a complete cloud service. Rapid provisioning requires automation. Being able to automate your systems and push code updates regularly are critical to deal with the complexity you will incur with microservices architecture.

Monitoring: With many loosely coupled services working together in production, things will inevitably break in ways that are hard to notice in test environments. Microservices require a far more comprehensive monitoring effort to uncover problems quickly than monoliths do. If an unexpected problem appears, you need to ensure you can roll back quickly. The microservices monitoring landscape is currently fragmented; there is no clear winner and some companies are building their own products.
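The monitoring-and-rollback logic described above can be sketched in a few lines. This is a minimal illustration, not a production monitoring system; the service names and the notion of "critical" services are hypothetical assumptions for the example.

```python
# Minimal sketch: given health-check results for each service, find the
# failing ones and decide whether a rollback should be triggered.
# Service names ("catalog", "payments") are illustrative only.

def evaluate_health(checks):
    """Return the names of services whose health check failed."""
    return [name for name, healthy in checks.items() if not healthy]


def should_roll_back(failed, critical):
    """Trigger a rollback as soon as any critical service is unhealthy."""
    return any(name in critical for name in failed)


failed = evaluate_health({"catalog": True, "payments": False, "search": True})
print(failed)                                      # ["payments"]
print(should_roll_back(failed, {"payments"}))      # True
```

In a real deployment, the `checks` dictionary would be populated by polling each service's health endpoint, and a positive rollback decision would feed into the deployment tooling.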

Rapid deployment: With many services to manage, you have to be able to deploy them quickly—both to test environments and to production. This typically means a deployment pipeline that executes in no more than a few hours. The objective should be full automation, although some manual intervention will probably be required in the early stages.
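A deployment pipeline of the kind described above can be sketched as an ordered list of stages that stops at the first failure, so a broken build never reaches production. The stage names here are illustrative assumptions, not part of any specific tool.

```python
# Sketch of an automated deployment pipeline: each stage is a callable
# returning True on success; the pipeline halts at the first failure.

def run_pipeline(stages):
    """Execute stages in order; stop and report the first failing stage."""
    for name, stage in stages:
        if not stage():
            return "failed at " + name
    return "deployed"


stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))  # "deployed"
```

Real pipelines add stages such as security scans and canary releases, but the shape—a strictly ordered, fail-fast sequence—stays the same.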

DevOps and Continuous Delivery: DevOps is a key enabler for microservices and provides the framework for developing, deploying, and managing the container ecosystem. Close collaboration between developers and operations is a must to make sure that provisioning and deployment can be done quickly. It’s also important to make sure you can respond quickly when monitoring signals an issue. Any incident management has to involve the development team and operations—both in resolving the immediate problem and the root-cause analysis to make sure the underlying issues are fixed.

Culture: Strong organisational support and an overall shift in IT culture are needed to succeed with microservices. Organisations have to increase their tolerance for risk and learn to fail gracefully, fast and often. The culture must also embrace automation of deployment and testing. For developers, operations and testing engineers with long experience of monoliths, a microservices-based system is a new reality, and they will need time to manage this shift.

Organisation: The most challenging aspects of moving from a monolith to microservices are the organisational changes required, such as building services teams that own all aspects of their microservices. Small, agile teams who can integrate their work frequently are an important precursor to microservices. A cornerstone involves promoting collective code ownership and fostering software craftsmanship.

You also need to shift towards product centred teams, and organising your development environment so developers can effortlessly switch between multiple repositories, libraries and languages.

Governance: As microservices involve a variety of technologies and platforms, centralised governance isn’t optimal. The microservices community favours decentralised governance as its developers strive to produce useful tools that can then be used by others to solve the same problems. Netflix, for example, encourages developers to save time by using code libraries established by others.

However, you must design and manage your evolution to microservices, or the result will be an uncontrollable sprawl. It’s key to have a cross-functional team in charge of the development, maintenance and operation of microservices that’s distinct from those managing the monolithic application. This team should understand how the components fit together, control architectural decisions, guide the creation of new services and ensure standards adoption. To avoid creating duplicates of services, create a shared repository of all services for teams to use in development.

Microservice architecture also favours decentralised data management. Monolithic systems use one logical database across different applications. But in a microservice application, every service normally manages its own database.

Security: A larger attack surface and greater complexity raise security requirements. You need to think about how you'll authenticate who can speak to whom and identify illicit traffic. It also involves making decisions on who has authority to work on certain services, whether all services are to be used for all tasks in the organisation, and how shared services will be billed and managed.

Moving from monolith to microservices

With the above foundational capabilities in place, your organisation is ready for a first system using a handful of microservices.

  1. Before splitting out services, it’s key that you understand your system’s domain and what it does, to be able to identify proper bounded contexts for services. Getting service boundaries wrong can result in having to make costly changes in service-to-service collaboration later on.
  2. Begin by identifying non-critical functionality in the monolithic application that is rather loosely coupled with the rest of the system. Take one piece at a time and break it off. Once the piece is working, move to the next. A more forceful approach can lead to lost functionality and make it hard to diagnose problems. Another option is to dictate that all new functionality is built as microservices.
  3. Move gradually to develop an understanding of your organisation’s ability to change. Allow the new paradigm to settle and give the organisation time to grow new capabilities, build the tooling and processes necessary to manage microservices well, learn about keeping the new system healthy and ensuring the DevOps collaboration is working well before you ramp up your number of services.
  4. Begin with some coarse-grained but self-contained services. You can fine-grain the services as the implementation progresses. Coarse-grained services result in fewer network calls between the monolith and the microservices.
  5. Minimise the changes made to the monolith to support the transition, in order to lower maintenance costs and reduce the impact of the migration.
  6. The monolithic application must be adapted to support feature toggling—enabling you to switch between using the new service and the code in the monolithic application. You need to have a fallback and prepare for a smooth load transfer to the new service. Iteratively deprecate similar functionalities in the monolith.
  7. To reduce the development and operational costs of the migration, the patterns employed by the microservices should be appropriate to the monolith’s architecture.
  8. A major challenge is to design and develop the integration between the current system and the new microservices. When part of the application is redesigned with microservices, it's common to write glue code to support communication with the new services. An API gateway can combine several individual service calls into one coarse-grained service, reducing the cost of integrating with the monolith.
  9. The new microservices should be self-contained with their own runtime environment. The services should also be deployed on infrastructure that is physically or logically isolated from the monolith.
  10. Create standardised service templates that form boilerplates for development and contain common elements required to support the microservices-based application, such as monitoring, log collection, metrics and safety mechanisms. The boilerplate should account for polyglot technologies comprising application servers, database, programming language etc. This will lower the ramp up time for development teams and create standardisation for Operations.
  11. Going further than a handful of services will require more effort, including tracing business transactions through multiple services and automating your provisioning and deployment by completely adopting Continuous Delivery.
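The feature toggling described in step 6 can be sketched as a routing decision between the monolith's code path and the new service, with an automatic fallback. The toggle name, the function signatures and the fallback behaviour are all illustrative assumptions for this example.

```python
# Sketch of feature toggling during a monolith-to-microservice migration:
# a toggle routes calls to the new service, falling back to the monolith
# path if the service raises. The "recommendations" toggle is hypothetical.

TOGGLES = {"recommendations": True}  # True routes to the new service


def fetch_recommendations(user_id, monolith_fn, service_fn):
    """Route the call based on the toggle; fall back to the monolith
    implementation if the new service fails."""
    if TOGGLES.get("recommendations"):
        try:
            return service_fn(user_id)
        except Exception:
            pass  # smooth fallback: the monolith path still works
    return monolith_fn(user_id)
```

Once the new service has proven stable under full load, the monolith's code path—and eventually the toggle itself—can be deprecated, as step 6 suggests.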

When is the microservice transformation done?

Creating and maintaining an information system is never done, and this holds for systems built with microservices too. Architects and developers can spend a lot of time attempting to identify the "ideal" solution or implementation model for their system design, which rarely works. One advantage of microservices is that change over time isn't as costly or risky as it is in tightly coupled, large-scope release models. Perfecting the system is unviable since it will always be a moving target, and the moment you declare a final state you'll start to accumulate technical debt, progressing towards an outdated system that is difficult to change. Working according to the microservices model implies many small releases over time, and you'll continuously be changing and improving something. This means there will be many milestones along the way that together add up to major changes over time.

Greenfield Projects

For greenfield development, Martin Fowler and Sam Newman generally recommend starting new products as monoliths; once you have a sense of how the product will be used, you can decompose from there. This is because it's hard to know how best to divide up a monolith until its usage has been observed, and it's easier to chunk up something you already have. However, some disagree with this approach, stressing that it's absolutely key to know the domain very well before starting.

Tools

This article will not go into the technical implementation details of microservices. However, Matt Miller at Sequoia Capital has produced an excellent map of the Microservices Ecosystem, containing both commercial and open-source projects important to microservices today. Toolsets that need to be added to the stack include orchestration, monitoring, security, inter-service communications, API management and more. The most straightforward way to get started is with Docker, which is an attractive platform for both startups and enterprises due to the number of available tools and the ecosystem around it. You can register for a hosted container service such as Google Container Engine or Amazon EC2 Container Service to get a sense of what it's like to deploy and manage containerised applications.

Sequoia microservices ecosystem map


How Spotify and Netflix are leveraging microservices to move faster and stay innovative

Spotify and Netflix have both successfully implemented microservices at scale. While they share some success factors, there is no single answer for every company, and before shifting to microservices you should consider cultural, organisational and process changes alike.

Spotify

Spotify has been using microservices at scale with thousands of running instances for some years. With 90 teams, 600 developers, and 5 development offices on 2 continents building the same product, Spotify has to minimise dependencies and avoid synchronisation issues. Their microservices are built in very loosely coupled architectures with no strict dependencies between individual components.

Spotify works with autonomous full-stack DevOps teams called Squads, consisting of back-end developers, front-end developers, testers, a UI designer, and a product owner. Teams have the freedom to act on their own but to prevent repetition of work, it is well-defined what teams should be working on. Each team has its own mission, which doesn’t overlap with that of other teams. Guilds help people in different teams learn from one another.

Team autonomy is balanced with common requirements on what services should do. Spotify lets teams devise solutions to their own problems while also making it easy to follow evolved best practices. Spotify doesn't enforce rules, but makes certain things cumbersome not to do. Teams can challenge common ways of working if they want, but they have to work harder to make it happen.

Developers deploy services themselves and are in charge of their own operations. With teams having operational responsibility, poorly written code gets fixed quickly, as the developers are the ones who have to wake up at night to deal with incidents.

The microservices architecture allows Spotify to have a number of services down at the same time without users even noticing it. The system is built on the assumption that services can fail at any time, so individual services that could be failing can’t kill the user experience.

“The microservices architecture allows Spotify to have a number of services down at the same time without users even noticing it.”

Microservices require good documentation and discovery tools, and Spotify uses a home-grown system overview tool for service discovery and documentation that shows all available microservices.

To mitigate the latency problems that arise when services call many other services, Spotify has built "view aggregation services" that gather the data required to populate a client view on the server side, reducing the number of calls the client has to make to the server.
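A view aggregation service of the kind described above fans one client request out to several backend services in parallel and returns a single combined response. This is a simplified sketch in that spirit, not Spotify's implementation; the service names and payloads are hypothetical.

```python
# Sketch of server-side view aggregation: one call fans out to several
# backend services concurrently and merges the results into one view.
from concurrent.futures import ThreadPoolExecutor


def aggregate_view(user_id, services):
    """Call each backend service in parallel and merge the results
    into a single response for the client."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, user_id) for name, fn in services.items()}
        return {name: future.result() for name, future in futures.items()}


# Hypothetical backends standing in for real network calls:
view = aggregate_view(7, {
    "playlists": lambda uid: ["Daily Mix"],
    "profile": lambda uid: {"id": uid},
})
print(view)  # {"playlists": ["Daily Mix"], "profile": {"id": 7}}
```

Because the backend calls run concurrently, the client-facing latency is bounded by the slowest single service rather than the sum of all calls.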

Apart from increased agility due to decentralisation and decoupling, Spotify testifies that their microservices are easier to test, deploy and monitor than their monolithic predecessor.

Netflix

Netflix’s website comprises hundreds of microservices hosted in the cloud, where every service is managed by a DevOps team. Where most companies would need months, Netflix can deploy new code into the production environment within hours thanks to its high level of automation, and its cloud-based IT architecture allows developers to launch hundreds of changes a day.

Developers automatically build pieces of code into deployable web images without having to request resources from IT operations. As these images are updated with new features or services, they’re integrated with existing infrastructure using a web-based platform on which infrastructure clusters are created.

Testing is performed cautiously with a subset of users in the production environment. When the web images are live, part of the traffic is routed to them from previous versions using load-balancing technology. Automated monitoring ensures that if anything fails, traffic is re-routed to the older versions and the new images are rolled back.
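The canary pattern described above—routing a small share of traffic to a new version and automatically reverting on failure—can be sketched as follows. This is a toy illustration of the routing decision only; the share value and version labels are assumptions, and real systems base the rollback on aggregated monitoring signals rather than a single failure report.

```python
# Sketch of canary routing with automated rollback: a small fraction of
# requests goes to the new version until a failure signal reverts all
# traffic to the stable version.
import random


class CanaryRouter:
    def __init__(self, canary_share=0.05):
        self.canary_share = canary_share  # fraction of traffic to the new version
        self.rolled_back = False

    def route(self):
        """Pick a version for one request: canary with probability
        canary_share, unless a rollback has been triggered."""
        if self.rolled_back:
            return "stable"
        return "canary" if random.random() < self.canary_share else "stable"

    def report_failure(self):
        """On a failed monitoring signal, send all traffic back to stable."""
        self.rolled_back = True
```

If the canary stays healthy, the share is typically ramped up in steps until the new version serves all traffic and the old images can be retired.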

About the authors

Jesper Nordström is a digital strategist, emerging technology analyst and head of group marketing at 3gamma. With a cross-disciplinary background, he has extensive experience working at the intersection between business, IT and design – helping companies gain competitive edge by leveraging digital technologies. Areas of expertise include digital transformation, innovation strategy and emerging technologies. Jesper holds dual degrees in engineering and business management.


Tomas Betzholtz is an IT Management consultant within the Emerging Technology capability at 3gamma. He has over 20 years of experience in the IT industry, focusing on IT architecture and transformation. He has worked with a varied set of companies, from manufacturing to finance and insurance, helping IT organisations improve their architectural skills to enable business efficiency.

