Intro To Service Meshes For The Busy Developer
Service meshes are the hot new thing. You hear about them on podcasts and at conferences, and you start to feel like you should be using the pattern. In the rest of this post I'll break down the main components of the service mesh pattern, how it works at a high level, and how it affects your application.
Basics of a Service Mesh
The service mesh pattern is used in distributed systems to manage the complexity of service-to-service interaction. A service mesh gives developers service discovery, secure communication, and circuit breaking out of the box. To do this, the boilerplate code each development team would usually write is moved into a single proxy service, which is then attached to each application. Centralizing this logic in a proxy lets teams standardize it and takes away the boring work.
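To make "boilerplate" concrete, here is a sketch in Python of the kind of retry-with-backoff wrapper every team ends up writing on its own. In a mesh, this logic lives in the proxy instead of your app. The function names and parameters here are illustrative, not from any particular library:

```python
import random
import time

def call_with_retries(fn, attempts=3, base_delay=0.1):
    """Retry a flaky network call with exponential backoff.

    This is exactly the sort of code a service mesh moves out of
    every application and into the shared sidecar proxy.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error
            # Exponential backoff with a little jitter before retrying.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

print(call_with_retries(lambda: "ok"))  # → ok
```

Multiply this by every language and every team in your organization and the appeal of doing it once, in one proxy, becomes obvious.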
Service meshes have two key components: the data plane and the control plane.
The data plane
The data plane provides features like:
- request routing
- circuit breaking
- secure communication
- health-checking other nodes
These features are implemented as a proxy that is attached to the main service in a Sidecar Pattern.
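To make one of those features concrete, here is a minimal Python sketch of circuit breaking, the kind of logic the sidecar proxy implements so your service doesn't have to. The class name and thresholds are illustrative, not taken from any real proxy:

```python
import time

class CircuitBreaker:
    """Stops sending requests to a failing upstream until a cooldown passes."""

    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures  # failures before the circuit opens
        self.reset_after = reset_after    # seconds to wait before a trial request
        self.failures = 0
        self.opened_at = None             # None means the circuit is closed

    def allow_request(self):
        if self.opened_at is None:
            return True
        # Circuit is open: allow one trial request after the cooldown elapses.
        return time.monotonic() - self.opened_at >= self.reset_after

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(max_failures=3, reset_after=10.0)
for _ in range(3):
    breaker.record_failure()
print(breaker.allow_request())  # → False (circuit is open)
```

Because this runs in the proxy, every service in the mesh gets the same behavior without a single team writing or tuning it themselves.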
One of the top players in this space is Envoy, though you could also use NGINX.
The control plane
The control plane gives the operators the ability to:
- set routes for service discovery
- set load balancer controls
- change settings for authentication and authorization (at a service-to-service request level)
- provide a single place for telemetry
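As a rough mental model of what the control plane does, think of it as holding the desired routing state and pushing that state down to every sidecar. The sketch below is a toy in Python; all the names are invented for illustration, and real control planes distribute config over richer streaming APIs (for example, Envoy's xDS protocol):

```python
# Toy model: a control plane distributing routing config to data-plane sidecars.
routing_table = {
    "billing": {"endpoints": ["10.0.1.5:8080", "10.0.1.6:8080"], "timeout_s": 2.0},
    "users":   {"endpoints": ["10.0.2.7:8080"], "timeout_s": 1.0},
}

class Sidecar:
    def __init__(self):
        self.routes = {}

    def apply_config(self, table):
        # In a real mesh this config arrives over a streaming API,
        # not a direct method call.
        self.routes = table

def push_to_all(sidecars, table):
    """The control plane's core job: fan desired state out to every proxy."""
    for sidecar in sidecars:
        sidecar.apply_config(table)

sidecars = [Sidecar(), Sidecar()]
push_to_all(sidecars, routing_table)
print(sidecars[0].routes["billing"]["endpoints"])
```

The key design point is that operators edit one central table, and the mesh takes care of converging every proxy to it.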
These operations may be driven by a human running a set of scripts to trigger deployments, or handled by a full control plane product like Istio or Kuma. Keep in mind that both of those products package up the control plane and the data plane together.
How does this affect an application?
You’re a developer. You have an app in a container. How will being deployed in a service mesh affect your application?
The effect a service mesh has should be very small. The pattern is there to help with debugging, tuning, performance, and operations. That said, how the proxy sidecar is configured will have the biggest effect. The most common configuration is to call the sidecar on 127.0.0.1 instead of the remote service's hostname.
Here is a patch that shows how you might be calling other applications and how you would call them with a proxy sidecar:
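A sketch of the shape of that change, in Python (hostnames and the sidecar port are illustrative; your mesh's injection setup determines the real values):

```python
# Before: the app builds a URL for the remote service directly, and is
# responsible for discovery, retries, and TLS itself.
def user_url_direct(user_id):
    return f"http://users.internal.example.com:8080/users/{user_id}"

# After: the app calls the local sidecar proxy on loopback; the proxy
# resolves the real upstream and applies retries, circuit breaking,
# and secure (mTLS) transport on the app's behalf.
SIDECAR_PORT = 9001  # illustrative; set by your mesh configuration

def user_url_via_sidecar(user_id):
    return f"http://127.0.0.1:{SIDECAR_PORT}/users/{user_id}"

print(user_url_via_sidecar(42))  # → http://127.0.0.1:9001/users/42
```

Notice the application code barely changes: the request target moves to loopback, and everything else the mesh does happens outside your process.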
Do you need a service mesh?
Questions to ask yourself:
- Do I have a lot of microservices?
- Are my microservices slowly becoming hard to trace?
- Are my microservices in different languages?
- Do I lose sleep trying to figure out how to debug my newest microservice?
If you answered “yes” to most of these questions, you may want to take a deeper look at setting up a service mesh for your company or project. You could even build up your service mesh over time by layering in functionality. A good place to start is a proxy sidecar: you get a lot of stability features while also cleaning up routing and snowflake configuration.
Ready to go deeper? These are some of the resources I used to learn the Service Mesh pattern: