The microservices architectural style simplifies the implementation of individual services. However, connecting, monitoring and securing hundreds or even thousands of microservices is not simple. Furthermore, as a services architecture becomes more heterogeneous, it becomes more difficult (or impractical) to restrict service implementations to specific libraries, frameworks or even languages. Kubernetes has become the de facto standard for container deployments, and a service mesh is an important component in solving the problems above. In this multi-part article we delve into questions such as: What is a service mesh? Where does Istio fit in? What are the features and advantages of the Istio service mesh? How do you install and configure Istio on Google Kubernetes Engine (GKE) and AWS? We also walk through sample use cases. This article is targeted at beginners and intermediate users of Kubernetes and Istio.
A service mesh is an inter-service communication infrastructure for microservices applications. If you come from the traditional world of integration and the Enterprise Service Bus (ESB), you will notice some similarities. A service mesh is different from an ESB, however, because it is implemented as infrastructure outside of your applications. Rather than coding remote-communication management directly into your apps, that logic is decoupled into an interconnected mesh of proxies, reducing the burden on developers. A service mesh usually provides:

- Service discovery
- Resiliency features such as retries, timeouts and deadlines
- Prevention of cascading failures through circuit breakers
- Load-balancing algorithms and request routing for canary releases
- Encryption, authentication and authorization
- Rich sets of metrics providing instrumentation at the service-to-service layer

A service mesh is usually implemented through a proxy service deployed alongside each service, called a sidecar. Some of the well-known service meshes are:
Linkerd – an open-source project sponsored by Buoyant. It was the first product to popularize the term “service mesh”. Linkerd is designed as a powerful, multi-platform, feature-rich service mesh that can run anywhere. Built on Twitter’s Finagle library, Linkerd is written in Scala and runs on the JVM. It can scale to tens of thousands of requests per second per instance.
Envoy – an open-source project created by Matt Klein and the team at Lyft. It is a high-performance proxy written in C++, designed for modern cloud-native service architectures. Envoy can be used either as a standalone proxying layer or as a universal data plane for service mesh architectures. Envoy has a diverse community of contributors who use it in production.
Istio – initially launched with backing from Google, IBM, and Lyft, which donated its Envoy proxy to make the network transparent to applications. We will look at this service mesh platform in detail in this article.
Istio is an open platform that provides an easy way to integrate microservices, manage traffic flow across them, enforce policies, and aggregate telemetry data. Istio is platform independent and supports on-premise, Kubernetes, Mesos and other environments. Istio's goal is to give developers visibility into microservices without requiring changes to application code. The platform sits at the network level and accelerates microservices development and maintenance, decoupling operational management from application development. Currently Istio supports services deployed on Kubernetes, services registered with Consul, and services running on individual machines.
Istio is also being used as a building block for the Google Knative project. Google announced Knative – an open-source set of components for building and deploying container-based serverless applications that can be ported between cloud providers. Knative runs as an abstraction layer on top of Istio and Kubernetes. It uses parts of Istio for Secure Sockets Layer (SSL) and Transport Layer Security (TLS).
The Istio service mesh provides the following functionality:
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
- A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
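As a concrete illustration of the fine-grained routing rules listed above, the following hypothetical manifests split traffic between two versions of a service for a canary release. The service name `reviews` and the subset labels are assumptions made for this sketch, not part of any real deployment:

```yaml
# Hypothetical canary rule: send 90% of traffic to v1 and 10% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
---
# The subsets above are defined by a DestinationRule that maps them
# to pod labels (here, a "version" label is assumed).
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

Adjusting the `weight` fields gradually shifts traffic to the new version without redeploying either workload.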
Istio consists of a data plane and a control plane.
The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices, working together with Mixer, a general-purpose policy and telemetry hub. Envoy is a high-performance proxy, developed in C++, that mediates all inbound and outbound traffic for every service in the mesh.
The control plane manages and configures the proxies to route traffic. Additionally, the control plane configures Mixers to enforce policies and collect telemetry.
- Pilot: provides routing rules and service discovery information to the Envoy proxies.
- Mixer: collects telemetry from each Envoy proxy and enforces access control policies.
- Istio-Auth: provides service-to-service and user-to-service authentication, and can upgrade unencrypted traffic between services to mutual TLS.
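As a sketch of what Istio-Auth enables, Istio releases of this generation expose a v1alpha1 authentication API that can require mutual TLS for all workloads in a namespace. The namespace below is an assumption for the example:

```yaml
# Hypothetical policy: require mutual TLS for services in the
# "default" namespace (Istio v1alpha1 authentication API).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  peers:
  - mtls: {}
```

Note that client sidecars must also be configured to originate mutual TLS (via a DestinationRule traffic policy with `ISTIO_MUTUAL` mode), otherwise plain-text callers will be rejected.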
The main advantages of Istio are:

- Istio provides most of the functionality expected of a service mesh.
- Little developer effort is required to benefit from these features; Istio can automatically inject itself into the network path between services.
- Istio provides many integrations and customizations.
- Istio is platform independent.
- Istio can be deployed on multiple clouds for redundancy.
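The automatic injection mentioned above is typically enabled per namespace: labeling a namespace with `istio-injection=enabled` causes Istio's admission webhook to add an Envoy sidecar to every pod created there. A minimal sketch, assuming a hypothetical namespace named `demo`:

```yaml
# Hypothetical namespace with automatic sidecar injection enabled.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled   # Istio's webhook injects an Envoy sidecar
                               # into pods created in this namespace
```

After this label is applied, new pods in the namespace show an additional `istio-proxy` container alongside the application container; no application code or deployment manifest changes are required.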