Istio Introduction
In this tutorial, we are going to introduce Istio, one of the most popular service mesh solutions, and discuss how Istio can be used in a Kubernetes cluster.
For now, I’m not going to go deep. There’s a lot of Istio architecture to get through, because Istio is really a collection of different tools and frameworks, all packaged together.
What is Istio?
Istio is an open-source implementation of a service mesh, originally developed by IBM, Google, and Lyft. It can layer transparently onto a distributed application and provide all the benefits of a service mesh, like traffic management, security, and observability.
It’s designed to work with a variety of deployments: on-premises, cloud-hosted, in Kubernetes containers, and on services running on virtual machines. Although Istio is platform-neutral, it’s quite often used together with microservices deployed on the Kubernetes platform.
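If you want to follow along, here is one common way to get Istio into a cluster, as a minimal sketch: it assumes you have already downloaded the istioctl CLI and have kubectl access to a cluster, and it uses Istio's "demo" configuration profile, which turns on most features for learning purposes.

```sh
# Install Istio into the cluster using the "demo" profile.
istioctl install --set profile=demo -y

# Verify that the Istio control plane pods are up and running.
kubectl get pods -n istio-system
```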
I guess there are probably a thousand different ways that a service mesh could be implemented, but you’re here for Istio in this tutorial. So I’m now going to tell you how Istio implements a service mesh.
How Istio implements a service mesh
Let’s start with a number of microservices running in your Kubernetes cluster.
The trick that Istio pulls off is this: into each of the pods in your system, which in general will probably have just a single container inside them, Istio is going to inject, or add, its own container. This container is called a proxy (Istio uses an extended version of the Envoy proxy), and it is just a regular container.
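You normally don’t add that proxy container yourself. With automatic sidecar injection, you label a namespace, and Istio’s admission webhook injects the proxy into every pod scheduled there. A minimal sketch (the namespace name "demo" is just a placeholder):

```yaml
# Labeling a namespace opts its pods into automatic sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: demo                     # hypothetical namespace name
  labels:
    istio-injection: enabled     # tells Istio's webhook to inject the proxy
```

After this, any pod deployed into that namespace will show two containers: your application container and the injected istio-proxy sidecar.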
Now let’s look at the scenario where one container is going to make a network call to another container.
Well, in Istio, things are set up so that the network request from the container is going to be routed to its own proxy. And it’s here, in the proxy, that the mesh logic can be implemented.
Please note that, in this tutorial, I’m not going to get into the really deep details of Istio. That would be material for an advanced Istio internals tutorial (which we’ll discuss in future tutorials).
But, just in case you’re curious, Istio will have done some iptables configuration inside the pod here. So the container thinks it’s making a remote call, but actually, it’s just calling the proxy.
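If you want a glimpse of what that looks like, the injector also adds an init container that programs those iptables rules before your application starts. The snippet below is a simplified, illustrative excerpt of an injected pod spec; the exact image tag and arguments vary by Istio version, so treat it as a sketch rather than something to copy verbatim.

```yaml
# Simplified excerpt of an injected pod spec (illustrative; version-dependent).
initContainers:
- name: istio-init
  image: docker.io/istio/proxyv2:1.20.0   # example tag, not prescriptive
  args:
  - istio-iptables
  - "-p"
  - "15001"   # redirect outbound app traffic to the proxy's outbound port
  - "-z"
  - "15006"   # redirect inbound traffic to the proxy's inbound port
  - "-u"
  - "1337"    # skip traffic from the proxy's own UID to avoid loops
```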
The proxy is then responsible for relaying that call to the target pod’s proxy. Again, there could be some mesh logic here, but ultimately, the target container is going to receive that call.
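To make "mesh logic" concrete, here is a hedged sketch of an Istio VirtualService that tells the proxies to retry failed calls to a service. The service name "reviews" is a hypothetical placeholder; the retry fields are part of Istio's networking API.

```yaml
# A VirtualService configuring retry behavior in the proxies.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews            # hypothetical service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    retries:
      attempts: 3          # retry a failed request up to 3 times
      perTryTimeout: 2s    # give each attempt 2 seconds
```

Notice that the application containers never see any of this: the retry behavior lives entirely in the proxies.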
And it’s possible that this container, to produce the results of this request, needs to call a container in another pod.
So the story would continue: that request would be routed through its own proxy, to the new target’s proxy, and then on to the target container.
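If you want to see this routing for yourself, istioctl can dump the configuration each proxy has been given. The pod and namespace names below are placeholders for your own values.

```sh
# Inspect the listeners and routes a pod's proxy knows about.
istioctl proxy-config listeners <pod-name> -n <namespace>
istioctl proxy-config routes <pod-name> -n <namespace>
```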