TCP Traffic Shifting



In this tutorial, we are going to discuss TCP traffic shifting. I am going to use the sample sleep application.

Let us understand how TCP traffic shifting works. For that, I'm going to create a dedicated namespace and create the required components within it. Let me go ahead and execute the required commands.

First, I'm going to list the namespaces we currently have with the following command:

root@cluster-node:~/istio-1.10.0# kubectl get namespaces --show-labels
NAME              STATUS   AGE   LABELS
kube-system       Active   12d   
kube-public       Active   12d   
kube-node-lease   Active   12d   
metallb-system    Active   10d   app=metallb
istio-system      Active   10d   
default           Active   12d   istio-injection=enabled

This is to identify whether Istio injection is enabled or not. So these are all the namespaces I have. Now I'm going to create a namespace called istio-io-tcp-traffic-shifting.

root@cluster-node:~/istio-1.10.0# kubectl create namespace istio-io-tcp-traffic-shifting
namespace/istio-io-tcp-traffic-shifting created

So the namespace is created. Now I'm going to enable Istio sidecar injection by adding the label istio-injection=enabled to this namespace.

root@cluster-node:~/istio-1.10.0# kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled --overwrite
namespace/istio-io-tcp-traffic-shifting labeled

root@cluster-node:~/istio-1.10.0# kubectl get namespaces --show-labels
NAME                            STATUS   AGE     LABELS
kube-system                     Active   12d     
kube-public                     Active   12d     
kube-node-lease                 Active   12d     
metallb-system                  Active   10d     app=metallb
istio-system                    Active   10d     
default                         Active   12d     istio-injection=enabled
istio-io-tcp-traffic-shifting   Active   3m22s   istio-injection=enabled

Deploy Sleep container

Now within this namespace I'm going to deploy the sleep container, so that I can use it to execute commands. I will apply the sleep.yaml file in the namespace istio-io-tcp-traffic-shifting.

root@cluster-node:~/istio-1.10.0# kubectl apply -f samples/sleep/sleep.yaml -n istio-io-tcp-traffic-shifting
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created

The service account, service, and deployment get created. Now, within the same namespace, I'm going to create the tcp-echo service.

Basically, it will be listening on ports 9000, 9001, and 9002, and it's going to create two versions of the deployment: tcp-echo version 1 and tcp-echo version 2. The only difference between the two is that version 1 returns the word one and version 2 returns the word two whenever I run netcat against this service.

Basically, the text I send will be echoed back, prefixed with the word corresponding to the version that handles the request.
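For reference, the sample file defines roughly the following (abridged sketch here; consult samples/tcp-echo/tcp-echo-services.yaml in your Istio distribution for the exact contents):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
  labels:
    app: tcp-echo
spec:
  ports:
  - name: tcp
    port: 9000
  - name: tcp-other
    port: 9001
  selector:
    app: tcp-echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v1
  labels:
    app: tcp-echo
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        # first arg: ports to listen on; second arg: prefix echoed back
        args: [ "9000,9001,9002", "one" ]
        ports:
        - containerPort: 9000
        - containerPort: 9001
```

The tcp-echo-v2 deployment is identical except the version label is v2 and the prefix argument is "two".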

Let me go ahead and apply this yaml file in the same namespace.

root@cluster-node:~/istio-1.10.0# kubectl apply -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
service/tcp-echo created
deployment.apps/tcp-echo-v1 created
deployment.apps/tcp-echo-v2 created

So it's going to create the deployment for version 1, the deployment for version 2, and the corresponding service. Now I need to identify the TCP ingress port. This is very similar to the way we identified the ingress port for HTTP.

Export Ingress Port

So now let me go ahead and get the TCP ingress port using the following command:

root@cluster-node:~/istio-1.10.0# export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')

Let me echo this particular port.

root@cluster-node:~/istio-1.10.0# echo $TCP_INGRESS_PORT
31392

So this is the node port on which the ingress gateway is listening for TCP traffic. Let me also echo the ingress host.

root@cluster-node:~/istio-1.10.0# echo $INGRESS_HOST
192.168.1.50
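If $INGRESS_HOST is not already set from an earlier tutorial, on a NodePort-style setup like this one it can be derived from the node running the ingress gateway (a sketch, assuming the standard istio=ingressgateway pod label):

```
export INGRESS_HOST=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].status.hostIP}')
```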

Now I'm going to create the virtual service so that all the traffic to the tcp-echo service gets routed only to version 1.

root@cluster-node:~/istio-1.10.0# kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
gateway.networking.istio.io/tcp-echo-gateway created
destinationrule.networking.istio.io/tcp-echo-destination created
virtualservice.networking.istio.io/tcp-echo created

Within the yaml file there is a virtual service that routes the traffic only to the subset v1. The virtual service tcp-echo will be created, and it listens on port 31400. The gateway accepts traffic from all hosts on port 31400, and the destination rule defines subsets for both versions, but the virtual service routes the traffic only to the subset v1.
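The three resources look roughly like this (abridged sketch; the exact contents are in samples/tcp-echo/tcp-echo-all-v1.yaml):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
```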

Now let us trace how a request flows from the node port to the ingress gateway, then through the tcp-echo gateway to the virtual service of this service.

Now I am going to get the name of the pod with the label app=sleep within this namespace.

root@cluster-node:~/istio-1.10.0# kubectl get pod -l app=sleep -n istio-io-tcp-traffic-shifting -o jsonpath={.items..metadata.name}
sleep-8f796hf5i-pq6fhu2

So sleep-8f796hf5i-pq6fhu2 is the name of the pod that got created within this namespace. And I'm going to use this pod to run netcat.

Netcat against the host and port

If I run netcat against the ingress host and the TCP ingress port, the service should echo back whatever text I provide. If I type hello, I get the response one hello. If I type hello again, I get the response one hello again.

root@cluster-node:~/istio-1.10.0# nc $INGRESS_HOST $TCP_INGRESS_PORT
hello
one hello 
hello again
one hello again

That's because we configured tcp-echo to route all traffic to the v1 subset. Now I can use the kubectl exec command to run netcat from the sleep container.

Let me execute this so we can better understand the command. Within the sleep pod, I'm going to exec into the sleep container in this namespace and run the following command.

root@cluster-node:~/istio-1.10.0# kubectl exec sleep-8f796hf5i-pq6fhu2 -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date;) | nc $INGRESS_HOST $TCP_INGRESS_PORT"
one Sun Jul 25 12:05:12 UTC 2021

Basically, I'm piping the current date through netcat to the ingress host and port. The service gives the response back by prefixing one to the date.

The reason is that all requests get routed only to the v1 subset of this virtual service. Now I can execute the same thing within a for loop. Let me go ahead and do that.

root@cluster-node:~/istio-1.10.0# for i in {1..10}; do \
kubectl exec sleep-8f796hf5i-pq6fhu2 -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
done
one Sun Jul 25 12:05:12 UTC 2021
one Sun Jul 25 12:05:13 UTC 2021
one Sun Jul 25 12:05:14 UTC 2021
one Sun Jul 25 12:05:15 UTC 2021
one Sun Jul 25 12:05:16 UTC 2021
one Sun Jul 25 12:05:17 UTC 2021
one Sun Jul 25 12:05:18 UTC 2021
one Sun Jul 25 12:05:19 UTC 2021
one Sun Jul 25 12:05:20 UTC 2021
one Sun Jul 25 12:05:21 UTC 2021

So I have a for loop that runs from 1 to 10. Within it, I run the same kubectl command as before, with a sleep 1 so that each request is spaced one second apart. All 10 requests get routed only to the v1 subset.

Shift traffic

Now comes the interesting part, where I am going to shift 20 percent of the traffic to version 2. Let me go ahead and apply the yaml file that does this.

root@cluster-node:~/istio-1.10.0# kubectl apply -f samples/tcp-echo/tcp-echo-20-v2.yaml -n istio-io-tcp-traffic-shifting
virtualservice.networking.istio.io/tcp-echo configured

The applied yaml file routes the traffic with a weight of 80 percent to version 1 and 20 percent to version 2.

With this change, let me run the same loop of 10 requests. Along with one, I should now get the response two roughly 20 percent of the time.
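The updated virtual service looks roughly like this (abridged sketch; the exact contents are in samples/tcp-echo/tcp-echo-20-v2.yaml):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 80
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 20
```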

root@cluster-node:~/istio-1.10.0# for i in {1..10}; do \
kubectl exec sleep-8f796hf5i-pq6fhu2 -c sleep -n istio-io-tcp-traffic-shifting -- sh -c "(date; sleep 1) | nc $INGRESS_HOST $TCP_INGRESS_PORT"; \
done
one Sun Jul 25 12:05:12 UTC 2021
one Sun Jul 25 12:05:13 UTC 2021
two Sun Jul 25 12:05:14 UTC 2021
one Sun Jul 25 12:05:15 UTC 2021
one Sun Jul 25 12:05:16 UTC 2021
two Sun Jul 25 12:05:17 UTC 2021
one Sun Jul 25 12:05:18 UTC 2021
one Sun Jul 25 12:05:19 UTC 2021
two Sun Jul 25 12:05:20 UTC 2021
one Sun Jul 25 12:05:21 UTC 2021

So we're able to route the traffic to different versions of the service without making any change to the actual code.

This is the biggest advantage of Istio: I can inject routing configuration into the proxy and route the traffic accordingly. In the same way, I can go ahead and update the weights through the yaml file or from the console and make the required changes.

This is very similar to HTTP traffic shifting, but here it applies to TCP ports. Mostly, I will be using this kind of traffic shifting for canary deployments: if I have a risky change, I'll deploy it as a separate version and route only a percentage of the traffic to it.

Once that version has stabilized, I can route all the traffic to the newer version of the application.
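There is no bundled sample file for that final step, but as an illustration the route section of the tcp-echo virtual service would simply point at the v2 subset (a hypothetical sketch, following the same structure as the earlier sample):

```yaml
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
```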
