Container Network Interface

In this tutorial, we are going to discuss the Container Network Interface (CNI). So far, we have seen how network namespaces work, that is, how to create an isolated network environment within our system.

We discussed how to connect multiple such namespaces through a bridge network: how to create virtual cables, or pipes, with virtual interfaces on either end, and how to attach one end to a namespace and the other to the bridge.

We then discussed how to assign IP addresses to those interfaces and bring them up, and finally how to enable NAT, or IP masquerade, for external communication. We also saw how Docker does it for its bridge networking option: it is pretty much the same approach, except that it uses different naming patterns.
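
As a quick recap, those manual steps look roughly like the following. This is a hedged sketch; the namespace name red, the bridge name v-net-0, the interface names and the 10.244.0.0/24 range are all illustrative choices, not fixed values.

$ ip netns add red                                      # create an isolated network namespace
$ ip link add v-net-0 type bridge                       # create a bridge on the host
$ ip link set dev v-net-0 up
$ ip link add veth-red type veth peer name veth-red-br  # virtual cable with an interface on either end
$ ip link set veth-red netns red                        # attach one end to the namespace
$ ip link set veth-red-br master v-net-0                # attach the other end to the bridge
$ ip link set veth-red-br up
$ ip -n red addr add 10.244.0.2/24 dev veth-red         # assign an IP address inside the namespace
$ ip -n red link set veth-red up                        # bring the interface up
$ iptables -t nat -A POSTROUTING -s 10.244.0.0/24 -j MASQUERADE   # NAT for external communication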

How Container Solutions Solve Networking Challenges

Other container solutions solve the networking challenges in much the same way, for example rkt, Mesos Containerizer, or any other solution that works with containers and needs to configure networking between them, such as Kubernetes.

If we are all solving the same networking challenges, and we each end up with a similar approach apart from minor differences, why code and develop the same solution multiple times? Why not create a single standard approach that everyone can follow?

So we take these ideas from the different solutions and move all of the networking work into a single program, and since this one is for the bridge network, we call it bridge.

We have now created a program, or a script, that performs all the tasks required to attach a container to a bridge network.

For example, you could run this program by its name, bridge, and specify that you want to add a particular container to a particular network namespace.

The bridge program takes care of the rest, so the container runtime is relieved of those tasks:

$ bridge add 2e34dcf34 /var/run/netns/2e34dcf34

So whenever rkt or Kubernetes creates a new container, it calls the bridge plugin and passes the container ID and namespace path to get networking configured for that container.

What if you wanted to create such a program yourself, maybe for a new networking type? If you were doing so, what arguments and commands should it support?

How do you make sure the program you create will work correctly with these runtimes? How do you know that container runtimes like Kubernetes or rkt will invoke your program correctly?

Container Network Interface (CNI)

We need standards defined for the above problems: a standard that defines how the programs should look and how container runtimes will invoke them, so that everyone can adhere to a single set of standards and develop solutions that work across runtimes. That is where the Container Network Interface comes in.

The CNI is a set of standards that define how programs should be developed to solve networking challenges in a container runtime environment. The programs are referred to as plugins.

In this case, the bridge program we have been referring to is a CNI plugin. CNI defines how plugins should be developed and how container runtimes should invoke them.

CNI defines a set of responsibilities for container runtimes and plugins. For container runtimes, CNI specifies that the runtime is responsible for creating a network namespace for each container.

The runtime should then identify the networks the container must attach to. It must invoke the plugin with the ADD command when a container is created, and with the DEL command when the container is deleted.

It also specifies how a network plugin should be configured on the container runtime environment using a JSON file.
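
For example, a bridge network could be described with a JSON configuration file like the one below. This is an illustrative sketch; the file path, the network name mynet, the bridge name cni0 and the subnet are assumptions.

$ cat /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.4.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}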

On the plugin side, it defines that the plugin must support the ADD, DEL and CHECK commands, and that these should accept parameters such as the container ID and the network namespace.
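
In practice, the runtime passes these parameters to the plugin binary as environment variables and feeds the JSON network configuration on standard input. A hedged sketch of such an invocation, reusing the container ID from earlier and the sample configuration file above, might look like this:

$ CNI_COMMAND=ADD \
  CNI_CONTAINERID=2e34dcf34 \
  CNI_NETNS=/var/run/netns/2e34dcf34 \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf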

The plugin should take care of assigning IP addresses to the pods and setting up any routes required for the containers to reach other containers in the network.

Finally, the results should be returned in a particular format. As long as the container runtimes and plugins adhere to these standards, they can all work together in harmony.
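
For instance, on a successful ADD the plugin prints a JSON result to its standard output. A simplified, illustrative example, with assumed addresses and interface names, might look like this:

{
    "cniVersion": "0.4.0",
    "interfaces": [
        { "name": "eth0", "sandbox": "/var/run/netns/2e34dcf34" }
    ],
    "ips": [
        {
            "version": "4",
            "address": "10.244.0.2/24",
            "gateway": "10.244.0.1",
            "interface": 0
        }
    ],
    "routes": [
        { "dst": "0.0.0.0/0" }
    ],
    "dns": {}
}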

Supported Plugins

Any runtime should be able to work with any plugin. CNI already comes with a set of supported plugins, such as bridge, VLAN, IPVLAN, MACVLAN and one for Windows, as well as IPAM plugins like host-local and dhcp.
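
On a typical installation, these plugin binaries are placed in /opt/cni/bin; the exact set depends on the version installed, so the listing below is only indicative:

$ ls /opt/cni/bin
bridge  dhcp  host-local  ipvlan  loopback  macvlan  vlan  ...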

There are other plugins available from third-party organizations as well, for example Weave, Flannel, Cilium, VMware NSX, Calico and Infoblox.

All of these container runtimes implement the CNI standards, so any of them can work with any of these plugins. But there is one runtime that is not on this list: Docker.

Docker does not implement CNI. Docker has its own standard known as CNM, which stands for Container Network Model; it is another standard that aims to solve container networking challenges, similar to CNI but with some differences.

Due to those differences, these plugins don't natively integrate with Docker. This means you can't run a Docker container and tell it to use CNI with one of these plugins directly. That doesn't mean you can't use Docker with CNI at all; you just have to work around it yourself.

For example, create a Docker container without any network configuration and then manually invoke the bridge plugin yourself. That is pretty much how Kubernetes does it.

When Kubernetes creates Docker containers, it creates them on the none network. It then invokes the configured CNI plugin, which takes care of the rest of the configuration.
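
A hedged sketch of that workaround, with an illustrative container name and the sample configuration file from earlier, could look like this:

$ docker run --name web --network none -d nginx          # Docker sets up no networking
$ pid=$(docker inspect -f '{{.State.Pid}}' web)          # find the container's process ID
$ mkdir -p /var/run/netns
$ ln -s /proc/$pid/ns/net /var/run/netns/web             # expose its network namespace
$ CNI_COMMAND=ADD CNI_CONTAINERID=web \
  CNI_NETNS=/var/run/netns/web CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf    # let the CNI plugin configure networking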
