Kubernetes and NFV

Asifiqbal Pathan
Jun 6, 2021

Kubernetes is being adopted by several telco providers, mainly for 5G deployments where the network functions are containerized. One important capability needed here is deploying Kubernetes pods with multiple interfaces to support typical requirements such as separation of management, control, and data traffic, multi-tenancy, segmentation, redundancy, and performance.

The CNI (Container Network Interface) specification for Kubernetes provides a generic, plugin-based networking solution to configure network interfaces in Linux containers. In the default networking configuration of a Kubernetes cluster, a pod is exposed through only a single interface. CNI plugins can be chained or can delegate to other plugins to achieve the necessary functionality. The network CNI plugin, or main CNI plugin, sets up the networking for the primary interface. Some of the implementations that create an interface in the container are as follows:

  • bridge: Creates a bridge and adds the host and the container to it.
  • ipvlan: Adds an ipvlan interface in the container.
  • macvlan: Creates a new MAC address and forwards traffic addressed to it into the container.
  • ptp: Creates a veth pair.
  • vlan: Allocates a vlan device.
  • loopback: Creates a loopback interface.

The main CNI plugin then delegates to an IPAM CNI plugin to allocate IP addresses, as sketched below. The point to note here is that the pod is limited to a single interface and all traffic goes through it. This interface is also what ties the pod into the built-in Kubernetes networking constructs, such as Services and exposing a Service externally.
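
As a concrete illustration, here is a minimal sketch of such a configuration file (typically placed under /etc/cni/net.d/ on each node); the network name, bridge name, and subnet are placeholders. The bridge plugin creates the interface and delegates address assignment to the host-local IPAM plugin.

```json
{
  "cniVersion": "0.3.1",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```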

Primary interface of the Kubernetes pod
Kubernetes Services: external IP via load balancer
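
For reference, this is roughly how that single interface is consumed by a built-in construct: a Service of type LoadBalancer selects the pods and exposes them through an external IP. The name, label, and ports below are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cnf-web              # placeholder Service name
spec:
  type: LoadBalancer         # external IP provisioned by the cluster's load balancer integration
  selector:
    app: cnf-web             # assumes the pods carry the label app=cnf-web
  ports:
    - name: http
      port: 80               # port exposed on the external IP
      targetPort: 8080       # container port behind the Service
      protocol: TCP
```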

The core, or reference, CNI plugins are leveraged by several vendors such as Calico and Weave, whose main CNI plugins add networking features and functionality such as running a routing protocol like BGP, building an overlay network, and zero-trust segmentation.
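
As one concrete example of the segmentation these plugins enforce, a default-deny NetworkPolicy (the namespace below is a placeholder) blocks all ingress and egress for every pod in the namespace until explicit allow rules are added; plugins such as Calico provide the actual enforcement.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: cnf-namespace   # placeholder namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```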

Coming back to the requirement of having more than one network interface per Kubernetes pod, the concept of CNI plugin chaining and delegation can be extended to achieve this. One such implementation is the Multus CNI plugin, a meta CNI plugin that invokes one or more other CNI plugins to allow the creation and control of multiple interfaces in a Kubernetes pod.
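
With Multus, each additional network is described by a NetworkAttachmentDefinition custom resource that wraps a delegate CNI configuration. A sketch using the macvlan plugin is shown below; the attachment name, the host interface ens3, and the subnet are assumptions chosen for illustration.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net1         # placeholder attachment name
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.10.0/24"
      }
    }
```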

In the figure below, the Calico CNI plugin manages the “eth0” interface of the pod. Typically, the default route in the pod is configured via this network. Subsequently, the other plugins are invoked sequentially to create additional interfaces in the pod, shown as the “net1” and “net2” interfaces.
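
A sketch of how a pod requests those additional attachments: Multus reads the k8s.v1.cni.cncf.io/networks annotation and invokes the delegate plugins in order, so the first attachment appears as net1 and the second as net2, while the default plugin (Calico here) keeps owning eth0. The attachment names and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-cnf                                          # placeholder pod name
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net1,macvlan-net2  # attachments created by Multus as net1 and net2
spec:
  containers:
    - name: app
      image: registry.example.com/cnf:latest                # placeholder image
```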

Another requirement in the 5G world is the need to support SR-IOV (Single Root I/O Virtualization) for CNFs. For this, DPDK (Data Plane Development Kit) libraries and drivers must be supported by the CNF as well as by the CNI plugin. The SR-IOV CNI plugin can be invoked by the Multus CNI plugin and enables Kubernetes pods to attach to an SR-IOV Virtual Function (VF) carved out of the physical NIC (PNIC).
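
A sketch of an SR-IOV attachment, assuming the SR-IOV network device plugin already advertises the VFs under a resource name such as intel.com/intel_sriov_netdevice; the resource name, subnet, and attachment name vary per cluster and are placeholders here.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net1           # placeholder attachment name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "sriov",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.10.0/24"
      }
    }
```

Alongside the networks annotation shown earlier, the pod would also request the VF as an extended resource (for example, intel.com/intel_sriov_netdevice: "1" under resources.requests and resources.limits) so that the scheduler places it on a node with free VFs.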

Multus Meta CNI Plugin

The net1 and net2 interfaces are outside the scope of built-in Kubernetes networking, but they can interact with the physical fabric in different ways, such as establishing BGP peering sessions to exchange routing information. This is illustrated below: similar to the BGP components of the Calico main CNI plugin, the CNFs are also able to individually establish BGP peering with the network fabric.

Integration between CNF and Network Fabric
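
For comparison, the node-level peering that Calico itself runs is configured through its BGPPeer resource; a sketch is shown below with placeholder addresses and AS numbers. A CNF peering over net1 or net2 would instead run its own routing stack inside the pod and point it at the fabric in a similar way.

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: fabric-leaf-peer     # placeholder peer name
spec:
  peerIP: 10.0.0.1           # placeholder address of the fabric leaf switch
  asNumber: 64512            # placeholder AS number of the fabric
```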

To conclude, by combining Kubernetes with meta CNI plugins like Multus and device-oriented CNI plugins like SR-IOV, the fundamental networking requirements of CNFs can be satisfied, enabling the adoption of a cloud-native architecture for 5G deployments.
