Kubernetes Security

This tutorial is adapted from Web Age course Kubernetes for Developers Training.

1.1 Security Overview

Security is critical to production deployments. Kubernetes offers several features to secure your environment:

  • authentication
  • authorization
      • ABAC
      • RBAC (Role, ClusterRole, RoleBinding, ClusterRoleBinding)
  • network policies

1.2 API Server

Kubernetes has a built-in API server that provides access to objects such as nodes, pods, deployments, services, secrets, config maps, and namespaces. These objects are exposed via a simple REST API through which basic CRUD operations are performed. The API Server acts as the gateway to the Kubernetes platform: components such as the kubelet, scheduler, and controller manager coordinate with each other through it, and the distributed key/value database, etcd, is accessible only through the API Server. In the Kubernetes API, most resources are represented and accessed using a string representation of their object name, such as pods for a Pod. Some Kubernetes APIs involve a subresource, such as the logs for a Pod. A request for a Pod’s logs looks like:

GET /api/v1/namespaces/{namespace}/pods/{name}/log
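
If you want to call such REST endpoints directly, one common approach is to run kubectl proxy, which authenticates to the API Server on your behalf and exposes it on a local port. A minimal sketch (the pod name my-pod is hypothetical):

kubectl proxy --port=8001 &

# list pods in the default namespace
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods

# fetch the logs subresource of a (hypothetical) pod named my-pod
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log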

1.3 API & Security

Both the kubectl CLI tool and the web portal talk to the API Server. Before an object is accessed or manipulated within the Kubernetes cluster, the request needs to be authenticated by the API Server. The REST endpoint uses TLS based on X.509 certificates to secure and encrypt the traffic. The CA certificate and client certificate information are stored in ~/.kube/config.

You can view the file using any text editor, or by running the following command:

kubectl config view

1.4  ~/.kube/config

Sample ~/.kube/config file

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/test/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/test/.minikube/client.crt
    client-key: /Users/test/.minikube/client.key

1.5 ~/.kube/config (contd.)

The file ca.crt represents the CA used by the cluster. The client.crt and client.key files map to the user minikube, which is the default cluster-admin. kubectl uses the certificate and key from the current context to authenticate and encrypt its requests.


1.6 Kubernetes Access Control Layers

When a valid request hits the API Server, it goes through three stages before it is either allowed or denied.

  • Authentication
  • Authorization
  • Admission Controller

1.7 Authentication

After the request gets past TLS, it passes through the authentication phase, which involves authentication modules. Authentication modules are configured by the administrator during the cluster creation process. Examples of authentication modules: client certificates, passwords, plain tokens, bootstrap tokens, and JWT tokens (used for service accounts). Details of authentication modules are available on the Kubernetes website: https://kubernetes.io/docs/reference/access-authn-authz/authentication/

Client certificates are the default and most common scenario. External authentication mechanisms provided by OpenID, GitHub, or even LDAP can be integrated with Kubernetes through one of the authentication modules.
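
With client certificate authentication, the API Server takes the username from the certificate’s Common Name (CN) and group memberships from its Organization (O) fields. You can inspect the minikube client certificate from the sample config above with openssl (the exact subject varies by minikube version):

openssl x509 -in ~/.minikube/client.crt -noout -subject

# example output:
# subject=O = system:masters, CN = minikube-user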

1.8 Authorization

After authentication, the next step is to determine whether the operation is allowed or not. 

For authorizing a request, Kubernetes looks at three aspects:

  • the username of the requester – extracted from the token embedded in the header
  • the requested action – one of the HTTP verbs like GET, POST, PUT, DELETE mapped to CRUD operations
  • the object affected by the action – one of the valid Kubernetes objects such as a pod or a service.

Kubernetes determines the authorization based on an existing policy. By default, Kubernetes follows a deny-by-default (closed) philosophy, which means an explicit allow policy is required to access any resource. Like authentication, authorization is configured based on one or more modes/modules, such as:

  • RBAC
  • ABAC

1.9 ABAC Authorization

Attribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted to users through the use of policies that combine attributes. ABAC uses a policy file where one JSON object is listed per line. Each line in the JSON policy file is a policy object.

If you are using the Minikube distribution, you can enable ABAC authorization like this:

minikube start --extra-config=apiserver.authorization-mode=ABAC --extra-config=apiserver.authorization-policy-file=/path/to/your/abac/policy.json

1.10 ABAC – Policy Format

Versioning properties:

  • apiVersion: "abac.authorization.kubernetes.io/v1beta1"
  • kind: "Policy"

spec: property set to a map with the following properties:

Subject-matching properties:

  • user: "userName"
  • group: "groupName" | "system:authenticated" | "system:unauthenticated"

Resource-matching properties:

  • apiGroup: "*" | "extensions"
  • namespace: "*" | "your_custom_namespace"
  • resource: "*" | "pods" | "deployments" | "services", …

Non-resource-matching properties:

  • nonResourcePath: "/version" | "*"

readonly: true | false, type boolean. When true, a resource-matching policy only applies to get, list, and watch operations, and a non-resource-matching policy only applies to the get operation.

1.11 ABAC – Examples

Alice can do anything to all resources:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}

The Kubelet can read any pods:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "pods", "readonly": true}}

The Kubelet can read and write events:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "kubelet", "namespace": "*", "resource": "events"}}

Bob can just read pods in namespace “projectCaribou”:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}

Anyone can make read-only requests to all non-resource paths:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:unauthenticated", "readonly": true, "nonResourcePath": "*"}}

1.12  RBAC Authorization

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.

RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

RBAC authorization involves the following resources:

  • Role
  • ClusterRole
  • RoleBinding
  • ClusterRoleBinding

RBAC is the default authorization mode. If you want to explicitly specify this mode, you can use the following command with the Minikube distribution:

minikube start --extra-config=apiserver.authorization-mode=RBAC
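
You can confirm that RBAC is active by checking that the API Server exposes the rbac.authorization.k8s.io API group:

kubectl api-versions | grep rbac.authorization.k8s.io

# expected output:
# rbac.authorization.k8s.io/v1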

1.13 Role and ClusterRole

An RBAC Role or ClusterRole contains rules that represent a set of permissions.

Role – always sets permissions within a particular namespace. When you create a Role, you have to specify the namespace it belongs in. Treat it as a project-scoped role that grants a user access to a specific namespace.

ClusterRole – is a non-namespaced resource. A ClusterRole can define permissions on namespaced resources (to be granted either within individual namespaces or across all namespaces) as well as on cluster-scoped resources, such as nodes. For example, you can use a ClusterRole to allow a particular user to run kubectl get pods --all-namespaces. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can’t be both.


1.14 Role – Example

Here’s an example Role in the “marketing” namespace that can be used to grant read access to pods:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: marketing
  name: marketing-pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]


1.15 ClusterRole – Example

Here is an example of a ClusterRole that can be used to grant read access to nodes:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: nodes-reader
rules:
- apiGroups: [""]
  #
  # at the HTTP level, the name of the resource for accessing Node
  # objects is "nodes"
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]


1.16 RoleBinding and ClusterRoleBinding

A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted. A RoleBinding grants permissions within a specific namespace whereas ClusterRoleBinding grants that access cluster-wide. A RoleBinding may reference any Role in the same namespace. If you want to bind a ClusterRole to all the namespaces in your cluster, you use a ClusterRoleBinding.

1.17 RoleBinding – Example

Here is an example of a RoleBinding that grants the “pod-reader” Role to the user “alice” within the “sales” namespace. This allows “alice” to read pods in the “sales” namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: sales
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
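
Once the binding is applied, you can verify the granted permissions with kubectl auth can-i:

kubectl auth can-i get pods --namespace=sales --as=alice
# yes
kubectl auth can-i delete pods --namespace=sales --as=alice
# no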

1.18 ClusterRoleBinding – Example

The following ClusterRoleBinding allows any user in the group “manager” to read deployments in any namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-deployment-global
subjects:
- kind: Group
  name: manager 
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: deployment-reader
  apiGroup: rbac.authorization.k8s.io
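
The same binding can be created imperatively:

kubectl create clusterrolebinding read-deployment-global --clusterrole=deployment-reader --group=manager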

1.19 Authorization Modes – Node

A special-purpose authorization mode that grants permissions to kubelets based on the pods they are scheduled to run. To learn more, see the Node authorization documentation on the Kubernetes website: https://kubernetes.io/docs/reference/access-authn-authz/node/

1.20 Authorization Modes – ABAC

In Attribute-based access control (ABAC), access rights are granted to users through the use of policies that combine attributes. The policies can use any type of attributes (user attributes, resource attributes, environment attributes, etc.). To enable ABAC mode, specify --authorization-policy-file=SOME_FILENAME and --authorization-mode=ABAC on startup.

1.21 Admission Controller

After authorization, the request goes through the final stage: Admission Controller. Admission controllers limit requests that create, delete, or modify objects, or connect to (proxy) them; they do not act on read requests. For example, an admission control module may be used to enforce an image-pull policy each time a pod is created. There are various admission controllers compiled into the kube-apiserver binary. Here are some of them:

  • AlwaysPullImages: When this admission controller is enabled, images are always pulled before starting containers, which means valid credentials are required.
  • CertificateApproval: This admission controller observes requests to approve CertificateSigningRequest resources.

For more details, refer to Kubernetes doc: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
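
Admission controllers are enabled on the kube-apiserver with its --enable-admission-plugins flag. With Minikube you can pass it through --extra-config, for example (a sketch; the default plugin set varies by Kubernetes version):

minikube start --extra-config=apiserver.enable-admission-plugins=AlwaysPullImages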

1.22 Network Policies

Network policies are the Kubernetes equivalent of a firewall: they specify how groups of pods are allowed to communicate with each other and with other network endpoints. Each network policy has a podSelector field, which selects a group of pods; when a pod is selected by a network policy, that policy applies to it. Each network policy also specifies a list of allowed (ingress and egress) connections. When the network policy is created, all the pods that it applies to are allowed to make or accept only the connections listed in it. If no network policies apply to a pod, the pod is non-isolated and all connections to and from it are permitted. Network policies require a network plugin that enforces them; although Kubernetes allows the creation of network policies, they aren’t enforced unless such a plugin is installed and configured. There are various plugins, such as Calico, Cilium, Kube-router, Romana, and Weave Net.

1.23 Network Policies – Examples

You can apply various network policies, such as:

  • Limit access to services (see the sketch after this list)
  • Pod isolation
  • Allow internet access for pods
  • Allow pod-to-pod communication within the same or different namespaces.
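
As a sketch of the first item, the following policy limits access to pods backing an API service so that only frontend pods may reach them; the labels app=api and role=frontend are hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend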

You can find various useful network policy recipes at the following sites:

https://github.com/ahmetb/kubernetes-network-policy-recipes

https://github.com/stackrox/network-policy-examples

1.24 Network Policies – Pod Isolation

Pods are “isolated” if at least one network policy applies to them; if no policies apply, they are “non-isolated”. Network policies are not enforced on non-isolated pods. This behavior exists to make it easier to get a cluster up and running: a user who does not understand network policies can run their applications without having to create one. It’s recommended that you start by applying a “default-deny-all” network policy. The effect of the default-deny-all policy specification is to isolate all pods, which means that only connections explicitly listed by other network policies will be allowed.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress

Notes

Since network policies are namespaced resources, you will need to create this policy for each namespace. You can do so by running kubectl -n <namespace> create -f <filename> for each namespace.
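
For example, assuming the policy above is saved as default-deny-all.yaml, a small shell loop can apply it to every existing namespace:

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n "$ns" apply -f default-deny-all.yaml
done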

1.25 Network Policies – Internet Access for Pods

With just the default-deny-all policy in place in every namespace, none of your pods will be able to talk to each other or receive traffic from the Internet. For most applications to work, you will need to allow some pods to receive traffic from outside sources.

The following network policy allows traffic from all sources for pods having the custom networking/allow-internet-access=true label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internet-access
spec:
  podSelector:
    matchLabels:
      networking/allow-internet-access: "true"
  policyTypes:
  - Ingress
  ingress:
  - {}
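
Pods opt in to this policy through their labels. For example (the pod name web-frontend is hypothetical):

kubectl label pod web-frontend networking/allow-internet-access=true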

1.26  Network Policies – New Deployments

When you create new deployments, they will not be able to talk to anything by default until you apply a network policy. You can create custom network policies that allow deployments/pods labeled networking/allow-all-connections=true to talk to all other pods in the same namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-new
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          networking/allow-all-connections: "true"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-new
spec:
  podSelector:
    matchLabels:
      networking/allow-all-connections: "true"
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
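
With these two policies in place, a new deployment only needs the label on its pod template to communicate with the rest of the namespace. A minimal sketch (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-new-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-new-app
  template:
    metadata:
      labels:
        app: my-new-app
        networking/allow-all-connections: "true"
    spec:
      containers:
      - name: app
        image: nginx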

1.27 Summary

In this tutorial, you learned the following:

  • Security Overview
  • Accessing the API
  • Authentication
  • Authorization
  • ABAC and RBAC
  • Admission Controller
  • Network Policies

Kubernetes Architecture

This tutorial is adapted from Web Age course Docker and Kubernetes Administration.

1.1 Architecture Diagram

In this tutorial, we will review various parts of the following architecture diagram:

Figure: Kubernetes Architecture

1.2 Components

Cluster – includes one or more master and worker nodes

Master – manages nodes and pods

(worker) Node – a physical, virtual, or cloud machine

Pod – a group of one or more containers, created and managed by Kubernetes

Container – most commonly a Docker container, where application processes run

Volume – a directory of data accessible to containers in a pod. It shares a lifetime with the pod it belongs to.

Namespace – a virtual cluster. Allows for multiple virtual clusters within a physical one.

1.3 Kubernetes Cluster

A Kubernetes cluster is a set of machines (nodes) used to run containerized applications. To do work, a cluster needs at least one master node and one worker node. The master node determines where and what is run on the cluster. Worker nodes contain pods, pods contain containers, and containers hold execution environments where work can be done. A cluster is configured via the kubectl command-line interface or through the Kubernetes API.
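
For example, you can inspect a running cluster with:

kubectl cluster-info
kubectl get nodes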

1.4 Master Node

 The Master node manages worker nodes.

The master node includes several components:

kube-apiserver – traffic enters the cluster here

kube-controller-manager – runs the cluster’s controllers

etcd – maintains cluster state, provides key-value persistence

kube-scheduler – schedules activities to worker nodes

Clusters can have more than one master node, but only one master node is active at a time.


1.5 Kube-Controller-Manager

The kube-controller-manager (part of the master node) manages the following controllers:

Node controller

Replication controller

Endpoints controller

Service account controller

Token controller

All these controller operations are compiled into a single application. The controllers are responsible for the configuration and health of the cluster’s components.

1.6 Nodes

A node consists of a physical, virtual, or cloud machine where Kubernetes can run Pods that house containers. Clusters have one or more nodes. Nodes can be configured manually through kubectl. Nodes can also self-configure by sending their information to the Master when they start up. Information about running nodes can be viewed with kubectl.
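
For example:

kubectl get nodes -o wide          # list nodes with IPs, OS image, and container runtime
kubectl describe node <node-name>  # detailed state of a single node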

Notes

Other components found on the worker node include:

kubelet – interacts with the master node, manages containers and pods on the node

kube-proxy – responsible for network configuration

container runtime – responsible for running containers in the pods (typically Docker)


1.7 Other Components

Pods – Logical container for runtime containers

Containers – Pods typically contain Docker runtime containers holding OS images and applications. Work is run in containers.


1.8 Interacting with Kubernetes

All user interaction goes through the master node’s api-server. kubectl provides a command-line interface to the API. Control of Kubernetes can also be done through the Kubernetes Dashboard (web UI).


1.9 Summary

In this tutorial, we covered:

Architecture Diagram

Components

Cluster

Master

Node

Pod

Container

Interaction through API

What is Kubernetes?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Kubernetes

Kubernetes is Greek for “helmsman” or “pilot”. The project was founded by Joe Beda, Brendan Burns, and Craig McLuckie; afterward, other Google engineers also joined. The original codename of Kubernetes within Google was Project Seven, a reference to a Star Trek character, and the seven spokes on the wheel of the Kubernetes logo are a nod to that codename. Kubernetes is commonly referred to as K8s. It is an open-source system for automating deployment, scaling, and management of containerized applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation. It provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts, and supports a range of container tools, including Docker.

1.2 What is a Container?

Over the past few years, containers have grown in popularity. Containers provide operating-system-level virtualization: a virtualization method in which the kernel of an operating system allows the existence of multiple isolated user-space instances, instead of just one. Such instances are called containers. A container is a software bucket comprising everything necessary to run the software independently. There can be multiple containers on a single machine, and containers are completely isolated from one another as well as from the host machine. Containers are also called virtualization engines (VEs) or jails (e.g. FreeBSD jail). Containers look like real computers from the point of view of programs running in them. Items usually bundled into a container include the application, dependencies, libraries, binaries, and configuration files.

1.3 Container – Uses

  • OS-level virtualization is commonly used in virtual hosting environments.
  • Containers are useful for packaging, shipping, and deploying software applications as lightweight, portable, and self-sufficient units that will run virtually anywhere.
  • They are useful for securely allocating finite hardware amongst a large number of mutually-distrusting users.
  • System administrators may also use them for consolidating server hardware by moving services on separate hosts into containers on a single server.
  • Containers are useful for packaging everything the app needs and migrating it from one VM to another, to a server, or to the cloud without having to refactor the app.
  • Containers usually impose little to no overhead, because programs in virtual partitions use the OS’s normal system call interface and do not need to be subjected to emulation or run in an intermediate virtual machine.
  • Containers don’t require hardware support to perform efficiently.

1.4 Container – Pros

  • Containers are fast compared to hardware-level virtualization, since there is no need to boot up a full virtual machine. A container allows you to start apps in a virtual, software-defined environment much more quickly.
  • The average container size is within the range of tens of MB, while VMs can take up several gigabytes. Therefore a server can host significantly more containers than virtual machines.
  • Running containers is less resource intensive than running VMs, so you can add more computing workload onto the same servers.
  • Provisioning containers only takes a few seconds or less; therefore, the data center can react quickly to a spike in user activity.
  • Containers can enable you to easily allocate resources to processes and to run your application in various environments.
  • Using containers can decrease the time needed for development, testing, and deployment of applications and services.
  • Testing and bug tracking also become less complicated, since there is no difference between running your application locally, on a test server, or in production.
  • Containers are a very cost-effective solution. They can potentially help you decrease your operating cost (fewer servers, less staff) and your development cost (develop for one consistent runtime environment).
  • Using containers, developers are able to have truly portable deployments. This helps in making Continuous Integration / Continuous Deployment easier.
  • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment.

1.5 Container – Cons

  • Compared to traditional virtual machines, containers are less secure. Containers share the kernel and other components of the host operating system, and they have root access. This means that containers are less isolated from each other than virtual machines, and if there is a vulnerability in the kernel, it can jeopardize the security of the other containers as well.
  • A container offers less flexibility in operating systems. You need to start a new server to be able to run containers with a different operating system.
  • Networking can be challenging with containers. Deploying containers in a sufficiently isolated way while maintaining an adequate network connection can be tricky.
  • Developing and testing for containers requires training, whereas writing applications for VMs, which are in effect the same as physical machines, is a straightforward transition for development teams.
  • Single VMs often run multiple applications, whereas containers promote a one-container, one-application infrastructure. This means containerization tends to lead to a higher volume of discrete units to be monitored and managed.

1.6 Composition of a Container

At the core of container technology are:

  • Control Groups (cgroups)
  • Namespaces
  • Union filesystems

1.7 Control Groups

Control groups (cgroups) work by allowing the host to share and also limit the resources each process or container can consume. This is important for both resource utilization and security, as it prevents denial-of-service attacks on the host’s hardware resources.
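
Kubernetes surfaces cgroup-based limits through a pod’s resource requests and limits. A minimal sketch (pod and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"      # scheduler reserves a quarter of a CPU core
        memory: "64Mi"
      limits:
        cpu: "500m"      # enforced via the CPU cgroup controller
        memory: "128Mi"  # the container is killed if it exceeds this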

1.8 Namespaces

Namespaces offer another form of isolation for process interaction within operating systems. They limit the visibility a process has of other processes, networking, filesystems, and user ID components. Container processes are limited to seeing only what is in the same namespace; processes from other containers or from the host are not directly accessible from within a container process.

1.9 Union Filesystems

Containers run from an image; much like an image in the VM or cloud world, it represents state at a particular point in time. Container images snapshot the filesystem, and the snapshot tends to be much smaller than a VM image, since the container shares the host kernel and generally runs a much smaller set of processes. The filesystem is often layered or multi-leveled; e.g., the base layer can be Ubuntu, with an application such as Apache or MySQL stacked on top of it.
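
A Dockerfile illustrates this layering: each instruction adds a new layer on top of the base image (a sketch; package and file names are illustrative):

# Base layer: Ubuntu
FROM ubuntu:20.04

# New layer: install Apache on top of the base
RUN apt-get update && apt-get install -y apache2

# Thin final layer: add application content
COPY index.html /var/www/html/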

1.10 Popular Containerization Software

  • Docker – Docker Swarm
  • Packer
  • Kubernetes
  • Rocket (rkt)
  • Apache Mesos
  • Linux Containers (LXC)
  • CloudSang
  • Marathon
  • Nomad
  • Fleet
  • Rancher
  • Containership
  • OpenVZ
  • Oracle Solaris Containers
  • Tectonic

1.11 Microservices

The microservice architectural style is an approach to developing a single application as a suite of small services. Each service runs in its own process and communicates with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.

1.12 Microservices and Containers / Clusters

Containers are excellent for microservices, as they isolate each service. Containerization of single services makes it easier to manage and update those services. Docker has led to the emergence of frameworks for managing complex scenarios, such as how to manage single services in a cluster, how to manage multiple instances of a service across hosts, and how to coordinate between multiple services at the deployment and management level. Kubernetes allows easy deployment and management of multiple Docker containers of the same type through an intelligent tagging system. With Kubernetes, you describe the characteristics of the deployment, e.g. the number of instances, CPU, and RAM you would like to use.
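
A Deployment manifest captures these characteristics declaratively. A minimal sketch (the service name and image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service
spec:
  replicas: 3                        # number of instances
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example/catalog:1.0   # hypothetical image
        resources:
          requests:
            cpu: "100m"              # CPU to reserve
            memory: "128Mi"          # RAM to reserve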

1.13 Microservices and Orchestration

Microservices can benefit from deployment to containers. An issue with containers is that they are isolated, yet microservices might require communication with each other. Container orchestration can be used to handle this issue. Container orchestration refers to the automated arrangement, coordination, and management of software containers. Container orchestration also helps in tackling challenges such as service discovery, load balancing, secrets/configuration/storage management, health checks, auto-scaling/restarting/healing of containers and nodes, and zero-downtime deploys.

1.14 Microservices and Infrastructure-as-Code

In the old days, you would write a service and let the Operations (Ops) team deploy it to various servers for testing, and eventually production. Infrastructure-as-Code solutions help shorten development cycles by automating the setup of infrastructure. Popular infrastructure-as-code solutions include Puppet, Chef, Ansible, Terraform, and Serverless. In the old days, servers were treated as part of the family: they were named, constantly monitored, and carefully updated. Thanks to containers and infrastructure-as-code solutions, these days servers (containers) are often not updated; instead, they are destroyed and recreated. Containers and infrastructure-as-code solutions treat infrastructure as disposable.

1.15 Kubernetes Container Networking

Microservices require a reliable way to find and communicate with each other. Microservices in containers and clusters can make things more complex, as we now have multiple networking namespaces to bear in mind. Communication and discovery require traversing container IP space and host networking. Kubernetes benefits from its ancestry in the clustering tools used by Google for the past decade; many of the lessons learned from running and networking two billion containers per week have been distilled into Kubernetes.

1.16 Kubernetes Networking Options

Docker creates three types of networks by default: bridged, host, and none.

  • bridged – this is the default choice. In this mode, the container has its own networking namespace and is then bridged via virtual interfaces to the host network. Two containers on different hosts can use the same IP range because they are completely isolated from each other.
  • host – in this mode, performance benefits greatly, since it removes a level of network virtualization; however, you lose the security of having an isolated network namespace.
  • none – creates a container with no external interface. Only a loopback device is shown if you inspect the network interfaces.

In all these scenarios, we are still on a single machine, and outside of host mode, the container IP space is not available outside the machine. Connecting containers across two machines therefore requires NAT and port mapping for communication.

Docker user-defined networks

  • Docker also supports user-defined networks via network plugins:

     bridge driver – allows creation of networks somewhat similar to the default bridge network.

     overlay driver – uses a distributed key-value store to synchronize network creation across multiple hosts.

     macvlan driver – uses the interface and sub-interfaces on the host. It offers more efficient network virtualization and isolation, as it bypasses the Linux bridge.

 Weave

  • Provides an overlay network for Docker containers

Flannel

  • Gives a full subnet to each host/node enabling a similar pattern to the Kubernetes practice of a routable IP per pod or group of containers.

Project Calico

  • Uses built-in routing functions of the Linux kernel.
  • It can be used for anything from small-scale deployments to large, Internet-scale installations.
  • There is no need for additional NAT, tunneling, or overlays.

Canal

  • It merges both Calico for network policy and Flannel for overlay into one solution.

1.17 Kubernetes Networking – Balanced Design

Using a unique IP address at the host level is problematic as the number of containers grows, and assigning an IP address to each container can also be overkill. In cases of sizable scale, overlay networks and NAT are needed in order to address each container, and overlay networks add latency. You have to pick between fewer containers with multiple applications per container (a unique IP address for each container) or more containers with fewer applications per container (overlay networks / NAT).

1.18 Summary

Kubernetes provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes supports a range of container tools, including Docker. Containers are useful for packaging, shipping, and deploying software applications as lightweight, portable, and self-sufficient units that will run virtually anywhere. Microservices can benefit from containers and clustering. Kubernetes offers container orchestration.