What is Kubernetes?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Kubernetes

Kubernetes is Greek for “helmsman” or “pilot”. The project was founded by Joe Beda, Brendan Burns, and Craig McLuckie; other Google engineers joined soon after. The original codename of Kubernetes within Google was Project Seven, a reference to a Star Trek character, and the seven spokes on the wheel of the Kubernetes logo are a nod to that codename. Kubernetes is commonly abbreviated as K8s. It is an open-source system for automating deployment, scaling, and management of containerized applications. Originally designed by Google, it was donated to the Cloud Native Computing Foundation. Kubernetes provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts, and supports a range of container tools, including Docker.

1.2 What is a Container?

Over the past few years, containers have grown in popularity. Containers provide operating-system-level virtualization: a virtualization method in which the kernel of an operating system allows multiple isolated user-space instances to exist, instead of just one. These instances are called containers. A container is a software bucket comprising everything necessary to run the software independently. A single machine can host multiple containers, and containers are completely isolated from one another as well as from the host machine. Containers are also called virtualization engines (VEs) or jails (e.g. the FreeBSD jail). From the point of view of the programs running in them, containers look like real computers. Items usually bundled into a container include the application, its dependencies, libraries, binaries, and configuration files.

1.3 Container – Uses

  • OS-level virtualization is commonly used in virtual hosting environments.
  • Containers are useful for packaging, shipping, and deploying software applications as lightweight, portable, self-sufficient units that will run virtually anywhere.
  • They are useful for securely allocating finite hardware amongst a large number of mutually-distrusting users.
  • System administrators may also use them for consolidating server hardware by moving services on separate hosts into containers on a single server.
  • Containers make it possible to package everything an application needs and migrate that package from one VM to another, or to a server or cloud, without having to refactor the application.
  • Containers usually impose little to no overhead, because programs in virtual partitions use the OS's normal system call interface and do not need to be subjected to emulation or run in an intermediate virtual machine.
  • Containers don't require hardware support to perform efficiently.

1.4 Container – Pros

  • Containers are fast compared to hardware-level virtualization, since there is no need to boot up a full virtual machine. A container allows you to start apps in a virtual, software-defined environment much more quickly.
  • The average container size is in the range of tens of MB, while VMs can take up several gigabytes. Therefore a server can host significantly more containers than virtual machines.
  • Running containers is less resource-intensive than running VMs, so you can add more computing workload onto the same servers.
  • Provisioning containers takes only seconds or less, so the data center can react quickly to a spike in user activity.
  • Containers enable you to easily allocate resources to processes and to run your application in various environments.
  • Using containers can decrease the time needed for development, testing, and deployment of applications and services.
  • Testing and bug tracking also become less complicated, since there is no difference between running your application locally, on a test server, or in production.
  • Containers are a very cost-effective solution. They can potentially help you decrease your operating cost (fewer servers, less staff) and your development cost (develop for one consistent runtime environment).
  • Using containers, developers are able to have truly portable deployments. This helps make Continuous Integration / Continuous Deployment easier.
  • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment.

1.5 Container – Cons

  • Compared to traditional virtual machines, containers are less secure. Containers share the kernel and other components of the host operating system, and container processes often run with root access. This means that containers are less isolated from each other than virtual machines, and a vulnerability in the kernel can jeopardize the security of the other containers as well.
  • A container offers less flexibility in operating systems. You need to start a new server to be able to run containers with different operating systems.
  • Networking can be challenging with containers. Deploying containers in a sufficiently isolated way while maintaining an adequate network connection can be tricky.
  • Developing and testing for containers requires training, whereas writing applications for VMs, which are in effect the same as physical machines, is a straightforward transition for development teams.
  • Single VMs often run multiple applications, whereas containers promote a one-container, one-application infrastructure. This means containerization tends to lead to a higher volume of discrete units to be monitored and managed.

1.6 Composition of a Container

At the core of container technology are:

  • Control Groups (cgroups)
  • Namespaces
  • Union filesystems

1.7 Control Groups

Control groups (cgroups) work by allowing the host to share, and also limit, the resources each process or container can consume. This is important for both resource utilization and security, as it prevents denial-of-service attacks on the host's hardware resources.
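
Docker exposes these cgroup controls as flags on docker run; the following is a minimal sketch (the image choice is arbitrary):

# cap the container at 512 MB of RAM and half a CPU core;
# Docker translates these flags into cgroup settings on the host
docker run -it --memory=512m --cpus=0.5 ubuntu /bin/bash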

1.8 Namespaces

Namespaces offer another form of isolation for process interaction within operating systems. They limit the visibility a process has of other processes, networking, filesystems, and user ID components. Container processes can see only what is in the same namespace. Processes from other containers, and host processes, are not directly accessible from within a container process.
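
Namespace isolation can be observed directly with the unshare utility from util-linux; this is a generic Linux illustration, not Docker-specific:

# start a shell in new PID and mount namespaces (requires root)
sudo unshare --pid --fork --mount-proc /bin/bash
# inside the namespace, ps sees only this shell and ps itself
ps aux
# back on the host, lsns lists all active namespaces
lsns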

1.9 Union Filesystems

Containers run from an image. Much like an image in the VM or cloud world, an image represents state at a particular point in time. Container images are snapshots of a filesystem, and these snapshots tend to be much smaller than VM images, because the container shares the host kernel and generally runs a much smaller set of processes. The filesystem is often layered or multi-leveled; e.g. the base layer can be Ubuntu, with an application such as Apache or MySQL stacked on top of it.
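
The layering is easy to see with docker history, which lists the filesystem layers an image is built from (any image from Docker Hub will do):

# pull an image and list the layers it is built from
docker pull httpd
docker history httpd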

1.10 Popular Containerization Software

  • Docker – Docker Swarm
  • Packer
  • Kubernetes
  • Rocket (rkt)
  • Apache Mesos
  • Linux Containers (LXC)
  • CloudSang
  • Marathon
  • Nomad
  • Fleet
  • Rancher
  • Containership
  • OpenVZ
  • Oracle Solaris Containers
  • Tectonic

1.11 Microservices

The microservice architectural style is an approach to developing a single application as a suite of small services. Each service runs in its own process and communicates via lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and are independently deployable by fully automated deployment machinery.

1.12 Microservices and Containers / Clusters

Containers are excellent for microservices, as they isolate each service. Containerizing single services makes them easier to manage and update. Docker has led to the emergence of frameworks for managing more complex scenarios, such as how to manage single services in a cluster, how to manage multiple instances of a service across hosts, and how to coordinate between multiple services at the deployment and management level. Kubernetes allows easy deployment and management of multiple Docker containers of the same type through an intelligent tagging system. With Kubernetes, you describe the characteristics of the deployment, e.g. the number of instances, CPU, and RAM you would like to allocate.
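
As a sketch of what such a description looks like, here is a minimal Kubernetes Deployment manifest; the names greeting and greeting:1.0 are placeholders, not from the course material:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
spec:
  replicas: 3                  # number of instances
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
      - name: greeting
        image: greeting:1.0    # placeholder image name
        resources:
          requests:            # CPU/RAM reserved for each instance
            cpu: "250m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"

Applying this file with kubectl apply -f greeting-deployment.yml asks Kubernetes to keep three instances running within the stated CPU and RAM bounds.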

1.13 Microservices and Orchestration

Microservices can benefit from deployment to containers. One issue with containers is that they are isolated, while microservices may require communication with each other. Container orchestration can be used to handle this issue. Container orchestration refers to the automated arrangement, coordination, and management of software containers. Container orchestration also helps in tackling challenges such as service discovery, load balancing, secrets/configuration/storage management, health checks, auto-[scaling/restart/healing] of containers and nodes, and zero-downtime deploys.
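
Assuming the Deployment sketched in the previous section, the standard kubectl commands below illustrate several of these orchestration features:

# service discovery and load balancing: expose the deployment as a service
kubectl expose deployment greeting --port=8080
# auto-scaling: keep between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment greeting --min=2 --max=10 --cpu-percent=80
# zero-downtime deploy: roll out a new image version
kubectl set image deployment/greeting greeting=greeting:1.1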

1.14 Microservices and Infrastructure-as-Code

In the old days, you would write a service and let the Operations (Ops) team deploy it to various servers for testing, and eventually production. Infrastructure-as-Code solutions help shorten development cycles by automating the setup of infrastructure. Popular Infrastructure-as-Code solutions include Puppet, Chef, Ansible, Terraform, and Serverless. In the old days, servers were treated as part of the family: they were named, constantly monitored, and carefully updated. With containers and Infrastructure-as-Code solutions, these days the servers (containers) are often not updated at all; instead, they are destroyed and then recreated. Containers and Infrastructure-as-Code solutions treat infrastructure as disposable.
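
As a minimal illustration of the idea (a sketch, not a complete solution), an Ansible playbook can describe a Docker host declaratively; the inventory group name docker_hosts is an assumption:

# site.yml -- apply with: ansible-playbook -i inventory site.yml
- hosts: docker_hosts          # assumed inventory group
  become: yes
  tasks:
    - name: Ensure Docker is installed
      apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: Ensure the Docker daemon is running and enabled
      service:
        name: docker
        state: started
        enabled: yes

Rebuilding a server then amounts to re-running the playbook against a fresh host.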

1.15 Kubernetes Container Networking

Microservices require a reliable way to find and communicate with each other. Microservices in containers and clusters make things more complex, as we now have multiple networking namespaces to bear in mind. Communication and discovery require traversing the container IP space and host networking. Kubernetes benefits from its ancestry in the clustering tools Google has used for the past decade; many of the lessons learned from running and networking two billion containers per week have been distilled into Kubernetes.

1.16 Kubernetes Networking Options

Docker creates three types of networks by default:

  • bridge – this is the default choice. In this mode, the container has its own networking namespace and is then bridged via virtual interfaces to the host network. In this mode, two containers can use the same IP range because they are completely isolated.
  • host – in this mode, performance benefits greatly since it removes a level of network virtualization; however, you lose the security of having an isolated network namespace.
  • none – creates a container with no external interface. Only a loopback device is shown if you inspect the network interfaces.

In all these scenarios, we are still on a single machine, and outside of host mode, the container IP space is not available outside that machine. Connecting containers across two machines therefore requires NAT and port mapping for communication.
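
These modes are selected with the --network flag on docker run; a quick sketch (image choices arbitrary):

# list the networks Docker created by default
docker network ls
# share the host's network namespace (no isolation, no NAT)
docker run -d --network=host nginx
# loopback only, no external interface
docker run -it --network=none ubuntu /bin/bash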

Docker user-defined networks

  • Docker also supports user-defined networks via network plugins:

     bridge driver – allows creation of networks somewhat similar to the default bridge network.

     overlay driver – uses a distributed key-value store to synchronize network creation across multiple hosts.

     macvlan driver – uses interfaces and sub-interfaces on the host. It offers more efficient network virtualization and isolation, as it bypasses the Linux bridge.
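
For example, a user-defined bridge network can be created and used as follows; the network and container names are arbitrary:

# create a user-defined bridge network
docker network create -d bridge my-bridge
# containers attached to it can reach each other by name
docker run -d --name web --network my-bridge nginx
docker run -it --network my-bridge ubuntu /bin/bash
# inside the second container, the hostname 'web' resolves via Docker's embedded DNS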

 Weave

  • Provides an overlay network for Docker containers

Flannel

  • Gives a full subnet to each host/node enabling a similar pattern to the Kubernetes practice of a routable IP per pod or group of containers.

Project Calico

  • Uses built-in routing functions of the Linux kernel.
  • It can be used for anything from small-scale deploys to large Internet-scale installations.
  • There is no need for additional NAT, tunneling, or overlays.

Canal

  • It merges both Calico for network policy and Flannel for overlay into one solution.

1.17 Kubernetes Networking – Balanced Design

Using a unique IP address at the host level becomes problematic as the number of containers grows. Assigning an IP address to each container can also be overkill. At sizable scale, overlay networks and NAT are needed in order to address each container, and overlay networks add latency. You have to pick between fewer containers with multiple applications per container (a unique IP address for each container) or multiple containers with fewer applications per container (overlay networks / NAT).

1.18 Summary

Kubernetes provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes supports a range of container tools, including Docker. Containers are useful for packaging, shipping, and deploying software applications as lightweight, portable, and self-sufficient units that will run virtually anywhere. Microservices can benefit from containers and clustering. Kubernetes offers container orchestration.

What is Docker?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Docker

Docker is an open-source (and 100% free) project for IT automation. You can view Docker as a system or platform for creating virtual environments that are, in effect, extremely lightweight virtual machines. Docker allows developers and system administrators to quickly assemble, test, and deploy applications and their dependencies inside Linux containers, supporting a multi-tenancy deployment model on a single host. Docker's lightweight containers lend themselves to rapid scaling up and down. A container is a group of controlled processes associated with a separate tenant, executed in isolation from other tenants. Docker is written in the Go programming language.

1.2 Where Can I Run Docker?

Docker runs on any modern-kernel 64-bit Linux distribution. The minimum supported kernel version is 3.10; kernels older than 3.10 lack some of the features required by Docker containers. You can install Docker in a VirtualBox VM and run it on OS X or Windows. Docker can be installed natively on Windows using Docker Machine, but this requires Hyper-V. Docker can also be booted from the small-footprint Linux distribution boot2docker.

1.3 Installing Docker Container Engine

Installing on Linux:
Docker is usually available via the distribution's package manager. For example, on Ubuntu and derivatives:
sudo apt-get update && sudo apt install docker.io

Installing on Mac
Download and install the official Docker.dmg from docker.com

Installing on Windows
Hyper-V must be enabled on Windows. Download the latest installer from docker.com
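
Whichever platform you install on, a quick way to verify the installation (run as root or as a user in the docker group):

docker --version
docker run hello-world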

1.4 Docker Machine

Though Docker runs natively on Linux, it may be desirable to have two different host environments, such as Ubuntu and CentOS. To achieve this, VMs running Docker may be used. To simplify management of the different Docker hosts, it is possible to use Docker Machine. Docker Machine is a tool that lets you install Docker Engine on virtual hosts and manage the hosts with docker-machine commands. Docker Machine enables you to provision multiple remote Docker hosts on various flavors of Linux. Additionally, Machine allows you to run Docker on older Mac or Windows systems, as well as on cloud providers such as AWS, Azure, and GCP. Using the docker-machine command, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
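
A typical docker-machine session looks like this; the machine name default and the VirtualBox driver are common choices, not requirements:

# create a new Docker host inside a local VirtualBox VM
docker-machine create --driver virtualbox default
# point the local docker client at that host
eval $(docker-machine env default)
# list, stop, and restart managed hosts
docker-machine ls
docker-machine stop default
docker-machine start default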

1.5 Docker and Containerization on Linux

Docker leverages the resource isolation features of the modern Linux kernel offered by cgroups and kernel namespaces. The cgroups and kernel namespaces features allow creation of strongly isolated containers acting as very lightweight virtual machines running on a single Linux host. Docker helps abstract operating-system-level virtualization on Linux using virtualization interfaces based on libvirt, LXC (LinuX Containers), and systemd-nspawn. As of version 0.9, Docker can directly use the virtualization facilities provided by the Linux kernel via its own libcontainer library.

1.6 Linux Kernel Features: cgroups and namespaces

The control groups kernel feature (cgroups) is used by the Linux kernel to allocate system resources such as CPU, I/O, memory, and network, subject to limits, quotas, prioritization, and other control arrangements. The kernel provides access to multiple subsystems through the cgroups interface.
Examples of subsystems (controllers) are:
  • The memory controller, which limits memory use
  • The cpuacct controller, which keeps track of CPU usage

The cgroups facility was merged into Linux kernel version 2.6.24. Systems that use cgroups include Docker, Linux Containers (LXC), Hadoop, etc. The namespaces feature is related to cgroups and enables different applications to act as separate tenants with completely isolated views of the operating environment, including users, process trees, network, and mounted file systems.
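
You can watch Docker drive the cgroups interface directly; this sketch assumes the cgroup v1 layout with Docker's cgroupfs driver (paths differ under cgroup v2 or the systemd driver):

# start a memory-limited container
docker run -d --memory=256m --name memtest nginx
# read back the limit the kernel enforces (268435456 bytes = 256 MB)
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' memtest)/memory.limit_in_bytes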

1.7 The Docker-Linux Kernel Interfaces


[Figure: the Docker–Linux kernel interfaces (cgroups and namespaces accessed via libcontainer, libvirt, LXC, and systemd-nspawn). Source: adapted from http://en.wikipedia.org/wiki/Docker_(software)]

1.8 Docker Containers vs Traditional Virtualization

System virtualization tools or emulators such as VirtualBox, Hyper-V, or VMware boot virtual machines from a complete guest OS image (of your choice) and emulate a complete machine, which results in high operational overhead. Virtual environments created by Docker run on the host's existing operating system kernel, without the need for a hypervisor. This leads to very low overhead and significantly faster container startup time. Docker-provisioned containers do not include or require a separate operating system (they run in the host's OS), which puts a significant limitation on your OS choices.

1.9 Docker Containers vs Traditional Virtualization

Overall, traditional virtualization has an advantage over Docker in that you have a choice of guest OSes (as long as the machine architecture is supported). With Docker you get only a limited choice of Linux distros, though you still have some choice: e.g. you can deploy a Fedora container on a Debian host. You can, however, run a Windows VM inside a Linux machine using virtual machine emulators like VirtualBox (with less engineering efficiency). With Linux containers, you can achieve a higher level of deployed application density compared with traditional VMs (10x more units!). Docker runs everything through a central daemon, which is not a particularly reliable or secure processing model.

1.10 Docker Integration

Docker can be integrated with a number of IT automation tools that extend its capabilities, including Ansible, Chef, Jenkins, Puppet, and Salt. Docker is also deployed on a number of cloud platforms like Amazon Web Services, Google Cloud Platform, Microsoft Azure, OpenStack, and Rackspace.

1.11 Docker Services

The Docker deployment model is application-centric and in this context provides the following services and tools:
  • A uniform format for bundling an application along with its dependencies which is portable across different machines.
  • Tools for automatically assembling a container from source code: make, Maven, Debian packages, RPMs, etc.
  • Container versioning with deltas between versions.

1.12 Docker Application Container Public Repository

The Docker community maintains a repository of official and community-contributed
Docker application images: https://hub.docker.com
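
Images can be searched for and pulled from Docker Hub with the standard CLI commands:

# search Docker Hub for an image
docker search nginx
# pull the latest tag, or a specific version
docker pull nginx
docker pull nginx:1.17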


1.13 Competing Systems

  • Rocket (rkt), a container runtime from CoreOS (an open-source, lightweight, Linux kernel-based operating system)
  • LXD for Ubuntu, from Canonical (the company behind Ubuntu)
  • LXC (Linux Containers), formerly used by Docker internally

1.14 Docker Command Line

The following commands are shown as executed by the root (privileged) user:
docker run ubuntu echo 'Yo Docker!'
This command will create a Docker container based on the ubuntu image, execute the echo command in it, and then shut the container down.
docker ps -a
This command will list all the containers created by Docker, along with their IDs.

1.15  Starting, Inspecting, and Stopping Docker Containers

docker start -i <container_id>
This command will start an existing stopped container in interactive (-i) mode (you will get the container's STDIN channel)
docker inspect <container_id>
This command will provide JSON-encoded information about the running container identified by container_id
docker stop <container_id>
This command will stop the running container identified by container_id
For the Docker command-line reference, visit https://docs.docker.com/engine/reference/commandline/cli/

1.16 Docker Volume

If you destroy a container and recreate it, you will lose its data. Ideally, data should not be stored in containers. Volumes are mounted file systems made available to containers. Docker volumes are a good way of safely storing data outside a container, and they can be shared across multiple containers.
Creating a Docker volume
docker volume create my-volume
Mounting a volume
docker run -v my-volume:/my-mount-path -it ubuntu:12.04 /bin/bash
Viewing all volumes
docker volume ls
Deleting a volume
docker volume rm my-volume

1.17 Dockerfile

Rather than manually creating containers and saving them as custom images, it is better to use a Dockerfile to build images.
Sample script:
# start from the openjdk base image
FROM openjdk
RUN apt-get update -y
RUN apt-get install sqlite -y
# deploy the jar file to the container
COPY SimpleGreeting-1.0-SNAPSHOT.jar /root/SimpleGreeting-1.0-SNAPSHOT.jar
The Dockerfile filename is case-sensitive: the 'D' in Dockerfile has to be uppercase. Build an image using docker build (mind the space and the period at the end of the command):
docker build -t my-image:v1.0 .
Or, if you want to use a different file name (the build context is still required):
docker build -t my-image:v1.0 -f mydockerfile.txt .
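
Once built, the image can be verified and run; the java -jar path below matches the COPY destination in the sample Dockerfile above:

# confirm the image exists locally
docker images
# run the packaged application (the sample Dockerfile defines no CMD, so supply the command)
docker run my-image:v1.0 java -jar /root/SimpleGreeting-1.0-SNAPSHOT.jar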

1.18 Docker Compose

A container runs a single application. However, most modern applications rely on multiple services, such as databases, monitoring, logging, message queues, etc. Managing a forest of containers individually is difficult, especially when moving the environment from development to test to production. Compose is a tool for defining and running multi-container Docker applications on the same host. A single configuration file, docker-compose.yml, defines a group of containers that must be managed as a single entity.

1.19 Using Docker Compose

  • Define as many Dockerfiles as necessary
  • Create a docker-compose.yml file that refers to the individual Dockerfiles

Sample docker-compose.yml:
version: '3'
services:
  greeting:
    build: .
    ports:
      - "8080:8080"
    links:
      - mongodb
  mongodb:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: wasadmin
      MONGO_INITDB_ROOT_PASSWORD: secret
    volumes:
      - my-volume:/data/db
volumes:
  my-volume: {}

1.20 Dissecting docker-compose.yml

The Docker Compose file should be named either docker-compose.yml or docker-compose.yaml; using any other name requires the -f argument to specify the filename. The docker-compose.yml file is written in YAML
(https://yaml.org/)
The first line, version, indicates the version of the Docker Compose file format being used. As of this writing, version 3 is the latest.

1.21 Specifying services

A ‘service’ in docker-compose parlance is a container. Services are specified under the services: node of the configuration file.
You choose the name of a service; the name is meaningful within the configuration. A service (container) can be specified in one of two ways, Dockerfile or image name, as the sketch below shows.
Use build: to specify the path to a Dockerfile
Use image: to specify the name of an image that is accessible to the host
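
A small sketch showing both styles side by side (service and image names are arbitrary):

services:
  greeting:
    build: ./greeting        # built from the Dockerfile in ./greeting
  cache:
    image: redis:5           # pulled as-is from a registry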

1.22 Dependencies between containers

Some services may need to be brought up before others. In docker-compose.yml, it is possible to specify which service relies on which using the links: node. If service C requires that services A and B be brought up first, add links as follows:
A:
  build: ./servicea
B:
  build: ./serviceb
C:
  build: ./servicec
  links:
    - A
    - B
It is possible to specify as many links as necessary. Circular links are not permitted (A links to B and B links to A).
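
The whole group is then managed as a single entity with the usual Compose commands:

# build (if needed) and start all services, honoring the declared links
docker-compose up -d
# view the status of the services defined in docker-compose.yml
docker-compose ps
# stop and remove the containers and networks
docker-compose down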

1.23 Injecting Environment Variables

In a containerized, microservices application, environment variables are often used to pass configuration to an application.
It is possible to pass environment variables to a service via the docker-compose.yml file:
myservice:
  environment:
    MONGO_INITDB_ROOT_USERNAME: wasadmin
    MONGO_INITDB_ROOT_PASSWORD: secret
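
With the service running, you can confirm that the variables reached the container; myservice is the service name from the snippet above:

docker-compose exec myservice env | grep MONGO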

1.24 runC Overview

Over the last few years, Linux has gradually gained a collection of features. Windows 10 and Windows Server 2016+ have added similar features. These individual features have esoteric names like “control groups”, “namespaces”, “seccomp”, “capabilities”, “apparmor”, and so on. Collectively, they are known as “OS containers” or sometimes “lightweight virtualization”. Docker makes heavy use of these features and has become famous for it. Because “containers” are actually an array of complicated, sometimes arcane system features, they have been integrated into a unified low-level component called runC. runC is now available as a standalone tool: a lightweight, portable container runtime. It includes all of the plumbing code used by Docker to interact with the system features related to containers, and it has no dependency on the rest of the Docker platform.

1.25 runC Features

  • Full support for Linux namespaces, including user namespaces
  • Native support for all security features available in Linux: SELinux, AppArmor, seccomp, control groups, capability drop, pivot_root, uid/gid dropping, etc. If Linux can do it, runC can do it.
  • Native support for live migration, with the help of the CRIU team at Parallels
  • Native support for Windows 10 containers, contributed directly by Microsoft engineers
  • Planned native support for Arm, Power, and Sparc, with direct participation and support from Arm, Intel, Qualcomm, IBM, and the entire hardware manufacturers' ecosystem
  • Planned native support for bleeding-edge hardware features: DPDK, SR-IOV, TPM, secure enclaves, etc.

1.26 Using runC

In order to use runC, you must have your container in the format of an Open Container Initiative (OCI) bundle. If you have Docker installed, you can use its export method to acquire a root filesystem from an existing Docker container.
# create the topmost bundle directory
mkdir /mycontainer
cd /mycontainer
# create the rootfs directory
mkdir rootfs
# export busybox via Docker into the rootfs directory
docker export $(docker create busybox) | tar -C rootfs -xvf -
After a root filesystem is populated, you just generate a spec, in the format of a config.json file, inside your bundle:
runc spec

1.27 Running a Container using runC

The first way is to use the convenience command run, which handles creating, starting, and deleting the container after it exits:
# run as root
cd /mycontainer
runc run mycontainerid

The second way is to implement the entire lifecycle (create, start, connect, and delete the container) manually:
# run as root
cd /mycontainer
runc create mycontainerid
# view the container is created and in the “created” state
runc list
# start the process inside the container
runc start mycontainerid
# after 5 seconds view that the container has exited and is now in the stopped state
runc list
# now delete the container
runc delete mycontainerid

1.28 Summary

  • Docker is a system for creating virtual environments which are, for all intents and purposes, lightweight virtual machines.
  • Docker containers can only run the type of OS that matches the host's OS.
  • Docker containers are extremely lightweight (although not as robust and secure), allowing you to achieve a higher level of deployed application density compared with traditional VMs (10x more units!).
  • On-demand provisioning of applications by Docker supports Platform-as-a-Service (PaaS)-style deployment and scaling.
  • runC is a container runtime which supports various containerization solutions.

Current Trends in Infrastructure Training

Due to the popularity of infrastructure concepts like DevOps, CI/CD, automation, cloud, and containerization, here are the general industry trends seen in infrastructure over the last few years.

Overall

There is a general trend towards more automation and CI/CD due to the adoption of DevOps practices and ideas in most organizations. Couple this with the move towards Agile and DevOps, and you are seeing the infrastructure position, whether it is titled System Engineer, System Administrator, DevOps Engineer, Infrastructure Engineer, etc., move into coding/orchestrating sophisticated self-provisioned offerings. What does that mean? Infrastructure teams are now expected not only to provide AWS-style automated pipelines that do everything from delivering working code from source to production, but also to create production systems in a similar manner. This expectation is driving a bevy of other skill sets into the infrastructure role, and expectations are very high in terms of technical capabilities if you are in this kind of position.

Programming Languages

To be in DevOps in 2020, you must know a programming language not as a dabbler or hacker, but as someone who can create/build/manipulate production customer-facing software. This means not just Bash shell scripting, but a full-blown language like Python, Java, .Net, or Go, probably preferred in that order. You will still be required to know Bash because it is everywhere. There is also a trend for everyone to know the company's product language (Java, Python, etc.) so they can support it and troubleshoot it from an infrastructure perspective. Bash is still king for most infrastructure teams, with Python and Go next in that order for most shops that are not Windows-based. For Windows-based shops it is mainly PowerShell, .Net, and Python. Java is not favored among early adopters and startups, but is still used in enterprises and by late adopters, not for automation but for product support.

Automation/Orchestration tools

In addition, you must have an orchestration tool in play in your infrastructure. Technologies like Jenkins, Bamboo, TravisCI, TeamCity, GitLab, and GitHub Actions (pending) are all emerging, with Jenkins being the dominant player currently. Jenkins is slowly losing respect, though, as it is not emerging tech and doesn't have great API integration without a lot of manipulation. The plugin community that enables integrations is not keeping things up to date as quickly as desired, and thus people are looking more and more at GitLab or GitHub Actions, or paying for enterprise versions to cover the gap. Most enterprises are also locked into their current tool already through configuration investment and/or licensing.

Configuration Management

While this area is dipping in popularity, in part due to the rise of immutable infrastructure concepts like Docker and VM images for every provider (AMI, VMware, etc.), the configuration technologies in this space are extending like crazy to cover other areas such as cloud provisioning. That being said, Ansible is the rising star here, with old holdouts like Chef and Puppet and newcomer Salt still present but seemingly losing ground.

Infrastructure Provisioning

This is a training area tied closely to the infrastructure provider (if you are using AWS, you use AWS CloudFormation; Microsoft Azure uses Azure Resource Manager templates; etc.), but multi-cloud was the battle cry in 2019 and it will be in 2020 as well. Since hybrid cross-cloud is a target, especially with Kubernetes, there is a push for infrastructure provisioners that go “across” cloud providers and data center boundaries. The only tool that does this at present, and one that almost every DevOps engineer I know either loves or hates (no middle ground), is HashiCorp's Terraform. Classes on this are hot right now. There are no real cross-competitive tools as of my most recent knowledge in late 2019.

Image Management

Immutable infrastructure as an architecture is on the rise in most organizations, coupled with cloud and containers. As such, there is really only one tool that does this well, and that is HashiCorp's Packer. Few are offering good classes on this, but since it generates VM and container images that can be run across cloud providers, it is gaining in popularity. It needs a configuration management tool (see above) to function properly.

Containerization and Container Orchestration 

This is an overwhelmingly popular area of technology, and the emerging trend here is mainly around container orchestration, not the container runtime technologies themselves, of which Docker remains king. Kubernetes is the container orchestration alpha predator and has by far eaten and dominated all other competitors. The hype train on this is strong, and many organizations are pushing for it without regard to impact or value. As such, many are looking for speed to market for their Kubernetes initiatives, so IBM/Red Hat OpenShift, Rancher, Platform9, and Pivotal are key technologies. Managed Kubernetes services in the cloud, like Azure Kubernetes Service or Amazon's Elastic Kubernetes Service, are also part of this trend. Trend-wise, enterprises are using IBM/Red Hat OpenShift while smaller companies are looking at Rancher (FOSS). Everyone is looking at the cloud-provided Kubernetes services like Amazon Elastic Kubernetes Service, but something to note is that most infrastructures are still using virtual machines.

DevOps

DevOps is emerging mainly with a focus on tools, de-emphasizing its cultural and silo-breaking roots to a certain degree, so any classes that focus on metrics (Prometheus, Grafana, Graphite, Nagios, CloudWatch, Datadog) are going to be popular. Anything to do with Continuous Integration/Continuous Delivery/Continuous Deployment in almost any of the popular tool configurations (i.e. CI/CD with Jenkins, Maven, and Artifactory) is also in trend. DevOps classes that bring all the aforementioned tools together into a cohesive delivery system are going to be amazingly popular as well.

Cloud

Trends here depend on which provider an organization is using, but everyone is now chasing a cloud provider, either because their clients want it or because they see the obvious benefit of hybridizing their cloud infrastructures. AWS is seen as the leader, and thus AWS trainings are favored, but multi-cloud is the battle cry of most enterprises, who are either chasing customers or worried about competition on other fronts from Amazon.com. Since AWS funds most of Amazon.com's operating capital, business-wise enterprise leaders are looking to avoid funding Amazon.com's invasion into IoT, machine learning, last-mile delivery, streaming services, etc. That all being said, most are still looking at AWS as their dominant cloud provider, but with an eye on hybrid data center computing.