Google Cloud Platform Container Services

 
June 3, 2021 by Bibhas Bhattacharya
Category: Cloud

This tutorial is adapted from the Web Age course Getting Started with Google Kubernetes Engine.

1.1 What are Containers?

Containers are a form of lightweight, OS-level, portable operating system virtualization. Containers are significantly lighter weight (in terms of storage footprint, spin-up time, and system overhead) than traditional (full) machine virtualization technologies because they do not carry full operating system images; instead, they delegate OS functions to the underlying host machine. Containers contain only the executables, libraries, and configuration files needed to run your containerized applications. Popular containerization technologies are Docker [https://www.docker.com/] and Podman [https://podman.io/]. One popular production-grade container cluster orchestration technology is Kubernetes (a.k.a. K8s), initially developed by Google [https://kubernetes.io/].
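
The "only what the application needs" idea can be illustrated with a minimal, hypothetical Dockerfile: the image consists of a small base layer plus one application binary and its configuration file, and nothing else (the file names here are placeholders):

```dockerfile
# Hypothetical example: a single statically linked binary on a minimal base image.
FROM alpine:3.18
COPY ./my-app /usr/local/bin/my-app    # the application executable
COPY ./my-app.conf /etc/my-app.conf    # its configuration file
ENTRYPOINT ["/usr/local/bin/my-app"]
```

The resulting image is typically a few megabytes plus the binary, versus the gigabytes a full VM image would occupy.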

1.2 What are Containers For?

Because containers start appreciably faster than traditional virtual machines and carry less overhead, they are a great choice for building microservices. Multiple containers can be organized into, and deployed as, a cluster, forming the basis for massively scalable and fault-tolerant application deployments.

1.3 What are Container Services?

Google Compute Engine (GCE) can run almost any container technology. Containers (Docker and Podman) are supported on modern Linux OSes; you can also run Docker on Windows Server 2016 or later. For container cluster management, you can use either your preferred container orchestration tool or Google Kubernetes Engine. For the latter, Google Cloud offers CaaS (Containers-as-a-Service) based on Google Kubernetes Engine (GKE). GKE in the cloud and GKE on-prem are offered as part of the Anthos platform.

Notes

For more information about Compute Engine containers, visit https://cloud.google.com/compute/docs/containers

 

1.4 Google Kubernetes Engine (GKE)

Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications on Google infrastructure. It supports Kubernetes-native DevOps CI/CD tooling. A GKE container cluster consists of multiple Compute Engine instances; clusters can be as large as 15,000 instances (nodes). GKE clusters natively support Kubernetes Network Policy, which restricts traffic with pod-level firewall rules. GKE offers enterprise-ready containerized solutions with prebuilt deployment templates, simplified licensing, and consolidated billing, and it provides fast-track access to open-source and commercial applications available on Google Cloud Marketplace.
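
The Network Policy support mentioned above can be sketched with a minimal manifest; the policy name and pod labels below are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only   # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: backend            # applies to pods labeled app=backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080
```

Applied with kubectl, this acts as a pod-level firewall rule: ingress to the backend pods is allowed only from frontend pods, and only on TCP port 8080.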

For more information, visit https://cloud.google.com/kubernetes-engine/

1.5 The CLI Tools

To manage your GKE clusters, you use the gcloud and kubectl command-line tools. Both come pre-installed in the Google Cloud Shell; you can also install and use them in your local environment. gcloud is the primary command-line interface for Google Cloud, while kubectl is the primary command-line interface for managing K8s clusters.
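
If you work outside Cloud Shell, a typical local setup (assuming the Google Cloud SDK itself is already installed) looks like this:

```shell
# Install kubectl as a Cloud SDK component, then verify both tools.
gcloud components install kubectl
gcloud version
kubectl version --client
```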

1.6 Container-Optimized VM Images

To make it easier to get started with containerizing your applications, Compute Engine's public image repository provides several VM images with minimalistic, container-optimized OSes that come with modern versions of Docker, Podman, or Kubernetes preinstalled. [https://cloud.google.com/compute/docs/images]

The full list of VM images with image sizes can be viewed in the Google Cloud Console or using the gcloud command-line tool. Google regularly updates the public image repository and provides patches with critical vulnerability fixes.

1.7 GKE Main Components

A GKE cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are standard Compute Engine VM instances that run the Kubernetes processes. The default machine type is e2-medium; you can select a different machine type when you create a cluster.
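
Selecting a non-default machine type happens at cluster-creation time; the cluster name and machine size below are illustrative placeholders:

```shell
# Create a cluster whose nodes use a larger machine type than the default e2-medium.
gcloud container clusters create demo-cluster \
    --machine-type=e2-standard-4 \
    --num-nodes=3
```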

Note: GKE cluster names must start with a letter and end with an alphanumeric character; names cannot be longer than 40 characters
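
The naming rules above can be captured in a small, hypothetical helper function (this is not part of any Google tooling, just a local sanity check):

```shell
# Check a proposed GKE cluster name against the documented rules:
# starts with a letter, ends with an alphanumeric character,
# contains only lowercase letters, digits, and hyphens,
# and is at most 40 characters long.
is_valid_cluster_name() {
  name="$1"
  [ "${#name}" -le 40 ] || return 1
  printf '%s' "$name" | grep -Eq '^[a-z]([-a-z0-9]*[a-z0-9])?$'
}

is_valid_cluster_name "my-gke-cluster-1" && echo "valid"    # prints "valid"
is_valid_cluster_name "1-bad-name" || echo "invalid"        # prints "invalid"
```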

1.8 The Benefits of Using Google Kubernetes Clusters

When you run a GKE cluster, you automatically take advantage of the advanced cluster management features that Google Cloud provides, including:

  • Load balancing for Compute Engine instances
  • Node pools to designate subsets of nodes within a cluster for additional flexibility
  • Automatic scaling of your cluster’s node instance count
  • Automatic upgrades for your cluster’s node software
  • Node auto-repair to maintain node health and availability
  • Logging and Monitoring with Cloud Monitoring for visibility into your cluster

1.9 Standard GKE Cluster Architecture

Source: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture

A zonal cluster, as shown in the diagram, has a single control plane in a single (availability) zone. Alternatively, you can distribute your worker nodes across multiple zones.

1.10 The Control Plane

The control plane is the unified endpoint for your K8s cluster. It runs a variety of infrastructure processes, including the Kubernetes API server, scheduler, and core resource controllers.

The control plane is responsible for:

  • Scheduling workloads (containerized apps),
  • Managing the workloads’ lifecycle, scaling, and upgrades, as well as
  • Managing network and storage resources for the workloads

1.11 Kubernetes API Server

The Kubernetes API server process is the hub for all client interactions, which can happen over:

  • an HTTP/gRPC channel,
  • the kubectl CLI, or
  • the Cloud Console

All internal cluster processes — the cluster nodes, system components, and application controllers — act as clients of the API server; GKE documentation refers to the API server as the single “source of truth” for the entire cluster. The control plane and nodes also communicate using Kubernetes APIs.

1.12 Google Container Registry

Google Cloud’s Container Registry (gcr.io) is a private Docker repository that enables developers to securely store and manage their Docker container images; access requires only a service account with the proper permissions. As an added value, with Container Registry you can set up CI/CD pipelines with integration to Cloud Build or deploy directly to Google Kubernetes Engine, App Engine, Cloud Functions, or Firebase. When you create or update a K8s cluster, container images for the Kubernetes software running on the control plane and nodes are pulled from Container Registry.
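
Pushing your own image into Container Registry follows the standard Docker workflow once Docker is configured to authenticate through gcloud; the project ID and image name below are placeholders:

```shell
# One-time setup: let the Docker CLI authenticate to gcr.io via gcloud.
gcloud auth configure-docker

# Tag a locally built image into your project's registry, then push it.
docker tag my-app:1.0 gcr.io/my-project-id/my-app:1.0
docker push gcr.io/my-project-id/my-app:1.0
```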

1.13 Creating a GKE Cluster with the gcloud CLI

First, set the default compute zone (unless you already have the right default zone in your gcloud configuration):

gcloud config set compute/zone us-central1-a

Then, create the GKE cluster specifying the number of nodes (unless you are fine with the current default count):

gcloud container clusters create <YOUR_GKE_CLUSTER_NAME> --num-nodes=3

Note: It may take several minutes for the command to complete

1.14 Deploying a Containerized App with the kubectl CLI

Now that you have created a cluster, fetch the auto-generated cluster credentials so that kubectl can authenticate to the cluster’s endpoint:

gcloud container clusters get-credentials <YOUR_GKE_CLUSTER_NAME>

Note: The previous command will also generate a kubeconfig entry for your GKE cluster

At this point, you can deploy a containerized (e.g. Dockerized) application, for example a web server, using the kubectl tool:

kubectl create deployment my-web --image=gcr.io/google-samples/hello-app:1.0

You can expose the application to external traffic with this command:

kubectl expose deployment my-web --type=LoadBalancer --port 9099
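
The LoadBalancer Service takes a minute or two to receive an external IP from the cloud load balancer; assuming the Service is named my-web (Kubernetes resource names use lowercase letters, digits, and hyphens), you can watch for the IP and then test the endpoint:

```shell
# Watch the Service until the EXTERNAL-IP column is populated.
kubectl get service my-web --watch

# Then reach the app on the exposed port (substitute the reported IP):
# curl http://EXTERNAL-IP:9099
```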

Notes:

The kubectl create deployment my-web command above performs the following operations:

  • A Deployment object identified as my-web is created
  • The --image flag specifies the container image to be deployed
  • The specified image is pulled from Google’s Container Registry bucket gcr.io/google-samples/hello-app:1.0

Note: If an image tag is not specified, the latest tag is pulled from the Registry bucket.

The workflow described above can be illustrated by this diagram from Google’s documentation:

1.15 What is Anthos?

Anthos is an application management platform based on Kubernetes technology that provides a set of operations and software engineering services for both in-cloud and on-premises environments. Anthos supports the critical DevOps cycles: build, deploy, and optimize. With Anthos, you get a consistent, service-centric development and operations experience in hybrid and multi-cloud environments: Google Cloud, AWS, and attached on-prem K8s clusters.

Anthos supports federated clustering using Google Connect, allowing multiple clusters to be viewed and managed from the Anthos dashboard.

For more information, visit https://cloud.google.com/anthos/docs/concepts/overview

1.16 Anthos Deployment Options

Source: Adapted from Google Documentation

1.17 Summary

In this tutorial, we discussed the following topics:

  • Google Cloud Platform Container Services
  • Google Kubernetes Engine (GKE)
  • Anthos
