November 18, 2021
Author: Michael Forrester

Management and configuration in Rancher (core) fall into a few major
categories after Rancher is deployed. Namely:

  • User Interface and Command Line Interface
  • Deployments and Workload Types
  • Namespaces and Projects
  • Scaling Applications
  • Storage for Workloads
  • Networking for Workloads


We will cover each of these in detail, in both the web-based User Interface and the Command Line, in the sections that follow.

Web Based User Interface

Rancher runs a web interface on HTTP/HTTPS for a variety of management purposes ranging from cluster management to Rancher settings and Rancher Authentication. This includes:

  • Setting up clusters, importing clusters, and managing clusters. Note that RKE2/K3s are currently approved for AKS, EKS, and GKE only; all other providers are tech preview as of 2.6.
  • Setting up Cloud Credentials to connect to providers such as AWS, Azure, GCP, and Drivers to enable those providers
  • Workload Management such as Deployments, CronJobs, etc.
  • Pod Security Policies that define capabilities, Run As policies, SELinux policies, and more

Please note that the Rancher UI is extensive, covering all aspects of Kubernetes management including Storage, RBAC, Fleet, Scheduling, and more.

Rancher CLI

The Rancher CLI is a way to manage both your Rancher installation and your K8s clusters. It uses Rancher's proxy authentication to enable access. When you first log in to the Rancher CLI, you will have to authenticate with an API bearer token using the following command:

  ./rancher login https://<insert your rancher server url> \
      --token <insert bearer token here>

Bearer tokens have to be retrieved from Rancher before use. Standard K8s commands like rancher kubectl get pods will run successfully, as will any other kubectl command. The Rancher CLI also supports the following commands, which have no kubectl equivalent:

  • apps – perform operations on catalog applications such as Helm or Rancher charts
  • catalog – perform operations on catalogs
  • clusters – manage and switch between clusters
  • context – switch between different Rancher Projects
  • projects – perform operations on Projects in Rancher-launched clusters
  • settings – show the current settings for your Rancher server
  • ssh – connect to a node in your clusters using SSH
  • help – or --help, which gives you help for a single command
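A short hypothetical session illustrates the commands above; it assumes you have already run rancher login against your server, and the node name is a placeholder:

```shell
# List your Projects and switch between them (interactive prompt):
rancher context switch

# Run any kubectl command through the Rancher authentication proxy:
rancher kubectl get pods --all-namespaces

# Open an SSH session to a node in the current cluster
# (node name is a placeholder):
rancher ssh <node-name>
```

These commands require a live, logged-in Rancher server session, so they are shown as a sketch rather than a runnable script.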

Workloads in Kubernetes and Rancher

Since Rancher is arguably a management layer on top of Kubernetes, all of the standard Kubernetes workloads apply. Kubernetes divides workloads into different types. The most popular types supported by Kubernetes are:
  • Deployments are best used for stateless applications (i.e., when you don't have to maintain the workload's state). Pods managed by deployment workloads are treated as independent and disposable: if a pod encounters a disruption, Kubernetes removes it and then recreates it. An example application would be an Nginx web server.
  • StatefulSets, in contrast to Deployments, are best used when your application needs to maintain its identity and store data. An example would be something like ZooKeeper, an application that requires a database for storage.
  • DaemonSets ensure that every node in the cluster runs a copy of the pod. For use cases such as collecting logs or monitoring node performance, this daemon-like workload works best.
  • Jobs launch one or more pods and ensure that a specified number of them successfully terminate. Jobs are best used to run a finite task to completion, as opposed to managing an ongoing desired application state.
  • CronJobs are similar to Jobs but run to completion on a cron-based schedule.
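As a concrete sketch, the Nginx example above could be written as a minimal Deployment manifest and applied through the Rancher CLI. The name nginx-demo and the image tag are illustrative assumptions, not from the original text:

```shell
# Write a minimal stateless Deployment manifest; the name "nginx-demo"
# and the nginx image tag are assumed for illustration.
cat > nginx-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
EOF

# Apply it through the Rancher CLI (requires a logged-in session):
# rancher kubectl apply -f nginx-deploy.yaml
```

The same manifest works with plain kubectl; the rancher kubectl form simply routes the request through Rancher's authentication proxy.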

All of the normal limitations and considerations for these standard K8s
workload types still apply. All deployments can be done through the Rancher UI or CLI just as with the K8s dashboard.
The one workload-related concept that differs is Namespaces versus Projects, which we will cover next.

Workloads in the Rancher UI

Namespaces, Projects, and Rancher

Projects are a Rancher-created object that helps organize Namespaces (which are a native K8s object). Every workload runs inside a Rancher Project, which has at least one Namespace. The intention of Projects is to help cordon off and group Namespaces, particularly for multi-tenant/multi-team clusters, so that users can share access to the same Project without interfering with other teams' Projects. Clusters contain Projects; Projects contain Namespaces.
In standard Kubernetes, role-based access and cluster resources are assigned at the Namespace level, which can get repetitive if you have 20+ Namespaces for a team. With Rancher, RBAC and cluster privileges are assigned to the Project, which then applies them automatically to all attached Namespaces.

Projects support the following actions:

  • Set resource quotas
  • Configure Pod Security Policies
  • Set up pipelines
  • Assign users a role inside a Project (and therefore across its attached Namespaces)
  • Configure tools
Note that on the Rancher CLI your context must be set to a Project that grants you the proper access. If you do not see an expected object or resource, most likely you are in the wrong Project. Run rancher context switch to list and change Projects, or set the Project in the Rancher UI as shown below.

Rancher UI for Projects and Namespaces
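Under the hood, a Project resource quota is enforced as namespace-level ResourceQuota objects on the Project's Namespaces, the same native object standard Kubernetes uses. A minimal sketch of such a quota follows; the namespace name and limits are illustrative assumptions:

```shell
# Sketch of the namespace-level ResourceQuota that a Project-level
# quota effectively produces; namespace "team-a" and all limits are
# assumed for illustration.
cat > team-quota.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
EOF

# Apply through the Rancher CLI (requires a logged-in session):
# rancher kubectl apply -f team-quota.yaml
```

In practice you would set the quota once on the Project in the Rancher UI rather than repeating a manifest like this per Namespace; that repetition is exactly what Projects exist to remove.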

Workload Scaling

Scaling anything but Rancher itself works exactly as it does in Kubernetes, as described in previous chapters. Scaling in Rancher can happen at three different levels:

  • Scaling Rancher itself (not covered here)
  • Scaling a specific Kubernetes cluster's worker nodes (not covered here)
  • Scaling your workload's replicas, e.g. a Deployment from 2 replicas to 3

Scaling a cluster with a cluster autoscaler, or a workload with a horizontal pod autoscaler, works just as it does in standard Kubernetes, as does manually scaling a workload's replicas. The command rancher kubectl scale works as well.
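Both approaches can be sketched briefly; the Deployment name web is an assumption for illustration:

```shell
# Manually scale a Deployment's replicas (the name "web" is assumed):
# rancher kubectl scale deployment/web --replicas=3

# Or define a HorizontalPodAutoscaler, exactly as in stock Kubernetes:
cat > web-hpa.yaml <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
EOF

# Apply through the Rancher CLI (requires a logged-in session):
# rancher kubectl apply -f web-hpa.yaml
```

The HPA manifest is plain Kubernetes; Rancher adds nothing special here, which is the point of the section above.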

Storage for Workloads

Persistent storage for workloads is centered on the same concepts as Kubernetes, described in the K8s chapters. Kubernetes still uses Storage Classes, Persistent Volume Claims, and Persistent Volumes underneath the Rancher setup. Project Longhorn is a CNCF project, contributed by Rancher, that provides a distributed block storage system. It may show up as a driver/storage class, as may any cloud provider's block storage, NFS, local bind storage, etc. Note that AWS, Azure, and Google are listed below, just to name a few.
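A minimal PersistentVolumeClaim against a Longhorn storage class can sketch the flow; the claim name and size are illustrative, and you should verify the storage class name in your cluster's Storage Classes list:

```shell
# Sketch of a PersistentVolumeClaim bound to a "longhorn" storage
# class; the claim name and 5Gi size are assumed for illustration.
cat > data-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
EOF

# Apply through the Rancher CLI (requires a logged-in session):
# rancher kubectl apply -f data-pvc.yaml
```

Once bound, the claim appears under the cluster's storage views in the Rancher UI just like any other Kubernetes PVC.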

Networking for Workloads

  • Networking for workloads is handled just as it is for Kubernetes with Rancher-managed nodes.
  • All K8s concepts around Services (ClusterIP, NodePort, ExternalName, etc.) are valid.
  • All K8s concepts around Ingresses (such as using an AWS Application Load Balancer) are also valid.
  • The Service options under Rancher's Service Discovery section show the same K8s objects.
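The points above can be sketched with a plain NodePort Service; the Service name and the app: nginx-demo selector label are illustrative assumptions:

```shell
# Sketch of a NodePort Service exposing a hypothetical workload
# labelled app: nginx-demo; the names are assumed for illustration.
cat > nginx-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx-demo
  ports:
    - port: 80
      targetPort: 80
EOF

# Apply through the Rancher CLI (requires a logged-in session):
# rancher kubectl apply -f nginx-svc.yaml
```

After applying, the Service shows up under Rancher's Service Discovery section, confirming that it is the same native Kubernetes object.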

Summary

  • Rancher has its own web-based UI that works similarly to the Kubernetes Dashboard.
  • Rancher has its own rancher command-line tool that can also run kubectl inside of it across multiple clusters.
  • Since Rancher is just a management layer, almost all K8s concepts apply to how workloads are managed.
  • The one concept that differs is Projects over Namespaces, which Rancher introduced so that Namespaces could be grouped into Projects. OpenShift (from Red Hat) did something similar.