October 5, 2021

Author: Faheem Javed

This tutorial is adapted from the Web Age course Kubernetes for Developers on AKS Training

1.1 Configuring AKS for Deployment

By default, AKS can pull images only from public registries such as Docker Hub. To allow AKS to pull images from your private Azure Container Registry (ACR), you must configure the AKS cluster.

The following command ensures the AKS cluster can access images in the custom ACR registry.

az aks update -n $AKS_NAME -g $RESOURCE_GROUP --attach-acr $ACR_NAME
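The command above assumes shell variables holding the names of your cluster, resource group, and registry. They might be set like this (all of the values below are hypothetical placeholders; substitute your own resource names):

```shell
# Hypothetical resource names -- replace with your own
AKS_NAME=myAksCluster
RESOURCE_GROUP=myResourceGroup
ACR_NAME=myContainerRegistry
```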

To refer to ACR images, use the following convention:

{acr_login_server}/{image}:{tag}
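For example, a Deployment might reference an ACR-hosted image following that convention (the registry, image, and tag below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # {acr_login_server}/{image}:{tag}
          image: myregistry.azurecr.io/web:1.0
```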

1.2 Deploying to Kubernetes

There are various methods that can be used to deploy a workload to Kubernetes.

Method 1: Using Generators (kubectl run, kubectl create (without .yaml), kubectl expose)

If you want to quickly check whether a container works, generators are a convenient option.

# creates simple pods that you cannot scale after deployment.
kubectl run nginx-deployment --image=nginx  

# creates a deployment that can be scaled after the deployment has been performed
kubectl create deployment nginx-deployment --image=nginx 

Method 2: Using the imperative way (kubectl create with .yaml)

It can work with one object configuration file at a time that can be checked into a source control repository, such as Git.

It’s a one-time operation, i.e. you cannot run the same configuration file to make changes.

kubectl create -f deployment.yaml

Method 3: Using the declarative way (kubectl apply)

It works with files, directories, and sub-directories containing object configuration YAML files.

You can modify the configuration file(s) and rerun them to update the deployment(s).

kubectl apply -f deployment.yaml
kubectl apply -f folder_name_with_a_bunch_of_yaml_files/
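A minimal deployment.yaml like the one referenced by the commands above might look as follows (a sketch; the replica count and labels are illustrative):

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```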

1.3 Kubernetes Services

Kubernetes services expose applications running on a set of Pods as network services.

Services are a solution to the following situation:

  • Instances of an application run in separate Pods
  • Each Pod gets its own IP address
  • Pods come and go based on load
  • How can we access applications when we don’t know which instances are running and what their IP addresses are?

1.4 Service Resources

Using the Generators method, you can use the following commands to deploy an application and expose the service.

kubectl create deployment nginx-deployment --image=nginx 

kubectl expose deployment {deployment-name} --type={type} --port={port} --target-port={target-port}

In production, it’s preferred to use yaml files to define Kubernetes Service Resource Objects.

Example:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

This spec creates the “my-service” service, which exposes the Pods labeled app: MyApp on port 80 of the service’s internal cluster IP, forwarding traffic to port 9376 on the Pods.

1.5 Service Type

Services can have one of the following types:

  • ClusterIP (default) – IP address internal to the cluster
  • NodePort – A static port opened on each node’s IP address. Can be reached from outside the cluster.
  • LoadBalancer – Provides access through a load balancer’s IP address
  • ExternalName – Maps to the external DNS name in the externalName field

1.6 ClusterIP

ClusterIP is the default service type. Services created without a type field are assigned this type.

The following command creates a ClusterIP type service:

kubectl expose deployment hello-nginx --port=8080 --target-port=80

Details of the created service:

kubectl get services hello-nginx
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
hello-nginx   ClusterIP   10.98.16.57   <none>        8080/TCP   4m59s

The service can be accessed by shelling into a cluster node:

minikube ssh
curl http://10.98.16.57:8080

1.7 NodePort

With a NodePort type of service, Kubernetes exposes the service on a port (allocated from the range 30000–32767 by default) on every node in the cluster.

Node ports can also be created using the expose command:

kubectl expose deployment hello-nginx --type=NodePort --port=80

Details of the created service:

kubectl get services hello-nginx
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
hello-nginx   NodePort   10.110.148.227   <none>        80:31164/TCP   36s

The service can be accessed through the node IP address and service’s port number:

minikube service hello-nginx --url=true
http://192.168.99.100:31164
curl http://192.168.99.100:31164

Notes

The “minikube service {service-name} --url=true” command combines the node IP address and the service’s node port to create a URL that can be used to address the given service.

1.8 NodePort from Service Spec

NodePort services can also be created by applying Service Resource specifications like this one:

# my-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007

This command creates or updates the service:

kubectl apply -f my-service.yaml

1.9 LoadBalancer

This service type provisions an external load balancer to direct traffic to the selected Pods. It is typically used when running Kubernetes on cloud providers that offer their own load balancers. The IP address of the external load balancer populates the service’s status.loadBalancer.ip field.

The following command can be used to create this type of service:

kubectl expose deployment hello-nginx --name hello-nginx-lb --type=LoadBalancer --port=80

Details of the created service:

kubectl get services hello-nginx-lb
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
hello-nginx-lb   LoadBalancer   10.111.8.184   192.168.122.1   80:30725/TCP   51m

 

The service can be accessed through the load balancer’s external IP address (on minikube, the node IP address and node port):

minikube service hello-nginx-lb --url=true
http://192.168.122.1:30725

curl http://192.168.122.1:30725

1.10 LoadBalancer from Service Spec

LoadBalancer type services can also be created from YAML file specs:

# example-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer

Command to apply the spec:

kubectl apply -f example-service.yaml

Remember: the above requires an external load balancer to work

1.11 ExternalName

ExternalName type services map a service to a DNS name. Redirection happens at the DNS level rather than via proxying or forwarding. The example below maps ‘my-service’ in the ‘prod’ namespace to a server identified by the name my.database.example.com, which must be resolvable by an external DNS.

Example spec:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com

 

This type might be used for example to map a service name inside the cluster to a database server outside the cluster.

Notes

More information on the external name type is available here:

https://kubernetes.io/docs/concepts/services-networking/service/#externalname

1.12 Accessing Applications

Applications can be accessed in a variety of ways:

  • Shell into the app’s container and use localhost
  • Shell into any container in the same pod as the app and again use localhost to access the app
  • Apps on the same node can access apps in different pods using the app pod’s name or IP address
  • Apps outside a cluster can access a service in a k8s cluster through a service-defined port on the cluster’s IP address.
  • The ‘kubectl port-forward’ command can be used to forward a port on the local machine where the cluster is running to a pod running inside the cluster.

1.13 Service Without a Selector

The selector in a service specification identifies the Pods the service points to:

spec:
  selector:
    app: MyApp

When a service is meant to point to some other type of back end (as opposed to Pods), the selector is omitted. You might do this:

  • To access an external production database cluster
  • To point to a service in a different namespace or cluster
  • When only some of your back ends are hosted by Kubernetes

When a service selects one or more Pods, an Endpoints object is created for it automatically.

 

For non-Pod services the endpoint must be added manually:

apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.0.2.42
    ports:
      - port: 9376

This endpoint then allows the service to point to the IP address and port of any server you choose.
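The Endpoints object above pairs with a selector-less Service of the same name. That Service might look like this (note the absence of a selector field, and that the name matches the Endpoints object):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```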

Notes

For more information see:

https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors

1.14 Ingress

An Ingress Resource is a Kubernetes object that acts as the entry point for your cluster. An Ingress sits between the external network and the services in the K8s Cluster and lets you expose multiple services under the same IP address. 

Ingresses can be used to provide:

  • Routes from network traffic outside the cluster to services within the cluster
  • Load balancing
  • SSL/TLS Termination
  • Name-based virtual hosting

1.15 Ingress Resource Example

Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080

Notes

Documentation for Ingresses can be found here:

https://kubernetes.io/docs/concepts/services-networking/ingress/

An ingress tutorial can be found here:

https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/

1.16 Ingress Controller

An Ingress Controller implements the features defined in the Ingress object. Ingress objects will have NO EFFECT unless an ingress controller has been deployed. 

Two Ingress Controllers supported by the Kubernetes project are:

  • nginx ingress controller for Kubernetes
  • GCE ingress-gce

 

Several other ingress controllers are also available. 

Setting up to use the ingress-nginx ingress controller on minikube:

minikube addons enable ingress

Instructions for setting up the ingress-nginx controller on other platforms are available in its documentation.

Notes

Documentation on Ingress Controllers can be found here:

https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/

1.17 Service Mesh

A service mesh can be thought of as an infrastructure layer for service-to-service communication. A service mesh can implement a variety of cross-cutting features, including load balancing, circuit breaking, retries and timeouts, security (TLS), monitoring, and traffic metrics. A service mesh is typically implemented as a proxy deployed in a sidecar container next to each instance container of an application.

Examples of Service Mesh include:

  • Open Service Mesh (OSM) (backed by Microsoft)
  • Istio (backed by Google & IBM)
  • Maesh
  • Kuma
  • AWS App Mesh

For more information on Service Mesh see:

https://servicemesh.es/ 

1.18 Summary

In this tutorial, we covered:

  • Kubernetes Services
  • Service Type
  • ClusterIP
  • NodePort
  • LoadBalancer
  • ExternalName
  • Accessing Applications
  • Services Without a Selector
  • Ingresses
  • Service Mesh
