What is Docker?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Docker

Docker is an open-source (and 100% free) project for IT automation. You can view Docker as a system or a platform for creating virtual environments that behave like extremely lightweight virtual machines. Docker allows developers and system administrators to quickly assemble, test, and deploy applications and their dependencies inside Linux containers, supporting a multi-tenancy deployment model on a single host. Docker’s lightweight containers lend themselves to rapid scaling up and down. A container is a group of controlled processes associated with a separate tenant, executed in isolation from other tenants. Docker itself is written in the Go programming language.

1.2 Where Can I Run Docker?

Docker runs on any 64-bit Linux distribution with a modern kernel. The minimum supported kernel version is 3.10; kernels older than 3.10 lack some of the features required by Docker containers. You can install Docker in a VirtualBox VM and run it on OS X or Windows. Docker can be installed natively on Windows using Docker Machine, but this requires Hyper-V. Docker can also be booted from the small-footprint Linux distribution boot2docker.

1.3 Installing Docker Container Engine

Installing on Linux:
Docker is usually available via the distribution’s package manager. For example, on Ubuntu and derivatives:
sudo apt-get update && sudo apt install docker.io

Installing on Mac
Download and install the official Docker.dmg from docker.com

Installing on Windows
Hyper-V must be enabled on Windows. Download the latest installer from docker.com

1.4 Docker Machine

Though Docker runs natively on Linux, it may be desirable to have two different host environments, such as Ubuntu and CentOS. To achieve this, VMs running Docker may be used. To simplify management of different Docker hosts, you can use Docker Machine. Docker Machine is a tool that lets you install Docker Engine on virtual hosts and manage those hosts with docker-machine commands. Docker Machine enables you to provision multiple remote Docker hosts on various flavors of Linux. Additionally, Machine allows you to run Docker on older Mac or Windows systems, as well as on cloud providers such as AWS, Azure, and GCP. Using the docker-machine command, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
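A minimal sketch of a typical docker-machine session (the host name "default" and the VirtualBox driver are illustrative choices):
# provision a VirtualBox-backed Docker host named "default"
docker-machine create --driver virtualbox default
# list the hosts managed by Docker Machine
docker-machine ls
# point the local Docker client at the new host
eval $(docker-machine env default)
# subsequent docker commands now run against that host
docker ps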

1.5 Docker and Containerization on Linux

Docker leverages resource isolation features of the modern Linux kernel offered by cgroups and kernel namespaces. These features allow the creation of strongly isolated containers that act as very lightweight virtual machines running on a single Linux host. Docker abstracts operating-system-level virtualization on Linux using virtualization interfaces based on libvirt, LXC (LinuX Containers), and systemd-nspawn. As of version 0.9, Docker can also use the virtualization facilities provided by the Linux kernel directly, via its own libcontainer library.

1.6 Linux Kernel Features: cgroups and namespaces

The control group (cgroup) kernel feature is used by the Linux kernel to allocate system resources such as CPU, I/O, memory, and network, subject to limits, quotas, prioritization, and other control arrangements. The kernel provides access to multiple subsystems through the cgroup interface.
Examples of subsystems (controllers) are:
  • The memory controller for limiting memory use
  • The cpuacct controller for keeping track of CPU usage

The cgroups facility was merged into Linux kernel version 2.6.24. Systems that use cgroups include Docker, Linux Containers (LXC), Hadoop, etc. The namespaces feature is related to the cgroups facility; it enables different applications to act as separate tenants with completely isolated views of the operating environment, including users, process trees, network, and mounted file systems.
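As an illustration, Docker exposes these cgroup-backed controls through docker run flags (the limits below are arbitrary values):
# cap the container at 256 MB of memory and 1.5 CPU cores
docker run -it --memory=256m --cpus=1.5 ubuntu /bin/bash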

1.7 The Docker-Linux Kernel Interfaces


[Diagram: the Docker-Linux kernel interfaces. Source: adapted from http://en.wikipedia.org/wiki/Docker_(software)]

1.8 Docker Containers vs Traditional Virtualization

System virtualization tools and emulators, such as VirtualBox, Hyper-V, or VMware, boot virtual machines from a complete guest OS image (of your choice) and emulate a complete machine, which results in high operational overhead. Virtual environments created by Docker run on the host’s existing operating system kernel, without the need for a hypervisor. This leads to very low overhead and significantly faster container startup time. Docker-provisioned containers do not include or require a separate operating system (everything runs on the host’s OS kernel), which does, however, significantly limit your choice of OS.

1.9 Docker Containers vs Traditional Virtualization

Overall, traditional virtualization has an advantage over Docker in that you have a free choice of guest OSes (as long as the machine architecture is supported). With Docker you get only a limited choice of Linux distros, though you still have some choice: e.g. you can deploy a Fedora container on a Debian host. You can, however, run a Windows VM inside a Linux machine using virtual machine emulators like VirtualBox (with less engineering efficiency). With Linux containers, you can achieve a higher level of deployed application density compared with traditional VMs (10x more units!). On the other hand, Docker runs everything through a central daemon, which is not a particularly reliable and secure processing model.

1.10 Docker Integration

Docker can be integrated with a number of IT automation tools that extend its capabilities, including Ansible, Chef, Jenkins, Puppet, and Salt. Docker is also deployed on a number of Cloud platforms like Amazon Web Services, Google Cloud Platform, Microsoft Azure, OpenStack, and Rackspace.

1.11 Docker Services

The Docker deployment model is application-centric and, in this context, provides the following services and tools:
◊ A uniform format for bundling an application along with its dependencies, portable across different machines.
◊ Tools for automatically assembling a container from source code: make, Maven, Debian packages, RPMs, etc.
◊ Container versioning with deltas between versions.

1.12 Docker Application Container Public Repository

The Docker community maintains the repository of official and public-domain Docker application images: https://hub.docker.com


1.13 Competing Systems

  • The Rocket (rkt) container runtime from CoreOS (an open-source, lightweight, Linux kernel-based operating system)
  • LXD for Ubuntu from Canonical (the company behind Ubuntu)
  • LXC (Linux Containers), formerly used by Docker internally

1.14 Docker Command Line

The following commands are shown as executed by the root (privileged) user:
docker run ubuntu echo 'Yo Docker!'
This command will create a Docker container based on the ubuntu image, execute the echo command in it, and then shut down.
docker ps -a
This command will list all the containers created by Docker along with their IDs.

1.15  Starting, Inspecting, and Stopping Docker Containers

docker start -i <container_id>
This command will start an existing, stopped container in interactive (-i) mode (you will get the container's STDIN channel).
docker inspect <container_id>
This command will provide JSON-encoded information about the running container identified by container_id.
docker stop <container_id>
This command will stop the running container identified by container_id.
For the Docker command-line reference, visit https://docs.docker.com/engine/reference/commandline/cli/

1.16 Docker Volume

If you destroy a container and recreate it, you will lose its data. Ideally, data should not be stored in containers. Volumes are mounted file systems available to containers. Docker volumes are a good way of safely storing data outside a container, and they can be shared across multiple containers.
Creating a Docker volume
docker volume create my-volume
Mounting a volume
docker run -v my-volume:/my-mount-path -it ubuntu:12.04 /bin/bash
Viewing all volumes
docker volume ls
Deleting a volume
docker volume rm my-volume
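As a sketch of the volume sharing mentioned above (file and names are illustrative):
# write a file into the volume from one container
docker run --rm -v my-volume:/shared ubuntu bash -c "echo hello > /shared/greeting.txt"
# read the same file from a second container
docker run --rm -v my-volume:/shared ubuntu cat /shared/greeting.txt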

1.17 Dockerfile

Rather than manually creating containers and saving them as custom images, it is better to use a Dockerfile to build images.
Sample script:
# start from the openjdk base image
FROM openjdk
RUN apt-get update -y
RUN apt-get install sqlite -y
# deploy the jar file to the container
COPY SimpleGreeting-1.0-SNAPSHOT.jar /root/SimpleGreeting-1.0-SNAPSHOT.jar
The Dockerfile filename is case-sensitive: the 'D' in Dockerfile has to be uppercase. Build an image using docker build (mind the space and the period, the build context, at the end of the command):
docker build -t my-image:v1.0 .
Or, if you want to use a different file name (the build context is still required):
docker build -t my-image:v1.0 -f mydockerfile.txt .
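Once the build succeeds, the image runs like any other container image; a sketch using the jar deployed by the sample Dockerfile above:
docker run --rm my-image:v1.0 java -jar /root/SimpleGreeting-1.0-SNAPSHOT.jar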

1.18 Docker Compose

A container runs a single application. However, most modern applications rely on multiple services, such as databases, monitoring, logging, message queues, etc. Managing a forest of containers individually is difficult, especially when it comes to moving the environment from development to test to production, and so on. Compose is a tool for defining and running multi-container Docker applications on the same host. A single configuration file, docker-compose.yml, is used to define a group of containers that must be managed as a single entity.
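The typical Compose workflow is a small set of docker-compose commands run in the directory holding docker-compose.yml:
# build (if needed) and start all services in the background
docker-compose up -d
# list the running services
docker-compose ps
# follow the combined log output of all services
docker-compose logs -f
# stop and remove the containers and networks
docker-compose down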

1.19 Using Docker Compose

  • Define as many Dockerfiles as necessary
  • Create a docker-compose.yml file that refers to the individual Dockerfiles

Sample docker-compose.yml
version: '3'
services:
  greeting:
    build: .
    ports:
      - "8080:8080"
    links:
      - mongodb
  mongodb:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: wasadmin
      MONGO_INITDB_ROOT_PASSWORD: secret
    volumes:
      - my-volume:/data/db
volumes:
  my-volume: {}

1.20 Dissecting docker-compose.yml

The Docker Compose file should be named either docker-compose.yml or docker-compose.yaml. Using any other name will require the -f argument to specify the filename. The docker-compose.yml file is written in YAML:
https://yaml.org/
The first line, version, indicates the version of the Docker Compose file format being used. As of this writing, version 3 is the latest.

1.21 Specifying services

A 'service' in docker-compose parlance is a container. Services are specified under the services: node of the configuration file.
You choose the name of a service; the name is meaningful within the configuration. A service (container) can be specified in one of two ways: by Dockerfile or by image name.
Use build: to specify the path to a Dockerfile
Use image: to specify the name of an image that is accessible to the host

1.22 Dependencies between containers

Some services may need to be brought up before other services. In docker-compose.yml, it is possible to specify which service relies on which using the links: node. If service C requires that services A and B be brought up first, add links as follows:
services:
  A:
    build: ./servicea
  B:
    build: ./serviceb
  C:
    build: ./servicec
    links:
      - A
      - B
It is possible to specify as many links as necessary. Circular links (A links to B and B links to A) are not permitted.

1.23 Injecting Environment Variables

In a containerized, microservices-style application, environment variables are often used to pass configuration to an application. It is possible to pass environment variables to a service via the docker-compose.yml file:
myservice:
  environment:
    MONGO_INITDB_ROOT_USERNAME: wasadmin
    MONGO_INITDB_ROOT_PASSWORD: secret
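On the application side, the injected variables are simply read from the process environment. A minimal Java sketch, using the variable names from the example above:
// Config.java - reads credentials injected by docker-compose at runtime
public class Config {
    public static void main(String[] args) {
        String user = System.getenv("MONGO_INITDB_ROOT_USERNAME");
        String password = System.getenv("MONGO_INITDB_ROOT_PASSWORD");
        System.out.println("Connecting to MongoDB as " + user);
    }
}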

1.24 runC Overview

Over the last few years, Linux has gradually gained a collection of process isolation features; Windows 10 and Windows Server 2016+ have added similar features. The individual features have esoteric names like “control groups”, “namespaces”, “seccomp”, “capabilities”, “apparmor”, and so on. Collectively, they are known as “OS containers” or sometimes “lightweight virtualization”. Docker makes heavy use of these features and has become famous for it. Because “containers” are actually an array of complicated, sometimes arcane system features, they have been integrated into a unified low-level component called runC. runC is now available as a standalone tool: a lightweight, portable container runtime. It includes all of the plumbing code used by Docker to interact with system features related to containers, and it has no dependency on the rest of the Docker platform.

1.25 runC Features

  • Full support for Linux namespaces, including user namespaces
  • Native support for all security features available in Linux: SELinux, AppArmor, seccomp, control groups, capability drop, pivot_root, uid/gid dropping, etc. If Linux can do it, runC can do it.
  • Native support for live migration, with the help of the CRIU team at Parallels
  • Native support for Windows 10 containers is being contributed directly by Microsoft engineers
  • Planned native support for Arm, Power, and Sparc, with direct participation and support from Arm, Intel, Qualcomm, IBM, and the entire hardware manufacturers ecosystem
  • Planned native support for bleeding-edge hardware features: DPDK, SR-IOV, TPM, secure enclaves, etc.

1.26 Using runC

In order to use runc, you must have your container in the format of an Open Container Initiative (OCI) bundle. If you have Docker installed, you can use docker export to acquire a root filesystem from an existing Docker container.
# create the topmost bundle directory
mkdir /mycontainer
cd /mycontainer
# create the rootfs directory
mkdir rootfs
# export busybox via Docker into the rootfs directory
docker export $(docker create busybox) | tar -C rootfs -xvf -
After the root filesystem is populated, you just generate a spec, in the form of a config.json file, inside your bundle:
runc spec

1.27 Running a Container using runC

The first way is to use the convenience command run, which handles creating, starting, and deleting the container after it exits.
# run as root
cd /mycontainer
runc run mycontainerid

The second way is to drive the entire lifecycle (create, start, connect to, and delete the container) manually:
# run as root
cd /mycontainer
runc create mycontainerid
# view that the container is created and in the "created" state
runc list
# start the process inside the container
runc start mycontainerid
# after 5 seconds view that the container has exited and is now in the stopped state
runc list
# now delete the container
runc delete mycontainerid

1.28 Summary

  • Docker is a system for creating virtual environments which are, for all intents and purposes, lightweight virtual machines.
  • Docker containers can only run the type of OS that matches the host's OS.
  • Docker containers are extremely lightweight (although not so robust and secure), allowing you to achieve a higher level of deployed application density compared with traditional VMs (10x more units!).
  • On-demand provisioning of applications by Docker supports the Platform-as-a-Service (PaaS) style of deployment and scaling.
  • runC is a container runtime which has support for various containerization solutions.

How to Secure a Web Application using Spring Security?

This tutorial is adapted from Web Age course  Technical Introduction to Microservices.

1.1 Securing Web Applications with Spring Security 3.0

Spring Security (formerly known as Acegi) is a framework extending the traditional JEE Java Authentication and Authorization Service (JAAS). It can work by itself on top of any Servlet-based technology; it does, however, continue to use the Spring core to configure itself. It can integrate with many back-end technologies such as OpenID, CAS, LDAP, and databases. It uses a servlet filter to control access to all Web requests. It can also integrate with AOP to filter method access, which gives you method-level security without having to actually use EJB.

1.2 Spring Security 3.0

Because it is based on a servlet filter, it also works with SOAP-based Web Services, RESTful services, any kind of Web remoting, and portlets. It can even be integrated with non-Spring web frameworks such as Struts, Seam, and ColdFusion. Single Sign-On (SSO) can be integrated through CAS, the Central Authentication Service from JA-SIG. This gives us the ability to authenticate against X.509 certificates, OpenID (supported by Google, Facebook, Yahoo, and many others), and LDAP. WS-Security and WS-Trust are built on top of these. It can integrate into WebFlow, and there is support for it in the SpringSource Tool Suite.

1.3 Authentication and Authorization

Authentication answers the question “Who are you?”. It involves a User Registry of known user credentials and an Authentication Mechanism for comparing presented user credentials with the User Registry. Spring Security can be configured to authenticate users using various means, or to accept authentication that has been done by an external mechanism. Authorization answers the question “What can you do?”. Once a valid user has been identified, a decision can be made about allowing the user to perform the requested function. Spring Security can handle the authorization decision. Sometimes this may be very fine-grained, for example, allowing a user to delete their own data but not the data of other users.

1.4 Programmatic vs Declarative Security

Programmatic security allows us to make fine-grained security decisions but requires writing the security code within our application. The security rules being applied may be obscured by the code being used to enforce them. Whenever possible, we would prefer to declare the rules for access and have a framework like Spring Security enforce those rules. This allows us to focus on the security rules themselves and not on writing the code to implement them. With Spring Security we have a DSL for security that enables us to declare the kinds of rules we would have had to code before. It also enables us to use EL (expression language) in our declarations, which gives us a lot of flexibility. This can include contextual information like time of access, number of items in a shopping cart, number of previous orders, etc.

1.5 Getting Spring Security Gradle or Maven

Spring Security 3.0 split the framework into different modules so you can use just what you need. The following will almost always be used:

  • Core – Core classes
  • Config – XML namespace configuration
  • Web – filters and web-security infrastructure

The following will be used if the appropriate features are required

  • JSP Taglibs
  • LDAP – LDAP authentication and provisioning
  • ACL – Specialized domain object ACL implementation
  • CAS – Support for JA-SIG.org Central Authentication Support
  • OpenID – ‘OpenID for Java’ web authentication support

Getting Spring Security from Gradle

The exact syntax for adding the above Spring Security modules will differ depending on whether you get them from:
Maven Central – http://search.maven.org/
SpringSource Enterprise Bundle Repository (EBR) – http://ebr.springsource.com/repository/
The following is an example of getting them from Maven Central with Gradle:
group 'com.shaneword'
version '1.0-SNAPSHOT'
apply plugin: 'java'
sourceCompatibility = 1.7
repositories {
    mavenCentral()
}
dependencies {
    compile "org.springframework.security:org.springframework.security.core:5.1.5.RELEASE"
    compile "org.springframework.security:org.springframework.security.web:5.1.5.RELEASE"
    compile "org.springframework.security:org.springframework.security.taglibs:5.1.5.RELEASE"
    compile "org.springframework.security:org.springframework.security.config:5.1.5.RELEASE"
    compile "org.springframework.security:org.springframework.security.ldap:5.1.5.RELEASE"
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

Getting Spring Security from Maven

The exact syntax for adding the above Spring Security modules using Maven will differ depending on whether you get them from:
Maven Central – http://search.maven.org/
SpringSource Enterprise Bundle Repository (EBR) – http://ebr.springsource.com/repository/
The following is an example of getting them from the SpringSource EBR:
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.core</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.web</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.taglibs</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.config</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.ldap</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>

1.6 Spring Security Configuration

If Spring Security is on the classpath, then web applications will be set up with "basic" authentication on all HTTP endpoints. There is a default AuthenticationManager that has a single user called 'user' with a random password, which is printed out during application startup. Override the password with 'security.user.password' in 'application.properties'. To override the security settings, define a bean of type 'WebSecurityConfigurerAdapter' and plug it into the configuration.
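For example, the generated password can be pinned down with a one-line override; a minimal sketch for the Spring Boot 1.x property named above (the value is illustrative):
# src/main/resources/application.properties
security.user.password=s3cret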

1.7 Spring Security Configuration Example

@Configuration
@Order(SecurityProperties.ACCESS_OVERRIDE_ORDER)
public class ApplicationSecurity extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers("/css/**").permitAll().anyRequest()
            .fullyAuthenticated().and().formLogin()
            .loginPage("/login")
            .failureUrl("/login?error")
            .permitAll().and().logout().permitAll();
    }

    @Override
    public void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("user").password("user").roles("USER");
    }
}

1.8 Authentication Manager

The AuthenticationManager provides user information. You can use multiple <authentication-provider> elements; they will be checked in the declared order to authenticate the user. In the example above, the WebSecurityConfigurerAdapter's configure() method gets called with an AuthenticationManagerBuilder. We can use this builder to configure the AuthenticationManager using a "fluent API", e.g.:
auth.jdbcAuthentication().dataSource(ds).withDefaultSchema()

1.9 Using Database User Authentication

You can obtain user details from tables in a database with the jdbcAuthentication() method. This needs a reference to a Spring Data Source bean configuration: auth.jdbcAuthentication().dataSource(ds).withDefaultSchema(). If you do not want to use the database schema expected by Spring Security, you can customize the queries used and map the information in your own database to what Spring Security expects:
auth.jdbcAuthentication().dataSource(securityDatabase)
    .usersByUsernameQuery("SELECT username, password, 'true' as enabled FROM member WHERE username=?")
    .authoritiesByUsernameQuery("SELECT member.username, member_role.role as authority FROM member, member_role WHERE member.username=? AND member.id=member_role.member_id");

Using Database User Authentication

The configuration of the 'securityDatabase' Data Source above is not shown, but it is just like any other Spring database configuration.
The queries that Spring Security uses by default are:
SELECT username, password, enabled FROM users WHERE username = ?
SELECT username, authority FROM authorities WHERE username = ?
The default statements above assume a database schema similar to:
CREATE TABLE USERS (
USERNAME VARCHAR(20) NOT NULL,
PASSWORD VARCHAR(20) NOT NULL,
ENABLED SMALLINT,
PRIMARY KEY (USERNAME)
);
CREATE TABLE AUTHORITIES (
USERNAME VARCHAR(20) NOT NULL,
AUTHORITY VARCHAR(20) NOT NULL,
FOREIGN KEY (USERNAME) REFERENCES USERS
);
Notice that in the custom queries defined above, the 'enabled' column is mapped as 'true', since it is assumed the referenced table does not have this column but Spring Security expects it. If the table does have a column similar to 'enabled', it should map to a boolean type (like '1' for enabled and '0' for disabled).
The custom queries above would work with a database schema of:
CREATE TABLE MEMBER (
ID BIGINT NOT NULL,
USERNAME VARCHAR(20) NOT NULL,
PASSWORD VARCHAR(20) NOT NULL,
PRIMARY KEY (ID)
);
CREATE TABLE MEMBER_ROLE (
MEMBER_ID BIGINT NOT NULL,
ROLE VARCHAR(20) NOT NULL,
FOREIGN KEY (MEMBER_ID) REFERENCES MEMBER
);

1.10 LDAP Authentication

It is common to have an LDAP server that stores user data for an entire organization. The first step in using this with Spring Security is to configure how Spring Security will connect to the LDAP server, with the ldapAuthentication builder:
auth.ldapAuthentication()
    .contextSource()
    .url("ldap://localhost").port(389)
    .managerDn("cn=Directory Admin")
    .managerPassword("ldap");
You can also use an "embedded" LDAP server in a test environment, by not providing the url and instead providing LDIF files to load; a sketch (the root and file name are illustrative):
auth.ldapAuthentication()
    .contextSource()
    .root("dc=springframework,dc=org")
    .ldif("classpath:users.ldif");

LDAP Authentication

The managerDn and managerPassword settings specify how to authenticate against the LDAP server in order to query user details. If you are using the embedded LDAP server and do not supply a value, the default for the root will be "dc=springframework,dc=org".
In order to configure Spring Security with LDAP, there are a number of LDAP-related settings whose defaults may affect how your configuration behaves; this section is meant to simply introduce the feature. One step you should take when attempting to use Spring Security with LDAP is to avoid configuring everything at once. Start with an embedded list of users to test the other configuration settings and then switch to using LDAP. Also try using the embedded LDAP server with an LDIF file exported from your LDAP server with a few sample users.
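A minimal users.ldif sketch for the embedded server; the entry names and attributes are illustrative:
dn: ou=people,dc=springframework,dc=org
objectclass: top
objectclass: organizationalUnit
ou: people

dn: uid=jdoe,ou=people,dc=springframework,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: John Doe
sn: Doe
uid: jdoe
userPassword: secret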

1.11 What is Security Assertion Markup Language (SAML)?

Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdP) to pass authorization credentials to service providers (SP). It is a security protocol, similar to OpenID, OAuth, Kerberos, etc. SAML is the link between the authentication of a user's identity and the authorization to use a service. SAML adoption allows IT shops to use software as a service (SaaS) solutions while maintaining a secure federated identity management system. SAML enables Single Sign-On (SSO), which means users can log in once, and those same credentials can be reused to log into other service providers.

1.12 What is a SAML Provider?

A SAML provider is a system that helps a user access a service they need. There are two primary types of SAML providers, service provider, and identity provider. A service provider needs the authentication from the identity provider to grant authorization to the user. An identity provider performs the authentication that the end user is who they say they are and sends that data to the service provider along with the user’s access rights for the service.  Microsoft Active Directory or Azure are common identity providers. Salesforce and other CRM solutions are usually service providers, in that they depend on an identity provider for user authentication.

1.13 Spring SAML2.0 Web SSO Authentication

The diagram below, from Wikipedia, explains how SAML works:

[Diagram: SAML 2.0 Web Browser SSO flow. Source: Wikipedia, CC BY-SA 3.0]

1. The user hits the Service Provider (SP) URL; the service provider discovers the IdP to contact for authentication
2. The service provider redirects to the corresponding IdP
3. The user hits the IdP, which identifies the user
4. The IdP redirects to the login form
5. The IdP redirects to the service provider's Assertion Consumer URL (the URL in the service provider that accepts the SAML assertion)
6. The SP initiates a redirect to the target resource
7. The browser requests the target resource
8. The service provider responds with the requested resource

1.14 Setting Up an SSO Provider

For SAML authentication to work, we need an identity provider (IdP). There are various providers, such as Active Directory, Azure, AWS, Google, Microsoft, Facebook, Onelogin, etc. Obtain the domain name and fully qualified domain name of the Active Directory server. To enable SSO on Active Directory, the following steps are typically performed:

  • Ensure that LDAP is configured on the Active Directory (AD) server.
  • From the AD Server, run ldp.
  • From the Connections menu, click Connect, and configure Server name, port, and select SSL option.
  • When the LDAP is properly configured, the external domain server details are displayed in the LDP window. Otherwise, an error message
    appears indicating that a connection cannot be made using this feature.

1.15 Adding SAML Dependencies to a Project

Here are the dependencies in Gradle:
◊ compile group: 'org.springframework.security', name: 'spring-security-core', version: "4.2.3.RELEASE"
◊ compile group: 'org.springframework.security', name: 'spring-security-web', version: "4.2.3.RELEASE"
◊ compile group: 'org.springframework.security', name: 'spring-security-config', version: "4.2.3.RELEASE"
◊ compile group: 'org.springframework.security.extensions', name: 'spring-security-saml2-core', version: "1.0.2.RELEASE"

1.16 Dealing with the State

Microservices are stateless to achieve scalability and high availability, but you still need to keep state in order to maintain your position in the client-server conversation and to reduce the chattiness of that conversation by minimizing client-server round trips. State is maintained either within a client-server session or within a cross-session conversation. State may not need to be maintained beyond the established session duration and can be expired.

1.17 How Can I Maintain State?

You have two options. One is to maintain state on the service's side. You can use a caching solution or a durable store; here, you may want to configure a TTL for the session / state to be expired (e.g. for abandoned or timed-out sessions). The other option is to have the client send its state as part of the request, e.g. in cookies, custom HTTP headers, the request URL (query strings), or the payload.

1.18 SAML vs. OAuth2

OAuth is a slightly newer standard, co-developed by Google and Twitter to enable streamlined internet logins. OAuth uses a similar methodology to SAML to share login information. SAML provides more control to enterprises to keep their SSO logins secure, whereas OAuth is better on mobile and uses JSON. Facebook and Google are two OAuth providers that you might use to log into other internet sites.

1.19 OAuth2 Overview

OAuth is an authorization method to provide access to resources over the HTTP protocol. It can be used for authorization of various applications or manual user access. It is commonly used as a way for internet users to grant websites or applications access to their information on other websites without giving them the passwords. This mechanism is used by companies such as Google, Facebook, Microsoft, Twitter, and DropBox to permit users to share information about their accounts with third-party applications or websites. It allows an application to obtain an access token. The access token represents a user's permission for the client to access their data, and is used to authenticate requests to an API endpoint.

1.20 OAuth – Facebook Sample Flow

Although the diagram below is for Facebook, the flow is similar for any other provider.

1.21 OAuth Versions

There are two versions of OAuth authorization: OAuth 1 (HMAC-SHA signature strings) and OAuth 2 (tokens over HTTPS). OAuth2 is not backwards-compatible with OAuth 1.0. OAuth2 provides specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.

1.22 OAuth2 Components

The resource server is the API server that contains the resources to be accessed. The authorization server provides access tokens; it can be the same as the API server. The resource owner, i.e. the user, grants permission for their resources to be accessed. The client / consumer is an application using the credentials.

1.23 OAuth2 – End Points

The token endpoint is used by clients to get an access token from the authorization server; optionally, it can also refresh the token.

1.24 OAuth2 – Tokens

There are two token types involved in OAuth2 authentication. The Access Token is used for authentication and authorization to get access to the resources from the resource server. The Refresh Token is sent together with the access token; it is used to get a new access token when the old one expires. This allows a short expiration time for access tokens to the resource server and a long expiration time for access to the authorization server. Access tokens also have a type, which defines how they are constructed. Bearer Tokens rely on HTTPS security; the request is not signed or encrypted, and possession of the bearer token is considered authentication. MAC Tokens are more secure than bearer tokens; they are similar to signatures, in that they provide a way to have partial cryptographic verification of the request.

1.25 OAuth – Grants

Methods to get access tokens from the authorization server are called grants. The same method used to request a token is also used by the resource server to validate a token. There are 4 basic grant types:

  • Authorization Code – When the resource owner allows access, an authorization code is sent to the client via a browser redirect, and the authorization code is then used in the background to get an access token. Optionally, a refresh token is also sent. This grant flow is used when the client is a third-party server or web application which performs the access to the protected resource.
  • Implicit – Similar to the authorization code, but instead of using the code as an intermediary, the access token is sent directly through a browser redirect. This grant flow is used when the user agent will access the protected resource directly, such as in a rich web application or a mobile app.
  • Resource Owner Credentials – The password / resource owner credentials grant uses the resource owner's password, authenticated by the authorization server, to obtain the access token. Optionally, a refresh token is also sent.
  • Client Credentials – The client's credentials are used instead of the resource owner's. The access token is associated either with the client itself or with authorization delegated from a resource owner. This grant flow is used when the client is requesting access to protected resources under its control.

1.26 Authenticating Against an OAuth2 API

Most OAuth2 services use the /oauth/token URI endpoint for handling all OAuth2 requests. The first step in authenticating against an OAuth2 protected API service is exchanging your API key for an Access Token.

 It can be done by performing these steps:

  • Create a POST request
  • Supply grant_type=client_credentials in the body of the request

Let’s say the API key has two components

  • ID:xxx
  • Secret: yyy

cURL could be used to get an Access Token like this:
curl --user xxx:yyy --data grant_type=client_credentials -X POST https://api.someapi.com/oauth/token
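If the credentials are accepted, the authorization server replies with a JSON document. The shape below follows the OAuth2 specification (RFC 6749); the token value is illustrative:
{
  "access_token": "2YotnFZFEjr1zCsicMWpAA",
  "token_type": "bearer",
  "expires_in": 3600
}
The access_token value is then sent on subsequent API requests, typically in the Authorization header.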

1.27 OAuth2 using Spring Boot – Dependencies

Gradle dependencies:

compile "org.springframework.boot:spring-boot-starter-security:*"
compile "org.springframework.security.oauth.boot:spring-security-oauth2-autoconfigure:2.0.0.RELEASE"

1.28 OAuth2 using Spring Boot – application.yml

  •  src/main/resources/application.yml requires security configuration
  • Note: This example uses the Facebook provider.

security:
  oauth2:
    client:
      clientId: 233668646673605
      clientSecret: 33b17e044ee6a4fa383f46ec6e28ea1d
      accessTokenUri: https://graph.facebook.com/oauth/access_token
      userAuthorizationUri: https://www.facebook.com/dialog/oauth
      tokenName: oauth_token
      authenticationScheme: query
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://graph.facebook.com/me

1.29 OAuth2 using Spring Boot – Main Class

@SpringBootApplication
@EnableOAuth2Sso
@RestController
public class DemoApplication extends WebSecurityConfigurerAdapter {

    @RequestMapping("/user")
    public Principal user(Principal principal) {
        return principal;
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .antMatcher("/**")
            .authorizeRequests()
            .antMatchers("/", "/login**", "/webjars/**")
            .permitAll()
            .anyRequest()
            .authenticated()
            .and().logout().logoutSuccessUrl("/").permitAll()
            .and().csrf()
            .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
    }
}

1.30 OAuth2 using Spring Boot – SPA Client

The sample code below uses AngularJS, but you can apply similar concepts with or without a client-side framework:
angular.module("app", []).controller("home", function($http, $location) {
    var self = this;
    self.logout = function() {
        $http.post('/logout', {}).success(function() {
            self.authenticated = false;
            $location.path("/");
        }).error(function(data) {
            console.log("Logout failed");
            self.authenticated = false;
        });
    };
    $http.get("/user").success(function(data) {
        self.user = data.userAuthentication.details.name;
        self.authenticated = true;
    }).error(function() {
        self.user = "N/A";
        self.authenticated = false;
    });
});

1.31 JSON Web Tokens

JSON Web Tokens (JWT) are a replacement for standard/traditional API keys. They are an open standard. They allow fine-grained access control via "claims". A claim is any data a client "claims" to be true; it typically includes who issued the request and when it was issued. JSON Web Tokens are cross-domain capable (cookies are not), compact (compared with XML-based security), encoded (URL-safe), and signed (to prevent tampering). OAuth and JWT are not the same thing: JWT is a specific format for a security access token, while OAuth is a broader security framework governing the interaction of different actors (end users, back-end APIs, authorization servers) for the generation and distribution of security access tokens.

1.32 JSON Web Token Architecture

There are three sections in a JSON Web Token: Header, Payload, and Signature. The Header and Payload are base64-encoded. The Signature is calculated from the encoded header and payload. The sections are separated by periods.

1.33 How JWT Works

JWT works as a request-response exchange between a client and a server.

The browser requests JWT-encoded data. The server generates the signed token and returns it to the client. The token can then be sent with every subsequent HTTP request that needs authentication on the server. The server validates the token and, if it is valid, returns the secure resource to the client.

1.34 JWT Header

The header declares the signature algorithm and token type:
{
  "typ": "JWT",
  "alg": "HS256"
}
The algorithm shown here (HMAC SHA-256) will be used to create the signature. The type "JWT" stands for JSON Web Token. When base64-encoded it looks like this:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

1.35 JWT Payload

The payload contains "claims". Claims come in several types: Registered, Public, and Private. Examples of Registered claims include:

  • iss: identifies who issued the token
  • sub: the subject of the token
  • exp: the token expiration time

Public claims use URI schemes to prevent name collisions, e.g. https://corpname.com/jwt_claims/is_user. Private claims are used inside organizations; they can use simple naming conventions, e.g. "department".

1.36 JWT Example Payload

Example:
{
  "iss": "corpname.com",
  "aud": "corpname.com/rest/product",
  "sub": "jdoe",
  "Email": "jdoe@corpname.com"
}
 After base64 encoding:
eyJpc3MiOiJjb3JwbmFtZS5jb20iLCJhdWQiOiJjb3JwbmFtZS5jb20vcmV
zdC9wcm9kdWN0Iiwic3ViIjoiamRvZSIsIkVtYWlsIjoiamRvZUBjb3Jwbm
FtZS5jb20ifQ

1.37 JWT Example Signature

The signature is created from the header and body like this (HMAC also requires a secret key shared with the issuer):
content = base64UrlEncode(header)
          + "."
          + base64UrlEncode(payload);
signature = HMACSHA256(content, secret);
Completed signature:
pEonrJLKkpSvAMk5dmBYoxP5hZ0ZhKcnkLJYNNlVxipSoZbCnDrhSq8Psda
5dPqyjnLasPY7pyxoRKx99HAVu8L9hwdO_h9GZ6K443Xvb6uDSMsyvqQp8v
65Rv0SjUenWQRK7INyZ2N8rkHdEaMOOiOPFp7yHLUo8Tq_AM2Q
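To make the computation concrete, here is a minimal Java sketch of HS256 signing using only the standard library; the secret and claim values are illustrative:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSignDemo {
    public static void main(String[] args) throws Exception {
        String header = "{\"typ\":\"JWT\",\"alg\":\"HS256\"}";
        String payload = "{\"iss\":\"corpname.com\",\"sub\":\"jdoe\"}";
        // JWT uses URL-safe base64 without padding
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String content = enc.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        // sign the encoded header.payload with an HMAC-SHA256 secret key
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("my-256-bit-secret".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = enc.encodeToString(mac.doFinal(content.getBytes(StandardCharsets.UTF_8)));
        System.out.println(content + "." + signature); // the complete JWT
    }
}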

1.38 How JWT Tokens are Used

The client requests a token by sending its credentials to the authentication server. The server returns a JWT. The client adds the token to each HTTP request via the Authorization header. A JWT can be cached on the browser and returned on every request to the server, giving the user access to resources without re-authenticating on every request. The downside is that the user will have access for the duration of the token, unless there is a blacklist that each service checks against. The client sends the request; the API receives the request, reads the JWT from the Authorization header, unpacks the payload, checks the claims, and allows or denies access.

1.39 Adding JWT to HTTP Header

After obtaining a JWT, the client adds it to each HTTP request as an HTTP header:
◊ Header name: Authorization
◊ Type: Bearer
Example:
Authorization: Bearer
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJjb3JwbmFtZS
5jb20iLCJhdWQiOiJjb3JwbmFtZS5jb20vcmVzdC9wcm9kdWN0Iiwic3ViI
joiamRvZSIsIkVtYWlsIjoiamRvZUBjb3JwbmFtZS5jb20ifQ.pEonrJLKk
pSvAMk5dmBYoxP5hZ0ZhKcnkLJYNNlVxipSoZbCnDrhSq8Psda5dPqyjnLa
sPY7pyxoRKx99HAVu8L9hwdO_h9GZ6K443Xvb6uDSMsyvqQp8v65Rv0SjUe
nWQRK7INyZ2N8rkHdEaMOOiOPFp7yHLUo8Tq_AM2Q

1.40 How The Server Makes Use of JWT Tokens

The RESTful web service needs to validate JWT tokens when it receives requests.
Process:
◊ Unpack the token
◊ Validate that the signature matches the header and payload
◊ Validate the claims (e.g. has the token expired?)
◊ Compare the scopes
◊ If required, make a call to an ACL (access control list) server
◊ Grant or deny access
This process can be coded into JEE servlet filters or added directly to the web service code.

1.41 What are “Scopes”?

The payload area of a JSON Web Token contains a claim named "scope". The value of the "scope" field is an array.
Example:
"scope": [ "app.feature" ]
"scope": [ "HR.review" ]

Technically, scope strings can include any text. In practice, scope strings are limited to those defined by an organization and refer to specific operations on specific API endpoints.

1.42 JWT with Spring Boot – Dependencies

Add the JWT dependencies:
compile "org.springframework.boot:spring-boot-starter-security:*"
compile "io.jsonwebtoken:jjwt:0.9.0"
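A minimal sketch of issuing and validating a token with the jjwt API pulled in above; the secret and claim values are illustrative:
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.nio.charset.StandardCharsets;
import java.util.Date;

public class JjwtDemo {
    public static void main(String[] args) {
        byte[] secret = "my-shared-secret".getBytes(StandardCharsets.UTF_8); // hypothetical signing key
        // issue a token valid for one hour
        String token = Jwts.builder()
                .setIssuer("corpname.com")
                .setSubject("jdoe")
                .setExpiration(new Date(System.currentTimeMillis() + 3_600_000))
                .signWith(SignatureAlgorithm.HS256, secret)
                .compact();
        // validate the signature and unpack the claims
        Claims claims = Jwts.parser()
                .setSigningKey(secret)
                .parseClaimsJws(token)
                .getBody();
        System.out.println(claims.getSubject()); // jdoe
    }
}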

1.43 JWT with Spring Boot – Main Class

@EnableWebSecurity
public class SecurityTokenConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf().disable()
            // Add a filter to validate the tokens with every request
            .addFilterAfter(new JwtTokenAuthenticationFilter(jwtConfig),
                            UsernamePasswordAuthenticationFilter.class)
            // authorization requests config
            .authorizeRequests()
            // allow all who are accessing the "auth" service
            .antMatchers(HttpMethod.POST, jwtConfig.getUri()).permitAll()
            // must be an admin if trying to access the admin area
            // (authentication is also required here)
            .antMatchers("/gallery" + "/admin/**").hasRole("ADMIN")
            // any other request must be authenticated
            .anyRequest().authenticated();
    }
}

1.44 Summary

  • Spring Security has many features that simplify securing web applications.
  • Making use of many of these features only requires configuration in a Spring configuration file.
  • Spring Security can work with many different sources of user and permission information.

Twelve-factor Applications: 12 Best Practices for Microservices

This tutorial is adapted from Web Age course  Technical Introduction to Microservices.

1.1 Twelve-factor Applications


1.2 Twelve Factors, Microservices, and App Modernization

Heroku, a platform as a service (PaaS) provider, established general principles for creating useful web apps, known as the Twelve-Factor App. Applying the twelve factors to microservices requires modification of the original PaaS definitions. The goal of combining microservices, the twelve-factor app, and app modernization is a general-purpose reference architecture enabling continuous delivery.

1.3 The Twelve Factors

  1. Codebase – One codebase tracked in revision control, many deploys
  2. Dependencies – Explicitly declare and isolate dependencies
  3. Config – Store config in the environment
  4. Backing services – Treat backing services as attached resources
  5. Build, release, run – Strictly separate build and run stages
  6. Processes – Execute the app as one or more stateless processes
  7. Port binding – Export services via port binding
  8. Concurrency – Scale out via the process model
  9. Disposability – Maximize robustness with fast startup and graceful shutdown
  10. Dev/prod parity – Keep development, staging, and production as similar as possible
  11. Logs – Treat logs as event streams
  12. Admin processes – Run admin/management tasks as one-off processes

1.4 Categorizing the 12 Factors

Code
  • Codebase
  • Build, Release, Run
  • Dev/prod parity
Deploy
  • Dependencies
  • Config
  • Processes
  • Backing services
  • Port Binding
Operate
  • Concurrency
  • Disposability
  • Logs
  • Admin Processes

1.5 12-Factor Microservice Codebase

The Twelve-Factor App recommends one codebase per app. In a microservices architecture, the correct approach is one codebase per service. This codebase should be in version control, either distributed, e.g. git, or centralized, e.g. SVN.

1.6 12-Factor Microservice Dependencies

As suggested in The Twelve-Factor App, regardless of what platform your application is running on, use the dependency manager included with your language or framework. Do not assume that the tool, library, or application your code depends on will be there. How you install operating system or platform dependencies depends on the platform: in non-containerized environments, use a configuration management tool (Chef, Puppet, Salt, Ansible) to install system dependencies; in a containerized environment, do this in the Dockerfile.

1.7 12-Factor Microservice Config

Anything that varies between deployments can be considered configuration. All configuration data should be stored in a place separate from the code and read in by the code at runtime, e.g. when you deploy code to an environment, you copy the correct configuration files into the codebase at that time. The Twelve-Factor App guidelines recommend storing all configuration in the environment, rather than committing it to the source code repository. Use non-version-controlled .env files for local development; Docker supports loading these files at runtime. Keep all .env files in a secure storage system, such as HashiCorp Vault, to keep the files available to the development teams but not committed to Git. Use an environment variable for anything that can change at runtime, and for any secrets that should not be committed to the shared repository. Once you have deployed your application to a delivery platform, use the delivery platform's mechanism for managing environment variables.
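A sketch of the local-development pattern described above; the file contents and image name are illustrative:
# .env - kept out of version control
DATABASE_URL=postgres://db.example.com:5432/app
API_KEY=changeme
The variables can then be injected into a container at runtime:
docker run --env-file .env my-image:v1.0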

1.8 12-Factor Microservice Backing Services

The Twelve-Factor App guidelines define a backing service as "any service the app consumes over the network as part of its normal operation." Anything external to a service is treated as an attached resource, including other services. This ensures that every service is completely portable and loosely coupled to the other resources in the system. Strict separation also increases flexibility during development: developers only need to run the service(s) they are modifying, not others. A database, cache, queueing system, etc. should each be referenced by a simple endpoint (URL) and credentials, if necessary.

1.9 12-Factor Microservice Build, Release, Run

To support strict separation of build, release, and run stages, as recommended by The Twelve-Factor App, use a continuous integration/continuous delivery (CI/CD) tool to automate builds.  Docker images make it easy to separate the build and run stages. Ideally,  images are created from every commit and treated as deployment artifacts.

1.10 12-Factor Microservice Processes

For microservices, the application needs to be stateless. Stateless services scale horizontally by simply adding more instances. Store any stateful data, or data that needs to be shared between instances, in a backing service.

1.11 12-Factor Microservice Port Binding

The twelve-factor app is completely self-contained and does not rely on runtime injection of a web server into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port and listening for requests coming in on that port. In a local development environment, the developer visits a service URL like http://localhost:5000/ to access the service exported by their app. In deployment, a routing layer handles routing requests from a public-facing hostname to the port-bound web processes. This is typically implemented by using dependency declaration to add a web server library to the app, such as Tornado for Python, Thin for Ruby, or Jetty for Java and other JVM-based languages. This happens entirely in user space, that is, within the app's code. The contract with the execution environment is binding to a port to serve requests. Nearly any kind of server software can be run via a process binding to a port and awaiting incoming requests; examples include ejabberd (speaking XMPP) and Redis (speaking the Redis protocol). The port-binding approach also means that one app can become the backing service for another app, by providing the URL to the backing app as a resource handle in the config for the consuming app.

1.12 12-Factor Microservice Concurrency

The Unix and mainframe process models are predecessors of a true microservices architecture, allowing specialization and resource sharing for different tasks within a monolithic application. In a microservices architecture, we horizontally scale each service independently, to the extent supported by the underlying infrastructure. Docker and other container runtimes provide service concurrency.

1.13 12-Factor Microservice Disposability

Instances of a service need to be disposable so they can be started, stopped, and redeployed quickly, with no loss of data. Services deployed in Docker containers satisfy this requirement automatically, as it is an inherent feature of containers that they can be stopped and started instantly. Storing state or session data in queues or other backing services ensures that a request is handled seamlessly in the event of a container crash. Backing stores support crash-only design.

1.14 12-Factor Microservice Dev/Prod Parity

Keep all of your environments (development, staging, production, and so on) as identical as possible, to reduce the risk that bugs show up only in some environments. Containers enable you to run exactly the same execution environment all the way from local development through production. Note that differences in the underlying data can still result in runtime changes in application behavior.

1.15 12-Factor Microservice Logs

Use a log-management solution in a microservice for routing and storing logs. Define the logging strategy as part of the architecture standards, so all services generate logs in a similar fashion. The log strategy should be part of a larger Application Performance Management (APM) or Digital Performance Management (DPM) solution tied to the Everything as a Service (XaaS) model.

1.16 12-Factor Microservice Admin Processes

In a production environment, run administrative and maintenance tasks separately from the app. Containers make this very easy, as you can spin up a container just to run a task and then shut it down. Examples include doing data cleanup, running analytics for a presentation, or turning on and off features for A/B testing.

1.17 Kubernetes and the Twelve Factors – 1 Codebase

Kubernetes makes heavy use of declarative constructs. All parts of a Kubernetes application are described with text-based representations in YAML or JSON. The referenced containers are themselves described in source code as a Dockerfile.  Because everything from the image to the container deployment behavior is encapsulated in text, you are able to easily source control all the things, typically using git.

1.18 Kubernetes and the Twelve Factors – 2 Dependencies

A microservice is only as reliable as its most unreliable dependency. Kubernetes includes readinessProbes and livenessProbes that enable you to do ongoing dependency checking. The readinessProbe allows you to validate whether you have backing services that are healthy and you’re able to accept requests.  The livenessProbe allows you to confirm that your microservice is healthy on its own.  If either probe fails over a given window of time and threshold attempts, the Pod will be restarted.
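A minimal sketch of both probes on a container; the paths, ports, and timings are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: greeting
spec:
  containers:
  - name: greeting
    image: my-image:v1.0
    readinessProbe:          # can we accept traffic (backing services healthy)?
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # is the process itself still healthy?
      httpGet:
        path: /health/live
        port: 8080
      failureThreshold: 3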

1.19 Kubernetes and the Twelve Factors – 3 Config

The Config factor requires storing configuration in your process environment (e.g. ENV VARs). Kubernetes provides ConfigMaps and Secrets that can be managed in source repositories. Secrets should never be source-controlled without an additional layer of encryption. Containers can retrieve the config details at runtime.
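A sketch of a ConfigMap and a container consuming it as environment variables; the names and values are illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  GREETING_LANG: "en"
---
apiVersion: v1
kind: Pod
metadata:
  name: greeting
spec:
  containers:
  - name: greeting
    image: my-image:v1.0
    envFrom:
    - configMapRef:
        name: app-config   # every key becomes an ENV VAR in the container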

1.20 Kubernetes and the Twelve Factors – 4 Backing Services

When you have network dependencies, treat each dependency as a "backing service". At any time, a backing service could be attached or detached, and your microservice must be able to respond appropriately. For example, if your application interacts with a database server, you should isolate all interaction with that server behind connection details (either dynamic service discovery or Config in a Kubernetes Secret). Then consider whether your network requests implement fault tolerance, such that if the backing service fails at runtime, your microservice does not trigger a cascading failure. That service may be running in a separate container or somewhere off-cluster; your microservice should not care, as all interactions with the database occur through APIs.

1.21 Kubernetes and the Twelve Factors – 5 Build, Release, Run

Once you commit the code, a build occurs and the container image is built and published to an image registry. If you’re using Helm, your Kubernetes application may also be packaged and published into a Helm registry as well. These “releases” are then re-used and deployed across multiple environments to ensure that an unexpected change is not introduced somewhere in the process (by re-building the binary or image for each environment).

1.22 Kubernetes and the Twelve Factors – 6 Processes

In Kubernetes, a container image runs as a container process within a Pod. Kubernetes (and containers in general) provide a facade that isolates the container process from other containers running on the same host. Using a process model enables easier management for scaling and failure recovery (e.g. restarts). Typically, the process should be stateless to support scaling the workload out through replication. For any state used by the application, you should use a persistent data store that all instances of your application process discover via your Config. In Kubernetes-based applications, where multiple copies of a pod are running, requests can go to any pod, hence the microservice cannot assume sticky sessions.

1.23 Kubernetes and the Twelve Factors – 7 Port Binding

You can use Kubernetes Service objects to declare the network endpoints of your microservices and to resolve the network endpoints of other services in the cluster or off-cluster. Without containers, whenever you deployed a new service (or new version), you would have to perform some amount of collision avoidance for ports that are already in use on each host.  Container isolation allows you to run every process (including multiple versions of the same microservice) on the same port (by using network namespaces in the Linux kernel) on a single host.

1.24 Kubernetes and the Twelve Factors – 8 Concurrency

Kubernetes allows you to scale the stateless application at runtime with various kinds of lifecycle controllers. The desired number of replicas are defined in the declarative model and can be changed at runtime. Kubernetes defines many lifecycle controllers for concurrency including ReplicationControllers, ReplicaSets, Deployments, StatefulSets, Jobs, and DaemonSets. Kubernetes supports autoscaling based on compute resource thresholds around CPU and memory or other external metrics. The Horizontal Pod Autoscaler (HPA) allows you to automatically scale the number of pods within a Deployment or ReplicaSet.
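For example, CPU-based autoscaling can be attached to an existing Deployment with a single command; the deployment name and thresholds are illustrative:
# keep average CPU at or below 80%, with 2 to 10 replicas
kubectl autoscale deployment greeting --min=2 --max=10 --cpu-percent=80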

1.25 Kubernetes and the Twelve Factors – 9 Disposability

Within Kubernetes, you focus on the simple unit of deployment, the Pod, which can be created and destroyed as needed; no single Pod is all that valuable. When you achieve disposability, you can start up fast, and the microservices can die at any time with no impact on user experience. With livenessProbes and readinessProbes, Kubernetes will actually destroy Pods that are not healthy over a given window of time.

1.26 Kubernetes and the Twelve Factors – 10 Dev/Prod Parity

Containers (and to a large extent Kubernetes) standardize how you deliver your application and its running dependencies, meaning that you’re able to deploy everything the same way everywhere. For example, if you’re using MySQL in a highly available configuration in production, you can deploy the same architecture of MySQL in your dev cluster. By establishing parity of production architectures in earlier dev environments, you can typically avoid unforeseen differences that are important to how the application runs (or more importantly how it fails).

1.27 Kubernetes and the Twelve Factors – 11 Logs

For containers, you will typically write all logs to stdout and stderr file descriptors. The important design point is that a container should not attempt to manage internal files for log output, but instead delegate to the container orchestration system around it to collect logs and handle analysis and archival. Often in Kubernetes, you’ll configure Log collection as one of the common services to manage Kubernetes. For example, you can enable an Elasticsearch-Logstash-Kibana (ELK) stack within the cluster.

1.28 Kubernetes and the Twelve Factors – 12 Admin Processes

Within Kubernetes, the Job controller allows you to create Pods that are run once or on a schedule to perform various activities. A Job might implement business logic, but because Kubernetes mounts API tokens into the Pod, you can also use them for interacting with the Kubernetes orchestrator as well. By isolating these kinds of administrative tasks, you can further simplify the behavior of your microservice.
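A minimal Job sketch for a one-off administrative task; the image and command are illustrative:
apiVersion: batch/v1
kind: Job
metadata:
  name: data-cleanup
spec:
  template:
    spec:
      containers:
      - name: cleanup
        image: my-image:v1.0
        command: ["java", "-jar", "/root/cleanup.jar"]
      restartPolicy: Never   # run once; do not restart on success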

1.29 Summary

The twelve-factor methodology can be applied to apps written in any programming language and using any combination of backing services (database, queue, memory cache, etc.). It is highly useful when creating microservices-architecture-based applications.