How to Implement Spring Data REST?

This tutorial is adapted from the Web Age course Spring Boot Training.

In this tutorial, you will add Spring Data REST support to an existing application. The existing application is a simple Spring Boot application with the Spring Boot web, JPA, and H2 packages added to it. The application persists data in an H2 database; you will expose that data as REST APIs using Spring Data REST. Spring Data REST serves data in JSON format, structured according to HAL (Hypertext Application Language). HAL is flexible and offers a convenient way to provide links to the data that is served.
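For example, once the service is running, a single customer might be served like this (an illustrative sketch of a HAL response, not yet part of the project):

{
  "name" : "Bobby",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/customers/1001"
    }
  }
}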

Part 1 – Set-Up

In this part, you will set up a project in Eclipse that you will use to implement Spring Data REST.

   1. Download the spring-data-rest-starter.zip file to C:\LabFiles\, then extract C:\LabFiles\spring-data-rest-starter.zip to the C:\Workspace\ folder. This will create the C:\Workspace\spring-data-rest-one-to-many folder.

(Note: Ensure the archive is not extracted into a folder named after the zip file; if it is, adjust the folder path as above.)
2. Your folder should look like this:

3. In Eclipse, click File | Import.
4. Select Maven | Existing Maven Projects.
5. Click Next.
6. Click Browse.
7. Select the C:\Workspace\spring-data-rest-one-to-many folder.

8. Click Finish.
9. In Project Explorer, expand spring-data-rest-one-to-many, right-click pom.xml, and click Maven | Update Project.
10. Click OK.

Part 2 – Understanding the Starter Project

In this part, you will explore the existing project before you implement Spring Data REST.
1. In Project Explorer, open pom.xml
Notice the following packages are already added to the project.
org.springframework.boot:spring-boot-starter-web
org.springframework.boot:spring-boot-devtools
com.h2database:h2
org.springframework.boot:spring-boot-maven-plugin is also configured in pom.xml.

2. In Project Explorer, expand src/main/java.
3. Expand com.webagesolutions.springdatarest.domain.
4. Open Customer.java.
Notice there is a Customer class with Id and Name properties. The class isn’t configured with annotations to act as an entity. You will do that later in this tutorial.

5. Open Order.java.
Notice there is an Order class with Id, OrderDate, and Amount properties. The class isn’t configured with annotations to act as an entity. You will do that later in this tutorial. Also, there is no relationship between the Customer and Order classes at this point. You will configure the one-to-many relationship between the classes later in the tutorial.

6. In Project Explorer, open src/main/resources/application.properties.
Notice it contains the following configuration for the in-memory H2 database.
spring.h2.console.enabled=true
spring.datasource.url=jdbc:h2:mem:testdb
spring.data.jpa.repositories.bootstrap-mode=default
spring.jpa.defer-datasource-initialization=true

7. Open src/main/resources/data.sql.
Notice it contains the following SQL that adds sample data into the customers table of the H2 database. SQL to insert data into the orders is currently commented out. You will uncomment and test it later in this tutorial.
insert into customers
values(1001, 'Bobby');

insert into customers
values(1002, 'Drew');

--insert into orders
--values (1, 99.95, '2010-10-01', 1001);
--insert into orders
--values (2, 149.95, '2015-05-03', 1002);
--insert into orders
--values (3, 49.95, '2018-08-07', 1002);
--insert into orders
--values (4, 19.95, '2021-01-02', 1001);
8. Close all open files.

Part 3 – Modify the application to create the Customer Table in H2

In this part, you will add annotations to the Customer class so a Customer table gets created in the H2 database.
1. Open Customer.java.
2. Add the annotation to the Customer class as shown in bold below:
@Entity(name = "customers")
public class Customer {
3. Add annotations to the id field as shown in bold below:
@Id
@GeneratedValue
private Long id;
4. Press Ctrl+Shift+O to resolve imports. Select javax.persistence for all imports.
5. Save the file.
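After these steps, the class should look roughly like this (a sketch; the name field and its accessors come from the starter project and are abbreviated here):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity(name = "customers")
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // getters and setters omitted
}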

6. In Project Explorer, expand com.webagesolutions.springdatarest
7. Right-click SpringDataRestApplication.java, and select Run As | Java Application.
8. Open a browser window and navigate to the following URL to view H2 Console:
http://localhost:8080/h2-console
Notice the following options show up on the page.

9. In JDBC URL:, enter the following value:
jdbc:h2:mem:testdb
10. Keep the remaining options as-is and click Connect.
Notice the following page shows up.

Notice there is the CUSTOMERS table in H2. How did the Customer table get generated? It’s because of Spring Boot Auto Configuration.
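Under the hood, when Spring Boot detects an embedded database such as H2, it defaults the JPA schema-generation property to create-drop, so annotated entity classes are turned into tables at startup. You could make this behavior explicit in application.properties:

spring.jpa.hibernate.ddl-auto=create-drop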

11. In the pane on the left-hand side, click CUSTOMERS.
It should automatically write the following query for you:
SELECT * FROM CUSTOMERS

12. Click Run.
Notice the following result shows up on the page.

At this point, you have a working application that generates the Customers table in H2 using JPA.

13. Back in Eclipse, click the red button to Terminate the App.

14. Click the XX button to Remove All Terminated Launches.

Part 4 – Using Spring Data REST

In this part, you will modify the application so the Customers table can be exposed as REST API using Spring Data REST.
1. In Project Explorer, open pom.xml.
2. In the dependencies section, add the following Spring Data REST dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>
3. Save the file.
4. Right-click src/main/java and click New | Package.
5. In Name, type com.webagesolutions.springdatarest.dataservice
6. Click Finish.
Note: If the package shows up as a nested package, right-click the project and click refresh.

7. Right-click the newly created com.webagesolutions.springdatarest.dataservice package and click New | Interface.
8. In Name, enter CustomerRepository
9. Click the Add button next to Extended interfaces.
10. In choose interfaces, type PagingAndSortingRepository.

11. Click OK.
12. Click Finish.
13. Modify the interface as shown in bold below:
public interface CustomerRepository extends PagingAndSortingRepository<Customer, Long> {
14. Press Ctrl+Shift+O to organize imports.
15. Add the annotation to the interface:
@RepositoryRestResource(path = "customers", collectionResourceRel = "customers")
Note: the path becomes part of the endpoint URL, e.g. http://localhost:8080/customers.
collectionResourceRel = "customers" means the JSON produced by the REST API will be enclosed in a collection named customers. You can customize it if you want.
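The finished interface should look roughly like this:

package com.webagesolutions.springdatarest.dataservice;

import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

import com.webagesolutions.springdatarest.domain.Customer;

@RepositoryRestResource(path = "customers", collectionResourceRel = "customers")
public interface CustomerRepository extends PagingAndSortingRepository<Customer, Long> {
}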

16. Press Ctrl+Shift+O to organize imports.
17. Save the file.

18. Right-click SpringDataRestApplication.java, and select Run As | Java Application.
19. Open a new tab in the browser window and navigate to the following URL:
http://localhost:8080
Notice the output looks like this:
{
  "_links" : {
    "customers" : {
      "href" : "http://localhost:8080/customers{?page,size,sort}",
      "templated" : true
    },
    "profile" : {
      "href" : "http://localhost:8080/profile"
    }
  }
}
20. Spring Data REST has made our Customer entity available as a REST service endpoint.
21. In the browser, navigate to the following URL:
http://localhost:8080/customers
Notice all your customers are displayed in JSON format.

Part 5 – Test Spring Data REST with Postman

By default, Spring Data REST supports all REST operations. In this part, you will perform GET, POST, PUT, and DELETE operations using Postman (equivalent curl commands are sketched at the end of this part).
1. Open Postman. If it’s not installed on the machine, download it for free.
2. Click the + to open a new tab.
3. Use the GET method with the following URL and click Send:
http://localhost:8080/customers

Notice the output matches what you saw on the browser.

4. Switch the method to POST. Do NOT press the Send button yet.
5. Click Body.
6. Switch the drop-down values to raw | JSON.

7. Enter the following JSON.
{ "name": "Riley" }
8. Click Send.
Notice a status code of 201 Created is displayed. You can also see the newly created record’s URL.

You can also verify on http://localhost:8080/h2-console that the record got inserted into the H2 database.

9. Switch the method to GET and click Send.
Notice the newly inserted record shows up in the customer list.
10. Change the method to PUT.
11. Change the URL to:
http://localhost:8080/customers/1
Notice you have added the customer Id to the URL.

12. In Body, change JSON as shown in bold below:
{ "name": "Morgan" }
13. Click Send.
Notice you get a 200 OK Status.

14. Change the method to GET, URL to http://localhost:8080/customers, and click Send.
15. Verify that the record has been updated.
16. Change the method to DELETE, URL to http://localhost:8080/customers/1, and click Send.
17. Verify using the GET method and on the H2 Console that the record got deleted.
18. Back in Eclipse, click the red button to Terminate the App.

19. Click the XX button to Remove All Terminated Launches.
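As an aside, the same four operations can be exercised from the command line with curl while the application is running (a sketch for a Unix-style shell such as Git Bash; the record id depends on your data):

curl http://localhost:8080/customers
curl -X POST -H 'Content-Type: application/json' -d '{ "name": "Riley" }' http://localhost:8080/customers
curl -X PUT -H 'Content-Type: application/json' -d '{ "name": "Morgan" }' http://localhost:8080/customers/1
curl -X DELETE http://localhost:8080/customers/1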

Part 6 – Using Spring Data REST HAL Explorer

In this part, you will modify the application to add HAL Explorer to it.
1. In Project Explorer, open pom.xml.
2. Add the following dependency to the dependencies section:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-rest-hal-explorer</artifactId>
</dependency>

3. Save the file.
4. Right-click SpringDataRestApplication.java and click Run As | Java Application.
5. In the browser window, navigate to the following URL:
http://localhost:8080
Notice the Spring Data REST HAL Explorer shows up like this:

6. Click the HTTP Request button next to customers.
7. Set the size to 1.
8. Click Go.
Notice just the first record is displayed in Response Body.

Also, notice the URL looks like this:
http://localhost:8080/customers?size=1
Likewise, you can set page and sort query string values.
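For example (hypothetical values):

http://localhost:8080/customers?page=0&size=2&sort=name,desc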

Part 7 – Using One-to-Many Relationship

In this part, you will implement the one-to-many relationship between Customer and Order entities. You will also test it using HAL Explorer.
1. In Project Explorer, open Order.java.
2. Add the annotation to the Order class as shown in bold below:
@Entity(name = "orders")
public class Order {
3. Add annotations to the id field as shown in bold below:
@Id
@GeneratedValue
private Long id;
4. Press Ctrl+Shift+O to resolve imports. Select javax.persistence for all imports.
Take a moment to study the customer field and its setter/getter methods. It will be used to back-link an order to the customer who placed it.

5. Add the following annotation to the customer field as shown below in bold:
@ManyToOne(fetch=FetchType.EAGER)
private Customer customer;
Notice you are using eager loading, instead of lazy loading.

6. Press Ctrl+Shift+O to resolve imports.
7. Save the file.

Next, let’s link the Customer class to the Order class.
8. Open Customer.java.
9. Locate the comment // TODO: add code here and add the orders property as shown in bold below:
private List<Order> orders;

public List<Order> getOrders() {
    return orders;
}

public void setOrders(List<Order> orders) {
    this.orders = orders;
}
10. Press Ctrl+Shift+O to organize imports. Select java.util.List.
11. Link the Customer class to the Order class by adding the annotation to the orders field as shown below in bold:
@OneToMany(mappedBy = "customer", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
private List<Order> orders;
Ensure the mappedBy attribute matches the property name in the Order class’s backlink reference.

12. Press Ctrl+Shift+O to organize imports. Select javax.persistence.CascadeType
13. Save the file.
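At this point, the two sides of the relationship look roughly like this (a sketch; other fields and accessors omitted):

// Order.java: the "many" side holds the back-link to Customer
@Entity(name = "orders")
public class Order {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.EAGER)
    private Customer customer;

    // orderDate, amount, and accessors omitted
}

// Customer.java: the "one" side maps its collection through the back-link
@Entity(name = "customers")
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "customer", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private List<Order> orders;

    // name and accessors omitted
}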

Next, you will create a Spring Data REST service for the Order entity.
14. Right-click the newly created com.webagesolutions.springdatarest.dataservice package and click New | Interface.
15. In Name, enter OrderRepository
16. Click the Add button next to Extended interfaces.
17. In choose interfaces, type PagingAndSortingRepository.
18. Click OK.
19. Click Finish.
20. Modify the interface as shown in bold below:
public interface OrderRepository extends PagingAndSortingRepository<Order, Long> {
21. Press Ctrl+Shift+O to organize imports. Select com.webagesolutions.springdatarest.domain.Order and click Finish.
22. Add the annotation to the interface as shown below:
@RepositoryRestResource(collectionResourceRel = "orders", path = "orders")
public interface OrderRepository extends PagingAndSortingRepository<Order, Long> {
Note: the path becomes part of the endpoint URL, e.g. http://localhost:8080/orders.
collectionResourceRel = "orders" means the JSON produced by the REST API will be enclosed in a collection named orders. You can customize it if you want.

23. Press Ctrl+Shift+O to organize imports.
24. Save the file.
Since you are using the DevTools package, each time you save a file, the application is automatically recompiled and restarted.

25. Expand src/main/resources.
26. Open data.sql
27. Uncomment the statements that insert data into the orders table.
Your data.sql should look like this after uncommenting the code:
insert into customers
values(1001, 'Bobby');

insert into customers
values(1002, 'Drew');

insert into orders
values (1, 99.95, '2010-10-01', 1001);
insert into orders
values (2, 149.95, '2015-05-03', 1002);
insert into orders
values (3, 49.95, '2018-08-07', 1002);
insert into orders
values (4, 19.95, '2021-01-02', 1001);
28. Save the file.
29. Back in Eclipse, click the red button to Terminate the App.

30. Click the XX button to Remove All Terminated Launches.

31. Right-click SpringDataRestApplication.java and click Run As | Java Application.
32. Open a new tab in the browser window and navigate to the following URL:
http://localhost:8080
Notice it shows the following page:

33. Click the button next to customers.
34. Click Go.
Notice all customers are displayed in Response Body.

35. In the Embedded Resources section, click the second customer.

Notice Drew is displayed.

36. Click orders to see orders placed by Drew.

37. Notice Response Body shows only orders placed by Drew.
38. In HAL Explorer’s URL field, enter the following URL:
http://localhost:8080/orders/1/customer

39. Click Go.

Notice it shows who placed Order Id 1. It’s Bobby in this case.

Part 8 – Clean-Up

1. Back in Eclipse, click the red button to Terminate the App.

2. Click the XX button to Remove All Terminated Launches.

3. Close all open files.
4. Close the browser.
5. Close Postman.

Part 9 – Review

In this tutorial, you added Spring Data REST support to an existing application.

How to Use Resilience4j to Implement Circuit Breaker?

This course is adapted from the Web Age course Mastering Microservices with Spring Boot and Spring Cloud.

The circuit breaker is a design pattern where you stop executing some code when the previous attempt(s) have failed. For example, calling web services/REST APIs and accessing databases can fail if the backend isn’t up and running or the performance threshold isn’t met. The CircuitBreaker uses a finite state machine with three normal states:

Continue reading “How to Use Resilience4j to Implement Circuit Breaker?”

What is Docker?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Docker

Docker is an open-source (and 100% free) project for IT automation. You can view Docker as a system or a platform for creating virtual environments which are extremely lightweight virtual machines. Docker allows developers and system administrators to quickly assemble, test, and deploy applications and their dependencies inside Linux containers, supporting the multi-tenancy deployment model on a single host. Docker’s lightweight containers lend themselves to rapid scaling up and down. A container is a group of controlled processes associated with a separate tenant, executed in isolation from other tenants. Docker itself is written in the Go programming language.

1.2 Where Can I Run Docker?

Docker runs on any 64-bit Linux distribution with a modern kernel. The minimum supported kernel version is 3.10; kernels older than 3.10 lack some of the features required by Docker containers. You can install Docker on VirtualBox and run it on OS X or Windows. Docker can be installed natively on Windows using Docker Machine, but it requires Hyper-V. Docker can also be booted from the small-footprint Linux distribution boot2docker.

1.3 Installing Docker Container Engine

Installing on Linux:
Docker is usually available via the package manager of the distributions. For example, on Ubuntu and derivatives:
sudo apt-get update && sudo apt install docker.io

Installing on Mac
Download and install the official Docker.dmg from docker.com

Installing on Windows
Hyper-V must be enabled on Windows. Download the latest installer from docker.com

1.4 Docker Machine

Though Docker runs natively on Linux, it may be desirable to have two different host environments, such as Ubuntu and CentOS. To achieve this, VMs running Docker may be used. To simplify the management of different Docker hosts, it is possible to use Docker Machine. Docker Machine is a tool that lets you install Docker Engine on virtual hosts and manage those hosts with docker-machine commands. Docker Machine enables you to provision multiple remote Docker hosts on various flavors of Linux. Additionally, Machine allows you to run Docker on older Mac or Windows systems, as well as on cloud providers such as AWS, Azure, and GCP. Using the docker-machine command, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
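For example, the following commands provision and manage a VirtualBox-based Docker host (a sketch; the machine name default is arbitrary):

docker-machine create --driver virtualbox default
docker-machine ls
docker-machine env default
docker-machine ssh default
docker-machine stop default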

1.5 Docker and Containerization on Linux

Docker leverages resource isolation features of the modern Linux kernel offered by cgroups and kernel namespaces. The cgroups and kernel namespaces features allow creation of strongly isolated containers acting as very lightweight virtual machines running on a single Linux host. Docker helps abstract operating-system-level virtualization on Linux using abstracted virtualization interfaces based on libvirt, LXC (LinuX Containers), and systemd-nspawn. As of version 0.9, Docker has the capability to directly use virtualization facilities provided by the Linux kernel via its own libcontainer library.

1.6 Linux Kernel Features: cgroups and namespaces

The control group kernel feature (cgroup) is used by the Linux kernel to allocate system resources such as CPU, I/O, memory, and network subject to limits, quotas, prioritization, and other control arrangements. The kernel provides access to multiple subsystems through the cgroup interface. 
Examples of subsystems (controllers) are:
  • The memory controller for limiting memory use
  • The cpuacct controller for keeping track of CPU usage

The cgroups facility was merged into Linux kernel version 2.6.24. Systems that use cgroups include Docker, Linux Containers (LXC), Hadoop, etc. The namespaces feature is related to cgroups; it enables different applications to act as separate tenants with completely isolated views of the operating environment, including users, process trees, network interfaces, and mounted file systems.
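You can get a feel for namespaces without Docker. For example, the util-linux unshare command (assuming it is installed on your distribution) starts a shell in new PID and mount namespaces, so it sees an isolated process tree:

# run as root: new PID and mount namespaces with a private /proc
sudo unshare --pid --fork --mount-proc /bin/bash
# inside the new namespace, ps shows only this shell and itself
ps aux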

1.7 The Docker-Linux Kernel Interfaces


Source: Adapted from http://en.wikipedia.org/wiki/Docker_(software)

1.8 Docker Containers vs Traditional Virtualization

System virtualization tools or emulators, such as VirtualBox, Hyper-V, or VMware, boot virtual machines from a complete guest OS image (of your choice) and emulate a complete machine, which results in high operational overhead. Virtual environments created by Docker run on the existing operating system kernel of the host’s OS without the need for a hypervisor. This leads to very low overhead and significantly faster container startup time. Docker-provisioned containers do not include or require a separate operating system (everything runs in the host’s OS), which puts a significant limitation on your OS choices.

1.9 Docker Containers vs Traditional Virtualization

Overall, traditional virtualization has an advantage over Docker in that you have a choice of guest OSes (as long as the machine architecture is supported); with Docker, you get only a limited choice of Linux distros. You still have some choice: e.g., you can deploy a Fedora container on a Debian host. You can, however, run a Windows VM inside a Linux machine using virtual machine emulators like VirtualBox (with less engineering efficiency). With Linux containers, you can achieve a higher level of deployed application density compared with traditional VMs (10x more units!). Note that Docker runs everything through a central daemon, which is not a particularly reliable and secure processing model.

1.10 Docker Integration

Docker can be integrated with a number of IT automation tools that extend its capabilities, including Ansible, Chef, Jenkins, Puppet, and Salt. Docker is also deployed on a number of Cloud platforms like Amazon Web Services, Google Cloud Platform, Microsoft Azure, OpenStack, and Rackspace.

1.11 Docker Services

The Docker deployment model is application-centric and, in this context, provides the following services and tools:
◊ A uniform format for bundling an application along with its dependencies, which is portable across different machines.
◊ Tools for automatically assembling a container from source code: make, maven, Debian packages, RPMs, etc.
◊ Container versioning with deltas between versions.

1.12 Docker Application Container Public Repository

The Docker community maintains a repository of official and public domain Docker application images at https://hub.docker.com.


1.13 Competing Systems

  • Rocket container runtime from CoreOS (an open source lightweight Linux kernel-based operating system). 
  • LXD for Ubuntu from Canonical (the company behind Ubuntu)
  • The LXC (Linux Containers), used by Docker internally

1.14 Docker Command Line

The following commands are shown as executed by the root (privileged) user:
docker run ubuntu echo 'Yo Docker!'
This command will create a Docker container based on the ubuntu image, execute the echo command in it, and then shut the container down.
docker ps -a
This command will list all the containers created by Docker along with their IDs.

1.15  Starting, Inspecting, and Stopping Docker Containers

docker start -i <container_id>
This command will start an existing stopped container in interactive (-i) mode (you will get container’s STDIN channel)
docker inspect <container_id>
This command will provide JSON-encoded information about the running container identified by container_id
docker stop <container_id>
This command will stop the running container identified by container_id
For the Docker command-line reference, visit https://docs.docker.com/engine/reference/commandline/cli/

1.16 Docker Volume

If you destroy a container and recreate it, you will lose data. Ideally, data should not be stored in containers. Volumes are mounted file systems available to containers. Docker volumes are a good way of safely storing data outside a container, and they can be shared across multiple containers.
Creating a Docker volume
docker volume create my-volume
Mounting a volume
docker run -v my-volume:/my-mount-path -it ubuntu:12.04 /bin/bash
Viewing all volumes
docker volume ls
Deleting a volume
docker volume rm my-volume

1.17 Dockerfile

Rather than manually creating containers and saving them as custom images, it’s better to use a Dockerfile to build images.
Sample script:
# use the openjdk docker image as the base
FROM openjdk
RUN apt-get update -y
RUN apt-get install sqlite -y
# deploy the jar file to the container
COPY SimpleGreeting-1.0-SNAPSHOT.jar /root/SimpleGreeting-1.0-SNAPSHOT.jar
The Dockerfile filename is case-sensitive: the 'D' in Dockerfile has to be uppercase. Build an image using docker build (mind the space and the period at the end of the command):
docker build -t my-image:v1.0 .
Or, if you want to use a different file name:
docker build -t my-image:v1.0 -f mydockerfile.txt .
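Once built, you might run the jar that was copied into the image like this (a sketch; the path matches the COPY instruction above):

docker run --rm my-image:v1.0 java -jar /root/SimpleGreeting-1.0-SNAPSHOT.jar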

1.18 Docker Compose

A container runs a single application. However, most modern applications rely on multiple services, such as databases, monitoring, logging, message queues, etc. Managing a forest of containers individually is difficult, especially when it comes to moving the environment from development to test to production. Compose is a tool for defining and running multi-container Docker applications on the same host. A single configuration file, docker-compose.yml, is used to define a group of containers that must be managed as a single entity.

1.19 Using Docker Compose

  • Define as many Dockerfiles as necessary
  • Create a docker-compose.yml file that refers to the individual Dockerfiles

Sample docker-compose.yml:
version: '3'
services:
  greeting:
    build: .
    ports:
      - "8080:8080"
    links:
      - mongodb
  mongodb:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: wasadmin
      MONGO_INITDB_ROOT_PASSWORD: secret
    volumes:
      - my-volume:/data/db
volumes:
  my-volume: {}

1.20 Dissecting docker-compose.yml

The Docker Compose file should be named either docker-compose.yml or docker-compose.yaml. Using any other name will require the -f argument to specify the filename. The docker-compose.yml file is written in YAML (https://yaml.org/).
The first line, version, indicates the version of Docker Compose being used. As of this writing, version 3 is the latest.

1.21 Specifying services

A ‘service’ in docker-compose parlance is a container. Services are specified under the services: node of the configuration file.
You choose the name of a service. The name of the service is meaningful within the configuration. A service (container) can be specified in one of two ways: Dockerfile or image name. 
Use build: to specify the path to a Dockerfile
Use image: to specify the name of an image that is accessible to the host
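For example (hypothetical service names):

services:
  web:
    build: ./web          # built from the Dockerfile in ./web
  cache:
    image: redis:alpine   # pulled as a ready-made image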

1.22 Dependencies between containers

Some services may need to be brought up before other services. In docker-compose.yml, it is possible to specify which service relies on which using the links: node. If service C requires that services A and B be brought up first, add links as follows:
A:
  build: ./servicea
B:
  build: ./serviceb
C:
  build: ./servicec
  links:
    - A
    - B
It is possible to specify as many links as necessary. Circular links are not permitted (A links to B and B links to A).

1.23 Injecting Environment Variables

In a containerized microservices application, environment variables are often used to pass configuration to an application. It is possible to pass environment variables to a service via the docker-compose.yml file:
myservice:
  environment:
    MONGO_INITDB_ROOT_USERNAME: wasadmin
    MONGO_INITDB_ROOT_PASSWORD: secret
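Inside the container, the application reads these values like any other environment variable. A minimal Java sketch (the class name is hypothetical):

// ReadConfig.java: read the credentials injected by docker-compose
public class ReadConfig {
    public static void main(String[] args) {
        String user = System.getenv("MONGO_INITDB_ROOT_USERNAME");
        String pass = System.getenv("MONGO_INITDB_ROOT_PASSWORD");
        System.out.println("connecting as " + user);
    }
}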

1.24 runC Overview

Over the last few years, Linux has gradually gained a collection of features. Windows 10 and Windows Server 2016+ also added similar features. These individual features have esoteric names like “control groups”, “namespaces”, “seccomp”, “capabilities”, “apparmor”, and so on. Collectively, they are known as “OS containers” or sometimes “lightweight virtualization”. Docker makes heavy use of these features and has become famous for it. Because “containers” are actually an array of complicated, sometimes arcane system features, they are integrated into a unified low-level component called runC. runC is now available as a standalone tool: a lightweight, portable container runtime. It includes all of the plumbing code used by Docker to interact with system features related to containers, and it has no dependency on the rest of the Docker platform.

1.25 runC Features

  •  Full support for Linux namespaces, including user namespaces
  •  Native support for all security features available in Linux: SELinux, AppArmor, seccomp, control groups, capability drop, pivot_root, uid/gid dropping, etc. If Linux can do it, runC can do it.
  • Native support for live migration, with the help of the CRIU team at Parallels
  • Native support of Windows 10 containers is being contributed directly by Microsoft engineers
  • Planned native support for Arm, Power, Sparc with direct participation and support from Arm, Intel, Qualcomm, IBM, and the entire hardware manufacturers ecosystem.
  • Planned native support for bleeding edge hardware features – DPDK, sriov, tpm, secure enclave, etc.

1.26 Using runC

In order to use runc you must have your container in the format of an Open Container Initiative (OCI) bundle. If you have Docker installed you can use its export method to acquire a root filesystem from an existing Docker container.
# create the topmost bundle directory
mkdir /mycontainer
cd /mycontainer
# create the rootfs directory
mkdir rootfs
# export busybox via Docker into the rootfs directory
docker export $(docker create busybox) | tar -C rootfs -xvf -
After a root filesystem is populated, you just generate a spec in the format of a config.json file inside your bundle:
runc spec

1.27 Running a Container using runC

The first way is to use the convenience command run that will handle creating, starting, and deleting the container after it exits.
# run as root
cd /mycontainer
runc run mycontainerid

The second way is to implement the entire lifecycle (create, start, connect, and delete the container) manually:
# run as root
cd /mycontainer
runc create mycontainerid
# view the container is created and in the “created” state
runc list
# start the process inside the container
runc start mycontainerid
# after 5 seconds view that the container has exited and is now in the stopped state
runc list
# now delete the container
runc delete mycontainerid

1.28 Summary

  • Docker is a system for creating virtual environments which are, for all intents and purposes, lightweight virtual machines. 
  • Docker containers can only run the type of OS that matches the host’s OS. 
  • Docker containers are extremely lightweight (although not so robust and secure), allowing you to achieve a higher level of deployed application density compared with traditional VMs (10x more units!). 
  • On-demand provisioning of applications by Docker supports Platform-as-a-Service (PaaS)-style deployment and scaling.
  • runC is a container runtime which has support for various containerization solutions.