What is Kubernetes?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Kubernetes

Kubernetes is Greek for “helmsman” or “pilot”. The project was founded by Joe Beda, Brendan Burns, and Craig McLuckie; other Google engineers joined soon afterward. The original codename of Kubernetes within Google was Project Seven, a reference to a Star Trek character, and the seven spokes on the ship’s wheel in the Kubernetes logo are a nod to that codename. Kubernetes is commonly abbreviated as K8s. It is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and later donated to the Cloud Native Computing Foundation. It provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts, and it supports a range of container tools, including Docker.

1.2 What is a Container?

Over the past few years, containers have grown in popularity. Containers provide operating-system-level virtualization: a virtualization method in which the kernel of an operating system allows the existence of multiple isolated user-space instances instead of just one. Such instances are called containers. A container is a software bucket comprising everything necessary to run the software independently. A single machine can host multiple containers, and containers are completely isolated from one another as well as from the host machine. Containers are also called virtual environments (VEs) or jails (e.g. the FreeBSD jail). From the point of view of the programs running in them, containers look like real computers. Items usually bundled into a container include the application, its dependencies, libraries, binaries, and configuration files.

1.3 Container – Uses

  • OS-level virtualization is commonly used in virtual hosting environments.
  • A container is useful for packaging, shipping, and deploying any software application as a lightweight, portable, self-sufficient unit that will run virtually anywhere.
  • Containers are useful for securely allocating finite hardware resources amongst a large number of mutually-distrusting users.
  • System administrators may also use them to consolidate server hardware by moving services on separate hosts into containers on a single server.
  • Containers are useful for packaging everything an app needs and migrating that package from one VM to another, or to a server or cloud, without having to refactor the app.
  • Containers usually impose little to no overhead, because programs in virtual partitions use the OS’s normal system call interface and do not need to be emulated or run in an intermediate virtual machine.
  • Containers don’t require hardware support to perform efficiently.

1.4 Container – Pros

  • Containers are fast compared to hardware-level virtualization, since there is no need to boot up a full virtual machine. A container lets you start apps in a virtual, software-defined environment much more quickly.
  • The average container size is within the range of tens of MB while VMs can take up several gigabytes. Therefore a server can host significantly more containers than virtual machines. 
  • Running containers is less resource intensive than running VMs so you can add more computing workload onto the same servers. 
  • Provisioning containers takes only a few seconds or less; therefore, the data center can react quickly to a spike in user activity.
  • Containers can enable you to easily allocate resources to processes and to run your application in various environments. 
  • Using containers can decrease the time needed for development, testing, and deployment of applications and services. 
  • Testing and bug tracking also become less complicated, since there is no difference between running your application locally, on a test server, or in production.
  • Containers are a very cost-effective solution. They can potentially help you decrease both your operating costs (fewer servers, less staff) and your development costs (develop for one consistent runtime environment).
  • Using containers, developers can create truly portable deployments. This makes Continuous Integration / Continuous Deployment easier.
  • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment.

1.5 Container – Cons

  • Compared to traditional virtual machines, containers are less secure. Containers share the kernel and other components of the host operating system, and container processes often run with root access. This means that containers are less isolated from each other than virtual machines, and a vulnerability in the kernel can jeopardize the security of the other containers as well.
  • A container offers less flexibility in operating systems. You need to start a new server to be able to run containers with a different operating system.
  • Networking can be challenging with containers. Deploying containers in a sufficiently isolated way while maintaining an adequate network connection can be tricky.
  • Developing and testing for containers requires training, whereas writing applications for VMs, which are in effect the same as physical machines, is a straightforward transition for development teams.
  • Single VMs often run multiple applications, whereas containers promote a one-container, one-application infrastructure. This means containerization tends to lead to a higher volume of discrete units to monitor and manage.

1.6 Composition of a Container

At the core of container technology are:

  • Control Groups (cgroups)
  • Namespaces
  • Union filesystems

1.7 Control Groups

Control groups (cgroups) work by allowing the host to share and also limit the resources each process or container can consume. This is important for both resource utilization and security, as it helps prevent denial-of-service attacks on the host’s hardware resources.
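On a Linux host you can see which cgroups the current process is assigned to by inspecting /proc; a quick sketch using only standard paths (the exact output format differs between cgroup v1 and v2):

```shell
# Each line maps the current process to a cgroup hierarchy,
# e.g. "0::/user.slice/..." on a cgroup v2 system
cat /proc/self/cgroup

# Resource controllers are exposed as files and directories
# under the cgroup pseudo-filesystem
ls /sys/fs/cgroup
```

Tools like Docker create a cgroup per container under this hierarchy and write limits (CPU shares, memory caps, etc.) into these files on the container's behalf.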

1.8 Namespaces

Namespaces offer another form of isolation for process interaction within operating systems. They limit the visibility a process has of other processes, networking, filesystems, and user ID components. Container processes can see only what is in the same namespace; processes belonging to other containers or to the host are not directly accessible from within the container.
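On a Linux host you can list the namespaces a process belongs to directly from /proc; a small sketch (no container runtime required):

```shell
# Each entry under /proc/<pid>/ns is one namespace type the process belongs to
# (pid, net, mnt, uts, ipc, user, ...)
ls /proc/self/ns

# Each entry resolves to an identifier such as pid:[4026531836];
# two processes share a namespace when their symlinks resolve to the same id
readlink /proc/self/ns/pid
```

A container runtime simply starts the container's processes in fresh namespaces, so these identifiers differ from the host's.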

1.9 Union Filesystems

Containers run from an image. Much like an image in the VM or cloud world, an image represents state at a particular point in time. Container images snapshot the filesystem; the snapshot tends to be much smaller than a VM image, because the container shares the host kernel and generally runs a much smaller set of processes. The filesystem is often layered or multi-leveled; e.g. the base layer can be Ubuntu, with an application such as Apache or MySQL stacked on top of it.
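As a sketch of this layering, each instruction in a Dockerfile adds one read-only layer on top of the previous one (the image tag and file names here are illustrative):

```dockerfile
# Base layer: a minimal Ubuntu filesystem
FROM ubuntu:22.04
# New layer: Apache installed on top of the base
RUN apt-get update && apt-get install -y apache2
# New layer: the application's own files
COPY index.html /var/www/html/
```

Because layers are shared between images, ten containers built on the same Ubuntu base store that base only once.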

1.10 Popular Containerization Software

  • Docker – Docker Swarm
  • Packer
  • Kubernetes
  • Rocket (rkt)
  • Apache Mesos
  • Linux Containers (LXC)
  • CloudSang
  • Marathon
  • Nomad
  • Fleet
  • Rancher
  • Containership
  • OpenVZ
  • Oracle Solaris Containers
  • Tectonic

1.11 Microservices

The microservice architectural style is an approach to developing a single application as a suite of small services. Each service runs in its own process and communicates with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.

1.12 Microservices and Containers / Clusters

Containers are excellent for microservices, as they isolate services. Containerization of single services makes these services easier to manage and update. Docker has led to the emergence of frameworks for managing complex scenarios, such as how to manage single services in a cluster, how to manage multiple instances of a service across hosts, and how to coordinate between multiple services at the deployment and management level. Kubernetes allows easy deployment and management of multiple Docker containers of the same type through an intelligent tagging system. With Kubernetes, you describe the characteristics of the image you would like to deploy, e.g. the number of instances, CPU, and RAM.
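A minimal Kubernetes Deployment manifest describing such characteristics might look like this sketch (the image name, labels, and resource values are illustrative, not from the course):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
spec:
  replicas: 3                      # number of instances to run
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
      - name: greeting
        image: my-registry/greeting:1.0   # hypothetical image
        resources:
          requests:
            cpu: "250m"            # a quarter of a CPU core
            memory: "128Mi"
```

Kubernetes then works to keep three healthy copies of this container running, rescheduling them if a node fails.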

1.13 Microservices and Orchestration

Microservices can benefit from deployment to containers. One issue with containers is that they are isolated, while microservices may need to communicate with each other. Container orchestration can be used to handle this issue. Container orchestration refers to the automated arrangement, coordination, and management of software containers. It also helps tackle challenges such as service discovery, load balancing, secrets/configuration/storage management, health checks, auto-scaling/restart/healing of containers and nodes, and zero-downtime deploys.
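As a sketch of how an orchestrator handles service discovery and load balancing, a Kubernetes Service gives a set of containers a stable name and virtual IP that load-balances across the matching pods (the names and ports below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: greeting          # other services reach this app as "greeting"
spec:
  selector:
    app: greeting         # routes to all pods carrying this label
  ports:
  - port: 80              # port clients connect to
    targetPort: 8080      # port the container listens on
```

Callers never need to know individual container IPs; they address the service by name and the orchestrator routes the traffic.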

1.14 Microservices and Infrastructure-as-Code

In the old days, you would write a service and let the Operations (Ops) team deploy it to various servers for testing, and eventually to production. Infrastructure-as-Code solutions help shorten development cycles by automating the setup of infrastructure. Popular Infrastructure-as-Code solutions include Puppet, Chef, Ansible, Terraform, and Serverless. In the old days, servers were treated as part of the family: they were named, constantly monitored, and carefully updated. With containers and Infrastructure-as-Code solutions, servers (containers) these days are often not updated at all; instead, they are destroyed and recreated. Containers and Infrastructure-as-Code solutions treat infrastructure as disposable.
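As an illustration, an Infrastructure-as-Code tool such as Ansible describes the desired state of a machine declaratively; a hypothetical playbook might look like this (host group and package names are made up for the example):

```yaml
# Hypothetical Ansible playbook: declare the desired state
# instead of logging in and configuring each server by hand
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure Docker is installed
      apt:
        name: docker.io
        state: present
        update_cache: yes
```

Because the file fully describes the setup, a destroyed server can be recreated identically by re-running the playbook, which is what makes infrastructure disposable.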

1.15 Kubernetes Container Networking

Microservices require a reliable way to find and communicate with each other. Microservices in containers and clusters can make things more complex, as we now have multiple networking namespaces to bear in mind. Communication and discovery require traversing the container IP space and host networking. Kubernetes benefits from its ancestry in the clustering tools used by Google for the past decade. Many of the lessons learned from running and networking two billion containers per week have been distilled into Kubernetes.

1.16 Kubernetes Networking Options

Docker creates three types of networks by default: bridged, host, and none.

  • bridged – this is the default choice. In this mode, the container has its own networking namespace and is then bridged via virtual interfaces to the host network. In this mode, two containers can use the same IP range because they are completely isolated.
  • host – in this mode, performance benefits greatly, since it removes a level of network virtualization; however, you lose the security of having an isolated network namespace.
  • none – creates a container with no external interface. Only a loopback device is shown if you inspect the network interfaces.

In all these scenarios, we are still on a single machine, and outside of host mode, the container IP space is not available outside the machine. Connecting containers across two machines therefore requires NAT and port mapping for communication.

Docker user-defined networks

  • Docker also supports user-defined networks via network plugins.

     bridge driver – allows creation of networks somewhat similar to the default bridge network.

     overlay driver – uses a distributed key-value store to synchronize network creation across multiple hosts.

     Macvlan driver – uses the interface and sub-interfaces on the host. It offers more efficient network virtualization and isolation, as it bypasses the Linux bridge.


Weave

  • Provides an overlay network for Docker containers.

Flannel

  • Gives a full subnet to each host/node, enabling a similar pattern to the Kubernetes practice of a routable IP per pod or group of containers.

Project Calico

  • Uses built-in routing functions of the Linux kernel.
  • It can be used for anything from small-scale deployments to large Internet-scale installations.
  • There is no need for additional NAT, tunneling, or overlays.


Canal

  • It merges both Calico for network policy and Flannel for overlay into one solution.

1.17 Kubernetes Networking – Balanced Design

Using a unique IP address at the host level is problematic as the number of containers grows. Assigning an IP address to each container can also be overkill. In cases of sizable scale, overlay networks and NAT are needed in order to address each container, but overlay networks add latency. You have to pick between fewer containers with multiple applications per container (a unique IP address for each container) and multiple containers with fewer applications per container (overlay networks / NAT).

1.18 Summary

Kubernetes provides a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes supports a range of container tools, including Docker. Containers are useful for packaging, shipping, and deploying any software application as a lightweight, portable, self-sufficient unit that will run virtually anywhere. Microservices can benefit from containers and clustering. Kubernetes offers container orchestration.

What is Docker?

This tutorial is adapted from Web Age course Microservices Development Bootcamp with Immersive Project.

1.1 What is Docker

Docker is an open-source (and 100% free) project for IT automation. You can view Docker as a system or a platform for creating virtual environments which are, in effect, extremely lightweight virtual machines. Docker allows developers and system administrators to quickly assemble, test, and deploy applications and their dependencies inside Linux containers, supporting a multi-tenancy deployment model on a single host. Docker’s lightweight containers lend themselves to rapid scaling up and down. A container is a group of controlled processes associated with a separate tenant, executed in isolation from other tenants. Docker is written in the Go programming language.

1.2 Where Can I Run Docker?

Docker runs on any modern-kernel 64-bit Linux distribution. The minimum supported kernel version is 3.10; kernels older than 3.10 lack some of the features required by Docker containers. You can install Docker in a VirtualBox VM and run it on OS X or Windows. Docker can also be installed natively on Windows, but this requires Hyper-V. Docker can be booted from boot2docker, a small-footprint Linux distribution.
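A quick way to check whether a Linux host meets the 3.10 kernel minimum, sketched with standard tools:

```shell
# Compare the running kernel version against Docker's 3.10 minimum
major=$(uname -r | cut -d. -f1)
minor=$(uname -r | cut -d. -f2 | sed 's/[^0-9].*//')
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }; then
  echo "kernel $(uname -r): supported"
else
  echo "kernel $(uname -r): too old for Docker"
fi
```

Any distribution released in roughly the last decade ships a newer kernel, so this check mostly matters on long-lived enterprise servers.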

1.3 Installing Docker Container Engine

Installing on Linux:
Docker is usually available via the package manager of the distributions. For example, on Ubuntu and derivatives:
sudo apt-get update && sudo apt-get install docker.io

Installing on Mac
Download and install the official Docker.dmg from docker.com

Installing on Windows
Hyper-V must be enabled on Windows. Download the latest installer from docker.com

1.4 Docker Machine

Though Docker runs natively on Linux, it may be desirable to have two different host environments, such as Ubuntu and CentOS. To achieve this, VMs running Docker may be used. To simplify management of different Docker hosts, you can use Docker Machine. Docker Machine is a tool that lets you install Docker Engine on virtual hosts and manage the hosts with docker-machine commands. Docker Machine enables you to provision multiple remote Docker hosts on various flavors of Linux. Additionally, Machine allows you to run Docker on older Mac or Windows systems, as well as on cloud providers such as AWS, Azure, and GCP. Using the docker-machine command, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.

1.5 Docker and Containerization on Linux

Docker leverages resource isolation features of the modern Linux kernel offered by cgroups and kernel namespaces. These features allow creation of strongly isolated containers that act as very lightweight virtual machines running on a single Linux host. Docker helps abstract operating-system-level virtualization on Linux using abstracted virtualization interfaces based on libvirt, LXC (Linux Containers), and systemd-nspawn. As of version 0.9, Docker can directly use virtualization facilities provided by the Linux kernel via its own libcontainer library.

1.6 Linux Kernel Features: cgroups and namespaces

The control group (cgroup) kernel feature is used by the Linux kernel to allocate system resources such as CPU, I/O, memory, and network subject to limits, quotas, prioritization, and other control arrangements. The kernel provides access to multiple subsystems through the cgroup interface.
Examples of subsystems (controllers) are:
 The memory controller for limiting memory use
 The cpuacct controller for keeping track of CPU usage

The cgroups facility was merged into Linux kernel version 2.6.24. Systems that use cgroups include Docker, Linux Containers (LXC), Hadoop, etc. The namespaces feature is related to the cgroups facility; it enables different applications to act as separate tenants with completely isolated views of the operating environment, including users, process trees, networking, and mounted file systems.

1.7 The Docker-Linux Kernel Interfaces

Source: Adapted from http://en.wikipedia.org/wiki/Docker_(software)

1.8 Docker Containers vs Traditional Virtualization

System virtualization tools and emulators such as VirtualBox, Hyper-V, or VMware boot virtual machines from a complete guest OS image (of your choice) and emulate a complete machine, which results in high operational overhead. Virtual environments created by Docker run on the existing operating-system kernel of the host without the need for a hypervisor. This leads to very low overhead and significantly faster container startup time. Docker-provisioned containers do not include or require a separate operating system (they run in the host’s OS), which does put a significant limitation on your OS choices.

1.9 Docker Containers vs Traditional Virtualization

Overall, traditional virtualization has an advantage over Docker in that you have a choice of guest OSes (as long as the machine architecture is supported). With Docker you get only a limited choice of Linux distros, though you still have some choice: e.g. you can deploy a Fedora container on a Debian host. You can, however, run a Windows VM inside a Linux machine using virtual machine emulators like VirtualBox (with less engineering efficiency). With Linux containers, you can achieve a higher level of deployed-application density than with traditional VMs (10x more units!). On the other hand, Docker runs everything through a central daemon, which is not a particularly reliable and secure processing model.

1.10 Docker Integration

Docker can be integrated with a number of IT automation tools that extend its capabilities, including Ansible, Chef, Jenkins, Puppet, and Salt. Docker is also deployed on a number of cloud platforms, such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, OpenStack, and Rackspace.

1.11 Docker Services

The Docker deployment model is application-centric and in this context provides the following services and tools:
◊ A uniform format for bundling an application along with its dependencies, which is portable across different machines.
◊ Tools for automatically assembling a container from source code: make, Maven, Debian packages, RPMs, etc.
◊ Container versioning with deltas between versions.

1.12 Docker Application Container Public Repository

The Docker community maintains the repository of official and public-domain Docker application images at https://hub.docker.com

1.13 Competing Systems

  • Rocket container runtime from CoreOS (an open source lightweight Linux kernel-based operating system). 
  • LXD for Ubuntu from Canonical (the company behind Ubuntu)
  • The LXC (Linux Containers), used by Docker internally

1.14 Docker Command Line

The following commands are shown as executed by the root (privileged) user:
docker run ubuntu echo 'Yo Docker!'
This command will create a Docker container based on the ubuntu image, execute the echo command in it, and then shut the container down.
docker ps -a
This command will list all the containers created by Docker along with their IDs.

1.15  Starting, Inspecting, and Stopping Docker Containers

docker start -i <container_id>
This command will start an existing stopped container in interactive (-i) mode (you will get the container's STDIN channel).
docker inspect <container_id>
This command will provide JSON-encoded information about the running container identified by container_id.
docker stop <container_id>
This command will stop the running container identified by container_id.
For the Docker command-line reference, visit https://docs.docker.com/engine/reference/commandline/cli/

1.16 Docker Volume

If you destroy a container and recreate it, you will lose its data. Ideally, data should not be stored in containers. Volumes are mounted file systems made available to containers. Docker volumes are a good way of safely storing data outside a container, and they can be shared across multiple containers.
Creating a Docker volume
docker volume create my-volume
Mounting a volume
docker run -v my-volume:/my-mount-path -it ubuntu:12.04
Viewing all volumes
docker volume ls
Deleting a volume
docker volume rm my-volume

1.17 Dockerfile

Rather than manually creating containers and saving them as custom images, it is better to use a Dockerfile to build images.
Sample Dockerfile
# start from the official OpenJDK base image
FROM openjdk
RUN apt-get update -y
RUN apt-get install sqlite -y
# deploy the jar file to the container
COPY SimpleGreeting-1.0-SNAPSHOT.jar /app/
The Dockerfile filename is case-sensitive: the ‘D’ in Dockerfile has to be uppercase. Build an image using docker build. (Mind the space and period at the end of the docker build command.)
docker build -t my-image:v1.0 .
Or, if you want to use a different file name (the trailing build-context path is still required):
docker build -t my-image:v1.0 -f mydockerfile.txt .

1.18 Docker Compose

A container runs a single application. However, most modern applications rely on multiple services, such as databases, monitoring, logging, message queues, etc. Managing a forest of containers individually is difficult, especially when it comes to moving the environment from development to test to production. Compose is a tool for defining and running multi-container Docker applications on the same host. A single configuration file, docker-compose.yml, is used to define a group of containers that must be managed as a single entity.

1.19 Using Docker Compose

  • Define as many Dockerfiles as necessary
  • Create a docker-compose.yml file that refers to the individual Dockerfiles

Sample docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
    links:
      - mongodb
  mongodb:
    image: mongodb
    volumes:
      - my-volume:/data/db
volumes:
  my-volume: {}

1.20 Dissecting docker-compose.yml

The Docker Compose file should be named either docker-compose.yml or docker-compose.yaml. Using any other name will require the -f argument to specify the filename. The docker-compose.yml file is written in YAML.
The first line, version, indicates the version of the Docker Compose file format being used. As of this writing, version 3 is the latest.

1.21 Specifying services

A ‘service’ in Docker Compose parlance is a container. Services are specified under the services: node of the configuration file.
You choose the name of a service; the name is meaningful within the configuration. A service (container) can be specified in one of two ways: by Dockerfile or by image name.
Use build: to specify the path to a Dockerfile
Use image: to specify the name of an image that is accessible to the host
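A sketch showing both styles side by side (the service names and paths are made up for the example):

```yaml
version: '3'
services:
  web:
    # built from the Dockerfile found in ./web
    build: ./web
  cache:
    # pulled as a ready-made image from a registry
    image: redis
```

Mixing the two styles in one file is common: custom application services are built, while off-the-shelf dependencies are pulled as images.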

1.22 Dependencies between containers

Some services may need to be brought up before other services. In docker-compose.yml, it is possible to specify which service relies on which using the links: node. If service C requires that services A and B be brought up first, add links as follows:
services:
  servicea:
    build: ./servicea
  serviceb:
    build: ./serviceb
  servicec:
    build: ./servicec
    links:
      - servicea
      - serviceb
It is possible to specify as many links as necessary. Circular links are not permitted (A links to B and B links to A).

1.23 Injecting Environment Variables

In a containerized microservice application, environment variables are often used to pass configuration to an application.
It is possible to pass environment variables to a service via the docker-compose.yml file.
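For example, an environment: node under a service injects variables into that container at startup (the variable names and values here are illustrative):

```yaml
version: '3'
services:
  web:
    build: .
    environment:
      - DB_HOST=mongodb
      - DB_PORT=27017
```

Inside the container, the application reads these like any other environment variables, so the same image can be reconfigured for development, test, and production without being rebuilt.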

1.24 runC Overview

Over the last few years, Linux has gradually gained a collection of features; Windows 10 and Windows Server 2016+ have added similar features. These individual features have esoteric names like “control groups”, “namespaces”, “seccomp”, “capabilities”, “apparmor”, and so on. Collectively, they are known as “OS containers” or sometimes “lightweight virtualization”. Docker makes heavy use of these features and has become famous for it. Because “containers” are actually an array of complicated, sometimes arcane system features, they have been integrated into a unified low-level component called runC. runC is now available as a standalone tool: a lightweight, portable container runtime. It includes all of the plumbing code used by Docker to interact with system features related to containers, and it has no dependency on the rest of the Docker platform.

1.25 runC Features

  • Full support for Linux namespaces, including user namespaces
  • Native support for all security features available in Linux: SELinux, AppArmor, seccomp, control groups, capability drop, pivot_root, uid/gid dropping, etc. If Linux can do it, runC can do it.
  • Native support for live migration, with the help of the CRIU team at Parallels
  • Native support for Windows 10 containers is being contributed directly by Microsoft engineers
  • Planned native support for ARM, POWER, and SPARC, with direct participation and support from ARM, Intel, Qualcomm, IBM, and the entire hardware manufacturers ecosystem
  • Planned native support for bleeding-edge hardware features: DPDK, SR-IOV, TPM, secure enclaves, etc.

1.26 Using runC

In order to use runC you must have your container in the format of an Open Container Initiative (OCI) bundle. If you have Docker installed, you can use its export method to acquire a root filesystem from an existing Docker container.
# create the topmost bundle directory
mkdir /mycontainer
cd /mycontainer
# create the rootfs directory
mkdir rootfs
# export busybox via Docker into the rootfs directory
docker export $(docker create busybox) | tar -C rootfs -xvf -
After a root filesystem is populated, you just generate a spec in the format of a config.json file inside your bundle:
runc spec

1.27 Running a Container using runC

The first way is to use the convenience command run, which handles creating, starting, and deleting the container after it exits.
# run as root
cd /mycontainer
runc run mycontainerid

The second way is to implement the entire lifecycle (create, start, connect, and delete the container) manually:
# run as root
cd /mycontainer
runc create mycontainerid
# view the container is created and in the “created” state
runc list
# start the process inside the container
runc start mycontainerid
# after 5 seconds, view that the container has exited and is now in the stopped state
runc list
# now delete the container
runc delete mycontainerid

1.28 Summary

  • Docker is a system for creating virtual environments which are, for all intents and purposes, lightweight virtual machines. 
  • Docker containers can only run the type of OS that matches the host’s OS. 
  • Docker containers are extremely lightweight (although not so robust and secure), allowing you to achieve a higher level of deployed application density compared with traditional VMs (10x more units!). 
  • On-demand provisioning of applications by Docker supports the Platform-as-a-Service (PaaS)-style deployment and scaling.
  • runC is a container runtime which has support for various containerization solutions.

What is BIZBOK?

This tutorial is adapted from Web Age course Business Architecture Foundation Workshop

1.1 What are BIZBOK and BIZBOK Guide?

 BIZBOK™ stands for the Business Architecture Body of Knowledge™. BIZBOK comprises the core set of Business Architecture concepts and artifacts that enable every organization to create, communicate and manage their respective Business Architecture. The BIZBOK™ Guide is a handbook that provides business architecture practitioners and other individuals interested in this discipline with comprehensive coverage of BIZBOK. The Guide comprises a growing collection of concepts, disciplines, and emerging best practices used by many business architecture practitioners in various industries. The Business Architecture Guild, the sponsor of the BIZBOK Guide, promotes the Guide as the emerging standard for building, deploying, and leveraging business architecture within an organization.

1.2 The Business Architecture Guild

The Business Architecture Guild is a not-for-profit organization of business architecture practitioners dedicated to advancing the discipline of business architecture.  Best practices emerging in the field of the business architecture discipline are contributed through membership participation. The Guild sponsors the BIZBOK™ Guide which represents the consensus, formalization, and documentation of best practices and knowledge from active members of the Guild.

1.3 The BIZBOK Guide Make Up

The BIZBOK™ Guide is organized into the following major parts:
◊ Business Architecture Blueprints (Views of the Business)
◊ Business Architecture Practice
◊ Business Architecture Scenarios
◊ The Business Architecture Knowledge Base
◊ Business Architecture and IT Architecture Alignment
◊ Industry Reference Models

1.4 BIZBOK’s Definition of Business Architecture

BIZBOK defines business architecture as “a blueprint of the enterprise that provides a common understanding of the organization and is used to align strategic objectives and tactical demands.” According to the BIZBOK Guide, a business architecture is not confined by the enterprise’s boundaries and must also provide for the interests of external stakeholders (partners, clients, etc.).

1.5 Business Architecture Characteristics

Business Architecture blueprints use a common vocabulary, a standardized framework, and a shared knowledge base of business artifacts and elements. The common vocabulary includes such terms as capabilities, value streams, and information views, which helps eliminate much of the confusion often found across business units. These arrangements give a Business Architecture the following characteristics:
◊ It is about the business
◊ Its scope is the scope of the business
◊ It is not prescriptive
◊ It is iterative
◊ It is reusable

1.6 Who, What, Where, When, Why, and How

Business Architecture helps executives answer the commonly asked questions: who, what, where, when, why, and how.

Fig 1. Aspects of the Business Represented by a Business Architecture
Source: BIZBOK Guide, v 3.5, 2014, p. 2

1.7 What are a Capability and Value Stream?

These are core business architecture concepts and blueprints in the BIZBOK Guide. Capabilities are what a business does: a capability is an ability or capacity that a business may possess or exchange to achieve a specific purpose or outcome. For example, an insurance company will have such capabilities as Claims Management and Policy Management. A value stream defines the major stages involved in delivering value to internal and external stakeholders. In an insurance business, the Process Claim business process represents a value stream. Capabilities enable each stage of the value stream. The BIZBOK™ Guide assists practitioners with the creation and use of these business blueprints.

1.8 BIZBOK Common Blueprints

Adapted from: BIZBOK Guide, v 3.5, 2014, p. 4

1.9 The BIZBOK Business Architecture Framework

There are three important components within the Business Architecture framework:
◊ Business blueprints
◊ Business Architecture scenarios
◊ Business Architecture knowledge base

1.10 The BIZBOK Business Architecture Framework Diagram

Source: BIZBOK Guide, v 3.5, 2014, p. 5

1.11 Business Architecture Scenario Topics

The BIZBOK Guide covers the following business architecture scenario topics which include initiatives, programs, and projects:
◊ Investment Analysis
◊ Shift to Customer Centric Business Model
◊ Merger & Acquisition Analysis
◊ New Product/Service Rollout
◊ Globalization
◊ Business Capability Outsourcing
◊ Supply Chain Streamlining
◊ Divestiture
◊ Regulatory Compliance
◊ Change Management
◊ Operational Cost Reduction
◊ Joint Venture Deployment

1.12 Summary

In this tutorial, we reviewed the organization of the BIZBOK Guide sponsored by the Business Architecture Guild.
We reviewed the definition of Business Architecture.
We defined the Capability and Value Stream blueprints.
The BIZBOK Business Architecture framework was introduced.