What is Business Architecture?

This tutorial is adapted from Web Age course Business Architecture Foundation Workshop

1.1 Defining Business Architecture

Business Architecture is an essential function of the Business: it describes what the business does and how it does it in support of organizational goals and objectives. Business Architecture is a composite of business capabilities and business processes expressed in the form of formalized artifacts. It is part of the Enterprise Architecture.
Notes:
While Business Architecture is a relatively new discipline, it has been recognized by industry standards bodies such as The Open Group and the OMG. In practice, there is no general consensus as to what Business Architecture exactly is. Even to the extent that mature architecture frameworks such as TOGAF or FEAF agree (and they largely do), there is still a gap in the perception of many business and IT professionals regarding the scope and content of the Business Architecture discipline.
IBM Global Business Services promotes its own understanding of what a Business Architecture may be, which it calls the Actionable Business Architecture (http://www-935.ibm.com/services/us/gbs/strategy/actionable_business_architecture). According to IBM GBS, an Actionable Business Architecture is the product of “the confluence of three often disparate models in strategy, operations and IT” which accelerates time-to-value in business / IT projects. IBM GBS defines models as “reusable assets, industry leading practices and industry standards or guidelines that capture knowledge about the industry and the enterprise.”

1.2 Layers of the Enterprise Architecture

Enterprise Architecture regards the Organization as a large and complex system of interconnected sub-systems. One way of reducing this complexity is to separate the system into layers, each with its own problem (architecture) domain.


Source: Wikipedia.org

1.3 The Business Architecture Domain

Within the Business Architecture Domain fall the following core components and activities which reflect and help with the realization of the Organization’s strategy and vision:
◊ Business Process design and modeling
◊ Business Requirements
◊ Business Rules
◊ Critical success factors
◊ Organization structure

1.4 The Values of Business Architecture

The Value of Insight
◊ If you found 10% more funds to invest, where would be the optimum place to invest?
◊ If you needed to trim 5% from your operating costs, where would you look first?
◊ Without a model for how business gets done and an understanding of how your business is structured (which are all artifacts of Business Architecture), these types of decisions are very challenging.

 The Value of Managed Change

Process changes, organization changes, requirements changes, and comprehensive strategy changes are a fact of business. Business Architecture provides a baseline model to support and enable enterprise changes without the chaos and confusion typically associated with significant change.

1.5 Relationship with Other Types of Architecture

There are four domains of architecture recognized by most prominent architecture frameworks. Business Architecture is positioned at the top of the enterprise architecture stack, driving project activities downstream in support of strategic and tactical business decisions.


Business Architecture project activities must occur early in the life cycle of enterprise projects in other architecture domains. Other types of architecture must align with the Business Architecture. The underlying architectures provide upstream feedback that helps fine-tune standards, business models, and requirements.
TOGAF and FEAF both recognize these four domains of architecture.
NIH (National Institutes of Health) condenses its framework into three domains:
• Business Architecture
• Information Architecture (including Data, Integration, and Application)
• Technology Architecture

1.6 Other Pillars of Business Architecture

In addition to the supporting Technology pillar, Business Architecture also depends on the Human Resources and Business Processes of the Organization. The company’s human capital and business processes are accounted for in Business Architecture and must be coherently aligned to realize the Enterprise Strategy and Vision.

1.7 Formal Business Architecture

Characteristics of ineffective Business Architectures

Difficult to interpret artifacts – Explanatory information is often incomplete. Also, artifacts representing similar or even the same parts may use different characteristics or visual notations, further confusing readers.

Difficult to retrieve related artifacts – Stored in multiple repositories, individual machines or buried in documents.

Difficult to maintain artifact consistency – If something about a part changes, a name for example, and the part is depicted in multiple artifacts, then all artifacts need to be updated.

Part relationships not maintained – The fabric of your business is embodied in the relationships among its parts; to be of value, your business architecture needs to precisely express these relationships.

Characteristics of Formal Business Architectures
No ambiguity – Each unique part in the model has a single precise representation (formal grammars) and is maintained only once in the database.

Formal relationships – Relationships between parts are themselves parts, and as such have formal grammars and are maintained only once.

Easy to make changes – Parts and relationship representations are used by all model artifacts. Thus, when the model representation changes, all the artifacts using that part are updated automatically.

Easy to find – Model parts, relationships and all artifacts are stored in a single computer database.

1.8 Ownership vs Stewardship

Business Architecture (BA) should be OWNED by the Business, even if it is actively managed, or STEWARDED, by the technology team. IT often serves as the steward (executor, server) of business rules engines, process modeling tools, databases, etc., which gives the illusion of ownership; ultimately, however, those assets are paid for, created in support of, and owned by the Business. It may help to view BA as a joint effort between the Business (owners) and IT (stewards).

1.9 Business Architecture Frameworks

Business Architecture must be documented in and realized through some sort of Enterprise Framework, which must:
◊ Capture the “essence” of the Business
◊ Be suitable for identifying technology needs
◊ Establish consistent vocabulary and notation
◊ Be product and technology agnostic
◊ Reflect multiple views of the enterprise
◊ Be comprehensive
◊ Provide for agile and cost-effective change

1.10 Enterprise Architecture Frameworks

 Enterprise Architecture (EA) Frameworks help deal with complexity and change.  We will review two popular EA frameworks as they apply to Business Architecture:
◊ Zachman
◊ The Open Group Architecture Framework (TOGAF)

1.11 Business Architect vs Business Analyst – 1/3

To help further define Business Architecture, it may be useful to review the differences between the roles of a Business Architect and a Business Analyst.
Nick Malik, an Enterprise Architect with Microsoft, provides the following comparison between Business Architects and Business Analysts:

Why
Business Architect: To uncover the gaps between the strategic needs of a business unit and its ability to meet those needs, and to charter initiatives to fill those gaps.
Business Analyst: To develop and document the detailed knowledge of a business problem that an initiative has been chartered to address.

How
Business Architect: Analysis of future-looking strategies, capturing of capabilities, and modeling of inter- and intra-business relationships needed to discover the key capability gaps that a business must be prepared to face, along with the development of cross-functional roadmaps to address them. System requirements are NOT captured.
Business Analyst: Interviews with existing business stakeholders and SMEs to elicit business rules, understand processes, information, and systems in use, and detail the consequences (intentional or not) of making a business change to address a specific issue. The primary result of this activity is the document of System Requirements.

When
Business Architect: Ongoing process that is triggered by periodic strategy cycles within a business.
Business Analyst: As-needed activity that is triggered AFTER a problem has been identified and requirements for a solution are needed.

Who
Business Architect: Business or IT generalists with a strong understanding of business functional issues, interdependencies, and business structural concerns. Must be excellent at capability analysis. Must leverage modeling and rigorous analysis skills.
Business Analyst: Business or IT generalists with a strong understanding of information and application interdependencies, requirements analysis, and system development methodologies. Must be excellent at IT requirements elicitation. Must leverage modeling and rigorous analysis skills.

What
Business Architect: Business motivational models, Value Streams, Scenarios, Capability models, Heat Maps, Funding Maps, Risk Maps.
Business Analyst: Business Requirements, Business Rules, Use Cases, and Detailed Business Process descriptions.

Scope
Business Architect: Enterprise continuum / cross-domain.
Business Analyst: Project / process.

Focus
Business Architect: Strategy / tactical / solution-neutral / holistic.
Business Analyst: Solution and/or operation specific.

Skills / Personality
Business Architect: Architecture methodologies / human relations.
Business Analyst: Applied process engineering / task-oriented.

Source
Nick Malik’s blog entry: http://blogs.msdn.com/b/nickmalik/archive/2012/04/06/the-difference-between-business-architect-and-business-analyst.aspx

1.12 Going Beyond the Process

It is quite common to confuse Business Architecture with Business Process Modeling (BPM) or business process management. The important thing to realize is that Business Architecture will typically INCLUDE and even direct process improvement activities and high-level end-to-end business process modeling, but will typically refrain from detailed process modeling and management activities. For many organizations, BPM is considered a subset of Business Architecture. No formally recognized architecture framework or architecture discipline treats Business Architecture and BPM as being synonymous.

Key Elements of Business Architecture (beyond process)
• Model-driven
• Integrated artifacts
• Formal semantics and grammar
• Consistent and unambiguous descriptions of capability
• Business / IT alignment

1.13 Summary

 There is a lot of confusion surrounding the role of Business Architecture.
 A few universal aspects are broadly agreed upon with respect to business architecture:
◊ It is essential
◊ It is a joint effort between Business and IT
◊ It occurs early in the architecture life cycle and drives and informs other downstream architecture activities
◊ Business Architecture is not the same as Business Analysis

How to Secure a Web Application using Spring Security?

This tutorial is adapted from Web Age course  Technical Introduction to Microservices.

1.1 Securing Web Applications with Spring Security 3.0

Spring Security (formerly known as Acegi) is a security framework that can be used in place of the traditional JEE Java Authentication and Authorization Service (JAAS). It can work by itself on top of any Servlet-based technology; it does, however, use the Spring core to configure itself. It can integrate with many back-end technologies such as OpenID, CAS, LDAP, and databases. It uses a servlet filter to control access to all Web requests. It can also integrate with AOP to filter method access, which gives you method-level security without having to actually use EJB.

1.2 Spring Security 3.0

Because it is based on a servlet filter, it can also work with SOAP-based Web Services, RESTful services, any kind of Web remoting, and Portlets. It can even be integrated with non-Spring web frameworks such as Struts, Seam, and ColdFusion. Single Sign-On (SSO) can be integrated through CAS, the Central Authentication Service from JA-SIG. This gives us the ability to authenticate against X.509 certificates, OpenID (supported by Google, Facebook, Yahoo, and many others), and LDAP. WS-Security and WS-Trust are built on top of these capabilities. It can integrate with Spring Web Flow, and there is support for it in the SpringSource Tool Suite.

1.3 Authentication and Authorization

Authentication answers the question “Who are you?” It involves a User Registry of known user credentials and an Authentication Mechanism for comparing the supplied credentials with the User Registry. Spring Security can be configured to authenticate users using various means or to accept authentication that has been done by an external mechanism. Authorization answers the question “What can you do?” Once a valid user has been identified, a decision can be made about allowing the user to perform the requested function. Spring Security can handle the authorization decision. Sometimes this may be very fine-grained, for example, allowing a user to delete their own data but not the data of other users.

1.4 Programmatic vs Declarative Security

Programmatic security allows us to make fine-grained security decisions but requires writing the security code within our application. The security rules being applied may be obscured by the code being used to enforce them. Whenever possible, we would prefer to declare the rules for access and have a framework like Spring Security enforce those rules. This allows us to focus on the security rules themselves rather than on writing the code to implement them. With Spring Security we have a DSL for security that enables us to declare the kinds of rules we would have had to code before. It also enables us to use EL in our declarations, which gives us a lot of flexibility. This can include contextual information like time of access, number of items in a shopping cart, number of previous orders, etc.
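As a minimal illustration of the declarative style, the hypothetical service below (not part of the course material) uses an EL expression in a @PreAuthorize annotation; it assumes method security has been enabled elsewhere with @EnableGlobalMethodSecurity(prePostEnabled = true):

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

// Hypothetical service used only to illustrate declarative, EL-based rules.
@Service
public class OrderService {

    // The rule is declared rather than coded: only the owner of the order
    // (or an administrator) may delete it.
    @PreAuthorize("hasRole('ADMIN') or #ownerName == authentication.name")
    public void deleteOrder(String ownerName, long orderId) {
        // ... deletion logic ...
    }
}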

1.5 Getting Spring Security with Gradle or Maven

Spring Security 3.0 split the framework into separate modules so you can use just what you need. The following will almost always be used:

  • Core – Core classes
  • Config – XML namespace configuration
  • Web – filters and web-security infrastructure

The following will be used if the appropriate features are required:

  • JSP Taglibs
  • LDAP – LDAP authentication and provisioning
  • ACL – Specialized domain object ACL implementation
  • CAS – Support for JA-SIG.org Central Authentication Support
  • OpenID – ‘OpenID for Java’ web authentication support

Getting Spring Security from Gradle

The exact syntax of how you add the above Spring Security modules using Gradle will differ depending on whether you get them from:
Maven Central – http://search.maven.org/
SpringSource Enterprise Bundle Repository (EBR) – http://ebr.springsource.com/repository/
The following is an example of getting them from Maven Central:
group 'com.shaneword'
version '1.0-SNAPSHOT'

apply plugin: 'java'

sourceCompatibility = 1.7

repositories {
    mavenCentral()
}

dependencies {
    compile "org.springframework.security:spring-security-core:5.1.5.RELEASE"
    compile "org.springframework.security:spring-security-web:5.1.5.RELEASE"
    compile "org.springframework.security:spring-security-taglibs:5.1.5.RELEASE"
    compile "org.springframework.security:spring-security-config:5.1.5.RELEASE"
    compile "org.springframework.security:spring-security-ldap:5.1.5.RELEASE"
    testCompile group: 'junit', name: 'junit', version: '4.11'
}

Getting Spring Security from Maven

The exact syntax of how you add the above Spring Security modules using Maven will differ depending on if you get them from:
Maven Central – http://search.maven.org/
SpringSource Enterprise Bundle Repository (EBR) – http://ebr.springsource.com/repository/
The following is an example of getting them from the SpringSource EBR:
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.core</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.web</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.taglibs</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.config</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>org.springframework.security.ldap</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>

1.6 Spring Security Configuration

If Spring Security is on the classpath, then web applications will be set up with “basic” authentication on all HTTP endpoints. There is a default AuthenticationManager that has a single user called ‘user’ with a random password. The password is printed out during application startup. Override the password with ‘security.user.password’ in ‘application.properties’. To override security settings, define a bean of type ‘WebSecurityConfigurerAdapter’ and plug it into the configuration.

1.7 Spring Security Configuration Example

@Configuration
@Order(SecurityProperties.ACCESS_OVERRIDE_ORDER)
public class ApplicationSecurity extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers("/css/**").permitAll()
            .anyRequest().fullyAuthenticated()
            .and().formLogin()
                .loginPage("/login")
                .failureUrl("/login?error")
                .permitAll()
            .and().logout().permitAll();
    }

    @Override
    public void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("user").password("user").roles("USER");
    }
}

1.8 Authentication Manager

The AuthenticationManager provides user information. You can configure multiple authentication providers (the <authentication-provider> elements in XML configuration) and they will be checked in the declared order to authenticate the user. In the example above, the WebSecurityConfigurerAdapter’s ‘configure()’ method is called with an AuthenticationManagerBuilder. We can use this builder to configure the AuthenticationManager through a “fluent API”, for example: auth.jdbcAuthentication().dataSource(ds).withDefaultSchema()

1.9 Using Database User Authentication

You can obtain user details from tables in a database with the jdbcAuthentication() method. This requires a reference to a Spring DataSource bean, for example: auth.jdbcAuthentication().dataSource(ds).withDefaultSchema(). If you do not want to use the database schema expected by Spring Security, you can customize the queries used and map the information in your own database to what Spring Security expects:

auth.jdbcAuthentication().dataSource(securityDatabase)
    .usersByUsernameQuery(
        "SELECT username, password, 'true' as enabled " +
        "FROM member WHERE username=?")
    .authoritiesByUsernameQuery(
        "SELECT member.username, member_role.role as authority " +
        "FROM member, member_role WHERE member.username=? " +
        "AND member.id=member_role.member_id");

Using Database User Authentication

The configuration of the ‘securityDatabase’ DataSource above is not shown, but it is just standard Spring database configuration.
The queries that Spring Security uses by default are:
SELECT username, password, enabled FROM users WHERE username = ?
SELECT username, authority FROM authorities WHERE username = ?
The default statements above assume a database schema similar to:
CREATE TABLE USERS (
USERNAME VARCHAR(20) NOT NULL,
PASSWORD VARCHAR(20) NOT NULL,
ENABLED SMALLINT,
PRIMARY KEY (USERNAME)
);
CREATE TABLE AUTHORITIES (
USERNAME VARCHAR(20) NOT NULL,
AUTHORITY VARCHAR(20) NOT NULL,
FOREIGN KEY (USERNAME) REFERENCES USERS
);
Notice that in the custom queries above, the ‘enabled’ part of the query is mapped to the constant ‘true’, since it is assumed the referenced table does not have this column but Spring Security expects it. If the table does have a column similar to ‘enabled’, it should map to a boolean type (such as ‘1’ for enabled and ‘0’ for disabled).
The custom queries above would work with a database schema of:
CREATE TABLE MEMBER (
ID BIGINT NOT NULL,
USERNAME VARCHAR(20) NOT NULL,
PASSWORD VARCHAR(20) NOT NULL,
PRIMARY KEY (ID)
);
CREATE TABLE MEMBER_ROLE (
MEMBER_ID BIGINT NOT NULL,
ROLE VARCHAR(20) NOT NULL,
FOREIGN KEY (MEMBER_ID) REFERENCES MEMBER
);

1.10 LDAP Authentication

It is common to have an LDAP server that stores user data for an entire organization. The first step in using this with Spring Security is to configure how Spring Security will connect to the LDAP server with the ldapAuthentication builder.
auth.ldapAuthentication()
    .contextSource()
    .url("ldap://localhost").port(389)
    .managerDn("cn=Directory Admin")
    .managerPassword("ldap");
You can also use an “embedded” LDAP server in a test environment by not providing the ‘url’ attribute and instead providing LDIF files to load, for example:
auth.ldapAuthentication()
    .contextSource()
    .ldif("classpath:test-users.ldif");

LDAP Authentication

The ‘manager-dn’ and ‘manager-password’ attributes of <ldap-server> specify how to authenticate against the LDAP server in order to query user details. If you are using the embedded LDAP server, the ‘root’ defaults to “dc=springframework,dc=org” if you do not supply a value.
In order to configure Spring Security there are a number of LDAP-related attributes with various defaults that may affect how your LDAP configuration behaves; this section is only meant to introduce the feature. One step you should take when attempting to use Spring Security with LDAP is to avoid configuring everything at once. Start with an embedded list of users to test the other configuration settings and then switch to using LDAP. Also try using the embedded LDAP server with an LDIF file exported from your LDAP server with a few sample users.

1.11 What is Security Assertion Markup Language (SAML)?

Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdPs) to pass authorization credentials to service providers (SPs). It is a security protocol similar to OpenID, OAuth, Kerberos, etc. SAML is the link between the authentication of a user’s identity and the authorization to use a service. SAML adoption allows IT shops to use software as a service (SaaS) solutions while maintaining a secure federated identity management system. SAML enables Single Sign-On (SSO), which means users can log in once and those same credentials can be reused to log into other service providers.

1.12 What is a SAML Provider?

A SAML provider is a system that helps a user access a service they need. There are two primary types of SAML providers: service providers and identity providers. A service provider needs the authentication from the identity provider to grant authorization to the user. An identity provider performs the authentication that the end user is who they say they are and sends that data to the service provider along with the user’s access rights for the service. Microsoft Active Directory or Azure are common identity providers. Salesforce and other CRM solutions are usually service providers, in that they depend on an identity provider for user authentication.

1.13 Spring SAML2.0 Web SSO Authentication

The following diagram from Wikipedia explains how SAML Web SSO works (image source: Wikipedia, CC BY-SA 3.0):

1. The user hits the Service Provider (SP) URL; the SP discovers which IdP to contact for authentication
2. The SP redirects the browser to the corresponding IdP
3. The user hits the IdP, which needs to identify the user
4. The IdP presents the login form
5. After login, the IdP redirects to the SP’s Assertion Consumer URL (the URL in the SP that accepts the SAML assertion)
6. The SP initiates a redirect to the target resource
7. The browser requests the target resource
8. The SP responds with the requested resource

1.14 Setting Up an SSO Provider

For SAML authentication to work, we need an identity provider (IdP). There are various providers, such as Active Directory, Azure, AWS, Google, Microsoft, Facebook, and Onelogin. To use Active Directory, for example, obtain the domain name and fully qualified domain name of the Active Directory server. To enable SSO on Active Directory, the following steps are typically performed:

  • Ensure that LDAP is configured on the Active Directory (AD) server.
  • From the AD Server, run ldp.
  • From the Connections menu, click Connect, and configure Server name, port, and select SSL option.
  • When the LDAP is properly configured, the external domain server details are displayed in the LDP window. Otherwise, an error message
    appears indicating that a connection cannot be made using this feature.

1.15 Adding SAML Dependencies to a Project

Here are the dependencies in Gradle:
compile group: 'org.springframework.security', name: 'spring-security-core', version: '4.2.3.RELEASE'
compile group: 'org.springframework.security', name: 'spring-security-web', version: '4.2.3.RELEASE'
compile group: 'org.springframework.security', name: 'spring-security-config', version: '4.2.3.RELEASE'
compile group: 'org.springframework.security.extensions', name: 'spring-security-saml2-core', version: '1.0.2.RELEASE'

1.16 Dealing with the State

Microservices are stateless in order to achieve scalability and high availability. But you often need to keep state in order to maintain position in the client-server conversation and to reduce the chattiness of that conversation by minimizing client-server round trips. State is maintained either within a client-server session or within a cross-session conversation. State may not need to be maintained beyond the established session duration and can be expired.

1.17 How Can I Maintain State?

You have two options. One is to maintain state on the service’s side, using a caching solution or a durable store; here you may want to configure a TTL so that session state expires (e.g. for abandoned or timed-out sessions). The other option is to have the client send its state as part of each request, i.e. in cookies, custom HTTP headers, the request URL (query strings), or the payload.
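As an illustrative sketch of the second option (the controller and header name below are hypothetical, not prescribed by the course), a stateless Spring MVC endpoint could accept the client-held state on every request:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller: the client re-sends its own conversation state
// (here, the current page of a result list) in a custom header on each call,
// so the service itself can remain stateless.
@RestController
public class CatalogController {

    @GetMapping("/catalog")
    public String listCatalog(
            @RequestHeader(value = "X-Page-Token", required = false) String pageToken) {
        int page = (pageToken == null) ? 0 : Integer.parseInt(pageToken);
        // ... look up and return the requested page ...
        return "page " + page;
    }
}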

1.18 SAML vs. OAuth2

OAuth is a slightly newer standard that was co-developed by Google and Twitter to enable streamlined internet logins. OAuth uses a similar methodology to SAML to share login information. SAML provides more control to enterprises to keep their SSO logins secure, whereas OAuth works better on mobile and uses JSON. Facebook and Google are two OAuth providers that you might use to log into other internet sites.

1.19 OAuth2 Overview

OAuth is an authorization method to provide access to resources over the HTTP protocol. It can be used for authorization of various applications or manual user access. It is commonly used as a way for internet users to grant websites or applications access to their information on other websites without giving them the passwords. This mechanism is used by companies such as Google, Facebook, Microsoft, Twitter, and DropBox to permit users to share information about their accounts with third-party applications or websites. It allows an application to obtain an access token. The access token represents the user’s permission for the client to access their data, and is used to authenticate requests to an API endpoint.

1.20 OAuth – Facebook Sample Flow

Although the diagram below is for Facebook, the flow is similar for any other provider.

1.21 OAuth Versions

There are two versions of OAuth authorization: OAuth 1, which uses HMAC-SHA signature strings, and OAuth 2, which uses tokens over HTTPS. OAuth2 is not backwards compatible with OAuth 1.0. OAuth2 provides specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.

1.22 OAuth2 Components

The resource server is the API server which contains the resources to be accessed. The authorization server provides access tokens; it can be the same as the API server. The resource owner, i.e. the user, is the party who grants permission for their resources to be accessed. The client / consumer is the application using the credentials on the resource owner’s behalf.

1.23 OAuth2 – End Points

The token endpoint is used by clients to get an access token from the authorization server. It can also, optionally, refresh the token.

1.24 OAuth2 – Tokens

There are two token types involved in OAuth2 authentication. The Access Token is used for authentication and authorization to get access to the resources from the resource server. The Refresh Token is sent together with the access token; it is used to get a new access token when the old one expires. This allows for a short expiration time for access tokens to the resource server and a long expiration time for access to the authorization server. Access tokens also have a type, which defines how they are constructed. Bearer tokens rely on HTTPS for security; the request is not signed or encrypted, and possession of the bearer token is considered authentication. MAC tokens are more secure than bearer tokens: they are similar to signatures, in that they provide a way to have partial cryptographic verification of the request.

1.25 OAuth – Grants

Methods to get access tokens from the authorization server are called grants. The same method used to request a token is also used by the resource server to validate a token. There are four basic grant types:

  • Authorization Code – When the resource owner allows access, an authorization code is then sent to the client via browser redirect, and the authorization code is used in the background to get an access token. Optionally, a refresh token is also sent. This grant flow is used when the client is a third-party server or web application, which performs the access to the protected resource.
  • Implicit – It is similar to the authorization code grant, but instead of using the code as an intermediary, the access token is sent directly through a browser redirect. This grant flow is used when the user-agent will access the protected resource directly, such as in a rich web application or a mobile app.
  • Resource Owner Credentials – The password / resource owner credentials grant uses the resource owner password to obtain the access token. Optionally, a refresh token is also sent. The password is then authenticated.
  • Client Credentials – The client’s credentials are used instead of the resource owner’s. The access token is associated either with the client itself, or with delegated authorization from a resource owner. This grant flow is used when the client is requesting access to protected resources under its control.

1.26 Authenticating Against an OAuth2 API

Most OAuth2 services use the /oauth/token URI endpoint for handling all OAuth2 requests. The first step in authenticating against an OAuth2 protected API service is exchanging your API key for an Access Token.

 It can be done by performing these steps:

  • Create a POST request
  • Supply grant_type=client_credentials in the body of the request

Let’s say the API key has two components

  • ID:xxx
  • Secret: yyy

cURL could be used to get an Access Token like this:

curl --user xxx:yyy --data grant_type=client_credentials -X POST https://api.someapi.com/oauth/token
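The same token exchange could be sketched in Java using the JDK 11+ java.net.http.HttpClient; the endpoint and the ID/secret below are the same placeholders used in the cURL example:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TokenClient {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials (ID "xxx", secret "yyy") sent as HTTP Basic auth.
        String basicAuth = Base64.getEncoder().encodeToString("xxx:yyy".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.someapi.com/oauth/token"))
                .header("Authorization", "Basic " + basicAuth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
                .build();

        // The JSON response body contains the access token.
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}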

1.27 OAuth2 using Spring Boot – Dependencies

Gradle dependencies

compile "org.springframework.boot:spring-boot-starter-security:*"
compile "org.springframework.security.oauth.boot:spring-security-oauth2-autoconfigure:2.0.0.RELEASE"

1.28 OAuth2 using Spring Boot – application.yml

  •  src/main/resources/application.yml requires security configuration
  • Note: This example uses the Facebook provider.

security:
  oauth2:
    client:
      clientId: 233668646673605
      clientSecret: 33b17e044ee6a4fa383f46ec6e28ea1d
      accessTokenUri: https://graph.facebook.com/oauth/access_token
      userAuthorizationUri: https://www.facebook.com/dialog/oauth
      tokenName: oauth_token
      authenticationScheme: query
      clientAuthenticationScheme: form
    resource:
      userInfoUri: https://graph.facebook.com/me

1.29 OAuth2 using Spring Boot – Main Class

@SpringBootApplication
@EnableOAuth2Sso
@RestController
public class DemoApplication extends WebSecurityConfigurerAdapter {

    @RequestMapping("/user")
    public Principal user(Principal principal) {
        return principal;
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .antMatcher("/**")
            .authorizeRequests()
                .antMatchers("/", "/login**", "/webjars/**")
                .permitAll()
            .anyRequest()
                .authenticated()
            .and().logout().logoutSuccessUrl("/").permitAll()
            .and().csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
    }
}

1.30 OAuth2 using Spring Boot – SPA Client

The sample code below uses AngularJS, but you can use similar concepts with or without a client-side framework:

angular.module("app", []).controller("home", function($http, $location) {
    var self = this;
    self.logout = function() {
        $http.post('/logout', {}).success(function() {
            self.authenticated = false;
            $location.path("/");
        }).error(function(data) {
            console.log("Logout failed");
            self.authenticated = false;
        });
    };
    $http.get("/user").success(function(data) {
        self.user = data.userAuthentication.details.name;
        self.authenticated = true;
    }).error(function() {
        self.user = "N/A";
        self.authenticated = false;
    });
});

1.31 JSON Web Tokens

JSON Web Tokens (JWTs) are a replacement for standard/traditional API keys. They are an open standard. They allow fine-grained access control via “claims”. A claim is any data a client “claims” to be true; it typically includes “who issued the request” and “when it was issued”. JSON Web Tokens are cross-domain capable (cookies are not), compact (compared with XML-based security), encoded (URL-safe), and signed (to prevent tampering). OAuth and JWT are not the same: JWT is a specific format for a security access token, while OAuth is a broader security framework for the interaction of different actors (end users, back-end APIs, authorization servers) for the generation and distribution of security access tokens.

1.32 JSON Web Token Architecture

There are three sections in a JSON Web Token: Header, Payload, and Signature. The Header and Payload are base64 encoded. The Signature is calculated from the encoded header and payload. The sections are separated by periods.

1.33 How JWT Works

JWT works as a simple two-way exchange in which the client makes a request and the server generates a response.

The browser requests JWT-encoded data by authenticating. The server generates the signed token and returns it to the client. The client then sends the token with every subsequent HTTP request that needs authentication on the server. The server validates the token and, if it is valid, returns the secured resource to the client.

1.34 JWT Header

Declares the signature algorithm and type:
{
  "typ": "JWT",
  "alg": "HS256"
}
The algorithm shown here (HMAC SHA-256) will be used to create the signature. The type “JWT” stands for JSON Web Token. When base64 encoded it looks like this:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

1.35 JWT Payload

The payload contains “Claims”. Claims come in several types: Registered, Public, and Private. Examples of Registered claims include:

  • iss: identifies who issued the token
  • sub: the subject of the token
  • exp: token expiration time

Public claims use URI-style names to prevent name collisions, e.g. https://corpname.com/jwt_claims/is_user. Private claims are used inside organizations; they can use simple naming conventions, e.g. “department”.

1.36 JWT Example Payload

Example:
{
  "iss": "corpname.com",
  "aud": "corpname.com/rest/product",
  "sub": "jdoe",
  "Email": "jdoe@corpname.com"
}
 After base64 encoding:
eyJpc3MiOiJjb3JwbmFtZS5jb20iLCJhdWQiOiJjb3JwbmFtZS5jb20vcmV
zdC9wcm9kdWN0Iiwic3ViIjoiamRvZSIsIkVtYWlsIjoiamRvZUBjb3Jwbm
FtZS5jb20ifQ

1.37 JWT Example Signature

The signature is created from the header and body like this:
content = base64UrlEncode(header) + "." + base64UrlEncode(payload);
signature = HMACSHA256(content, secret);
Completed signature:
pEonrJLKkpSvAMk5dmBYoxP5hZ0ZhKcnkLJYNNlVxipSoZbCnDrhSq8Psda
5dPqyjnLasPY7pyxoRKx99HAVu8L9hwdO_h9GZ6K443Xvb6uDSMsyvqQp8v
65Rv0SjUenWQRK7INyZ2N8rkHdEaMOOiOPFp7yHLUo8Tq_AM2Q
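The same computation can be sketched in Java with javax.crypto.Mac; the secret key below is a made-up placeholder and the encoded payload is truncated for brevity:

import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSignature {
    public static void main(String[] args) throws Exception {
        String encodedHeader = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"; // base64url(header)
        String encodedPayload = "eyJpc3MiOiJjb3JwbmFtZS5jb20i";        // base64url(payload), truncated

        // content = base64UrlEncode(header) + "." + base64UrlEncode(payload)
        String content = encodedHeader + "." + encodedPayload;

        // Placeholder secret shared between the token issuer and the verifier.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("my-256-bit-secret".getBytes(), "HmacSHA256"));

        // The signature is the base64url-encoded HMAC of the content.
        String signature = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(content.getBytes()));
        System.out.println(signature);
    }
}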

1.38 How JWT Tokens are Used

The client requests a token by sending its credentials to the authentication server. The server returns a JWT. The client adds the token to each HTTP request via the Authorization header. A JWT can be cached in the browser and returned on every request to the server, ensuring the user has access to the resources without re-authenticating on every request. The downside of this is that the user will keep that access for the lifetime of the token unless there is a blacklist that each service checks against. The client sends the request; the API receives the request, reads the JWT from the Authorization header, unpacks the payload, checks the claims, and allows or denies access.

1.39 Adding JWT to HTTP Header

After obtaining a JWT token the client adds it to an HTTP request as an HTTP header
◊ Header Name: Authorization
◊ Type: Bearer
Example:
Authorization: Bearer
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJjb3JwbmFtZS
5jb20iLCJhdWQiOiJjb3JwbmFtZS5jb20vcmVzdC9wcm9kdWN0Iiwic3ViI
joiamRvZSIsIkVtYWlsIjoiamRvZUBjb3JwbmFtZS5jb20ifQ.pEonrJLKk
pSvAMk5dmBYoxP5hZ0ZhKcnkLJYNNlVxipSoZbCnDrhSq8Psda5dPqyjnLa
sPY7pyxoRKx99HAVu8L9hwdO_h9GZ6K443Xvb6uDSMsyvqQp8v65Rv0SjUe
nWQRK7INyZ2N8rkHdEaMOOiOPFp7yHLUo8Tq_AM2Q

1.40 How The Server Makes Use of JWT Tokens

The RESTful web service needs to validate JWT tokens when it receives requests.
Process:
◊ Unpack the token
◊ Validate that the signature matches the header and payload
◊ Validate the claims (has the token expired?)
◊ Compare the scopes
◊ If required, make a call to an ACL (access control list) server
◊ Grant or deny access
This process can be coded into JEE servlet filters or added directly to the web service code.
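A minimal servlet-filter sketch of this process, assuming the jjwt library used later in this tutorial and a hypothetical shared secret (ACL calls and scope checks are omitted), might look like this:

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Hypothetical filter: unpacks the token, verifies the signature and expiration,
// then either forwards the request or rejects it with 401.
public class JwtValidationFilter implements Filter {

    private static final String SECRET = "my-256-bit-secret"; // placeholder

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String header = request.getHeader("Authorization");
        if (header == null || !header.startsWith("Bearer ")) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        try {
            // parseClaimsJws() verifies the signature and rejects expired tokens.
            Claims claims = Jwts.parser()
                    .setSigningKey(SECRET.getBytes())
                    .parseClaimsJws(header.substring("Bearer ".length()))
                    .getBody();
            request.setAttribute("jwt.claims", claims); // make claims available downstream
            chain.doFilter(req, res);
        } catch (Exception e) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}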

1.41 What are “Scopes”?

 The payload area of a JSON web token contains a “claim” named “scope”. The value for the “scope” field is an array.
 Example:
"scope": [ "app.feature" ]
"scope": [ "HR.review" ]

Technically scope strings can include any text. In practice, scope strings are limited to those defined by an organization. Scope strings refer to specific operations on specific API endpoints.

1.42 JWT with Spring Boot – Dependencies

Add JWT dependencies
compile "org.springframework.boot:spring-boot-starter-security:*"
compile "io.jsonwebtoken:jjwt:0.9.0"
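As a small sketch of using the jjwt dependency above (the secret and claim values are placeholders, not part of the course code):

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.util.Date;

public class JwtDemo {
    public static void main(String[] args) {
        byte[] secret = "my-256-bit-secret".getBytes(); // placeholder shared secret

        // Build and sign a token with a few registered claims.
        String token = Jwts.builder()
                .setIssuer("corpname.com")
                .setSubject("jdoe")
                .setExpiration(new Date(System.currentTimeMillis() + 60 * 60 * 1000))
                .signWith(SignatureAlgorithm.HS256, secret)
                .compact();

        // Parse it back; the signature and expiration are verified here.
        Claims claims = Jwts.parser()
                .setSigningKey(secret)
                .parseClaimsJws(token)
                .getBody();
        System.out.println(claims.getSubject()); // prints "jdoe"
    }
}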

1.43 JWT with Spring Boot – Main Class

@EnableWebSecurity
public class SecurityTokenConfig extends WebSecurityConfigurerAdapter {

    // jwtConfig and JwtTokenAuthenticationFilter are defined elsewhere in the application
    @Autowired
    private JwtConfig jwtConfig;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf().disable()
            // Add a filter to validate the tokens with every request
            .addFilterAfter(new JwtTokenAuthenticationFilter(jwtConfig),
                    UsernamePasswordAuthenticationFilter.class)
            // authorization requests config
            .authorizeRequests()
            // allow all who are accessing the "auth" service
            .antMatchers(HttpMethod.POST, jwtConfig.getUri()).permitAll()
            // must be an admin if trying to access the admin area
            // (authentication is also required here)
            .antMatchers("/gallery" + "/admin/**").hasRole("ADMIN")
            // any other request must be authenticated
            .anyRequest().authenticated();
    }
}

1.44 Summary

  • Spring Security has many features that simplify securing web applications.
  • Making use of many of these features only requires configuration in a Spring configuration file.
  • Spring Security can work with many different sources of user and permission information.

Twelve-Factor Applications – 12 Best Practices for Microservices

This tutorial is adapted from Web Age course  Technical Introduction to Microservices.

1.1 Twelve-factor Applications


1.2 Twelve Factors, Microservices, and App Modernization

Heroku, a platform as a service (PaaS) provider, established general principles for creating useful web apps, known as the Twelve-Factor App. Applying the twelve factors to microservices requires some modification of the original PaaS definitions. The goal of combining microservices, the twelve-factor app, and app modernization is a general-purpose reference architecture that enables continuous delivery.

1.3 The Twelve Factors

  1.  Codebase – One codebase tracked in revision control, many deploys
  2.  Dependencies – Explicitly declare and isolate dependencies
  3.  Config – Store config in the environment
  4. Backing services – Treat backing services as attached resources
  5.  Build, release, run – Strictly separate build and run stages
  6.  Processes – Execute the app as one or more stateless processes
  7.  Port binding – Export services via port binding
  8.  Concurrency – Scale out via the process model
  9. Disposability – Maximize robustness with fast startup and graceful shutdown
  10. Dev/prod parity – Keep development, staging, and production as similar as possible
  11. Logs – Treat logs as event streams
  12. Admin processes – Run admin/management tasks as one-off processes

1.4 Categorizing the 12 Factors

Code
  • Codebase
  •  Build, Release, Run
  • Dev/prod parity
Deploy
  • Dependencies
  • Config
  • Processes
  • Backing services
  • Port Binding
Operate
  • Concurrency
  • Disposability
  • Logs
  • Admin Processes

1.5 12-Factor Microservice Codebase

The Twelve-Factor App recommends one codebase per app. In a microservices architecture, the correct approach is one codebase per service. This codebase should be in version control, either distributed, e.g. git, or centralized, e.g. SVN.

1.6 12-Factor Microservice Dependencies

As suggested in The Twelve-Factor App, regardless of what platform your application is running on, use the dependency manager included with your language or framework. Do not assume that the tool, library, or application your code depends on will be there. How you install operating system or platform dependencies depends on the platform: in non-containerized environments, use a configuration management tool (Chef, Puppet, Salt, Ansible) to install system dependencies; in a containerized environment, do this in the Dockerfile.

1.7 12-Factor Microservice Config

Anything that varies between deployments can be considered configuration.  All configuration data should be stored in a separate place from the code, and read in by the code at runtime, e.g. when you deploy code to an environment, you copy the correct configuration files into the codebase at that time.  The Twelve-Factor App guidelines recommend storing all configuration in the environment, rather than committing it to the source code repository.   Use non-version controlled .env files for local development. Docker supports the loading of these files at runtime.  Keep all .env files in a secure storage system, such as Hashicorp Vault, to keep the files available to the development teams, but not committed to Git. Use an environment variable for anything that can change at runtime, and for any secrets that should not be committed to the shared repository.  Once you have deployed your application to a delivery platform, use the delivery platform’s mechanism for managing environment variables.
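As a minimal Java illustration of reading such configuration from the environment (the variable names are examples only, not prescribed by the guidelines):

// Hypothetical example: everything that differs between deployments is read
// from the environment at startup rather than compiled in or committed to Git.
public class AppConfig {
    public static void main(String[] args) {
        String databaseUrl = System.getenv("DATABASE_URL");   // backing service location
        String apiKey = System.getenv("PAYMENT_API_KEY");     // secret, never in source control
        String logLevel = System.getenv().getOrDefault("LOG_LEVEL", "INFO");

        System.out.println("Connecting to " + databaseUrl + " with log level " + logLevel
                + " (API key " + (apiKey == null ? "not set" : "set") + ")");
    }
}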

1.8 12-Factor Microservice Backing Services

The Twelve-Factor App guidelines define a backing service as “any service the app consumes over the network as part of its normal operation.” Anything external to a service is treated as an attached resource, including other services. This ensures that every service is completely portable and loosely coupled to the other resources in the system. Strict separation also increases flexibility during development – developers only need to run the service(s) they are modifying, not the others. A database, cache, queueing system, etc. should all be referenced by a simple endpoint (URL) and credentials, if necessary.

1.9 12-Factor Microservice Build, Release, Run

To support strict separation of build, release, and run stages, as recommended by The Twelve-Factor App, use a continuous integration/continuous delivery (CI/CD) tool to automate builds.  Docker images make it easy to separate the build and run stages. Ideally,  images are created from every commit and treated as deployment artifacts.

1.10 12-Factor Microservice Processes

For microservices, the application needs to be stateless. A stateless service can be scaled horizontally by simply adding more instances of that service. Store any stateful data, or data that needs to be shared between instances, in a backing service.

1.11 12-Factor Microservice Port Binding

The twelve-factor app is completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port and listening to requests coming in on that port. In a local development environment, the developer visits a service URL like http://localhost:5000/ to access the service exported by their app. In deployment, a routing layer handles routing requests from a public-facing hostname to the port-bound web processes. This is typically implemented by using dependency declaration to add a webserver library to the app, such as Tornado for Python, Thin for Ruby, or Jetty for Java and other JVM-based languages. This happens entirely in user space, that is, within the app’s code. The contract with the execution environment is binding to a port to serve requests. Nearly any kind of server software can be run via a process binding to a port and awaiting incoming requests. Examples include ejabberd (speaking XMPP) and Redis (speaking the Redis protocol). The port-binding approach means that one app can become the backing service for another app, by providing the URL to the backing app as a resource handle in the config for the consuming app.
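A self-contained sketch of the port-binding idea using only the JDK’s built-in HTTP server (the port and response are arbitrary):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class SelfContainedService {
    public static void main(String[] args) throws Exception {
        // The app itself binds to a port and serves HTTP; no external
        // web server is injected into the execution environment.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "5000"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello from a port-bound service\n".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}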

1.12 12-Factor Microservice Concurrency

The Unix and mainframe process models are predecessors to a true microservices architecture, allowing specialization and resource sharing for different tasks within a monolithic application. In a microservices architecture, we horizontally scale each service independently, to the extent supported by the underlying infrastructure. Docker and other containerized services provide service concurrency.

1.13 12-Factor Microservice Disposability

Instances of a service need to be disposable so they can be started, stopped, and redeployed quickly, and with no loss of data. Services deployed in Docker containers satisfy this requirement automatically, as it is an inherent feature of containers that they can be stopped and started instantly. Storing state or session data in queues or other backing services ensures that a request is handled seamlessly in the event of a container crash. Backing stores support crash-only design.

1.14 12-Factor Microservice Dev/Prod Parity

Keep all of your environments – development, staging, production, and so on – as identical as possible, to reduce the risk that bugs show up only in some environments. Containers enable you to run exactly the same execution environment all the way from local development through production. Differences in the underlying data can still result in runtime changes in application behavior.

1.15 12-Factor Microservice Logs

Use a log-management solution in a microservice for routing or storing logs. Define logging strategy as part of the architecture standards, so all services generate logs in a similar fashion.  Log strategy should be part of a larger Application Performance Management (APM) or Digital Performance Management (DPM) solution tied to the Everything as a Service model (XaaS).

1.16 12-Factor Microservice Admin Processes

In a production environment, run administrative and maintenance tasks separately from the app. Containers make this very easy, as you can spin up a container just to run a task and then shut it down. Examples include doing data cleanup, running analytics for a presentation, or turning on and off features for A/B testing.

1.17 Kubernetes and the Twelve Factors – 1 Codebase

Kubernetes makes heavy use of declarative constructs. All parts of a Kubernetes application are described with text-based representations in YAML or JSON, and the referenced containers are themselves described in source code as a Dockerfile. Because everything from the image to the container deployment behavior is captured as text, you can easily keep all of it under source control, typically using git.

1.18 Kubernetes and the Twelve Factors – 2 Dependencies

A microservice is only as reliable as its most unreliable dependency. Kubernetes includes readinessProbes and livenessProbes that enable you to do ongoing dependency checking. The readinessProbe allows you to validate whether you have backing services that are healthy and you’re able to accept requests.  The livenessProbe allows you to confirm that your microservice is healthy on its own.  If either probe fails over a given window of time and threshold attempts, the Pod will be restarted.
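A hedged sketch of what such probes might look like in a Pod’s container spec (the image, paths, ports, and timings below are illustrative placeholders):

containers:
  - name: catalog-service
    image: registry.example.com/catalog-service:1.0.0
    readinessProbe:            # can this instance accept traffic (backing services reachable)?
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    livenessProbe:             # is this instance itself still healthy?
      httpGet:
        path: /health/live
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 3      # restart the container after repeated failures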

1.19 Kubernetes and the Twelve Factors – 3 Config

 The Config factor requires storing configuration sources in your process environment table (e.g. ENV VARs).  Kubernetes provides ConfigMaps and Secrets that can be managed in source repositories.  Secrets should never be source controlled without an additional layer of encryption. Containers can retrieve the config details at runtime.
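A minimal sketch of the idea (all names and values below are placeholders):

# Hypothetical ConfigMap holding non-secret configuration...
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalog-config
data:
  LOG_LEVEL: "INFO"
  FEATURE_FLAG_NEW_UI: "false"
---
# ...surfaced to the container as environment variables, with secrets kept separately.
apiVersion: v1
kind: Pod
metadata:
  name: catalog-service
spec:
  containers:
    - name: catalog-service
      image: registry.example.com/catalog-service:1.0.0
      envFrom:
        - configMapRef:
            name: catalog-config
      env:
        - name: DB_PASSWORD          # comes from a Secret, not the ConfigMap
          valueFrom:
            secretKeyRef:
              name: catalog-secrets
              key: db-password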

1.20 Kubernetes and the Twelve Factors – 4 Backing Services

When you have network dependencies, treat each of them as a “Backing Service”. At any time, a backing service could be attached or detached, and your microservice must be able to respond appropriately. For example, if your application interacts with a database, you should isolate all interaction with that database behind a set of connection details (obtained either through dynamic service discovery or via Config in a Kubernetes Secret). Then consider whether your network requests implement fault tolerance, so that if the backing service fails at runtime your microservice does not trigger a cascading failure. The backing service may be running in a separate container or somewhere off-cluster; your microservice should not care, because all interaction with it happens through the same connection details and APIs.

1.21 Kubernetes and the Twelve Factors – 5 Build, Release, Run

Once you commit the code, a build occurs and the container image is built and published to an image registry. If you’re using Helm, your Kubernetes application may also be packaged and published into a Helm registry as well. These “releases” are then re-used and deployed across multiple environments to ensure that an unexpected change is not introduced somewhere in the process (by re-building the binary or image for each environment).

1.22 Kubernetes and the Twelve Factors – 6 Processes

In Kubernetes, a container image runs as a container process within a Pod. Kubernetes (and containers in general) provide a facade that gives better isolation of the container process from other containers running on the same host. Using a process model enables easier management for scaling and failure recovery (e.g. restarts). Typically, the process should be stateless to support scaling the workload out through replication. For any state used by the application, you should use a persistent data store that all instances of your application process will discover via your Config. In Kubernetes-based applications where multiple copies of pods are running, requests can go to any pod, hence the microservice cannot assume sticky sessions.

1.23 Kubernetes and the Twelve Factors – 7 Port Binding

You can use Kubernetes Service objects to declare the network endpoints of your microservices and to resolve the network endpoints of other services in the cluster or off-cluster. Without containers, whenever you deployed a new service (or new version), you would have to perform some amount of collision avoidance for ports that are already in use on each host.  Container isolation allows you to run every process (including multiple versions of the same microservice) on the same port (by using network namespaces in the Linux kernel) on a single host.
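A short illustrative Service manifest (the names and ports are placeholders):

# Hypothetical Service: other workloads resolve this microservice by the stable
# name "catalog-service" instead of tracking individual host ports.
apiVersion: v1
kind: Service
metadata:
  name: catalog-service
spec:
  selector:
    app: catalog-service       # matches the labels on the service's Pods
  ports:
    - port: 80                 # port exposed inside the cluster
      targetPort: 8080         # container port the Pods listen on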

1.24 Kubernetes and the Twelve Factors – 8 Concurrency

Kubernetes allows you to scale the stateless application at runtime with various kinds of lifecycle controllers. The desired number of replicas are defined in the declarative model and can be changed at runtime. Kubernetes defines many lifecycle controllers for concurrency including ReplicationControllers, ReplicaSets, Deployments, StatefulSets, Jobs, and DaemonSets. Kubernetes supports autoscaling based on compute resource thresholds around CPU and memory or other external metrics. The Horizontal Pod Autoscaler (HPA) allows you to automatically scale the number of pods within a Deployment or ReplicaSet.

1.25 Kubernetes and the Twelve Factors – 9 Disposability

Within Kubernetes, you focus on the simple unit of deployment of Pods which can be created and destroyed as needed — no single Pod is all that valuable. When you achieve disposability, you can start up fast and the microservices can die at any time with no impact on user experience. With the livenessProbes and readinessProbes, Kubernetes will actually destroy Pods that are not healthy over a given window of time.

1.26 Kubernetes and the Twelve Factors – 10 Dev/Prod Parity

Containers (and to a large extent Kubernetes) standardize how you deliver your application and its running dependencies, meaning that you’re able to deploy everything the same way everywhere. For example, if you’re using MySQL in a highly available configuration in production, you can deploy the same architecture of MySQL in your dev cluster. By establishing parity of production architectures in earlier dev environments, you can typically avoid unforeseen differences that are important to how the application runs (or more importantly how it fails).

1.27 Kubernetes and the Twelve Factors – 11 Logs

For containers, you will typically write all logs to stdout and stderr file descriptors. The important design point is that a container should not attempt to manage internal files for log output, but instead delegate to the container orchestration system around it to collect logs and handle analysis and archival. Often in Kubernetes, you’ll configure Log collection as one of the common services to manage Kubernetes. For example, you can enable an Elasticsearch-Logstash-Kibana (ELK) stack within the cluster.

1.28 Kubernetes and the Twelve Factors – 12 Admin Processes

Within Kubernetes, the Job controller allows you to create Pods that are run once or on a schedule to perform various activities. A Job might implement business logic, but because Kubernetes mounts API tokens into the Pod, you can also use them for interacting with the Kubernetes orchestrator as well. By isolating these kinds of administrative tasks, you can further simplify the behavior of your microservice.

1.29 Summary

The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc). The twelve-factor methodology is highly useful when creating microservices architecture based applications.