
Google Cloud Virtual Networking

 
May 19, 2021 by Bibhas Bhattacharya
Category: Cloud

This tutorial is adapted from the Web Age course Google Cloud Platform Fundamentals.

1.1 GCP Virtual Networking

Networking services provide connectivity between cloud-based VMs, on-premises servers, and other cloud services. Google Cloud treats networking as a global feature that spans all its services. GCP networking is based on Google’s Andromeda architecture, which allows cloud administrators to create and use software-defined networking elements, such as firewalls, routing tables, and VMs.

1.2 GCP Networking Services and Components at a Glance

1.3 VPC Main Components Diagram

Source: https://www.sneppets.com/cloud/google-cloud-vpc-networks-fundamentals/

1.4 Network Service Tiers

  • GCP offers two Network Service Tiers:
    • Standard Tier, and
    • Premium Tier (the default)
  • The default, Premium Tier, leverages Google’s global network, which brings high throughput and reliability.
  • The Standard Networking Tier is a lower-cost networking capability with network performance comparable to other public clouds (according to Google’s documentation).
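As a small illustration (not part of the original course material), the network tier can be chosen when reserving an external IP address with gcloud; the address name and region below are placeholders:

# Reserve a regional external IP in the Standard Tier (Premium is the default)
gcloud compute addresses create example-standard-ip \
    --region=us-central1 \
    --network-tier=STANDARD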

1.5 A Virtual Private Cloud (VPC) Network

A Virtual Private Cloud (VPC) network is a virtualized layer on top of the physical network used by Google Cloud. 

  • A VPC provides the following services:
    • Network connectivity services for your
      • Compute Engine VM instances,
        • Your VM instance can have more than one interface; each interface, however, must be connected to a different network
      • Google Kubernetes Engine (GKE) clusters,
      • App Engine flexible environment instances, and
      • Other Google Cloud products built with Compute Engine VMs
    • Native Internal TCP/UDP Load Balancing and proxy systems for Internal HTTP(S) Load Balancing
    • Connectivity to on-prem networks using Cloud VPN tunnels and Cloud Interconnect attachments

1.6 App Engine vs Compute Engine Networking

  • App Engine is a fully managed service: Google manages networking, scaling, and load balancing for you
  • Compute Engine gives you more fine-grained control over networking services and configuration, including the ability to load-balance traffic across resources, create DNS records, and connect your existing on-prem networks to Google Cloud

1.7 Network and Subnet Terminology

VPC networks and subnets are different types of objects in Google Cloud. VPC networks are global objects that span all available regions, while subnets are regional objects. A VPC network must have at least one subnet before you can use it. The subnet creation mode defines the type of VPC network (more on that later in the module). Subnets define IP range partitions within a VPC; accordingly, a VPC network itself does not have any IP address range associated with it. IP ranges are defined on the subnets. Each subnet is associated with a geographical region, and you can create more than one subnet per region.
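As a minimal sketch of these two object types (the network name, subnet name, region, and IP range are placeholders, not values from the course), a global custom-mode VPC network and a regional subnet inside it could be created with gcloud like this:

# Create a global VPC network with no subnets yet
gcloud compute networks create example-net --subnet-mode=custom

# Add a regional subnet with its own IP range to the network
gcloud compute networks subnets create example-subnet \
    --network=example-net \
    --region=us-central1 \
    --range=10.10.0.0/24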

1.8 CIDR Network Notation

Google network documentation uses the CIDR network notation. Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and for IP routing. CIDR notation gives a compact representation of an IP address along with its associated network mask using this form:

<IP_address>/<width (in bits) of the network prefix> 

For example,

192.168.100.14/24 
represents the IPv4 address 192.168.100.14 with a 24-bit network prefix, i.e. the subnet mask 255.255.255.0, leaving 8 bits for host addresses (256 addresses, 192.168.100.0 through 192.168.100.255)

In firewall rules, the 0.0.0.0/0 CIDR range is used to represent any source (e.g. the Internet).

Notes:

It is good to know that there are several “non-routable” (not reachable from outside of the private networks) IP ranges:

  • class A reserved space 10.0.0.0/8
  • class B reserved space 172.16.0.0/12
  • class C reserved space 192.168.0.0/16

The above IP addresses are routable only within private networks.

1.9 A Basic Cross-Region VPC Network

Adapted from https://cloud.google.com/vpc/docs/shared-vpc?hl=en#shared_vpc_overview

Any IAM user who has the compute.InstanceAdmin role for the project can create instances in this project. The project is designated as standalone because it is not attached to a host project; if it were, it would be treated as a service project with the related administrative, billing, and organizational benefits.

1.10 Legacy Networks

Current Google Cloud VPC networks offer users more advanced features than the older networks that have been downgraded to legacy status. Legacy networks lack many of the capabilities found in modern VPCs. Legacy networks are associated with a single global IP range that cannot be subdivided into subnets; VPC networks are partitionable into subnets, making it possible for each Google Cloud region to be associated with one or more subnets in a single VPC network. Legacy networks do not support Private Google Access (PGA is discussed later in the module). If you already have an older legacy network, there is a migration path to upgrade it to a VPC network; there is, however, no path to convert a VPC network into a legacy network.

Notes:

Google Cloud issued a warning: “Legacy networks are deprecated and will shut down on June 1, 2021 for any GCP project. After that date, you won’t be able to create legacy networks. However, existing legacy networks won’t be affected and will continue to operate normally. Until that date, the field will be available only for projects with existing legacy networks.”

One limitation with legacy networks is that it is not possible to create regional subnets in a legacy network.

1.11 Listing Networks

A fast way to view the VPC (and legacy, if any) networks in your project is to use the gcloud compute networks list command. VPC networks will list their subnet modes; legacy networks will be marked with the LEGACY subnet creation mode.
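For example (the networks and subnets shown will, of course, depend on your project; example-net is a placeholder name):

# List all VPC (and legacy) networks in the current project
gcloud compute networks list

# List the subnets of one specific network
gcloud compute networks subnets list --network=example-net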

1.12 Viewing Network Details

Use the following gcloud command to view a specific network’s details:

gcloud compute networks describe <NETWORK>

1.13 Projects and VPC Relationship

Projects can contain multiple VPC networks unless an organizational policy prohibits it. You can create and add more networks to your project; a network, however, belongs to exactly one project (although Shared VPC, discussed below, lets other projects in the same organization use its subnets). New projects start with a default network (an auto-mode VPC network) that has one subnetwork (subnet) in each region. The default network comes with pre-populated firewall rules that you can delete or modify.
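A hedged sketch of adding a second network to a project (the name is a placeholder; whether this succeeds depends on your organization's policy and project quota):

# The default auto-mode network already exists in a new project;
# this adds an additional custom-mode VPC network alongside it
gcloud compute networks create second-net --subnet-mode=custom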

1.14 VPC Specifications 

  • VPC networks, including their associated routes and firewall rules, are global resources; they are not associated with any particular region or zone.
  • Subnets are regional resources; each subnet defines a range of IP addresses.
  • Traffic to and from your VM instances is controlled with network firewall rules. Firewall rules are implemented on the VMs themselves, so traffic can only be controlled and logged as it leaves or arrives at a VM.
  • Resources within a VPC network can communicate with one another by using internal IPv4 addresses, subject to applicable network firewall rules.
  • Instances with internal IP addresses can communicate with Google APIs and services.
  • IAM roles can be applied when administering network resources.
  • Using a Shared VPC [https://cloud.google.com/vpc/docs/shared-vpc], an organization can keep a VPC network in a common host project; IAM-authorized members from other projects in the same organization are allowed to create resources that use subnets of that Shared VPC network.
  • VPC networks from different projects or organizations can be connected to each other using VPC Network Peering.
  • VPC networks can be securely connected in hybrid environments by using Cloud VPN or Cloud Interconnect.

1.15 Types of VPC Networks

Types of VPC networks are determined by their subnet creation mode:

  • Auto-mode VPC networks
    • Unless you choose to disable this default, each new project is provisioned with a default auto-mode VPC network along with pre-populated firewall rules. These are built with one automatically provisioned subnet per region at creation time and automatically receive new subnets in new regions. Subnets are assigned a set of predefined IP ranges that fit within the 10.128.0.0/9 CIDR block.
  • Custom-mode VPC networks
    • Give you full control over subnet configuration; you have to manually create subnets.

You can switch from an auto-mode VPC network to a custom-mode VPC; the reverse conversion is not supported (this is a one-way conversion path).
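The sketch below, with placeholder network names, shows the two creation modes and the one-way conversion from auto mode to custom mode:

# Auto-mode network: one subnet per region is created automatically
gcloud compute networks create auto-net --subnet-mode=auto

# Custom-mode network: you create every subnet yourself
gcloud compute networks create custom-net --subnet-mode=custom

# One-way conversion of an auto-mode network to custom mode
gcloud compute networks update auto-net --switch-to-custom-subnet-mode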

1.16 Considerations for Auto-mode VPC Networks

The most attractive feature of auto-mode VPC networks is how easy they are to set up. For production networks, however, Google recommends using custom-mode VPC networks.

1.17 Considerations for Custom-mode VPC Networks

Custom-mode VPC networks are more flexible than auto-mode ones and offer the following benefits:

  • This mode will support your plans to connect VPC networks through VPC Network Peering or Cloud VPN
    • This is not possible with auto-mode networks since the subnets of every auto-mode VPC network use the same predefined range of IP addresses (the 10.128.0.0/9 CIDR block) — in other words, you cannot connect auto-mode VPC networks to one another

1.18 Virtual Firewalls

VPC firewall rules control traffic coming into and out of VM instances on a network. The default network, once provisioned, has a default set of firewall rules; you can create custom rules if more fine-grained protection of network resources is needed. Firewall rules are defined at the VPC network level. Using firewall rule configurations, you can allow or deny connections to or from instances. Connections are allowed or denied on a per-instance basis, so VPC firewall rules can, essentially, control traffic between individual instances within the same network.

1.19 Firewall Rules

  • To create a VPC firewall rule, you need to specify the target VPC network along with the elements that define what the rule does.
  • These elements lend you control over:
    • Traffic protocols, source and destination ports, etc.

1.20 Firewall Rule Elements (Components)

Each firewall rule you create in Google Cloud consists of the following main configuration elements (called components in the Google Cloud documentation):

  • The direction of the connection:
    • Inbound (ingress) rules apply to incoming connections from specified sources to targets, and
    • Outbound (egress) rules apply to connections going from targets to specified destinations
  • An action on a matched connection:
    • Either allow or deny the matching connection
  • Rule status:
    • Enabled or disabled (disabling suspends the rule without deleting it)
  • A target, which defines the instances (including GKE clusters and App Engine flexible environment instances) to which the rule applies
  • A source for ingress rules or a destination for egress rules
  • The connection’s protocol (tcp, udp, icmp, esp, ah, sctp, ipip) and the destination port

Notes:

In addition to the firewall rules that you create, Google Cloud has other rules that can affect incoming (ingress) or outgoing (egress) connections; for example, Google Cloud blocks certain traffic regardless of your rules, such as egress traffic on TCP port 25 within a VPC network. For more information, see https://cloud.google.com/vpc/docs/firewalls#blockedtraffic

1.21 Ingress (Inbound) Firewall Rules

Source: https://cloud.google.com/vpc/docs/firewalls#firewall_rule_components

1.22 Egress (Outbound) Rules

Source: https://cloud.google.com/vpc/docs/firewalls#firewall_rule_components

1.23 Authoring Firewall Rules

You create or modify VPC firewall rules by using either

  • Cloud Console, or
  • The gcloud command-line tool, or
  • REST API
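For instance, here is a gcloud sketch that ties together the rule components from section 1.20 (the rule name, network, tag, and ports are placeholders, not values from the original course):

# Ingress rule: allow HTTP/HTTPS from anywhere to VMs tagged "web-server"
gcloud compute firewall-rules create allow-web \
    --network=example-net \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=web-server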

1.24 Setting a Default Compute Zone with gcloud

  • Use this gcloud command to set the default compute zone:
    gcloud config set compute/zone <your default zone>
  • For example:
    gcloud config set compute/zone us-central1-a

1.25 A Firewall Rules Example

  • The following example can help you identify the firewall rules applicable to a connection between two VM instances (i1 and i2) running in the same network; a possible gcloud rendering follows this list
    • Traffic from i1 to i2 can be controlled by using either of these firewall rules:
      • An ingress rule with a target of i2 and a source of i1
      • An egress rule with a target of i1 and a destination of i2
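One way the two equivalent options could look in gcloud, assuming (purely for illustration) that the instances carry the network tags i1 and i2, that i2's internal address is 10.128.0.3, and that the traffic of interest is TCP port 5432:

# Option 1: ingress rule targeting i2, with i1 as the source
gcloud compute firewall-rules create allow-i1-to-i2-ingress \
    --network=example-net \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-tags=i1 \
    --target-tags=i2

# Option 2: egress rule targeting i1, with i2's internal IP as the destination
# (egress destinations are specified as IP ranges, not tags)
gcloud compute firewall-rules create allow-i1-to-i2-egress \
    --network=example-net \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --destination-ranges=10.128.0.3/32 \
    --target-tags=i1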

1.26 Protocol and Destination Port Specification Combinations

Source: https://cloud.google.com/vpc/docs/firewalls#firewall_rule_components

Notes:

Be aware of the configuration rule that a port cannot be specified by itself (e.g. 8080). Make sure you use it in combination with the protocol (e.g. tcp:8080).

1.27 GKE Firewall Rules

Google Kubernetes Engine creates firewall rules automatically when creating the following resources:

  • GKE Clusters
  • GKE Services
  • GKE Ingresses

1.28 Routes

A route is a virtual networking component that allows you to implement more advanced networking functions for your instances, such as creating VPNs. Routes define paths for packets leaving instances (egress traffic), in other words, a route controls how packets leaving an instance should be directed. For example, a route might specify that packets destined for a particular network range should be handled by a gateway VM instance that you configure and operate.

For a complete overview of Google Cloud routes, visit https://cloud.google.com/vpc/docs/routes
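As a hedged example of the kind of custom route the paragraph describes (every name and range below is a placeholder), a static route could send traffic for a particular range to a gateway VM:

# Route traffic destined for 192.168.50.0/24 through a gateway VM
# that you configure and operate yourself
gcloud compute routes create to-partner-range \
    --network=example-net \
    --destination-range=192.168.50.0/24 \
    --next-hop-instance=gateway-vm \
    --next-hop-instance-zone=us-central1-a \
    --priority=800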

1.29 Route Categories (Types)

  • Google Cloud has two categories (types) of routes:
    • system-generated, and
    • custom
  • Every new network is provisioned with two types of system-generated routes:
    • The default one that defines a path for traffic leaving the VPC network and provides internet access to VMs that need it, as well as the “typical path” for Private Google Access
    • A subnet route created for each of the IP ranges associated with a subnet
  • Custom routes are either manually created static routes or dynamic routes maintained automatically by one or more of your Cloud Routers

Notes:

Note A

If your VPC network is connected to an on-prem network through Cloud VPN or Cloud Interconnect, you need to make sure that your subnet IP address ranges do not conflict with those on-prem.

 

Note B

Every subnet has at least one subnet route for its primary IP range; additional subnet routes are created for a subnet if you add secondary IP ranges to it. Subnet routes define paths for traffic to reach VMs that use the subnets.
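To see the system-generated default route and subnet routes alongside any custom routes, you can list a project's routes; the filter value below is a placeholder network name:

# System-generated subnet routes and the default internet route appear
# next to any custom static routes
gcloud compute routes list --filter="network:example-net"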

1.30 Configuring Private Google Access

  • A Compute Engine VM can be created with no external IP address assigned to its network interface
    • Such an instance can only send IP packets to other internal IP addresses
  • To allow connectivity from these VMs to the external IP addresses used by Google APIs and services, you need to enable Private Google Access (PGA) on the subnet used by the VM’s network interface (a gcloud sketch follows this list)
    • PGA also allows access to the external IP addresses used by App Engine
  • The list of the supported PGA services can be found here: https://cloud.google.com/vpc/docs/private-google-access#pga-supported
  • PGA configuration steps can be found here: https://cloud.google.com/vpc/docs/configure-private-google-access
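A minimal sketch of the enable step, reusing the placeholder subnet and region from the earlier examples in this tutorial:

# Turn on Private Google Access for one subnet
gcloud compute networks subnets update example-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# Verify the setting
gcloud compute networks subnets describe example-subnet \
    --region=us-central1 \
    --format="get(privateIpGoogleAccess)"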

Notes:

Currently, the following services are not supported by PGA:

  • App Engine Memcache
  • Filestore
  • Memorystore

1.32 The Implementation of Private Google Access Diagram

Source: https://cloud.google.com/vpc/docs/private-google-access#pga-supported

Notes:

In the above diagram (description is borrowed from https://cloud.google.com/vpc/docs/private-google-access#pga-supported):

The VPC network has been configured to meet the DNS, routing, and firewall network requirements for Google APIs and services. Private Google Access has been enabled on subnet-a, but not on subnet-b. 
VM A1 can access Google APIs and services, including Cloud Storage, because its network interface is located in subnet-a, which has Private Google Access enabled. Private Google Access applies to the instance because it only has an internal IP address. 
VM B1 cannot access Google APIs and services because it only has an internal IP address and Private Google Access is disabled for subnet-b.
VM A2 and VM B2 can both access Google APIs and services, including Cloud Storage, because they each have external IP addresses. Private Google Access has no effect on whether or not these instances can access Google APIs and services because both have external IP addresses.

1.33 Cloud NAT

  • Cloud NAT (Network Address Translation) is a fully-managed cloud service that provides software-defined network address translation support for Google Cloud
  • Cloud NAT enables your VM instances and private GKE clusters that have no external IP addresses to connect to the Internet (see the sketch after this list)
  • Cloud NAT does not accept unsolicited inbound connections from the Internet; it only allows inbound IP packets that arrive in response to connections initiated from inside the network
  • For more information about the Cloud NAT service, visit https://cloud.google.com/nat/docs/overview
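A minimal gcloud sketch of setting up a NAT gateway (Cloud NAT is configured on a Cloud Router; all names and the region are placeholders):

# Cloud NAT is attached to a Cloud Router in a region
gcloud compute routers create example-router \
    --network=example-net \
    --region=us-central1

# NAT all subnet ranges in the region using automatically allocated external IPs
gcloud compute routers nats create example-nat \
    --router=example-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges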

1.34 Traditional NAT vs Cloud NAT

Source: https://cloud.google.com/nat/docs

1.35 Automated Network Deployment

You can automate network deployment using Cloud Deployment Manager and Terraform by HashiCorp. Deployment Manager is part of Google Cloud. Terraform is an open-source tool. Both Deployment Manager and Terraform perform the necessary inter-dependency checks and, where possible, create resources in parallel, which can significantly speed up the process.

For the tutorial on how to automate network deployment, see https://cloud.google.com/solutions/automated-network-deployment-overview
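With Deployment Manager, for example, a network described in a configuration file is rolled out with a single command; the deployment name and file name below are hypothetical:

gcloud deployment-manager deployments create example-network-deployment \
    --config network.yaml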

1.36 IP Addresses

All instances in Google Cloud are assigned an internal IP address. An external IP, if needed, can be requested. Be aware that an ephemeral external IP, once provisioned, is tied to the life of the instance and may change when the instance is stopped and restarted. Static (permanent) external IP addresses can also be requested; static addresses are re-assignable and can be re-attached to your instances as needed.
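A hedged sketch of reserving a static external address and attaching it to an instance (the names, region, and zone are placeholders, and the instance is assumed to have no external IP yet):

# Reserve a static regional external IP address
gcloud compute addresses create example-static-ip --region=us-central1

# Attach the reserved address to an instance that currently has no external IP
gcloud compute instances add-access-config example-vm \
    --zone=us-central1-a \
    --address="$(gcloud compute addresses describe example-static-ip \
        --region=us-central1 --format='get(address)')"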

1.37 Google Cloud Load Balancing

Load balancing distributes user traffic across multiple instances of your applications fronted by a load balancer visible to the outside world as a single IP address end-point. Google Cloud load balancing is a fully distributed, software-defined managed service. Load balancing serves the following two main purposes:

  • Lending high availability and fault tolerance to your applications
  • Achieving horizontal scaling to better support high request volumes

Google Cloud provides a set of DDoS protection mechanisms, depending on the load balancer type.

For more information on Google load balancing, visit https://cloud.google.com/load-balancing/docs/load-balancing-overview#a_closer_look_at_cloud_load_balancers

Notes:

Google Cloud supports two load balancer types, each of which offers different kinds of DDoS protection.

Proxy-based external load balancers

All of the Google Cloud proxy-based external load balancers automatically inherit DDoS protection from Google Front Ends (GFEs), which are part of Google’s production infrastructure.

In addition to the automatic DDoS protection provided by the GFEs, you can configure Google Cloud Armor for external HTTP(S) load balancers.

 

Pass-through external load balancers

These load balancers are implemented using the same Google routing infrastructure used to implement external IP addresses for Compute Engine VMs. For inbound traffic to a network load balancer, Google Cloud limits incoming packets per VM.

1.38 Google Cloud Load Balancing Features

Google Cloud offers the following core load balancing features:

  • Automatic autoscaling of your applications fronted by the load balancer
  • External (Internet) and internal (VPC) load balancing of requests
  • Pass-through load balancing (using the same Google routing infrastructure used to implement external IP addresses for Compute Engine VMs)
  • Proxy-based load balancing (as an alternative to pass-through load balancing)
  • Layer 7-based load balancing*
    • Offers content-based routing based on such application request attributes as values in HTTP headers or URIs
  • Layer 4-based load balancing*
    • Directs traffic based on data at the transport layer (e.g. target IP addresses and TCP or UDP ports)
  • Integration with Cloud CDN for cached content delivery (a way to bring the load balancer end-point closer to your customers)

Notes:

* The Open Systems Interconnection (OSI) model describes seven layers used by computer systems to communicate over a network. In practice, networks today are usually described with the simpler TCP/IP model, but the OSI 7-layer model is still in wide use because it helps visualize how networks operate.

The OSI 7 Layers diagram

1.39 Summary

In this tutorial, we discussed the following topics:

  • VPC objects in Google Cloud
  • Types of VPC networks
  • Implementing VPC networks and firewall rules
  • Implementing Private Google Access and Cloud NAT
  • Load balancing
