This article introduces best practices and typical enterprise architectures for virtual private cloud (VPC) design on AWS, Azure, and Google Cloud.
Below, I try to list the common steps in organizing the right process for creating a complex network solution, one that meets current needs and is ready for future enhancements.
General principles
Identify decision makers, timelines, and pre-work
As a first step in your VPC network design, identify the decision makers, timelines, and pre-work necessary to ensure that you can address stakeholder requirements.
Stakeholders might include application owners, security architects, solution architects, and operations managers. The stakeholders themselves might change depending on whether you are planning your VPC network for a project, a line of business, or the entire organization.
In essence, your VPCs should be designed to satisfy the needs of your applications today and to scale for future needs.
Consider VPC network design early
Make VPC network design an early part of designing your organizational setup. It can be disruptive to your organization if you later need to change fundamental things such as how your network is segmented or where your workloads are located.
Different VPC network configurations can have significant implications for routing, scale, and security. Careful planning and deep understanding of your specific considerations helps you to create a solid architectural foundation for incremental workloads.
Keep it simple
Keeping the design of your VPC network topology simple is the best way to ensure a manageable, reliable, and well-understood architecture.
Use clear naming conventions
Make your naming conventions simple, intuitive, and consistent. This ensures that administrators and end users understand the purpose of each resource, where it is located, and how it is differentiated from other resources.
Commonly accepted abbreviations of long words help with brevity. Using familiar terminology where possible helps with readability.
Consider the components illustrated in the following example when establishing your naming conventions:
- Company name: BiLinkSoft Company: bilink
- Business unit: Human Resources: hr
- Application code: Compensation system: comp
- Region code: northamerica-northeast1: na-ne1, europe-west1: eu-we1
- Environment codes: dev, test, uat, stage, prod
For other common networking resources, consider patterns like these:
- VPC network
syntax: {company-name}-{description(App or BU)-label}-{environment-label}-{seq#}
example: bilink-hr-dev-vpc-1
- Subnet
syntax: {company-name}-{description(App or BU)-label}-{region/zone-label}-{environment-label}
example: bilink-hr-na-ne1-dev-subnet
- Firewall rule
syntax: {company-name}-{description(App or BU)-label}-{source-label}-{dest-label}-{protocol}-{port}-{action}
example: bilink-hr-internet-internal-tcp-80-allow-rule
- IP route
syntax: {priority}-{VPC-label}-{tag}-{next-hop}
example: 1000-bilink-hr-dev-vpc-1-int-gw
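As an illustration, the naming patterns above can be encoded in a small helper so names stay consistent across teams. This is a sketch only; the component labels (bilink, hr, the environment codes) come from the article's examples and are not a cloud-provider standard.

```python
# Sketch of a resource-name builder following the example convention above.
# The component labels are assumptions taken from the article's examples.

def resource_name(*parts: str) -> str:
    """Join non-empty parts into a lowercase, hyphen-separated resource name."""
    cleaned = [p.strip().lower().replace(" ", "-") for p in parts if p]
    return "-".join(cleaned)

def vpc_name(company: str, unit: str, env: str, seq: int) -> str:
    return resource_name(company, unit, env, f"vpc-{seq}")

def subnet_name(company: str, unit: str, region: str, env: str) -> str:
    return resource_name(company, unit, region, env, "subnet")

print(vpc_name("bilink", "hr", "dev", 1))            # bilink-hr-dev-vpc-1
print(subnet_name("bilink", "hr", "na-ne1", "dev"))  # bilink-hr-na-ne1-dev-subnet
```

A helper like this is also a natural place to enforce provider-specific length and character limits before resources are created.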
Addresses and subnets
After creating your VPC, you divide it into subnets. Subnets are not isolation boundaries around your application. Rather, they are containers for routing policies.
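To make the subdivision concrete, here is a minimal sketch using Python's standard ipaddress module. The 10.0.0.0/16 VPC range and the /20 subnet size are illustrative assumptions, not recommendations for any specific provider.

```python
import ipaddress

# Illustrative: carve a hypothetical 10.0.0.0/16 VPC range into /20 subnets.
# Remember: subnets are containers for routing policy, not isolation boundaries.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))

print(len(subnets))              # 16 subnets
print(subnets[0])                # 10.0.0.0/20
print(subnets[0].num_addresses)  # 4096 addresses per subnet
```

Planning the split up front with a tool like this helps avoid overlapping ranges when VPCs are later peered or connected to on-premises networks.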
Isolation with Security Groups or Firewall Rules
Isolation is achieved by attaching a security group (SG) to the VM instances that host your application, or by creating network firewall rules at the subnet level. Security groups in AWS and firewall rules in GCP are stateful firewalls, meaning that connections are tracked so that return traffic is automatically allowed. They control inbound and outbound access to the network interfaces attached to a VM instance. These should be tightly configured, allowing only the access that is needed.
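The stateful behavior described above can be sketched with a toy model: inbound rules are evaluated for new connections, and return traffic is allowed implicitly by a connection-tracking table. The rule format and the single HTTPS rule are illustrative assumptions, not a provider API.

```python
# Simplified model of a stateful security group. New inbound connections are
# checked against rules; replies on tracked connections are allowed implicitly.
# Rule fields and the example rule set are illustrative assumptions.

allowed_inbound = [("tcp", 443)]  # allow inbound HTTPS only
tracked = set()  # connection table: (proto, src, sport, dst, dport)

def handle_inbound(proto, src, sport, dst, dport):
    if (proto, dport) in allowed_inbound:
        tracked.add((proto, src, sport, dst, dport))
        return True
    return False

def handle_outbound_reply(proto, src, sport, dst, dport):
    # Return traffic is allowed if it matches a tracked connection, reversed.
    return (proto, dst, dport, src, sport) in tracked

handle_inbound("tcp", "203.0.113.5", 51000, "10.0.0.4", 443)         # allowed
handle_outbound_reply("tcp", "10.0.0.4", 443, "203.0.113.5", 51000)  # allowed
handle_inbound("tcp", "203.0.113.5", 51000, "10.0.0.4", 22)          # denied
```

Real stateful firewalls also track protocol state and timeouts, but the asymmetry shown here, explicit rules inbound and implicit replies outbound, is the key property.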
As a best practice, create subnets in categories. There are two main categories: public subnets and private subnets. At a minimum, they should be designed as outlined in the diagrams below for IPv4 and IPv6 subnet design.
Group workloads into fewer subnets with larger address ranges
Conventionally, some enterprise networks are separated into many small address ranges for a variety of reasons. For example, this might have been done to identify or isolate an application or keep a small broadcast domain.
However, we recommend that you group applications of the same type into fewer, more manageable subnets with larger address ranges in the regions where you want to operate.
Unlike traditional networking environments, where the subnet mask defines the broadcast domain, cloud providers use a software-defined networking (SDN) approach to provide a full mesh of reachability between all VMs in the VPC network. The number of subnets does not affect routing behaviour. You can use service accounts or network tags to apply specific routing policies or firewall rules.
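The grouping advice can be demonstrated with the standard ipaddress module: several small contiguous ranges used by similar workloads cover exactly the same addresses as one larger subnet. The specific /22 and /20 ranges are illustrative assumptions.

```python
import ipaddress

# Illustrative: four contiguous /22 ranges used by similar workloads can be
# grouped into one /20 subnet covering exactly the same addresses.
small = [ipaddress.ip_network(f"10.1.{i}.0/22") for i in (0, 4, 8, 12)]
merged = list(ipaddress.collapse_addresses(small))

print(merged)  # [IPv4Network('10.1.0.0/20')]
print(sum(n.num_addresses for n in small) == merged[0].num_addresses)  # True
```

Since reachability in an SDN-based VPC does not depend on subnet boundaries, the single larger range is easier to manage with no loss of connectivity.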
Grant the network user role at the subnet level
Following the principle of least privilege, we recommend granting the network user role at the subnet level to the associated user, service account, or group.
Because subnets are regional, this granular control allows you to specify which regions each service project can use to deploy resources.
Prefer private access over public access
Always prioritize private access over publicly exposing your resources.
Do not let traffic leave the cloud provider's backbone network
A cloud backbone comprises a collection of globally distributed points of presence (PoPs) that serve as attachment points for remote users, branches, campuses, data centers, and cloud workloads, whether stretched across a single public cloud region, multiple public cloud regions, or multiple clouds.
A cloud provider's network is usually much more secure and is optimized for traffic performance and high availability. Using PoPs allows you to distribute content closer to end users at no additional cost. Your content is often cached at PoPs, providing a better user experience, query optimization, and cost savings.
Use private endpoints
A VPC endpoint enables customers to privately connect to supported services and VPC endpoint services. VM instances do not require public IP addresses to communicate with resources of the service. It provides better security and isolation.
Enabling Private Google Access in GCP allows workloads that have only internal IP addresses to reach Google APIs and managed services privately.
In AWS, AWS PrivateLink allows you to communicate with AWS managed services, services hosted by other AWS customers and partners in their own Amazon VPCs (referred to as endpoint services), and supported AWS Marketplace partner services. The owner of a service is a service provider. The principal creating the interface endpoint and using that service is a service consumer.
Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.
Consider load balancers (LB)
If your application has multiple tiers, for example web servers that must be connected to the internet and database servers that are only connected to the web servers, you can design an architecture that uses both internal and internet-facing load balancers. Create an internet-facing load balancer and register the web servers with it. Create an internal load balancer and register the database servers with it. The web servers receive requests from the internet-facing load balancer and send requests for the database servers to the internal load balancer. The database servers receive requests from the internal load balancer.
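The two-tier pattern above can be sketched as a toy model: an internet-facing load balancer fronts the web servers, and an internal load balancer fronts the database servers. Round-robin target selection and the server names are illustrative simplifications.

```python
# Minimal sketch of the two-tier pattern: an internet-facing LB fronts web
# servers; an internal LB fronts database servers. Names are hypothetical.
from itertools import cycle

class LoadBalancer:
    def __init__(self, name, targets):
        self.name = name
        self._targets = cycle(targets)  # simple round-robin selection

    def route(self):
        return next(self._targets)

external_lb = LoadBalancer("internet-facing", ["web-1", "web-2"])
internal_lb = LoadBalancer("internal", ["db-1", "db-2"])

def handle_request():
    web = external_lb.route()  # client -> internet-facing LB -> web tier
    db = internal_lb.route()   # web tier -> internal LB -> db tier
    return web, db

print(handle_request())  # ('web-1', 'db-1')
print(handle_request())  # ('web-2', 'db-2')
```

The important design point is that only the internet-facing load balancer is reachable from outside; the database tier is addressed exclusively through the internal load balancer.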

VPC sharing and VPC peering
VPC sharing allows customers to share subnets with other accounts, projects, or organizations. This is a very powerful concept that allows for a number of benefits:
- Separation of duties: centrally controlled VPC structure, routing, IP address allocation.
- Application owners continue to own resources, accounts, and security groups.
- VPC sharing participants can reference security group IDs of each other.
- Efficiencies: higher density in subnets, efficient use of VPNs and AWS Direct Connect or GCP Interconnect.
- Costs can be optimized through reuse of NAT gateways, VPC interface endpoints, and intra-Availability Zone traffic.
VPC network peering connects two VPC networks so that resources in each network can communicate with each other.
VPC Network Peering provides the following benefits:
- You can publish software as a service (SaaS) offerings privately from one VPC network to another.
- VPC network peering works with VMs, Kubernetes cluster nodes, and managed application platforms such as App Engine.
- Packets delivered over the peering connection remain inside the provider's backbone network.
- Network traffic is typically cheaper when using VPC network peering.
Below is a common hub-and-spoke network configuration in a hybrid environment with GCP:

Network security
Cloud providers give us, as customers, robust security features across their infrastructure and services, from the physical security of data centers and custom security hardware to dedicated teams of researchers. However, securing your resources is a shared responsibility. You must take appropriate measures to help ensure that your apps and data are protected.
Identify clear security objectives
Before evaluating either cloud-native or cloud-capable security controls, start with a set of clear security objectives that all stakeholders agree to as a fundamental part of the product. These objectives should emphasize achievability, documentation, and iteration, so that they can be referenced and improved throughout development.
Limit external access
Limit access to the internet to only those resources that need it. Resources with only a private, internal IP address can still access many APIs and services through private links or private access. Private access enables resources to interact with key cloud provider services while remaining isolated from the public internet.
Before blocking internet access, however, consider the impact on your VM instances. Blocking internet access can reduce your risk of data exfiltration, but it can also block legitimate traffic, including essential traffic for software updates and third-party APIs and services. Without internet access, you can reach your VM instances only through an on-premises network connected via a Cloud VPN tunnel, a Cloud Interconnect, Direct Connect, or ExpressRoute connection, or by using Identity-Aware Proxy (GCP). With Cloud NAT, virtual machines can initiate egress connections to the internet for specific essential traffic without exposing public ingress connections.
Define service perimeters for sensitive data
The network perimeter is the boundary between an organization's secured internal network and the Internet, or any other uncontrolled external network. In other words, the network perimeter is the edge of what an organization has control over. As this perimeter blurs in the cloud, identity and access management (IAM) becomes very important for controlling access to data and preventing data loss with DLP tools.
Follow the principle of least privilege
Always follow the principle of least privilege:
- Use fewer, broader firewall rule sets when possible
- Manage traffic with cloud native firewall rules when possible
- Isolate VMs using service accounts when possible
- Use automation to monitor security policies when using tags
- Use cloud native tools
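Automated monitoring of firewall policies, as suggested above, can be sketched as a simple lint pass that flags rules exposing sensitive ports to the whole internet. The rule format and the port list are illustrative assumptions, not a provider API.

```python
import ipaddress

# Sketch of automated policy monitoring: flag firewall rules that expose
# sensitive ports to 0.0.0.0/0. Rule format and port list are assumptions.
SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def overly_permissive(rules):
    flagged = []
    for rule in rules:
        src = ipaddress.ip_network(rule["source"])
        if src.prefixlen == 0 and rule["port"] in SENSITIVE_PORTS:
            flagged.append(rule["name"])
    return flagged

rules = [
    {"name": "allow-https", "source": "0.0.0.0/0", "port": 443},
    {"name": "allow-ssh-any", "source": "0.0.0.0/0", "port": 22},
    {"name": "allow-ssh-corp", "source": "192.0.2.0/24", "port": 22},
]
print(overly_permissive(rules))  # ['allow-ssh-any']
```

Running a check like this in CI or on a schedule turns the least-privilege principle into an enforceable control instead of a one-time review.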
Learn more about cloud security with our #CyberTechTalk WIKI and follow our previous posts.
Be ethical, and safeguard your privacy!