Unit 6: Cloud Networking
Configure Virtual Networks (VNets)
A Virtual Network (VNet) is the fundamental building block for your private network in Azure. It enables Azure resources like Virtual Machines (VMs) to securely communicate with each other, the internet, and on-premises networks.
Core Concepts
- Isolation: A VNet is logically isolated from other virtual networks in Azure.
- Address Space:
  - When creating a VNet, you assign a private IP address space using CIDR (Classless Inter-Domain Routing) notation (e.g., 10.0.0.0/16).
  - This address space must be from the private IP address ranges (RFC 1918):
    - 10.0.0.0-10.255.255.255 (/8 prefix)
    - 172.16.0.0-172.31.255.255 (/12 prefix)
    - 192.168.0.0-192.168.255.255 (/16 prefix)
  - The VNet address space cannot overlap with other VNets you connect to or your on-premises network ranges.
- Subnets:
- You must divide a VNet into one or more subnets.
- Each subnet is a range of IP addresses within the VNet's address space.
- Resources like VMs are deployed into subnets.
- Subnets provide network segmentation and allow you to apply specific security rules (via NSGs) and routing policies (via Route Tables) to groups of resources.
- Azure Reserved Addresses: In each subnet, Azure reserves the first four and the last IP address for protocol conformance. For example, in a /24 subnet (256 addresses), only 251 are available for use:
  - x.x.x.0: Network address
  - x.x.x.1: Reserved by Azure for the default gateway
  - x.x.x.2, x.x.x.3: Reserved by Azure to map the Azure DNS IPs
  - x.x.x.255: Network broadcast address
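The reserved-address arithmetic can be checked with Python's standard ipaddress module; the subnet range below is hypothetical, chosen only for illustration:

```python
import ipaddress

# A hypothetical /24 subnet inside a VNet's address space.
subnet = ipaddress.ip_network("10.0.1.0/24")
addresses = list(subnet)  # all 256 addresses in the /24

# Azure reserves the first four addresses and the last (broadcast) address.
reserved = addresses[:4] + [addresses[-1]]
usable = len(addresses) - len(reserved)

print(usable)  # 251
```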
IP Addressing
- Private IP Address: Assigned to resources within a VNet from the subnet's address range. Used for communication within the VNet and with connected on-premises networks. Can be dynamic (default) or static.
- Public IP Address: Assigned to Azure resources to enable communication with the internet. A public IP can be associated with a VM's Network Interface Card (NIC), a public load balancer, etc. Can be dynamic or static.
Azure DNS
- Azure-provided DNS (Default):
- Provides name resolution for resources within the same VNet and for public internet names.
- Does not provide name resolution between different VNets.
- The DNS suffix is internal and cannot be modified.
- Custom DNS:
- You can specify your own DNS servers (e.g., on-premises Active Directory domain controllers).
- Allows for name resolution between VNets and between Azure and on-premises resources.
- Configured at the VNet level; all VMs in the VNet will inherit this setting.
Configure Network Security Groups (NSGs)
An NSG acts as a stateful firewall for network traffic. It contains a list of security rules that allow or deny inbound or outbound network traffic to Azure resources.
How NSGs Work
- Stateful: If you create an outbound rule to allow traffic to a specific IP on port 80, the NSG automatically allows the return traffic for that established connection, so you don't need a corresponding inbound rule.
- Association: An NSG can be associated with:
- A Subnet: The rules apply to all resources within that subnet. This is the recommended approach for easier management.
- A Network Interface (NIC): The rules apply only to the specific VM attached to that NIC.
- Effective Rules:
- If a NIC and its subnet both have an NSG, the rules are evaluated together.
- For inbound traffic, the subnet NSG rules are processed first, followed by the NIC NSG rules.
- For outbound traffic, the NIC NSG rules are processed first, followed by the subnet NSG rules.
- Traffic is permitted only if it's allowed by the rules in both NSGs (if both are present).
Security Rules
Each rule has the following properties:
- Name: A unique name for the rule.
- Priority: A number between 100 and 4096. Rules are processed in priority order, with lower numbers processed first. As soon as traffic matches a rule, processing stops.
- Source/Destination: Can be an IP address/CIDR block, a Service Tag (e.g., VirtualNetwork, Internet, Storage.EastUS), or an Application Security Group (ASG).
- Protocol: TCP, UDP, ICMP, or Any.
- Direction: Inbound or Outbound.
- Port Range: A single port or a range (e.g., 80, 443, 1000-2000).
- Action: Allow or Deny.
Default Rules
Every NSG is created with a set of default rules with low priority (65000 and higher). These cannot be deleted but can be overridden by creating new rules with higher priority (lower numbers).
- Inbound:
  - AllowVnetInBound: Allows traffic from any source within the VNet.
  - AllowAzureLoadBalancerInBound: Allows health probes from Azure Load Balancer.
  - DenyAllInBound: Denies all other inbound traffic.
- Outbound:
  - AllowVnetOutBound: Allows traffic to any destination within the VNet.
  - AllowInternetOutBound: Allows outbound traffic to the Internet.
  - DenyAllOutBound: Denies all other outbound traffic.
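The priority-ordered, first-match evaluation described above can be sketched as follows; the rule dictionaries are a simplified stand-in, not Azure's actual rule schema:

```python
# Minimal sketch of NSG rule evaluation: rules are checked in priority
# order (lower number first) and processing stops at the first match.
def evaluate(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == port or rule["port"] == "*":
            return rule["action"]
    return "Deny"  # unreachable when a catch-all default rule is present

rules = [
    {"priority": 300, "port": 80, "action": "Allow"},    # custom rule: allow HTTP
    {"priority": 65500, "port": "*", "action": "Deny"},  # default DenyAllInBound
]

print(evaluate(rules, 80))  # Allow (custom rule matches first)
print(evaluate(rules, 22))  # Deny (falls through to the default rule)
```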
Configure Azure Virtual Network Peering
VNet Peering enables you to seamlessly connect two or more Azure Virtual Networks. Once peered, the VNets appear as one for connectivity purposes. Traffic between VMs in peered VNets uses the Microsoft backbone network, not the public internet.
Key Characteristics
- Low Latency, High Bandwidth: Connectivity is private and performs as if the resources were in the same VNet.
- Non-Transitive: Peering is a direct link between two VNets. If VNet A is peered with VNet B, and VNet B is peered with VNet C, VNet A and VNet C are not automatically peered. A separate peering link must be created between A and C.
- Reciprocal: Peering connections must be created in both directions. When you create the peering from VNet A to VNet B, you also configure the reciprocal link from B to A.
- Non-Overlapping IP Addresses: The address spaces of the peered VNets cannot overlap.
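The non-overlap requirement is easy to check programmatically with the standard ipaddress module; the VNet ranges here are hypothetical:

```python
import ipaddress

# Peering requires that the address spaces of the two VNets do not overlap.
vnet_a = ipaddress.ip_network("10.1.0.0/16")
vnet_b = ipaddress.ip_network("10.2.0.0/16")
vnet_c = ipaddress.ip_network("10.1.128.0/17")  # falls inside vnet_a's range

print(vnet_a.overlaps(vnet_b))  # False -> peering is possible
print(vnet_a.overlaps(vnet_c))  # True  -> peering would be rejected
```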
Types of VNet Peering
- Regional VNet Peering: Connecting VNets within the same Azure region.
- Global VNet Peering: Connecting VNets across different Azure regions. This allows for global private connectivity.
Gateway Transit
- A powerful feature that allows a peered VNet to use the VPN or ExpressRoute gateway of another peered VNet to connect to an on-premises network.
- Use case: A hub-spoke network topology. The "hub" VNet has the gateway, and multiple "spoke" VNets peer with the hub and use its gateway for on-premises connectivity, avoiding the need for a gateway in every spoke VNet.
- The hub VNet peering must be configured with "Allow gateway transit".
- The spoke VNet peering must be configured with "Use remote gateways".
Configure Network Routing and Endpoints
Network Routing
Azure automatically creates system routes and routes traffic between subnets, connected VNets, on-premises networks, and the internet by default. You can override this behavior using User-Defined Routes (UDRs).
- System Routes (Default Routing):
- Within a VNet: Traffic between VMs in the same VNet is routed directly.
- VNet Peering: Traffic between peered VNets is routed directly.
- To Internet: All other traffic is routed to the internet by default (unless overridden).
- To On-Premises: Traffic destined for on-premises address ranges is routed via a VPN Gateway or ExpressRoute circuit.
- User-Defined Routes (UDRs):
- UDRs allow you to control the routing of traffic leaving a subnet.
- You create a Route Table and add custom routes to it. Then, you associate the Route Table with one or more subnets.
- Common Use Case: Forcing all outbound internet traffic through a Network Virtual Appliance (NVA), such as a third-party firewall, for inspection and logging.
- Route Table Configuration: A route consists of:
  - Address Prefix: The destination CIDR block the route applies to (e.g., 0.0.0.0/0 for all internet traffic).
  - Next Hop Type: Where to send the traffic.
    - Virtual Appliance: An NVA like a firewall.
    - Virtual network gateway: A VPN or ExpressRoute gateway.
    - Internet: The default internet path.
    - VirtualNetwork: The VNet's address space.
    - None: Blackholes the traffic (drops it).
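Route selection follows longest-prefix matching: the most specific route that covers the destination wins. A minimal sketch, with an illustrative route table (one system route plus a 0.0.0.0/0 UDR pointing at an NVA):

```python
import ipaddress

# Illustrative route table entries: (address prefix, next hop type).
routes = [
    ("10.0.0.0/16", "VirtualNetwork"),   # system route for the VNet itself
    ("0.0.0.0/0", "Virtual Appliance"),  # UDR forcing other traffic via an NVA
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop) for prefix, hop in routes
               if dest in ipaddress.ip_network(prefix)]
    # The longest (most specific) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.4"))  # VirtualNetwork (stays inside the VNet)
print(next_hop("8.8.8.8"))   # Virtual Appliance (forced through the NVA)
```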
Service Endpoints and Private Endpoints
These features enhance the security of Azure PaaS services by limiting access to them from your VNet.
Virtual Network Service Endpoints
- What it is: Provides secure and direct connectivity to Azure PaaS services over the Azure backbone network.
- How it works: Extends your VNet's identity to the Azure service. You enable a service endpoint on a specific subnet. Then, you can configure the PaaS service's firewall to only allow traffic from that subnet.
- Key Characteristics:
- The PaaS service still has a public IP address, but its firewall blocks all traffic except that originating from the designated VNet/subnet.
- Access is restricted at the subnet level.
- Not accessible from on-premises networks through a VPN/ExpressRoute.
- Example: You enable the Microsoft.Storage service endpoint on Subnet A. You then go to your Azure Storage Account's firewall settings and add a rule to allow access only from VNet/Subnet A.
Private Endpoints (via Azure Private Link)
- What it is: A more secure and flexible evolution of Service Endpoints. It brings the PaaS service into your VNet.
- How it works: A Private Endpoint is a network interface (NIC) with a private IP address from your VNet's address space. This NIC connects privately and securely to the Azure PaaS service powered by Azure Private Link.
- Key Characteristics:
- Traffic to the PaaS service is purely private; it never traverses the public internet.
- The PaaS service can be configured to have no public endpoint at all.
- Accessible from the VNet, peered VNets, and on-premises networks connected via VPN/ExpressRoute.
- Connects to a specific instance of a PaaS service (e.g., one specific storage account), not the entire service.
| Feature | Service Endpoint | Private Endpoint |
|---|---|---|
| Mechanism | Extends VNet identity to PaaS service | Creates a NIC in your VNet for the PaaS service |
| PaaS Endpoint | Public | Private IP in your VNet |
| Traffic Path | Azure Backbone (Optimized Route) | Purely Private (within your VNet) |
| On-Premises Access | No | Yes (via VPN/ExpressRoute) |
| Granularity | Entire service (e.g., all of Azure Storage in a region) | Specific PaaS instance (e.g., mystorage123) |
Configure Azure Load Balancer
Azure Load Balancer operates at Layer 4 (Transport Layer) of the OSI model. It distributes inbound TCP and UDP traffic among healthy backend pool members (VMs or VM Scale Sets).
Components
- Frontend IP Configuration: The IP address where the load balancer accepts traffic.
- Public Load Balancer: Uses a public IP address to distribute internet traffic to VMs.
- Internal Load Balancer: Uses a private IP address to distribute traffic within a VNet.
- Backend Pool: The group of VMs or virtual machine scale set instances that will receive the network traffic.
- Health Probes: Used to check the health of instances in the backend pool. The load balancer stops sending traffic to unhealthy instances. Probes can be TCP, HTTP, or HTTPS.
- Load-Balancing Rules: Defines how traffic is distributed to the backend pool. It maps a specific frontend port and protocol to a specific backend port and protocol.
- Session Persistence (Source IP Affinity): Configures the load balancer to direct all requests from the same client IP to the same backend VM. Also known as "sticky sessions".
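Source IP affinity can be pictured as a deterministic hash of the client address (the real load balancer hashes the flow tuple; this sketch hashes only the source IP, and the backend names are illustrative):

```python
import hashlib

# Sticky sessions via hashing: the same source IP always maps to the
# same backend, so a client keeps hitting the same VM.
backends = ["vm-0", "vm-1", "vm-2"]

def pick_backend(source_ip):
    digest = hashlib.sha256(source_ip.encode()).digest()
    return backends[digest[0] % len(backends)]

# Repeated requests from one client land on one backend.
print(pick_backend("203.0.113.10") == pick_backend("203.0.113.10"))  # True
```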
SKUs
- Basic SKU:
- Free.
- Limited features.
- No SLA.
- Backend pool can only include VMs in a single availability set or scale set.
- Standard SKU:
- Paid, with a 99.99% SLA.
- More secure by default (closed to inbound flows unless allowed by an NSG).
- Supports larger backend pools (up to 1000 instances).
- Can span across Availability Zones, providing high availability.
- Advanced diagnostics and metrics through Azure Monitor.
Configure Azure Application Gateway
Azure Application Gateway is a web traffic load balancer that operates at Layer 7 (Application Layer). It enables you to manage traffic to your web applications based on HTTP/S attributes.
Key Features (beyond Layer 4 load balancing)
- SSL/TLS Termination: The Application Gateway can decrypt incoming HTTPS traffic and pass unencrypted HTTP traffic to the backend servers, offloading the CPU-intensive decryption work.
- URL-Based Routing: Route traffic to different backend pools based on the URL path.
  - Example: contoso.com/images/* goes to an image server pool, and contoso.com/video/* goes to a video server pool.
- Host-Based Routing (Multi-site Hosting): Host multiple websites on the same Application Gateway.
  - Example: Traffic for contoso.com goes to one pool, and traffic for fabrikam.com goes to another.
- Web Application Firewall (WAF): Provides centralized protection of your web applications from common exploits and vulnerabilities (e.g., SQL injection, cross-site scripting). It uses rules from the OWASP (Open Web Application Security Project) core rule sets.
- Session Affinity: Cookie-based session affinity to keep a user on the same backend server.
- Autoscaling: Can scale up or down automatically based on traffic load.
- Redirection: Can redirect traffic from one listener to another, or from HTTP to HTTPS.
Components
- Frontend IP: The public or private IP that accepts incoming requests.
- Listeners: Checks for incoming connection requests on a specific port, protocol (HTTP/S), and host.
- Routing Rules: Binds a listener to a backend pool. Determines where to route the request based on URL path or hostname.
- Backend Pools: The groups of servers that serve the requests.
- HTTP Settings: Defines settings like the backend port, protocol (HTTP/S), cookie-based affinity, and connection draining.
- Health Probes: Monitors the health of backend servers.
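How a listener plus routing rules select a backend pool can be sketched as a host-and-path match; the rule entries and pool names below are illustrative, not an Azure API:

```python
# Layer 7 routing sketch: (host, path prefix, backend pool), checked in order.
rules = [
    ("contoso.com", "/images/", "image-pool"),
    ("contoso.com", "/video/", "video-pool"),
    ("fabrikam.com", "/", "fabrikam-pool"),
]

def route(host, path):
    for rule_host, prefix, pool in rules:
        if host == rule_host and path.startswith(prefix):
            return pool
    return "default-pool"

print(route("contoso.com", "/images/logo.png"))  # image-pool
print(route("fabrikam.com", "/index.html"))      # fabrikam-pool
```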
Introduction to Azure Front Door and Content Delivery Network (CDN)
These services operate at the global edge of Microsoft's network to improve the performance, reliability, and security of applications.
Azure Front Door
Azure Front Door is a modern cloud CDN service that provides a scalable and secure entry point for fast delivery of your global web applications. It is a global Layer 7 load balancer.
- How it works: Uses anycast routing to direct user traffic to the nearest Front Door Point of Presence (POP). From there, traffic travels over Microsoft's high-speed global network to the fastest, most available application backend.
- Key Features:
- Global HTTP/S Load Balancing: Distributes traffic across backends in different Azure regions or even external locations.
- Performance Acceleration: Uses split TCP and the Microsoft global network to improve application performance and latency for global users.
- URL-based Routing: Similar to Application Gateway, it can route based on URL paths.
- SSL Offloading: Manages SSL certificates and terminates TLS connections at the edge.
- Integrated WAF: Provides Web Application Firewall capabilities at the network edge, blocking malicious requests before they enter your VNet.
- When to use Front Door: For globally distributed web applications requiring high availability, failover, and the best possible performance for users worldwide. It focuses on dynamic content and application delivery.
Azure Content Delivery Network (CDN)
Azure CDN is a distributed network of servers (POPs) that delivers web content to users efficiently. It caches static content at edge locations close to end users to minimize latency.
- How it works:
- A user requests a file (e.g., an image) from your website.
- The request is directed to the nearest CDN POP.
- If the POP has the file cached (cache hit), it delivers it directly to the user.
- If the file is not in the cache (cache miss), the POP requests the file from the origin server (e.g., Azure Blob Storage, a Web App), delivers it to the user, and then caches it for future requests.
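The cache-hit/cache-miss flow in the steps above can be sketched as follows; the origin fetch is simulated and the paths are illustrative:

```python
# Simple model of a CDN POP cache.
cache = {}

def origin_fetch(path):
    # Stand-in for a request back to the origin server (e.g., Blob Storage).
    return f"content-of-{path}"

def cdn_get(path):
    if path in cache:
        return cache[path], "hit"     # served directly from the POP
    content = origin_fetch(path)      # cache miss: go to the origin
    cache[path] = content             # cache for future requests
    return content, "miss"

print(cdn_get("/images/logo.png")[1])  # miss (first request)
print(cdn_get("/images/logo.png")[1])  # hit  (served from the POP cache)
```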
- Key Features:
- Static Content Caching: Ideal for caching content that doesn't change often, such as images, videos, CSS stylesheets, and JavaScript files.
- Reduced Load on Origin: Since most requests are served from the cache, it reduces traffic and load on your origin application server.
- Improved Performance: Users get content from a server geographically closer to them, resulting in faster load times.
- When to use CDN: To accelerate the delivery of static assets for a website or application, reducing latency and saving bandwidth costs.
| Service | Primary Function | Operates at Layer | Use Case |
|---|---|---|---|
| Load Balancer | Regional TCP/UDP load balancing | Layer 4 | Distribute traffic inside a VNet or from the internet to VMs in one region. |
| Application Gateway | Regional HTTP/S load balancing & WAF | Layer 7 | Advanced routing for web apps, SSL offload, and security within a region. |
| Azure Front Door | Global HTTP/S load balancing & WAF | Layer 7 | Global application delivery, failover, and performance for dynamic content. |
| Azure CDN | Global static content caching | Layer 7 | Accelerate delivery of static files (images, JS, CSS) to users worldwide. |