1. What is the primary purpose of an Azure Virtual Network (VNet)?
Configure virtual networks
Easy
A.To store and manage user identities and access policies.
B.To execute code in response to events without managing servers.
C.To provide a logically isolated section of the Azure cloud where you can launch Azure resources.
D.To provide a globally distributed database service.
Correct Answer: To provide a logically isolated section of the Azure cloud where you can launch Azure resources.
Explanation:
An Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. It provides a secure and isolated environment for your virtual machines and other resources.
2. A smaller, logical division of a VNet's address space used to organize and secure resources is called a:
Configure virtual networks
Easy
A.Subnet
B.Network Interface
C.Resource Group
D.Address Pool
Correct Answer: Subnet
Explanation:
Subnets allow you to segment a virtual network into one or more smaller networks. This helps with IP address allocation, network organization, and applying security policies to groups of resources.
3. What is the primary function of a Network Security Group (NSG) in Azure?
Configure network security groups
Easy
A.To balance network traffic between virtual machines.
B.To cache web content closer to users.
C.To connect an on-premises network to Azure.
D.To act as a stateful firewall that filters network traffic.
Correct Answer: To act as a stateful firewall that filters network traffic.
Explanation:
A Network Security Group (NSG) contains a list of security rules that allow or deny network traffic to and from Azure resources. It functions as a basic firewall at the network level.
4. An NSG security rule includes a priority number. How does Azure process rules with different priority numbers?
Configure network security groups
Easy
A.Rules with a lower number are processed first.
B.Rules are processed in a random order.
C.Rules with a higher number are processed first.
D.Rules are processed alphabetically by name.
Correct Answer: Rules with a lower number are processed first.
Explanation:
Azure processes NSG rules in priority order, starting with the lowest number. Once a rule matches the traffic, processing stops, and that rule is applied.
5. What is the primary benefit of using VNet peering?
Configure Azure Virtual Network peering
Easy
A.It protects against distributed denial-of-service (DDoS) attacks.
B.It connects virtual networks so resources can communicate privately using the Microsoft backbone.
C.It encrypts all data stored within a single virtual network.
D.It provides a public-facing endpoint for web applications.
Correct Answer: It connects virtual networks so resources can communicate privately using the Microsoft backbone.
Explanation:
VNet peering allows resources in two different virtual networks to communicate with each other with low latency and high bandwidth, as if they were in the same network. All traffic remains on the private Microsoft backbone network.
6. What is a critical requirement when attempting to peer two Azure Virtual Networks?
Configure Azure Virtual Network peering
Easy
A.The VNets must have been created by the same user.
B.The VNets must be in the same resource group.
C.The VNets must have the same number of subnets.
D.The IP address spaces of the two VNets must not overlap.
Correct Answer: The IP address spaces of the two VNets must not overlap.
Explanation:
If the IP address spaces of two VNets overlap, Azure cannot create a peering connection because it would be unable to determine how to route the traffic between them.
7. What is the purpose of a route table in Azure?
Configure network routing and endpoints
Easy
A.To store DNS records for a domain.
B.To define how network traffic is directed from a subnet to specific destinations.
C.To list all active connections to a virtual machine.
D.To define firewall rules for an application.
Correct Answer: To define how network traffic is directed from a subnet to specific destinations.
Explanation:
A route table contains a set of rules, called routes, that determine where network traffic is sent. You can associate a route table with a subnet to control the flow of its outbound traffic.
8. Which feature allows you to secure Azure service resources to only your virtual network by extending your VNet's private address space to the service?
Configure network routing and endpoints
Easy
A.Public IP Address
B.Private Endpoint
C.Network Gateway
D.Service Endpoint
Correct Answer: Service Endpoint
Explanation:
A Virtual Network (VNet) service endpoint provides secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. It ensures traffic from your VNet to the Azure service remains on the Azure network.
9. What is the main function of an Azure Load Balancer?
Configure Azure Load Balancer
Easy
A.To provide a secure VPN connection to Azure.
B.To filter malicious web traffic.
C.To distribute incoming network traffic across a group of backend resources or servers.
D.To cache static files globally.
Correct Answer: To distribute incoming network traffic across a group of backend resources or servers.
Explanation:
An Azure Load Balancer improves application availability and scalability by distributing network traffic among multiple virtual machines in a backend pool.
10. At which layer of the OSI model does the standard Azure Load Balancer operate?
Configure Azure Load Balancer
Easy
A.Layer 4 (Transport)
B.Layer 2 (Data Link)
C.Layer 7 (Application)
D.Layer 3 (Network)
Correct Answer: Layer 4 (Transport)
Explanation:
The standard Azure Load Balancer is a Layer 4 load balancer. It makes routing decisions based on information from the Transport layer, such as source IP address, source port, destination IP address, destination port, and protocol.
11. Which Azure service is a web traffic load balancer that enables you to manage traffic to your web applications based on URL paths or hostnames?
Configure Azure Application Gateway
Easy
A.Azure Load Balancer
B.Azure Front Door
C.Azure Application Gateway
D.Azure Traffic Manager
Correct Answer: Azure Application Gateway
Explanation:
Azure Application Gateway is a Layer 7 load balancer. This allows it to make intelligent routing decisions based on HTTP request attributes like URL path or host headers, making it ideal for managing web application traffic.
12. What is the purpose of the Web Application Firewall (WAF) feature often used with Azure Application Gateway?
Configure Azure Application Gateway
Easy
A.To provide a private IP address for a PaaS service.
B.To monitor the health of backend virtual machines.
C.To distribute traffic across different geographical regions.
D.To provide centralized protection for web applications from common exploits like SQL injection.
Correct Answer: To provide centralized protection for web applications from common exploits like SQL injection.
Explanation:
The Web Application Firewall (WAF) is a security feature that protects web applications from common vulnerabilities and exploits by inspecting incoming web traffic and blocking malicious requests.
13. What is the primary goal of using an Azure Content Delivery Network (CDN)?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Easy
A.To connect two virtual networks together securely.
B.To improve the performance and speed of delivering static content to users by caching it at global locations.
C.To distribute traffic among virtual machines within a single region.
D.To filter incoming network traffic for security threats.
Correct Answer: To improve the performance and speed of delivering static content to users by caching it at global locations.
Explanation:
Azure CDN reduces latency by caching static content (like images, videos, and scripts) at strategically placed physical nodes (Points of Presence) around the world, delivering it to users from the closest location.
14. Which Azure service provides a global entry point for web traffic, using Microsoft's global edge network to direct users to the fastest and most available backend?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Easy
A.Azure Front Door
B.Azure Application Gateway
C.Azure Virtual Network
D.Azure Load Balancer
Correct Answer: Azure Front Door
Explanation:
Azure Front Door is a modern cloud CDN service that provides global load balancing and site acceleration for web applications. It operates at Layer 7 and routes client requests to the best-performing application backend.
15. When you create an Azure VNet, you define its IP address space using what standard notation?
Configure virtual networks
Easy
Correct Answer: CIDR (Classless Inter-Domain Routing) notation
Explanation:
Azure VNets use CIDR notation (e.g., 10.1.0.0/16) to define the private IP address range for the network and its subnets.
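To see what a CIDR block such as 10.1.0.0/16 actually denotes, Python's standard `ipaddress` module can be used as a quick illustration (a general networking sketch, not Azure-specific):

```python
import ipaddress

# CIDR notation encodes a network address plus a prefix length.
vnet = ipaddress.ip_network("10.1.0.0/16")
print(vnet.num_addresses)       # 65536 addresses in a /16
print(vnet.network_address)     # 10.1.0.0
print(vnet.broadcast_address)   # 10.1.255.255

# A subnet carved from the VNet's space must fall inside it.
subnet = ipaddress.ip_network("10.1.1.0/24")
print(subnet.subnet_of(vnet))   # True
```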
16. A Network Security Group (NSG) can be associated with which of the following Azure resources?
Configure network security groups
Easy
A.A Storage Account or a SQL Database
B.A Resource Group or a Subscription
C.A Virtual Network or an ExpressRoute circuit
D.A Subnet or a Network Interface Card (NIC)
Correct Answer: A Subnet or a Network Interface Card (NIC)
Explanation:
NSGs can be applied at two different levels for filtering traffic: to a subnet to protect all resources within it, or directly to a specific NIC attached to a VM for more granular control.
17. True or False: VNet peering is a non-transitive relationship.
Configure Azure Virtual Network peering
Easy
A.True
B.It depends on the region
C.False
D.It depends on the subscription
Correct Answer: True
Explanation:
VNet peering is non-transitive. If VNet A is peered with VNet B, and VNet B is peered with VNet C, VNet A and VNet C cannot communicate with each other unless a direct peering is established between them.
18. In an Azure Load Balancer, what is the purpose of a health probe?
Configure Azure Load Balancer
Easy
A.To detect the health status of backend instances and remove unhealthy ones from rotation.
B.To check for security vulnerabilities in the backend VMs.
C.To encrypt traffic between the load balancer and the backend VMs.
D.To measure the network latency between the load balancer and the user.
Correct Answer: To detect the health status of backend instances and remove unhealthy ones from rotation.
Explanation:
A health probe is used by the load balancer to periodically check if the backend virtual machines are responsive. If a VM fails the health probe, the load balancer stops sending traffic to it until it becomes healthy again.
19. SSL/TLS termination is a feature of which Azure networking service?
Configure Azure Application Gateway
Easy
A.Azure Virtual Network
B.Network Security Group (NSG)
C.Azure Route Table
D.Azure Application Gateway
Correct Answer: Azure Application Gateway
Explanation:
Azure Application Gateway can terminate the SSL/TLS connection at the gateway. This offloads the CPU-intensive encryption and decryption work from your web servers, allowing them to focus on serving application logic.
20. Which Azure feature uses a private IP address from your VNet to connect you privately and securely to a service powered by Azure Private Link, like Azure Storage?
Configure network routing and endpoints
Easy
A.Private Endpoint
B.VNet Peering
C.Network Security Group
D.Service Endpoint
Correct Answer: Private Endpoint
Explanation:
A Private Endpoint is a network interface that connects you privately and securely to a service like Azure Storage or SQL Database. It uses a private IP address from your VNet, effectively bringing the service into your virtual network.
21. You are designing an Azure Virtual Network (VNet) for a three-tier application (Web, Application, Database). The VNet has an address space of 10.10.0.0/16. You need to create three subnets with the following host requirements: Web tier (50 hosts), App tier (100 hosts), and DB tier (20 hosts). To minimize IP address wastage while allowing for future growth, which of the following CIDR block assignments is most appropriate?
Configure virtual networks
Medium
Correct Answer: A /26 for the Web tier, a /25 for the App tier, and a /27 for the DB tier
Explanation:
To determine the appropriate subnet size, find the smallest CIDR block that meets each host requirement, remembering that Azure reserves 5 IP addresses in every subnet. A /26 block provides 59 usable IPs (sufficient for 50 hosts). A /25 block provides 123 usable IPs (sufficient for 100 hosts). A /27 block provides 27 usable IPs (sufficient for 20 hosts). These are the most efficient CIDR blocks for each tier's needs.
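The sizing arithmetic can be verified with a small helper. The assumption baked in below is Azure's rule that 5 IP addresses are reserved in every subnet (network address, broadcast address, and three Azure-internal addresses):

```python
def azure_usable_ips(prefix_len: int) -> int:
    """Usable IPs in an Azure subnet: total addresses minus the 5 Azure reserves."""
    return 2 ** (32 - prefix_len) - 5

# Smallest prefix that still covers each tier's host count:
for tier, hosts, prefix in [("Web", 50, 26), ("App", 100, 25), ("DB", 20, 27)]:
    usable = azure_usable_ips(prefix)
    print(f"{tier}: /{prefix} -> {usable} usable (need {hosts}): {usable >= hosts}")
```

Running this confirms 59, 123, and 27 usable addresses for the /26, /25, and /27 blocks respectively.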
22. A Network Security Group (NSG) has the following two inbound security rules applied to a subnet:
- Rule 1: Priority 200, Source Any, Destination Any, Port 22, Protocol TCP, Action Deny
- Rule 2: Priority 300, Source IP 203.0.113.5, Destination Any, Port 22, Protocol TCP, Action Allow
An administrator with the public IP 203.0.113.5 attempts to SSH (port 22) into a VM in the subnet. Will the connection succeed?
Configure network security groups
Medium
A.No, because the rule with the lower priority number (200) is processed first and denies the traffic.
B.Yes, because 'Allow' rules are always processed before 'Deny' rules, regardless of priority.
C.No, because default NSG rules will block the traffic before these custom rules are processed.
D.Yes, because the more specific 'Allow' rule for the source IP address overrides the general 'Deny' rule.
Correct Answer: No, because the rule with the lower priority number (200) is processed first and denies the traffic.
Explanation:
Azure NSG rules are processed in order of priority, from the lowest number to the highest. Rule 1 has a priority of 200, which is lower than Rule 2's priority of 300. Therefore, Rule 1 is evaluated first. Since it denies all traffic on port 22 from any source, the administrator's connection attempt will be blocked, and Rule 2 will never be evaluated.
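The first-match evaluation in this scenario can be sketched in Python. This is a deliberately simplified model (matching only on port and source, with a stand-in for the default deny rule), not the actual Azure rule engine:

```python
def evaluate(rules, src_ip, port):
    """NSG-style evaluation: lowest priority number first, first match wins."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == port and rule["source"] in ("Any", src_ip):
            return rule["action"]
    return "Deny"  # stand-in for the default deny-all-inbound rule

rules = [
    {"priority": 300, "source": "203.0.113.5", "port": 22, "action": "Allow"},
    {"priority": 200, "source": "Any", "port": 22, "action": "Deny"},
]
print(evaluate(rules, "203.0.113.5", 22))  # Deny: priority 200 matches first
```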
23. You have three VNets: VNet-A (10.1.0.0/16), VNet-B (10.2.0.0/16), and VNet-C (10.3.0.0/16). You have established the following peerings:
- VNet-A is peered with VNet-B.
- VNet-B is peered with VNet-C.
A virtual machine in VNet-A needs to communicate with a virtual machine in VNet-C. What must be done to enable this communication?
Configure Azure Virtual Network peering
Medium
A.Enable 'Allow Transitive Peering' on the peering connection between VNet-A and VNet-B.
B.A new VNet peering must be created directly between VNet-A and VNet-C.
C.Nothing, the communication will work automatically because VNet-B links them.
D.Configure a User-Defined Route (UDR) in VNet-A to route traffic for VNet-C through VNet-B.
Correct Answer: A new VNet peering must be created directly between VNet-A and VNet-C.
Explanation:
Azure VNet peering is non-transitive. This means that if VNet-A is peered with VNet-B, and VNet-B is peered with VNet-C, there is no implied or automatic peering between VNet-A and VNet-C. To enable direct communication between resources in VNet-A and VNet-C, a dedicated peering connection must be explicitly created between them.
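Non-transitivity can be modeled as a simple membership check: reachability requires a direct peering, never a path through an intermediate VNet (a sketch, not an Azure API):

```python
# Peerings as undirected pairs; communication requires a DIRECT peering.
peerings = {frozenset({"VNet-A", "VNet-B"}), frozenset({"VNet-B", "VNet-C"})}

def can_communicate(a: str, b: str) -> bool:
    return frozenset({a, b}) in peerings  # no transitive hops

print(can_communicate("VNet-A", "VNet-B"))  # True
print(can_communicate("VNet-A", "VNet-C"))  # False until peered directly
```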
24. You want to ensure that all internet-bound traffic from a specific subnet is first inspected by a Network Virtual Appliance (NVA), such as a firewall, located in another subnet. Which combination of Azure resources is required to implement this?
Configure network routing and endpoints
Medium
A.A Network Security Group (NSG) rule to redirect traffic to the NVA.
B.A public Azure Load Balancer with the NVA in its backend pool.
C.A Service Endpoint for the subnet pointing to the NVA.
D.A Route Table with a User-Defined Route (UDR) pointing to the NVA's private IP.
Correct Answer: A Route Table with a User-Defined Route (UDR) pointing to the NVA's private IP.
Explanation:
The standard way to control routing behavior in an Azure VNet is by using Route Tables and User-Defined Routes (UDRs). To force traffic through an NVA, you would create a UDR for the address prefix 0.0.0.0/0 (representing all internet traffic) and set the 'Next hop type' to 'Virtual appliance' and the 'Next hop IP address' to the private IP of your NVA. This Route Table is then associated with the source subnet.
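Azure selects the route with the longest (most specific) matching prefix. The sketch below models that selection with hypothetical routes; the 10.0.0.0/16 VNet and the NVA private IP 10.0.100.4 are made-up illustration values:

```python
import ipaddress

# Hypothetical route table: (prefix, next hop). The 0.0.0.0/0 UDR sends
# internet-bound traffic to the NVA; VNet traffic stays on the system route.
routes = [
    ("10.0.0.0/16", "VirtualNetwork"),  # system route for the VNet
    ("0.0.0.0/0", "10.0.100.4"),        # UDR: next hop = NVA private IP
]

def next_hop(dest_ip: str) -> str:
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.0.1.5"))  # VirtualNetwork (the /16 is more specific)
print(next_hop("8.8.8.8"))   # 10.0.100.4 (falls through to 0.0.0.0/0 -> NVA)
```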
25. You are using a Standard SKU Azure Load Balancer to distribute traffic to a set of backend VMs. You need to ensure that once a client establishes a session with a specific VM, all subsequent requests from that same client are directed to the same VM for the duration of the session. Which load balancer rule configuration should you modify?
Configure Azure Load Balancer
Medium
A.Health probe protocol
B.Floating IP (Direct Server Return)
C.Session persistence
D.Frontend IP configuration
Correct Answer: Session persistence
Explanation:
Session persistence, also known as source IP affinity or sticky sessions, is the feature designed for this scenario. By configuring session persistence on a load balancing rule, you instruct the load balancer to use a hash of the source IP (and sometimes protocol and port) to map traffic to a specific backend server, ensuring subsequent requests from the same client are sent to the same server.
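Source IP affinity can be sketched as a deterministic hash that maps each client IP to one backend, so repeated requests from the same IP always land on the same VM. This is a simplification of the load balancer's real tuple hash, and the backend names are invented:

```python
import hashlib

backends = ["vm-0", "vm-1", "vm-2"]

def pick_backend(source_ip: str) -> str:
    """Same client IP always hashes to the same backend ('sticky' sessions)."""
    digest = hashlib.sha256(source_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# Repeated requests from one client land on one VM:
print(pick_backend("203.0.113.5") == pick_backend("203.0.113.5"))  # True
```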
26. Your company hosts two distinct websites, www.contoso.com and www.fabrikam.org, on the same backend pool of VMs. You need to use a single Azure Application Gateway to serve traffic for both sites, routing requests to the appropriate application logic based on the host header. Which Application Gateway feature is specifically designed for this purpose?
Configure Azure Application Gateway
Medium
A.URL Path-based Routing
B.Multi-site listeners
C.Web Application Firewall (WAF)
D.SSL/TLS termination
Correct Answer: Multi-site listeners
Explanation:
Multi-site listening functionality allows you to configure more than one listener on the same port of the Application Gateway. This enables you to host multiple websites on a single Application Gateway instance. The gateway uses the host header from the incoming HTTP request to determine which listener should process the request and which backend pool to route it to.
27. A global company has deployed its web application to Azure regions in East US, West Europe, and Southeast Asia. They need a solution that provides global HTTP load balancing, routes users to the lowest latency backend, and can automatically fail over to a healthy region if one region becomes unavailable. Which Azure service is the best fit for these requirements?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Medium
A.Azure Front Door
B.Azure Traffic Manager with performance routing
C.Azure CDN
D.A globally peered set of regional Azure Load Balancers
Correct Answer: Azure Front Door
Explanation:
Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network. It is designed for this exact scenario. It provides Layer 7 load balancing, uses anycast to route clients to the nearest Point of Presence (POP), and offers priority-based routing with health probes for seamless regional failover. While Traffic Manager also provides DNS-based routing, Front Door operates at the HTTP layer, offering additional features like SSL offloading and WAF.
28. A virtual machine's network interface is associated with an NSG named NSG-NIC. The VM's subnet is associated with an NSG named NSG-Subnet. To allow inbound RDP traffic (port 3389) to the VM, how must the NSGs be configured?
Configure network security groups
Medium
A.Traffic must be allowed by rules in both NSG-Subnet and NSG-NIC.
B.An 'Allow' rule in either NSG is sufficient to permit the traffic.
C.Traffic must be allowed by NSG-Subnet only, as it applies to all resources in the subnet.
D.Traffic must be allowed by NSG-NIC only, as it is more specific and overrides the subnet NSG.
Correct Answer: Traffic must be allowed by rules in both NSG-Subnet and NSG-NIC.
Explanation:
When NSGs are applied at both the subnet and NIC levels, they are processed in a specific order. For inbound traffic, the subnet-level NSG is evaluated first. If it allows the traffic, the NIC-level NSG is then evaluated. The traffic is only permitted if it passes through the 'Allow' rules of both NSGs. A 'Deny' at either level will block the traffic.
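The both-must-allow behavior for inbound traffic reduces to a logical AND of the two NSG evaluations, sketched here with a toy deny-by-default check:

```python
def nsg_allows(nsg_rules, port: int) -> bool:
    """Toy NSG check: True only if some rule allows the port (deny by default)."""
    return any(r["port"] == port and r["action"] == "Allow" for r in nsg_rules)

nsg_subnet = [{"port": 3389, "action": "Allow"}]
nsg_nic = [{"port": 3389, "action": "Allow"}]

# Inbound traffic must pass BOTH the subnet NSG and the NIC NSG.
permitted = nsg_allows(nsg_subnet, 3389) and nsg_allows(nsg_nic, 3389)
print(permitted)  # True only when both NSGs allow port 3389
```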
29. You have a hub-spoke network topology in Azure. The hub VNet (Hub-VNet) has a VPN Gateway providing connectivity to your on-premises network. A spoke VNet (Spoke-VNet) is peered with Hub-VNet. What must you configure on the VNet peering to allow VMs in Spoke-VNet to access the on-premises network via the hub's gateway?
Configure Azure Virtual Network peering
Medium
A.On the Spoke-VNet to Hub-VNet peering, select 'Use the remote virtual network's gateway'. On the Hub-VNet to Spoke-VNet peering, select 'Allow gateway transit'.
B.Install a separate VPN Gateway in the Spoke-VNet and connect it to the on-premises network.
C.On the Spoke-VNet to Hub-VNet peering, select 'Allow gateway transit'. On the Hub-VNet to Spoke-VNet peering, select 'Use the remote virtual network's gateway'.
D.Create a User-Defined Route in the Spoke-VNet that points to the VPN Gateway in the Hub-VNet.
Correct Answer: On the Spoke-VNet to Hub-VNet peering, select 'Use the remote virtual network's gateway'. On the Hub-VNet to Spoke-VNet peering, select 'Allow gateway transit'.
Explanation:
To enable gateway transit, you must configure both sides of the peering connection. The VNet with the gateway (the hub) must be configured to 'Allow gateway transit'. The VNet without the gateway (the spoke) must be configured to 'Use the remote virtual network's gateway'. This allows the spoke VNet to leverage the hub's gateway for external connectivity.
30. A VM in your VNet needs to connect to an Azure SQL Database. For security reasons, you want to ensure this connection occurs over the Azure backbone network and is restricted so that the database only accepts connections from your VNet. Which of the following is the most direct and legacy-compatible method to achieve this without assigning a private IP to the database?
Configure network routing and endpoints
Medium
A.Create a User-Defined Route to force traffic destined for the SQL database through an Azure Firewall.
B.Configure a Service Endpoint for Microsoft.Sql on the VM's subnet and add a VNet rule to the database's firewall.
C.Allow the public endpoint of the Azure SQL Database on the VM's Network Security Group.
D.Configure a Private Endpoint for the Azure SQL Database within the VM's VNet.
Correct Answer: Configure a Service Endpoint for Microsoft.Sql on the VM's subnet and add a VNet rule to the database's firewall.
Explanation:
Service Endpoints provide secure and direct connectivity to Azure services over the Azure backbone network. By enabling the Microsoft.Sql service endpoint on a subnet, traffic from that subnet to Azure SQL uses private source IPs. You can then configure the SQL Database's firewall to only allow traffic from that specific subnet, effectively locking it down from the public internet. While Private Endpoints also work, Service Endpoints are a more direct solution when a private IP in the VNet is not strictly required.
31. You are deploying an Azure Application Gateway v2 to serve a web application that requires protection against common vulnerabilities like cross-site scripting (XSS) and SQL injection. The gateway should also automatically scale based on traffic load. Which SKU must you choose for the Application Gateway?
Configure Azure Application Gateway
Medium
A.Standard_v2
B.Standard_v2 with WAF policy
C.Standard
D.WAF
Correct Answer: Standard_v2 with WAF policy
Explanation:
The v2 generation of Application Gateway SKUs (Standard_v2 and WAF_v2) provides autoscaling. To add Web Application Firewall (WAF) capabilities to a Standard_v2 SKU, you must create a separate WAF Policy and associate it with the gateway or a specific listener. The older WAF SKU is part of the v1 generation and does not support autoscaling. Therefore, to get both autoscaling (v2) and WAF protection, you must use a v2 SKU (Standard_v2 is sufficient) and attach a WAF Policy.
32. Your website serves a large amount of static content (images, CSS, JavaScript) to users globally. To improve page load times and reduce the load on your origin web server, you want to cache this content at edge locations closer to your users. Which Azure service is primarily designed for this content caching and delivery function?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Medium
Correct Answer: Azure CDN
Explanation:
Azure CDN is the service specifically designed for caching static content at strategically placed points of presence (POPs) around the globe. When a user requests content, it is served from the closest POP, which significantly reduces latency and improves performance. While Azure Front Door also has caching capabilities, its primary purpose is global traffic routing and application acceleration, whereas CDN's core function is static content caching and delivery.
33. You have an existing Virtual Network with an address space of 192.168.0.0/22. Due to expansion, you need to add more IP addresses. You have been given the address block 192.168.4.0/22 to use. What is the correct procedure to add this new address space to your VNet without causing downtime?
Configure virtual networks
Medium
A.You cannot add a non-contiguous address space; you must delete and recreate the VNet with a larger CIDR block.
B.Add the 192.168.4.0/22 address space to the VNet's 'Address spaces' configuration property.
C.Create a new VNet with the 192.168.4.0/22 space and peer it with the original VNet.
D.Create a new subnet in the existing VNet and specify 192.168.4.0/22 as its address range.
Correct Answer: Add the 192.168.4.0/22 address space to the VNet's 'Address spaces' configuration property.
Explanation:
Azure VNets support multiple, non-contiguous address spaces. You can add a new address space to an existing VNet without any service disruption. The correct procedure is to navigate to the VNet's settings, go to the 'Address spaces' section, and add the new CIDR block (192.168.4.0/22). After it's added, you can then create new subnets within that new address space.
34. You are configuring a Standard SKU Public Azure Load Balancer. You want to allow outbound internet connectivity for VMs in the backend pool, even if they do not have public IP addresses assigned directly to their NICs. How is this achieved with a Standard Load Balancer?
Configure Azure Load Balancer
Medium
A.By assigning a public IP prefix to the backend pool directly.
B.By configuring a Network Address Translation (NAT) rule for each VM in the backend pool.
C.By creating an outbound rule on the Load Balancer that maps the backend pool to the frontend public IP.
D.It works by default; all VMs behind a Standard Load Balancer automatically have outbound access.
Correct Answer: By creating an outbound rule on the Load Balancer that maps the backend pool to the frontend public IP.
Explanation:
Unlike the Basic SKU, the Standard SKU Load Balancer is secure by default and does not provide automatic outbound connectivity. To enable it, you must explicitly define outbound connectivity. The recommended way is to create an outbound rule that specifies which backend pool can use the load balancer's public frontend IP for Source Network Address Translation (SNAT) to access the internet.
35. You are attempting to peer two VNets, VNet-A (10.0.0.0/16) and VNet-B (10.0.0.0/24), located in the same region. When you try to create the peering, the operation fails. What is the most likely cause of this failure?
Configure Azure Virtual Network peering
Medium
A.VNet peering is not supported for VNets with a /24 address space.
B.VNet peering requires both VNets to be in the same resource group.
C.One VNet has a Basic SKU public IP, and the other has a Standard SKU public IP.
D.The address spaces of the two VNets overlap.
Correct Answer: The address spaces of the two VNets overlap.
Explanation:
A fundamental requirement for VNet peering is that the virtual networks being peered must have non-overlapping IP address spaces. In this scenario, the address space of VNet-B (10.0.0.0/24) is a subset of the address space of VNet-A (10.0.0.0/16), which constitutes an overlap. Azure cannot create the peering because it would be unable to route traffic correctly between them.
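The overlap in this scenario can be confirmed with Python's `ipaddress` module, since 10.0.0.0/24 sits entirely inside 10.0.0.0/16:

```python
import ipaddress

vnet_a = ipaddress.ip_network("10.0.0.0/16")
vnet_b = ipaddress.ip_network("10.0.0.0/24")

# Peering requires non-overlapping address spaces.
print(vnet_a.overlaps(vnet_b))  # True -> this peering would fail
print(ipaddress.ip_network("10.1.0.0/16").overlaps(vnet_b))  # False -> OK to peer
```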
36. What is the primary advantage of using a Private Endpoint over a Service Endpoint to connect a Virtual Network to an Azure PaaS service like Azure Storage?
Configure network routing and endpoints
Medium
A.A Service Endpoint requires a VPN Gateway, whereas a Private Endpoint does not.
B.A Private Endpoint provides a private IP address for the PaaS service within your VNet's address space.
C.A Private Endpoint provides a higher bandwidth connection than a Service Endpoint.
D.A Service Endpoint encrypts traffic by default, while a Private Endpoint does not.
Correct Answer: A Private Endpoint provides a private IP address for the PaaS service within your VNet's address space.
Explanation:
The key distinction is that a Private Endpoint projects the PaaS service directly into your VNet by assigning it a private IP address from one of your subnets. This makes the PaaS service appear as if it's a native part of your VNet. This allows connectivity from on-premises networks (via VPN/ExpressRoute) and peered VNets, which is not possible with Service Endpoints. Service Endpoints only secure the connection from a specific subnet to the service's public endpoint.
37. You are configuring an Azure Application Gateway to handle traffic for a backend web application. The connection from the client to the Application Gateway is encrypted using HTTPS. You want to improve performance by having the Application Gateway decrypt the traffic and send it unencrypted (HTTP) to the backend servers, which are on a secure, isolated VNet. What is this configuration called?
Configure Azure Application Gateway
Medium
A.URL rewriting
B.End-to-end SSL/TLS encryption
C.SSL/TLS pass-through
D.SSL/TLS termination
Correct Answer: SSL/TLS termination
Explanation:
SSL/TLS termination is the process where the Application Gateway (or any reverse proxy) decrypts incoming HTTPS traffic from clients. It then forwards the requests to the backend servers over unencrypted HTTP. This offloads the computationally expensive decryption and encryption tasks from the backend servers, freeing up their CPU cycles to serve content. It is secure as long as the backend network is properly isolated.
38. You need to add a rule to an NSG that allows HTTP traffic from a predefined set of Azure services (like Azure Backup) to your VMs. Instead of manually looking up and adding all the IP ranges for these services, what is the most efficient source to specify in the NSG rule?
Configure network security groups
Medium
A.The name of the Azure service (e.g., 'Azure Backup Service')
B.A Service Tag (e.g., AzureBackup)
C.An Application Security Group (ASG)
D.The IP address range 0.0.0.0/0
Correct Answer: A Service Tag (e.g., AzureBackup)
Explanation:
Service Tags are Microsoft-managed labels that represent a group of IP address prefixes for a given Azure service. Using a Service Tag like AzureBackup or Storage in the source or destination of an NSG rule simplifies rule creation. Microsoft automatically updates the IP addresses included in the tag, ensuring your rule stays current without manual intervention.
39. An Azure Standard Load Balancer has two healthy VMs in its backend pool. You have configured a health probe to check for a response on TCP port 80 every 5 seconds. The 'unhealthy threshold' is set to 2. One of the VMs stops responding to the health probe. What is the minimum amount of time before the Load Balancer stops sending new traffic to this VM?
Configure Azure Load Balancer
Medium
A.15 seconds
B.5 seconds
C.2 seconds
D.10 seconds
Correct Answer: 10 seconds
Explanation:
The Load Balancer will mark a backend instance as unhealthy only after it fails a number of consecutive probes equal to the 'unhealthy threshold'. In this case, the probe interval is 5 seconds and the threshold is 2. Therefore, the Load Balancer must wait for two failed probes. Minimum time = Probe Interval × Unhealthy Threshold = 5 seconds × 2 = 10 seconds.
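The timing is simple arithmetic; here is a minimal sketch in Python (the function name is ours for illustration, not an Azure SDK call):

```python
# Minimum time before a Standard Load Balancer stops sending new
# traffic to an instance: the probe must fail 'unhealthy threshold'
# consecutive times, one probe per interval.

def min_time_to_unhealthy(probe_interval_s: int, unhealthy_threshold: int) -> int:
    """Seconds of consecutive probe failures before new traffic stops."""
    return probe_interval_s * unhealthy_threshold

# Scenario from the question: 5-second interval, threshold of 2.
print(min_time_to_unhealthy(5, 2))  # -> 10
```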
Incorrect! Try again.
40You are comparing Azure Front Door and Azure CDN for your web application. Which of the following capabilities is a primary feature of Azure Front Door but is NOT a core feature of a standard Azure CDN offering?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Medium
A.Custom domain support with SSL/TLS certificates.
B.Global load balancing and automatic failover between multiple origin regions.
C.Geo-filtering to block traffic from specific countries.
D.Caching of static assets like images and CSS files at edge locations.
Correct Answer: Global load balancing and automatic failover between multiple origin regions.
Explanation:
Azure Front Door's primary role is that of a global HTTP/S load balancer and application delivery network. It excels at routing traffic to the best (lowest latency, highest priority) origin and providing fast, automatic failover if an origin becomes unhealthy. While Azure CDN is excellent for caching and delivering static content from a single origin (or with some manual failover configurations), dynamic site acceleration and intelligent global routing between multiple active origins is the core competency of Azure Front Door.
Incorrect! Try again.
41A virtual machine (VM) has a Network Interface (NIC) associated with an Application Security Group (ASG), asg-web. The VM is in a subnet which has a Network Security Group (NSG), nsg-subnet, applied. The NIC also has its own NSG, nsg-nic.
nsg-subnet has an inbound rule:
- Priority: 200
- Source: asg-web
- Port: 443
- Protocol: TCP
- Action: Allow
nsg-nic has an inbound rule:
- Priority: 300
- Source: Any
- Port: 443
- Protocol: TCP
- Action: Deny
What is the effective security rule for inbound TCP traffic on port 443 to the VM's NIC?
Configure network security groups
Hard
A.Traffic is denied because for inbound traffic, the NIC NSG rules are processed after the subnet NSG rules, and the Deny rule in nsg-nic takes effect.
B.Traffic is denied because the deny rule in nsg-nic has a higher priority number (lower precedence) but is processed last and denies all traffic.
C.Traffic is allowed because a rule referencing an ASG as a source takes precedence over a rule with a source of Any.
D.Traffic is allowed because the rule in nsg-subnet has a lower priority number (higher precedence) and is processed first.
Correct Answer: Traffic is denied because for inbound traffic, the NIC NSG rules are processed after the subnet NSG rules, and the Deny rule in nsg-nic takes effect.
Explanation:
Azure processes NSG rules in a specific order. For inbound traffic, rules in the NSG associated with the subnet are processed first. If traffic is allowed by the subnet NSG, rules in the NSG associated with the NIC are processed next. In this scenario, the nsg-subnet rule (Priority 200) allows traffic from asg-web. However, the traffic must then be evaluated by nsg-nic. The rule in nsg-nic (Priority 300) explicitly denies all traffic on port 443. Since there is a matching Deny rule at the NIC level, the traffic is ultimately blocked. A Deny rule at either level will block the traffic.
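The two-stage evaluation can be sketched as a small simulation. The rule dictionaries and helper names below are illustrative, not the Azure SDK; the point is that inbound traffic must be allowed at both the subnet and the NIC level:

```python
# Inbound traffic is evaluated against the subnet NSG first,
# then the NIC NSG. A Deny at either level blocks the flow.

def evaluate_nsg(rules, port):
    """Action of the first matching rule (lowest priority number wins)."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] in (port, "Any"):
            return rule["action"]
    return "Deny"  # implicit DenyAllInbound default rule

nsg_subnet = [{"priority": 200, "port": 443, "action": "Allow"}]
nsg_nic    = [{"priority": 300, "port": 443, "action": "Deny"}]

def effective_inbound(port):
    if evaluate_nsg(nsg_subnet, port) == "Deny":
        return "Deny"                    # never reaches the NIC NSG
    return evaluate_nsg(nsg_nic, port)   # NIC NSG evaluated second

print(effective_inbound(443))  # -> Deny
```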
Incorrect! Try again.
42You have a subnet with a User-Defined Route (UDR) in its route table that directs all internet-bound traffic (0.0.0.0/0) to a Network Virtual Appliance (NVA). You also enable a Service Endpoint for Microsoft.Storage on the same subnet. A VM within this subnet needs to access a storage account located in the same Azure region.
How will the traffic from the VM to the storage account be routed?
Configure network routing and endpoints
Hard
A.The traffic will be routed directly to the storage account over the Azure backbone, bypassing the NVA, because the Service Endpoint creates a more specific system route.
B.The traffic will be routed to the NVA as dictated by the 0.0.0.0/0 UDR, because UDRs always take precedence over system routes.
C.The traffic will be routed to the storage account's public endpoint over the internet, ignoring both the UDR and the Service Endpoint.
D.The connection will fail because the UDR and the Service Endpoint create a routing conflict.
Correct Answer: The traffic will be routed directly to the storage account over the Azure backbone, bypassing the NVA, because the Service Endpoint creates a more specific system route.
Explanation:
Azure routing decisions are based on the longest prefix match. When you create a Service Endpoint for an Azure service like Microsoft.Storage, Azure adds more specific system routes for the public IP address prefixes of that service to the subnet's effective route table. These routes have a next hop type of VirtualNetworkServiceEndpoint. Because these routes (e.g., 13.78.144.0/21) are more specific than the 0.0.0.0/0 UDR, traffic destined for the storage account will match the Service Endpoint route and travel directly over the Azure backbone, bypassing the NVA.
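Longest-prefix-match selection can be demonstrated with the standard library's `ipaddress` module. The prefixes below are illustrative; the real effective-route table is built by the Azure platform:

```python
import ipaddress

# Effective routes on the subnet: the broad UDR to the NVA, plus a
# more specific system route added by the Service Endpoint.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),      "VirtualAppliance (NVA)"),
    (ipaddress.ip_network("13.78.144.0/21"), "VirtualNetworkServiceEndpoint"),
]

def next_hop(dest_ip: str) -> str:
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, hop) for net, hop in routes if dest in net]
    # Longest prefix (largest prefix length) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("13.78.150.10"))  # storage prefix -> Service Endpoint route
print(next_hop("8.8.8.8"))       # everything else -> NVA
```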
Incorrect! Try again.
43You have three Azure Virtual Networks in the same region: VNet-A (10.10.0.0/16), VNet-B (10.20.0.0/16), and VNet-C (10.30.0.0/16).
- VNet-A is peered with VNet-B. The peering from A to B has 'Allow gateway transit' enabled. VNet-A has a Virtual Network Gateway.
- VNet-B is peered with VNet-C.
- The peering from B to A has 'Use remote gateways' enabled.
Which statement accurately describes the network connectivity from a VM in VNet-C to an on-premises network connected via the gateway in VNet-A?
Configure Azure Virtual Network peering
Hard
A.The VM in VNet-C can connect to the on-premises network because gateway transit is enabled on VNet-A and VNet-B is using the remote gateway.
B.The VM in VNet-C can connect to the on-premises network, but only if 'Allow gateway transit' is also enabled on the peering from VNet-B to VNet-C.
C.The VM in VNet-C cannot connect to the on-premises network because VNet peering is not transitive.
D.The VM in VNet-C can connect, but requires a UDR pointing to the gateway in VNet-A.
Correct Answer: The VM in VNet-C cannot connect to the on-premises network because VNet peering is not transitive.
Explanation:
Azure VNet peering is not transitive. This means if VNet-A is peered with VNet-B, and VNet-B is peered with VNet-C, there is no implied connectivity between VNet-A and VNet-C. The 'Gateway Transit' feature extends this limitation. While it allows VMs in VNet-B to use the gateway in VNet-A, it does not extend this capability another hop to VNet-C. The remote gateway is only visible to directly peered networks. To enable connectivity from VNet-C to VNet-A, a direct peering between VNet-A and VNet-C would be required.
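Non-transitivity is easy to model: connectivity exists only where a direct peering exists, never through an intermediate VNet. A minimal sketch:

```python
# Direct peerings only; reachability is NOT computed transitively.
peerings = {("VNet-A", "VNet-B"), ("VNet-B", "VNet-C")}

def can_route(src: str, dst: str) -> bool:
    """True only if the two VNets are directly peered."""
    return (src, dst) in peerings or (dst, src) in peerings

print(can_route("VNet-B", "VNet-A"))  # -> True  (direct peering)
print(can_route("VNet-C", "VNet-A"))  # -> False (no transitive path)
```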
Incorrect! Try again.
44You are using a Standard SKU Azure Load Balancer with a backend pool of two Network Virtual Appliances (NVAs) configured for high availability. You configure a load balancing rule for HA Ports to forward all TCP and UDP traffic to the NVAs. You also create a specific inbound NAT rule with a higher priority to forward TCP traffic on port 22 (SSH) from the load balancer's public IP to the private IP of the primary NVA (NVA-1).
A client sends an SSH request (TCP port 22) to the public IP of the load balancer. How will the traffic be handled?
Configure Azure Load Balancer
Hard
A.The configuration will result in an error, as HA Ports rules and inbound NAT rules are mutually exclusive.
B.The traffic will be distributed between NVA-1 and NVA-2 according to the HA Ports rule.
C.The traffic will be sent only to NVA-1 due to the specific inbound NAT rule.
D.The traffic will be dropped because the inbound NAT rule conflicts with the HA Ports rule.
Correct Answer: The traffic will be sent only to NVA-1 due to the specific inbound NAT rule.
Explanation:
Azure Load Balancer processes rules with a specific precedence. Inbound NAT rules are evaluated before load balancing rules. Even though an HA Ports rule is configured to handle all traffic, the more specific inbound NAT rule for TCP port 22 will match the incoming SSH request first. Therefore, the traffic is directed specifically to the target defined in the NAT rule (NVA-1) and is not subject to the HA Ports load balancing logic.
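The precedence can be sketched as follows (rule shapes are illustrative, not the load balancer's actual configuration model):

```python
# Inbound NAT rules are matched before load-balancing rules.
nat_rules = [{"protocol": "TCP", "port": 22, "target": "NVA-1"}]
lb_rules  = [{"name": "HA-Ports", "port": "All",
              "backends": ["NVA-1", "NVA-2"]}]

def route_inbound(protocol, port):
    for rule in nat_rules:                  # NAT rules evaluated first
        if rule["protocol"] == protocol and rule["port"] == port:
            return [rule["target"]]         # sent to a single target
    for rule in lb_rules:                   # then load-balancing rules
        if rule["port"] in ("All", port):
            return rule["backends"]         # distributed across the pool
    return []

print(route_inbound("TCP", 22))   # -> ['NVA-1'] (NAT rule wins)
print(route_inbound("TCP", 443))  # -> ['NVA-1', 'NVA-2'] (HA Ports)
```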
Incorrect! Try again.
45An Azure Application Gateway v2 is configured with a Web Application Firewall (WAF) policy in Prevention mode. The policy includes the OWASP 3.1 managed ruleset and a custom rule with priority 50. The custom rule is configured to Allow traffic if the request header X-Internal-Token has a specific value. A request arrives that contains the correct X-Internal-Token header, but its URL also contains a pattern (/..%2f) that matches a high-severity Path Traversal rule in the OWASP managed ruleset.
What is the final action taken by the WAF?
Configure Azure Application Gateway
Hard
A.An alert is generated, but the request is passed through because an Allow rule overrides a Block action from a managed rule.
B.The WAF logs both the custom rule match and the managed rule match, but the final action depends on the severity of the managed rule.
C.The request is blocked because the managed rule for Path Traversal is triggered and WAF is in Prevention mode.
D.The request is allowed because the custom rule with priority 50 is processed first and its action is Allow.
Correct Answer: The request is allowed because the custom rule with priority 50 is processed first and its action is Allow.
Explanation:
In Azure Application Gateway WAF v2, custom rules are processed before managed rules. Rules are processed in priority order (lower number means higher priority). In this case, the custom rule with priority 50 runs first. Since the request matches the custom rule's condition (correct X-Internal-Token header) and the action is Allow, the WAF stops processing further rules and allows the request to pass to the backend. The managed ruleset is never evaluated for this specific request. This behavior is useful for creating exceptions to managed rules.
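The ordering can be sketched as a simulation. The rule structures and the token value are illustrative, not the WAF policy schema:

```python
# WAF v2 ordering: custom rules run first (ascending priority) and a
# matching Allow/Block short-circuits; managed rules run only afterward.

def waf_decision(request, custom_rules, managed_rules):
    for rule in sorted(custom_rules, key=lambda r: r["priority"]):
        if rule["match"](request):
            return rule["action"]     # stops all further processing
    for rule in managed_rules:        # reached only if no custom rule matched
        if rule["match"](request):
            return "Block"            # Prevention mode blocks on a match
    return "Allow"

custom  = [{"priority": 50, "action": "Allow",
            "match": lambda r: r["headers"].get("X-Internal-Token") == "secret"}]
managed = [{"name": "Path Traversal",
            "match": lambda r: "/..%2f" in r["url"]}]

trusted = {"url": "/app/..%2f/etc", "headers": {"X-Internal-Token": "secret"}}
unknown = {"url": "/app/..%2f/etc", "headers": {}}
print(waf_decision(trusted, custom, managed))  # -> Allow
print(waf_decision(unknown, custom, managed))  # -> Block
```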
Incorrect! Try again.
46You have a hub-spoke network topology in Azure. The hub VNet (vnet-hub) contains a custom DNS server. The spoke VNet (vnet-spoke) is peered with vnet-hub. The DNS server setting for vnet-spoke is configured to use the custom DNS server in vnet-hub. You then create a new VNet, vnet-isolated, which is not peered with any other network. You configure vnet-isolated to also use the same custom DNS server's private IP address. A VM in vnet-isolated attempts to resolve a hostname for a VM in vnet-spoke. What is the outcome?
Configure virtual networks
Hard
A.Resolution fails because the VM in vnet-isolated has no network route to the custom DNS server in vnet-hub.
B.Resolution succeeds because both VNets are pointing to the same DNS server.
C.Resolution fails because a custom DNS server can only serve clients within its own VNet or directly peered VNets.
D.Resolution succeeds because Azure's internal DNS infrastructure will route the DNS query to the specified IP regardless of VNet peering.
Correct Answer: Resolution fails because the VM in vnet-isolated has no network route to the custom DNS server in vnet-hub.
Explanation:
While you can configure a VNet's DNS settings to point to any IP address, DNS queries are standard network traffic. For a VM in vnet-isolated to reach the DNS server in vnet-hub, there must be a network path between them. Since vnet-isolated is not peered with vnet-hub, there is no route from the vnet-isolated address space to the vnet-hub address space. The DNS query from the VM will be sent, but it will be unable to reach the destination IP of the DNS server, causing the resolution to time out and fail. VNet peering or another routing mechanism (like a VNet gateway) would be required to establish this connectivity.
Incorrect! Try again.
47An Azure Front Door Premium SKU is configured with a backend pool containing two App Services, one in West US (Priority 1, Weight 60) and one in East US (Priority 1, Weight 40). Health probes are enabled. A separate backend pool for disaster recovery contains an App Service in West Europe (Priority 2, Weight 100). The West US App Service becomes unhealthy.
How will Azure Front Door distribute the incoming traffic?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Hard
A.All traffic will be sent to the East US App Service.
B.60% of traffic will be sent to the West Europe App Service and 40% to the East US App Service.
C.All traffic will be sent to the West Europe App Service.
D.40% of traffic will be sent to the East US App Service, and the remaining 60% will result in an error.
Correct Answer: All traffic will be sent to the East US App Service.
Explanation:
Azure Front Door uses a two-step process for routing: priority and then weight. It first sends all traffic to the available, healthy backends in the highest-priority group (Priority 1). Only if all backends in the Priority 1 group become unhealthy will it fail over to the next priority group (Priority 2). In this scenario, even though the West US backend is unhealthy, the East US backend is in the same priority group (Priority 1) and is still healthy. Therefore, 100% of the traffic will be directed to the remaining healthy backend in the highest-priority group, which is the East US App Service. The weights are only used to distribute traffic among healthy backends within the same priority group.
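The priority-then-weight logic can be sketched as a selection function over the origins in the question (the origin records are our own illustration):

```python
# Front Door origin selection: pick healthy origins in the best
# (lowest-numbered) priority group; weights only split traffic
# within that group.

origins = [
    {"name": "westus",     "priority": 1, "weight": 60,  "healthy": False},
    {"name": "eastus",     "priority": 1, "weight": 40,  "healthy": True},
    {"name": "westeurope", "priority": 2, "weight": 100, "healthy": True},
]

def traffic_split(origins):
    healthy = [o for o in origins if o["healthy"]]
    if not healthy:
        return {}
    best = min(o["priority"] for o in healthy)
    group = [o for o in healthy if o["priority"] == best]
    total = sum(o["weight"] for o in group)
    return {o["name"]: o["weight"] / total for o in group}

print(traffic_split(origins))  # -> {'eastus': 1.0}
```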
Incorrect! Try again.
48You have a VM in a subnet that is protected by an Azure Firewall in a hub VNet. All traffic from the subnet (0.0.0.0/0) is forced through the firewall via a UDR (forced tunneling). The firewall is configured with an application rule to allow access to *.blob.core.windows.net. You have also enabled a Service Endpoint for Microsoft.Storage on the VM's subnet. The VM attempts to connect to a storage account that is protected by a VNet service endpoint rule, allowing access only from the VM's subnet.
Will the VM be able to access the storage account?
Configure network routing and endpoints
Hard
A.No, because the UDR for 0.0.0.0/0 forces all traffic, including storage traffic, to the firewall, which does not have an identity on the subnet to satisfy the storage account's VNet rule.
B.Yes, because the Service Endpoint provides a direct route over the Azure backbone that bypasses the Azure Firewall.
C.No, because service endpoints are not compatible with forced tunneling scenarios and the configuration will cause a network error.
D.Yes, because the firewall's application rule for *.blob.core.windows.net will allow the traffic, and the firewall's IP will be used to access the storage account.
Correct Answer: No, because the UDR for 0.0.0.0/0 forces all traffic, including storage traffic, to the firewall, which does not have an identity on the subnet to satisfy the storage account's VNet rule.
Explanation:
This scenario highlights a critical conflict between UDR-based forced tunneling and Service Endpoints. While a Service Endpoint creates a more specific route for storage traffic, the UDR for 0.0.0.0/0 to the firewall will still override it for public endpoints. The traffic is sent to the Azure Firewall. The source IP of the traffic is now the firewall's IP, not the VM's private IP. The storage account's firewall rule is configured to allow access from the VNet/subnet, which works by checking if the source private IP originates from the allowed subnet. Since the traffic now originates from the firewall, it is no longer coming from an IP within the authorized subnet, and the storage account will deny the connection. The solution to this problem is to use Private Endpoints instead of Service Endpoints in forced tunneling scenarios.
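The storage account's VNet rule is essentially a source-address check, which fails once traffic is SNAT'd through the firewall. A minimal sketch, with assumed subnet and IP values:

```python
import ipaddress

# Illustrative addresses: the VM's subnet, the VM itself, and the
# hub firewall's private IP (the post-SNAT source).
allowed_subnet = ipaddress.ip_network("10.5.1.0/24")
vm_ip          = ipaddress.ip_address("10.5.1.4")
firewall_ip    = ipaddress.ip_address("10.0.0.4")

def storage_allows(source_ip) -> bool:
    """VNet service endpoint rule: is the source in the authorized subnet?"""
    return source_ip in allowed_subnet

print(storage_allows(vm_ip))        # -> True  (direct service-endpoint path)
print(storage_allows(firewall_ip))  # -> False (SNAT'd through the firewall)
```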
Incorrect! Try again.
49You are planning to peer VNet-A (10.0.0.0/16) with VNet-B (10.1.0.0/16). VNet-B is already successfully peered with VNet-C (10.0.0.0/24). What happens when you attempt to establish the peering between VNet-A and VNet-B?
Configure Azure Virtual Network peering
Hard
A.The peering will fail because the address space of VNet-A overlaps with the address space of VNet-C, which is a peered network of VNet-B.
B.The peering will be established, but in a 'Disconnected' state, and no traffic will flow until the address space conflict is resolved.
C.The peering will be established successfully, but VMs in VNet-A will not be able to communicate with VMs in VNet-C's address space.
D.The peering will succeed, and Azure will automatically handle the routing for the overlapping sub-space by prioritizing the direct peering route.
Correct Answer: The peering will fail because the address space of VNet-A overlaps with the address space of VNet-C, which is a peered network of VNet-B.
Explanation:
A fundamental requirement for VNet peering is that the address spaces of the two virtual networks must not overlap. This rule extends to the entire peered network topology. When you attempt to peer VNet-A with VNet-B, Azure checks for overlaps not only with VNet-B itself but also with all of VNet-B's existing peers. Since the address space of VNet-C (10.0.0.0/24) is a subset of VNet-A's address space (10.0.0.0/16), this constitutes an overlap. As a result, the peering operation between VNet-A and VNet-B will fail validation and cannot be created.
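The overlap check itself is straightforward with the standard library's `ipaddress` module; the validation logic below is a sketch of the rule, not Azure's actual implementation:

```python
import ipaddress

vnet_a = ipaddress.ip_network("10.0.0.0/16")
vnet_b = ipaddress.ip_network("10.1.0.0/16")
vnet_c = ipaddress.ip_network("10.0.0.0/24")   # already peered with VNet-B

def can_peer(new_space, target_space, existing_peer_spaces) -> bool:
    """The new peer must not overlap the target VNet or any of its peers."""
    return not any(new_space.overlaps(s)
                   for s in [target_space, *existing_peer_spaces])

# 10.0.0.0/24 sits inside 10.0.0.0/16, so the peering fails validation.
print(can_peer(vnet_a, vnet_b, [vnet_c]))  # -> False
```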
Incorrect! Try again.
50An Azure Standard Load Balancer is configured with the 'Floating IP' (Direct Server Return) option enabled on its load balancing rule. The backend pool consists of two VMs running a SQL Server Always On availability group listener. When a client connects to the load balancer's frontend IP, which IP address should the client see as the source IP in the response packets from the active SQL VM?
Configure Azure Load Balancer
Hard
A.The private IP address of the Azure host node.
B.The private IP address of the active SQL VM.
C.The public frontend IP address of the load balancer.
D.The private IP address of the load balancer.
Correct Answer: The public frontend IP address of the load balancer.
Explanation:
Floating IP, also known as Direct Server Return (DSR), changes the IP address mapping. Without Floating IP, the backend VM would see traffic coming from the load balancer's internal IP and would respond back through the load balancer. With Floating IP enabled, the destination IP of the packet arriving at the VM is the load balancer's frontend IP, not the VM's private IP. The VM's network stack is configured (e.g., via a loopback adapter) to accept traffic for this IP. For the return path (Direct Server Return), the VM must send the response packet with its source IP set to the load balancer's frontend IP. This makes the return traffic appear to come directly from the load balancer, maintaining a symmetric flow from the client's perspective.
Incorrect! Try again.
51You have an Azure Application Gateway v2 with a multi-site listener configured for contoso.com. You create a path-based routing rule with two paths:
1. Path: /images/* -> Backend Pool: ImagePool
2. Path: /videos/* -> Backend Pool: VideoPool
You also create a Rewrite Set and associate it with the path-based routing rule itself (not a specific path). The rewrite set is configured to add a new header X-Request-Source: AppGateway. A client sends a request to https://contoso.com/images/logo.png. Which of the following statements is true?
Configure Azure Application Gateway
Hard
A.The request is routed to ImagePool, but the header is not added because rewrite sets can only be associated with basic routing rules.
B.The request is routed to ImagePool, and the X-Request-Source header is added.
C.The request fails because a rewrite set cannot be associated with a parent path-based rule.
D.The request is routed to the default backend pool of the listener because the rewrite set conflicts with the path-based logic.
Correct Answer: The request is routed to ImagePool, and the X-Request-Source header is added.
Explanation:
In Azure Application Gateway v2, rewrite sets can be associated with a routing rule. When associated with a path-based routing rule, the rewrite configuration applies to all requests that match the parent rule, regardless of which path-based rule is ultimately chosen. The gateway first evaluates the listener (contoso.com), then the path-based rule (/images/*), determining the backend is ImagePool. Because the request matches the parent routing rule, the associated rewrite set is executed, adding the X-Request-Source header before the request is forwarded to a backend server in ImagePool.
Incorrect! Try again.
52A subnet contains several VMs and has an NSG with the following outbound rules:
- Priority 100: Source Any, Destination Service Tag: AzureCloud, Port Any, Protocol Any, Action Allow
- Priority 200: Source Any, Destination Any, Port 80, Protocol TCP, Action Deny
- Priority 4096: Default DenyAllOutbound rule (not visible)
A VM in this subnet attempts to make an outbound HTTP request to www.bing.com on port 80. The public IP of www.bing.com is part of the IP address range covered by the AzureCloud service tag. What is the outcome of the connection attempt?
Configure network security groups
Hard
A.The connection is denied because service tags cannot be used in the destination of outbound rules.
B.The connection is allowed because the rule for the AzureCloud service tag has a higher priority (100) than the deny rule (200).
C.The connection is allowed because www.bing.com is an external service and not part of the AzureCloud service tag.
D.The connection is denied because the rule with priority 200 specifically denies outbound traffic on port 80.
Correct Answer: The connection is allowed because the rule for the AzureCloud service tag has a higher priority (100) than the deny rule (200).
Explanation:
NSG rule processing is based on priority, with the lowest number having the highest priority. When the VM initiates an outbound connection to www.bing.com (which is hosted in Azure and thus its IP is in the AzureCloud service tag range) on port 80, the NSG evaluates its outbound rules. The rule with priority 100 is checked first. The request matches this rule (Source: VM's IP is within Any, Destination: www.bing.com IP is within AzureCloud). Since the action is Allow, processing stops, and the traffic is permitted. The rule with priority 200, which would deny the traffic, is never evaluated because a matching rule with a higher priority has already been found.
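First-match evaluation over the rules in the question can be sketched like this (the rule dictionaries and tag matching are illustrative simplifications):

```python
# NSG outbound evaluation: check rules in ascending priority order;
# the first match decides and processing stops.

outbound_rules = [
    {"priority": 100,  "dest": "AzureCloud", "port": "Any", "action": "Allow"},
    {"priority": 200,  "dest": "Any",        "port": 80,    "action": "Deny"},
    {"priority": 4096, "dest": "Any",        "port": "Any", "action": "Deny"},
]

def first_match(rules, dest_tags, port):
    """dest_tags: the service tags whose ranges cover the destination IP."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (rule["dest"] in dest_tags or rule["dest"] == "Any") \
           and rule["port"] in (port, "Any"):
            return rule["action"]
    return "Deny"

# www.bing.com's public IP falls inside the AzureCloud tag's ranges,
# so the priority-100 Allow matches before the priority-200 Deny.
print(first_match(outbound_rules, {"AzureCloud"}, 80))  # -> Allow
```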
Incorrect! Try again.
53You have a hub-spoke topology where the hub VNet has an Azure Firewall and a custom DNS server. Spoke VNets are peered to the hub and use the custom DNS. You create a Private Endpoint for an Azure SQL database in a spoke VNet. The corresponding Private DNS Zone (privatelink.database.windows.net) is linked to the hub VNet but not to the spoke VNet where the Private Endpoint resides. A VM in the spoke VNet tries to resolve the FQDN of the SQL database (mydb.database.windows.net). What is the result?
Configure network routing and endpoints
Hard
A.The resolution fails because the spoke VNet cannot access a Private DNS Zone linked only to the hub VNet.
B.The VM resolves the public IP of the SQL database because its VNet is not linked to the Private DNS Zone.
C.The VM resolves the private IP of the Private Endpoint because the query is forwarded to the custom DNS in the hub, which can access the linked Private DNS Zone.
D.The VM resolves the private IP of the Private Endpoint because the presence of the endpoint in the VNet automatically enables private resolution.
Correct Answer: The VM resolves the private IP of the Private Endpoint because the query is forwarded to the custom DNS in the hub, which can access the linked Private DNS Zone.
Explanation:
This setup is a standard and recommended pattern for hub-spoke private endpoint DNS resolution. The VM in the spoke VNet is configured to use the custom DNS server in the hub. When it tries to resolve mydb.database.windows.net, it sends the query to the hub's DNS server. This DNS server, in turn, forwards the query to the Azure-provided DNS (168.63.129.16). Because the hub VNet is linked to the privatelink.database.windows.net Private DNS Zone, Azure DNS will respond with the private 'A' record from that zone. This private IP is then returned to the custom DNS server, and finally back to the VM in the spoke. Therefore, linking the zone to the hub (where DNS resolution happens) is sufficient.
Incorrect! Try again.
54You are using Azure CDN from Microsoft (Standard) to serve static assets from a Blob Storage origin. You configure a caching rule in the CDN Rules Engine with the following settings:
- Match Condition: Request URL Path contains /dynamic/
- Action: Cache Expiration -> Bypass cache
The origin storage container has a Cache-Control header set to max-age=3600 for all blobs. A user requests a file at https://mycdn.azureedge.net/dynamic/script.js. What caching behavior will be observed at the CDN POP?
Introduction to Azure Front Door and Content Delivery Network (CDN)
Hard
A.The CDN will cache the content using its default TTL because the Rules Engine action conflicts with the origin header.
B.The CDN will bypass its cache and fetch the content from the origin for every request to this URL.
C.The configuration is invalid; Bypass cache cannot be used with a Blob Storage origin.
D.The CDN will cache the content for 3600 seconds as instructed by the origin's Cache-Control header.
Correct Answer: The CDN will bypass its cache and fetch the content from the origin for every request to this URL.
Explanation:
The Azure CDN Rules Engine provides granular control over content delivery and caching, and its rules take precedence over the origin's HTTP headers. In this scenario, the request URL /dynamic/script.js matches the condition (Path contains /dynamic/). The corresponding action is to Bypass cache. This action explicitly instructs the CDN edge node (POP) to not cache the content and to forward every request for this asset directly to the origin. The Cache-Control: max-age=3600 header sent by the Blob Storage origin is effectively ignored by the CDN for any path matching this rule.
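The precedence boils down to a simple decision: a matching Rules Engine action overrides whatever the origin's Cache-Control header says. A sketch of that decision (names and paths are illustrative):

```python
# A matching Rules Engine caching action wins over the origin's
# Cache-Control header; otherwise the origin header is honored.

def effective_caching(url_path: str, origin_max_age_s: int) -> str:
    if "/dynamic/" in url_path:          # Rules Engine match condition
        return "bypass"                  # Bypass cache action wins
    return f"cache for {origin_max_age_s}s"

print(effective_caching("/dynamic/script.js", 3600))  # -> bypass
print(effective_caching("/static/site.css", 3600))    # -> cache for 3600s
```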
Incorrect! Try again.
55You are configuring a subnet that will host an Azure App Service Environment v3 (ASEv3). You attempt to delegate this subnet to Microsoft.Web/hostingEnvironments. After successful delegation, you try to associate a Network Security Group (NSG) with this same subnet to add a custom rule blocking outbound traffic to a specific IP address. What will be the outcome?
Configure virtual networks
Hard
A.The operation will fail because some service delegations, including Microsoft.Web/hostingEnvironments, impose restrictions that prevent associating an NSG with the delegated subnet.
B.The operation will succeed, and the NSG will be associated, allowing you to add the custom outbound rule.
C.The operation will succeed, but the custom outbound rule will be ignored by the ASEv3 instances.
D.The delegation will be automatically removed to allow the NSG to be associated.
Correct Answer: The operation will succeed, and the NSG will be associated, allowing you to add the custom outbound rule.
Explanation:
While subnet delegation does place certain restrictions on a subnet to ensure compatibility with the service being deployed, the specific delegation for ASEv3 (Microsoft.Web/hostingEnvironments) explicitly allows and requires the association of an NSG. The platform needs an NSG to function correctly and applies its own set of required rules. However, you are still permitted to add your own custom rules to the NSG, as long as they don't conflict with the required platform rules. Therefore, you can associate an NSG and add a custom rule to block specific outbound traffic.
Incorrect! Try again.
56You have a Standard SKU public Azure Load Balancer with a backend pool of VMs that do not have public IP addresses. You have not configured any explicit outbound rules or a NAT Gateway on the subnet. How do the VMs in the backend pool access the internet?
Configure Azure Load Balancer
Hard
A.The VMs cannot access the internet because they lack a public IP and no explicit outbound SNAT is configured.
B.The VMs can only access other Azure services via service endpoints; general internet access is blocked.
C.The VMs will use the public IP of the load balancer for outbound SNAT, with a default allocation of SNAT ports.
D.The VMs will be assigned a transient public IP by the Azure fabric for outbound connectivity.
Correct Answer: The VMs will use the public IP of the load balancer for outbound SNAT, with a default allocation of SNAT ports.
Explanation:
When a Standard SKU public load balancer has a backend pool, it automatically provides outbound SNAT (Source Network Address Translation) for the VMs in that pool, provided they do not have their own instance-level public IPs. The VMs' private IPs are translated to the public IP of the load balancer for outbound connections. This is an implicit configuration. While it's best practice to configure explicit outbound connectivity using a NAT Gateway or outbound rules for better control over SNAT port allocation and to avoid SNAT exhaustion, the default behavior does provide internet access.
Incorrect! Try again.
57An Azure Application Gateway v2 is deployed into a dedicated subnet. You want to ensure all traffic originating from the Application Gateway (i.e., traffic going to the backend pools) is routed through a central Azure Firewall for inspection before reaching the backend servers, which are in a different VNet. How can this be achieved?
Configure Azure Application Gateway
Hard
A.Create a route table with a UDR for 0.0.0.0/0 pointing to the Azure Firewall, and associate it with the Application Gateway's subnet.
B.Configure the backend pools of the Application Gateway to use the FQDN of an Internal Load Balancer fronting the Azure Firewall.
C.Enable 'Firewall integration' on the Application Gateway settings, specifying the target Azure Firewall.
D.This configuration is not supported. Application Gateway subnets do not support UDRs that route traffic to a virtual appliance.
Correct Answer: This configuration is not supported. Application Gateway subnets do not support UDRs that route traffic to a virtual appliance.
Explanation:
Azure has specific constraints on the Application Gateway subnet. One of the most critical is that while you can associate a route table, it cannot contain a User-Defined Route (UDR) for 0.0.0.0/0 that points to a virtual appliance like Azure Firewall. Doing so is an unsupported configuration that will break the control plane communication required for the Application Gateway to function correctly. Traffic to the backend pools must have a direct network path. To inspect traffic, the firewall should be placed in front of the Application Gateway (for ingress) or other architectural patterns must be used, but you cannot force egress traffic from the App Gateway subnet through a firewall via a 0.0.0.0/0 UDR.
Incorrect! Try again.
58You have two VNets, VNet-A and VNet-B, in a global VNet peering relationship (VNet-A is in West US, VNet-B is in West Europe). A VM in VNet-A with a private IP of 10.1.1.4 communicates with a VM in VNet-B with a private IP of 10.2.1.4. Which statement accurately describes the data transfer path and associated costs?
Configure Azure Virtual Network peering
Hard
A.Traffic is routed entirely over the Microsoft private backbone. Global VNet peering data transfer charges apply, which are higher than regional peering charges.
B.Traffic is routed to the nearest regional edge router and then over the internet. No data transfer charges apply for peered networks.
C.Traffic is routed over the public internet but encrypted by Azure. Standard internet egress data transfer charges apply.
D.Traffic is routed entirely over the Microsoft private backbone. Zonal data transfer charges apply.
Correct Answer: Traffic is routed entirely over the Microsoft private backbone. Global VNet peering data transfer charges apply, which are higher than regional peering charges.
Explanation:
One of the key benefits of VNet peering, including global VNet peering, is that all traffic between the peered networks travels across the Microsoft private global backbone rather than the public internet, which provides a more secure and predictable connection. Data transfer, however, is not free: traffic crossing regional boundaries via global VNet peering is billed at global peering data transfer rates, which are higher than the rates for data transfer between peered VNets in the same region.
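A global peering like the one in the question is created the same way as a regional one; Azure infers "global" from the regions of the two VNets. A hedged Azure CLI sketch with hypothetical resource group names and a placeholder subscription ID (each direction of the peering must be created separately):

```shell
# Hypothetical names; <sub-id> is a placeholder for the subscription ID.
# This creates the VNet-A -> VNet-B half of the peering; a matching
# command on VNet-B is required for the reverse direction.
az network vnet peering create \
  --resource-group rg-westus \
  --vnet-name VNet-A \
  --name peer-a-to-b \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-westeurope/providers/Microsoft.Network/virtualNetworks/VNet-B" \
  --allow-vnet-access
```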
59. A virtual network contains two subnets, Subnet-A and Subnet-B. Subnet-A has a route table with a UDR that routes traffic destined for Subnet-B's address prefix to the IP of an NVA. Subnet-B has no route table associated. A VM in Subnet-A sends a packet to a VM in Subnet-B. The NVA forwards the packet to the VM in Subnet-B. How does the return traffic from the VM in Subnet-B get routed back to the VM in Subnet-A?
Configure network routing and endpoints
Hard
A.It is automatically routed back through the NVA due to Azure's stateful connection tracking.
B.It is routed to the NVA, because the original request came from the NVA's IP.
C.The connection fails because Subnet-B does not have a route back to the NVA.
D.It is routed directly back to the VM in Subnet-A, bypassing the NVA, potentially causing asymmetric routing.
Correct Answer: It is routed directly back to the VM in Subnet-A, bypassing the NVA, potentially causing asymmetric routing.
Explanation:
This scenario describes a classic asymmetric-routing problem. Outbound traffic from Subnet-A to Subnet-B is forced through the NVA by the UDR, but Subnet-B has no route table, so the return traffic follows Subnet-B's effective system routes. Because Subnet-A is in the same VNet, the system route for the VNet address space applies, with a next hop type of VirtualNetwork, and the reply is delivered directly to the originating VM in Subnet-A, bypassing the NVA. The NVA sees the request go out but never sees the reply, which can break stateful firewalls and other inspection appliances. To fix this, associate a route table with Subnet-B containing a UDR that forces traffic destined for Subnet-A's prefix back through the NVA.
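The fix mentioned in the explanation, a mirror-image UDR on Subnet-B, can be sketched in Azure CLI. The question gives no concrete prefixes, so the address prefix and NVA IP below are hypothetical examples, as are the resource names.

```shell
# Hypothetical names, prefixes, and IPs. Gives Subnet-B a route that sends
# traffic destined for Subnet-A back through the NVA, making the path
# symmetric in both directions.
az network route-table create \
  --resource-group rg-network \
  --name rt-subnet-b

az network route-table route create \
  --resource-group rg-network \
  --route-table-name rt-subnet-b \
  --name to-subnet-a-via-nva \
  --address-prefix 10.0.1.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.3.4   # NVA private IP (example)

az network vnet subnet update \
  --resource-group rg-network \
  --vnet-name vnet-prod \
  --name Subnet-B \
  --route-table rt-subnet-b
```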
60. You have a VNet with an address space of 10.10.0.0/16. You add a secondary address space of 192.168.0.0/24 to this VNet. You then create a new subnet with the address prefix 192.168.0.0/25. A VM deployed in this new subnet needs to communicate with a VM in an older subnet (10.10.1.0/24). What is the routing behavior for this traffic?
Configure virtual networks
Hard
A.The communication will succeed but will be routed via the VNet's default gateway, introducing extra latency.
B.The communication will succeed, and traffic will be routed directly within the VNet's internal fabric.
C.The communication will fail because resources in different address spaces within the same VNet cannot communicate without a router.
D.The communication requires a UDR to be configured to route traffic between the 10.10.0.0/16 and 192.168.0.0/24 address spaces.
Correct Answer: The communication will succeed, and traffic will be routed directly within the VNet's internal fabric.
Explanation:
Azure VNets support multiple, non-contiguous address spaces. When you add an address space to a VNet, Azure automatically updates the system route tables for all subnets within that VNet to include routes for all configured address spaces. The next hop for these routes is VirtualNetwork. Therefore, a VM in a subnet from the 192.168.0.0/24 range can communicate directly with a VM in a subnet from the 10.10.0.0/16 range as if they were in the same address block. No additional configuration like UDRs or gateways is needed for this intra-VNet communication.
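The configuration described in the question can be sketched in Azure CLI. The prefixes come from the question itself; the VNet, subnet, and resource group names are hypothetical.

```shell
# Hypothetical resource names; prefixes are from the question.
# --address-prefixes replaces the entire list, so the existing space
# must be repeated alongside the new one.
az network vnet update \
  --resource-group rg-network \
  --name vnet-main \
  --address-prefixes 10.10.0.0/16 192.168.0.0/24

az network vnet subnet create \
  --resource-group rg-network \
  --vnet-name vnet-main \
  --name snet-secondary \
  --address-prefixes 192.168.0.0/25
```

No further routing configuration is needed; Azure adds the VirtualNetwork system routes for both address spaces automatically.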