1. What is the primary function of an Azure Storage Account?
Azure storage accounts
Easy
A.To manage user identities and access
B.To run virtual machines
C.To host a web application's code
D.To act as a unique, top-level namespace for all your Azure Storage data objects
Correct Answer: To act as a unique, top-level namespace for all your Azure Storage data objects
Explanation:
An Azure Storage Account is the fundamental container that groups a set of Azure Storage services (Blob, Queue, Table, Files, Disks). It provides a unique, globally accessible namespace for your data.
2. Azure Blob Storage is primarily designed to store what kind of data?
Azure Blob Storage
Easy
A.Messaging data for asynchronous communication
B.Unstructured data like images, documents, and videos
C.Structured data in rows and columns, like a relational database
D.User login credentials
Correct Answer: Unstructured data like images, documents, and videos
Explanation:
Azure Blob (Binary Large Object) Storage is Microsoft's object storage solution, optimized for storing massive amounts of unstructured data.
3. What does LRS stand for in the context of Azure Storage redundancy?
Data Redundancy Options (LRS, ZRS, GRS)
Easy
A.Low-risk storage
B.Location-replicated storage
C.Large-region storage
D.Locally-redundant storage
Correct Answer: Locally-redundant storage
Explanation:
LRS stands for Locally-redundant storage. It is the lowest-cost option and replicates your data three times within a single physical location (datacenter) in the primary region.
4. What is Azure Storage Explorer?
Azure Storage Explorer
Easy
A.A type of data redundancy
B.A command-line tool for deploying Azure resources
C.A standalone desktop application for managing Azure Storage data
D.A web-based portal for monitoring Azure service health
Correct Answer: A standalone desktop application for managing Azure Storage data
Explanation:
Azure Storage Explorer is a free, standalone GUI application from Microsoft that allows you to easily work with Azure Storage data on Windows, macOS, and Linux.
5. What is the primary purpose of a Shared Access Signature (SAS) in Azure Storage?
Control access to Azure Storage with shared access signatures
Easy
A.To provide secure, delegated, and limited access to resources in your storage account
B.To permanently delete a storage account
C.To create a new storage account
D.To encrypt all data within a storage account
Correct Answer: To provide secure, delegated, and limited access to resources in your storage account
Explanation:
A SAS provides granular control over the type of access you grant to clients, including which resources they can access, what permissions they have, and for how long the access is valid.
6. What are the two primary credentials provided with every Azure Storage Account for programmatic access?
Azure Storage security
Easy
A.Access keys
B.Digital certificates
C.Usernames and passwords
D.SSH keys
Correct Answer: Access keys
Explanation:
Each storage account comes with two 512-bit storage account access keys (key1 and key2) that grant full administrative access to the account. They are used for authorizing requests.
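The signing mechanics can be sketched with the standard library: the base64-encoded account key is decoded and used as an HMAC-SHA256 key over a canonical string-to-sign. This is a simplified illustration, not the full Shared Key scheme (the real string-to-sign includes the HTTP verb, headers, and a canonicalized resource); the key below is fabricated.

```python
import base64
import hashlib
import hmac

def sign_request(account_key_b64: str, string_to_sign: str) -> str:
    """Simplified sketch of Shared Key signing: HMAC-SHA256 the
    string-to-sign with the base64-decoded account key, then
    base64-encode the digest."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Fabricated key for illustration; real keys are 512-bit values from the portal.
fake_key = base64.b64encode(b"not-a-real-key").decode("utf-8")
signature = sign_request(fake_key, "GET\n/myaccount/mycontainer/myblob")
print(signature)
```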
7. Which redundancy option copies your data to a secondary region, hundreds of miles away from the primary region, to protect against regional outages?
Data Redundancy Options (LRS, ZRS, GRS)
Easy
A.Single-instance storage (SIS)
B.Zone-redundant storage (ZRS)
C.Geo-redundant storage (GRS)
D.Locally-redundant storage (LRS)
Correct Answer: Geo-redundant storage (GRS)
Explanation:
GRS is designed for maximum durability by replicating data to a secondary region that is geographically distant from the primary one, protecting data even if an entire region becomes unavailable.
8. In Azure Blob Storage, what is the name for a directory-like structure that is used to group a set of blobs?
Azure Blob Storage
Easy
A.File Share
B.Table
C.Queue
D.Container
Correct Answer: Container
Explanation:
A container organizes a set of blobs, similar to how a directory or folder organizes files in a file system. All blobs must reside within a container.
9. What is the main role of a Recovery Services vault in Azure?
Backup Vaults
Easy
A.To store and manage TLS/SSL certificates
B.To store unstructured blob data for applications
C.To manage and store backups and recovery points of various Azure services
D.To act as a high-performance cache for web apps
Correct Answer: To manage and store backups and recovery points of various Azure services
Explanation:
A Recovery Services vault is a storage entity in Azure that houses backup data for services like Azure VMs, SQL Server in Azure VMs, and Azure File shares. It facilitates backup management and restore operations.
10. Which of the following must be globally unique across all of Azure?
Azure storage accounts
Easy
A.Storage account name
B.Resource group name
C.Virtual network name
D.Blob container name
Correct Answer: Storage account name
Explanation:
The name of an Azure Storage Account is used to form the public endpoint URL (e.g., myaccount.blob.core.windows.net), so it must be globally unique across the entire Azure platform.
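Since the account name forms the endpoint host, a quick sketch can show both the naming constraint and the resulting URL. The regex reflects the documented rule (3-24 lowercase letters and digits); the account name is an example.

```python
import re

def blob_endpoint(account_name: str) -> str:
    # Storage account names: 3-24 characters, lowercase letters and digits only.
    if not re.fullmatch(r"[a-z0-9]{3,24}", account_name):
        raise ValueError(f"invalid storage account name: {account_name!r}")
    return f"https://{account_name}.blob.core.windows.net"

print(blob_endpoint("myaccount"))  # https://myaccount.blob.core.windows.net
```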
11. Which redundancy option protects against a datacenter-level failure by synchronously replicating data across three different datacenters (Availability Zones) within a single region?
Data Redundancy Options (LRS, ZRS, GRS)
Easy
A.Read-access geo-redundant storage (RA-GRS)
B.Locally-redundant storage (LRS)
C.Zone-redundant storage (ZRS)
D.Geo-redundant storage (GRS)
Correct Answer: Zone-redundant storage (ZRS)
Explanation:
ZRS provides high availability by copying data across three distinct Availability Zones within the primary region. This ensures data is safe even if one entire datacenter (zone) fails.
12. A SAS token is appended to a URI. What does it consist of?
Control access to Azure Storage with shared access signatures
Easy
A.A set of HTTP headers
B.A username and password
C.An encrypted file
D.A special set of query parameters
Correct Answer: A special set of query parameters
Explanation:
A SAS is a string containing a set of query parameters that are appended to the resource URI. These parameters indicate the permissions, start time, expiry time, and a cryptographic signature.
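To make this concrete, the following sketch splits a SAS URI into its resource path and query parameters using only the Python standard library; all token values here are fabricated for illustration.

```python
from urllib.parse import urlsplit, parse_qs

# Fabricated SAS URI; real tokens are generated by Azure, not hand-written.
sas_uri = (
    "https://myaccount.blob.core.windows.net/uploads/report.pdf"
    "?sv=2022-11-02&sp=r&st=2024-01-01T00:00:00Z"
    "&se=2024-01-02T00:00:00Z&spr=https&sig=FAKESIGNATURE"
)

parts = urlsplit(sas_uri)
params = {k: v[0] for k, v in parse_qs(parts.query).items()}
print(parts.path)    # the resource being accessed
print(params["sp"])  # permissions (r = read)
print(params["se"])  # expiry time
```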
13. If a storage account's access key is compromised, what is the recommended immediate security action?
Azure Storage security
Easy
A.Regenerate the compromised key
B.Delete the storage account
C.Contact Azure support
D.Disable public access
Correct Answer: Regenerate the compromised key
Explanation:
Regenerating the key immediately invalidates the old, compromised key, preventing any further unauthorized access. This is a critical security practice.
14. Which of the following is a valid type of blob in Azure Blob Storage?
Azure Blob Storage
Easy
A.Block blob
B.Queue blob
C.Table blob
D.File blob
Correct Answer: Block blob
Explanation:
Azure Blob Storage offers three types of blobs: Block blobs (for text and binary data), Append blobs (optimized for append operations), and Page blobs (for random read/write operations, like VHD files).
15. Which of the following tasks can be performed using Azure Storage Explorer?
Azure Storage Explorer
Easy
A.Upload, download, and manage blobs and files
B.Write and debug C# code
C.Create a new Azure subscription
D.Configure virtual network peering
Correct Answer: Upload, download, and manage blobs and files
Explanation:
Azure Storage Explorer is a client tool for managing storage resources. Its core functionalities include data operations like uploading, downloading, deleting, and visualizing blobs, files, queues, and tables.
16. What are the two main performance tiers available for Azure Storage Accounts?
Azure storage accounts
Easy
A.Hot and Cold
B.Standard and Premium
C.Basic and Enterprise
D.Shared and Dedicated
Correct Answer: Standard and Premium
Explanation:
Azure Storage Accounts offer a 'Standard' tier, which uses magnetic drives (HDD) for general-purpose storage, and a 'Premium' tier, which uses solid-state drives (SSD) for low-latency, high-throughput workloads.
17. When creating a SAS, which of the following can you specify to limit its use?
Control access to Azure Storage with shared access signatures
Easy
A.A specific MAC address
B.The type of computer accessing the resource
C.The user's geographic location
D.An expiration date and time
Correct Answer: An expiration date and time
Explanation:
A key security feature of a SAS is its limited lifetime. You must specify a start time and an expiry time, after which the SAS token becomes invalid.
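Expiry enforcement can be sketched by comparing the se value against the current time; a fixed "now" is used here so the example is deterministic.

```python
from datetime import datetime, timezone

def sas_expired(se_value: str, now: datetime) -> bool:
    """Return True if the SAS expiry (ISO 8601, e.g. 2024-01-02T00:00:00Z)
    is at or before `now`."""
    expiry = datetime.fromisoformat(se_value.replace("Z", "+00:00"))
    return now >= expiry

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(sas_expired("2024-01-02T00:00:00Z", now))  # True: token has expired
print(sas_expired("2025-01-02T00:00:00Z", now))  # False: still valid
```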
18. Which Azure Storage redundancy option is generally the lowest-cost?
Data Redundancy Options (LRS, ZRS, GRS)
Easy
A.Geo-redundant storage (GRS)
B.Locally-redundant storage (LRS)
C.Geo-zone-redundant storage (GZRS)
D.Zone-redundant storage (ZRS)
Correct Answer: Locally-redundant storage (LRS)
Explanation:
LRS is the least expensive option because it stores the fewest copies of your data (three copies) in a single physical location, offering the least redundancy compared to ZRS, GRS, or GZRS.
19. A single file, such as a JPEG image, that is uploaded to a container in Azure Blob Storage is referred to as a what?
Azure Blob Storage
Easy
A.Item
B.Blob
C.Table
D.Record
Correct Answer: Blob
Explanation:
Any individual file or piece of data stored in Azure Blob Storage is called a 'blob'. The service is named for storing these Binary Large Objects.
20. What modern authorization method is recommended by Microsoft over using shared account keys for securing Azure Storage?
Azure Storage security
Easy
A.Storing access keys directly in application code
B.Using only SAS tokens for all access
C.Disabling all security features for better performance
D.Azure Role-Based Access Control (RBAC) with Azure Active Directory
Correct Answer: Azure Role-Based Access Control (RBAC) with Azure Active Directory
Explanation:
Microsoft recommends using Azure AD to authorize requests to storage. This allows for fine-grained permissions to be assigned to users, groups, or applications via RBAC, which is more secure and manageable than sharing the all-powerful account keys.
21. A company is planning to store virtual machine disks (VHDs) for their IaaS VMs and also host a static website with high transaction rates. They want to use a single storage account for both workloads to simplify management. Which storage account kind and performance tier combination would be most appropriate?
Azure storage accounts
Medium
A.FileStorage account with Premium performance.
B.General-purpose v2 account with Premium performance.
C.General-purpose v2 account with Standard performance.
D.BlobStorage account with Standard performance.
Correct Answer: General-purpose v2 account with Premium performance.
Explanation:
A General-purpose v2 (GPv2) account supports all storage services, including blobs (for the static website) and files/disks. For VHDs used by VMs, Premium performance is recommended as it uses solid-state drives (SSDs) for low latency and high throughput, which is crucial for operating system and data disks. Standard performance uses HDDs and is not ideal for active VHDs.
22. An organization is deploying a critical application in the East US Azure region. They require a storage solution that can withstand a complete datacenter failure within that region without any data loss and with automatic failover of the storage endpoint. Cost is a secondary concern to regional availability. Which redundancy option should be selected?
Data Redundancy Options (LRS, ZRS, GRS)
Medium
A.Geo-Redundant Storage (GRS)
B.Zone-Redundant Storage (ZRS)
C.Read-Access Geo-Redundant Storage (RA-GRS)
D.Locally-Redundant Storage (LRS)
Correct Answer: Zone-Redundant Storage (ZRS)
Explanation:
Zone-Redundant Storage (ZRS) synchronously replicates data across three different Availability Zones within a single region. This design protects against a datacenter-level failure. GRS and RA-GRS protect against regional failures but have a higher Recovery Point Objective (RPO) due to asynchronous replication to the secondary region. LRS only protects against node/rack failures within a single datacenter.
23. A media company uploads large video files (50-100 GB each) for a video processing workflow. The workflow application needs to read and write to specific byte ranges within these files without rewriting the entire file. Which type of blob is specifically designed for this 'random access' read/write workload?
Azure Blob Storage
Medium
A.Archive Blobs
B.Block Blobs
C.Page Blobs
D.Append Blobs
Correct Answer: Page Blobs
Explanation:
Page Blobs are optimized for random read and write operations. They are a collection of 512-byte pages, making them ideal for scenarios like storing VHD files or any data structure that requires reading/writing to arbitrary offsets. Block blobs are for streaming large objects, and Append blobs are for append-only operations like logging.
24. A developer needs to provide a third-party application with temporary, delegated access to upload new blobs into a specific container named uploads. The access should be valid for only 48 hours and should not grant permissions to read, delete, or list any blobs. Which type of Shared Access Signature (SAS) is the most secure and appropriate for this requirement?
Control access to Azure Storage with shared access signatures
Medium
A.A User Delegation SAS with full permissions for the container.
B.A Service SAS for the uploads container with only write and create permissions.
C.An Account SAS with write and create permissions for the Blob service.
D.A Stored Access Policy with read, write, and list permissions.
Correct Answer: A Service SAS for the uploads container with only write and create permissions.
Explanation:
A Service SAS is scoped to a specific resource (here, the uploads container). This follows the principle of least privilege: it grants only the needed permissions (write, create) on only the resource that needs them. An Account SAS would grant permissions across the entire Blob service of the storage account, which is too broad. A User Delegation SAS scoped to the container could also be secure, but the option shown grants full permissions, which violates least privilege.
25. A company has a storage account containing sensitive financial data. The security policy mandates that all data must be encrypted at rest using keys that are managed and rotated by the company's internal security team, not by Microsoft. Which Azure Storage encryption feature must be configured to meet this requirement?
Azure Storage security
Medium
A.Configuring a virtual network service endpoint for the storage account.
B.Enforcing HTTPS for data in transit.
C.Storage Service Encryption (SSE) with customer-managed keys (CMK) stored in Azure Key Vault.
D.Storage Service Encryption (SSE) with Microsoft-managed keys.
Correct Answer: Storage Service Encryption (SSE) with customer-managed keys (CMK) stored in Azure Key Vault.
Explanation:
To meet the requirement of managing their own encryption keys, the company must use customer-managed keys (CMK). Azure Storage integrates with Azure Key Vault to allow customers to bring and manage their own RSA keys for encrypting data at rest. Microsoft-managed keys mean Microsoft handles all key management, which violates the stated policy.
26. Your company stores monthly transaction logs in Azure Blob Storage. These logs are frequently accessed for reporting during the first 30 days. After 30 days, they are rarely accessed but must be retained for one year for compliance. After one year, they can be deleted. What is the most cost-effective way to automate this data lifecycle?
Azure Blob Storage
Medium
A.Manually move blobs from Hot to Cool tier after 30 days and delete after one year.
B.Create a lifecycle management policy to transition blobs to Cool tier after 30 days and delete blobs older than 365 days.
C.Store all data in the Archive tier and rehydrate it when needed for reporting.
D.Write a custom script using Azure Functions to check blob ages and move them between tiers.
Correct Answer: Create a lifecycle management policy to transition blobs to Cool tier after 30 days and delete blobs older than 365 days.
Explanation:
Azure Storage lifecycle management offers a rule-based policy to automate tiering and deletion. This is the most efficient and cost-effective method. A rule can be set to move blobs from Hot to Cool after 30 days (reducing storage costs) and then another rule to delete blobs older than 365 days, perfectly matching the requirements without manual intervention or custom code.
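A lifecycle rule for this scenario might look like the following policy document, shown here as a Python dict (the rule name and the logs/ prefix are illustrative assumptions, not values from the scenario):

```python
import json

# Illustrative lifecycle management policy: tier blobs to Cool after
# 30 days and delete them after 365 days. Rule name and prefix are examples.
policy = {
    "rules": [
        {
            "name": "tier-and-expire-logs",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["logs/"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}
print(json.dumps(policy, indent=2))
```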
27. A company has a storage account configured with Geo-Redundant Storage (GRS). The primary region experiences a major outage. The company needs to access their data from the secondary region for read-only operations while waiting for the primary region to recover. What must they have configured to enable this capability?
Data Redundancy Options (LRS, ZRS, GRS)
Medium
A.They must initiate a manual account failover.
B.They must have configured the account as Read-Access Geo-Redundant Storage (RA-GRS).
C.They must have configured the account as Zone-Redundant Storage (ZRS).
D.Nothing, GRS provides read access to the secondary region by default.
Correct Answer: They must have configured the account as Read-Access Geo-Redundant Storage (RA-GRS).
Explanation:
Standard GRS replicates data to a secondary region but does not allow read access to that data unless a Microsoft-initiated failover occurs. To get read-only access to the data in the secondary region at any time, the storage account must be configured with Read-Access Geo-Redundant Storage (RA-GRS), which provides a separate read-only endpoint for the secondary location.
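The read-only secondary endpoint follows a documented naming pattern: the account name with a -secondary suffix. A minimal sketch:

```python
def secondary_blob_endpoint(account_name: str) -> str:
    # RA-GRS / RA-GZRS expose a read-only secondary endpoint whose
    # host is the account name with a "-secondary" suffix.
    return f"https://{account_name}-secondary.blob.core.windows.net"

print(secondary_blob_endpoint("myaccount"))
# https://myaccount-secondary.blob.core.windows.net
```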
28. A data analyst needs to manage files in an Azure Data Lake Storage Gen2 account and blobs in a standard storage account from their desktop computer. They need a graphical tool that allows them to upload/download data, manage access control lists (ACLs) on the ADLS Gen2 directories, and connect using different Azure AD accounts for different subscriptions. Which tool is best suited for this?
Azure Storage Explorer
Medium
A.Azure Storage Explorer
B.AzCopy command-line utility
C.Azure CLI
D.The Azure Portal
Correct Answer: Azure Storage Explorer
Explanation:
Azure Storage Explorer is a standalone graphical application designed for these exact tasks. It provides a rich user interface to manage various Azure storage services (Blobs, Files, Queues, Tables, ADLS Gen2) across multiple subscriptions and Azure AD tenants. It explicitly supports managing ADLS Gen2 ACLs, which is a key requirement not as easily handled by the portal for bulk operations.
29. An administrator configures a storage account firewall to only allow connections from a specific VNet subnet. A virtual machine in that subnet attempts to connect to the storage account's public endpoint but fails. What is a likely reason for this connection failure?
Azure Storage security
Medium
A.The administrator needs to use a private endpoint instead of a service endpoint.
B.The storage account must be set to Premium performance to use firewall rules.
C.A Network Security Group (NSG) is blocking outbound traffic from the VM to the storage service.
D.The virtual network service endpoint for Microsoft.Storage has not been enabled on the subnet.
Correct Answer: The virtual network service endpoint for Microsoft.Storage has not been enabled on the subnet.
Explanation:
For a storage account firewall rule to recognize traffic from a VNet subnet, a service endpoint for Microsoft.Storage must be enabled on that specific subnet. This endpoint provides a secure and direct route from the VNet to the Azure service, allowing the storage firewall to identify the source subnet and apply the rule.
30. A user is trying to use a SAS token to access a blob, but receives an authentication error. The administrator has verified the SAS signature, expiry time, and permissions are all correct. The storage account requires that all connections use HTTPS. What parameter in the SAS token string is most likely missing or misconfigured?
Control access to Azure Storage with shared access signatures
Medium
A.The Signed Protocol (spr) parameter.
B.The Signed IP (sip) parameter.
C.The Signed Resource (sr) parameter.
D.The Signed Start (st) parameter.
Correct Answer: The Signed Protocol (spr) parameter.
Explanation:
If a storage account requires secure transfer (HTTPS only), a SAS used against it should restrict the allowed protocol to HTTPS via the spr parameter (e.g., spr=https). If spr is omitted or set to http,https and the client connects over HTTP, the account's HTTPS-only enforcement rejects the request, producing an authentication error even though the signature, expiry, and permissions are correct.
31. A company uses a Recovery Services vault to back up blob data from a geo-redundant storage (GRS) account. The Recovery Services vault itself is configured with locally-redundant storage (LRS). If the primary Azure region fails, what is the state of the backup data?
Backup Vaults
Medium
A.The backup data can be restored, but only to the primary region.
B.The backup data can be restored to the secondary region automatically.
C.The backup data is also geo-replicated because the source storage is GRS.
D.The backup data is unavailable until the primary region is restored.
Correct Answer: The backup data is unavailable until the primary region is restored.
Explanation:
The redundancy setting of a Recovery Services vault is independent of the source data's redundancy. If the vault is configured as LRS, the backup data is stored only within a single datacenter in the primary region. Therefore, if that entire region fails, the LRS vault and all its backup data become inaccessible until the region is recovered.
32. A company wants to enable Azure Data Lake Storage Gen2 (ADLS Gen2) capabilities, such as hierarchical namespace and POSIX-like access controls, for their big data analytics workloads. When creating a new storage account, what specific setting must they enable?
Azure storage accounts
Medium
A.Choose FileStorage as the account kind.
B.Enable the Hierarchical namespace setting in the Advanced tab.
C.Select the Premium performance tier.
D.Enable Large file shares on the account.
Correct Answer: Enable the Hierarchical namespace setting in the Advanced tab.
Explanation:
Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage. The key feature that enables ADLS Gen2 capabilities is the hierarchical namespace. This must be explicitly enabled in the 'Advanced' tab during the creation of a standard or premium general-purpose v2 storage account. This setting cannot be changed after the account is created.
33. A company hosts a public-facing static website directly from Azure Storage. They have uploaded their index.html, CSS, and JavaScript files to a specific container. When users navigate to the storage account's static website endpoint, they receive a 404 error. What is the most likely misconfiguration?
Azure Blob Storage
Medium
A.The redundancy option for the storage account is set to LRS.
B.The container's access level is set to Private instead of Blob or Container.
C.The special container used for static website hosting has not been named $web.
D.The storage account firewall is blocking all public traffic.
Correct Answer: The special container used for static website hosting has not been named $web.
Explanation:
When you enable the static website feature on a storage account, Azure automatically creates a special container named $web. All the static content, including the root index.html file, must be placed inside this specific container. If the files are in any other container, the static website endpoint will not be able to find them, resulting in a 404 Not Found error.
34. A developer is using Azure Storage Explorer to manage a storage account but does not have access to the subscription through Azure AD. The administrator provides them with a connection string for the storage account. What information is contained within this connection string that allows Storage Explorer to authenticate?
Azure Storage Explorer
Medium
A.A client ID and secret for a service principal.
B.The storage account name and one of the account access keys.
C.The developer's Azure AD username and a password.
Correct Answer: The storage account name and one of the account access keys.
Explanation:
A storage account connection string is a standardized string that includes the protocol (e.g., HTTPS), the storage account name, and, crucially, one of the full-permission account access keys. Azure Storage Explorer parses this string to get the credentials needed to authenticate directly with the storage account, bypassing the need for Azure AD authentication.
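The structure of a connection string can be seen by splitting it into key/value pairs; the account name and key below are fabricated:

```python
def parse_connection_string(conn: str) -> dict:
    """Split an Azure Storage connection string into its key/value pairs."""
    return dict(part.split("=", 1) for part in conn.split(";") if part)

# Fabricated example; real AccountKey values are base64-encoded 512-bit keys.
conn = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=myaccount;"
    "AccountKey=FAKEKEY==;"
    "EndpointSuffix=core.windows.net"
)
parts = parse_connection_string(conn)
print(parts["AccountName"])  # myaccount
```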
35. A compliance policy requires that all backups of an Azure Storage account's blob data be immutable and protected from accidental or malicious deletion for a minimum of 30 days. Which Recovery Services vault feature should be configured to enforce this policy?
Backup Vaults
Medium
A.Configuring Geo-Redundant Storage (GRS) for the vault.
B.Enabling Multi-User Authorization (MUA) using Azure Resource Guard.
C.Enabling Soft Delete for the vault with a 30-day retention period.
D.Assigning a 'Deny' Azure Policy to the vault's resource group.
Correct Answer: Enabling Soft Delete for the vault with a 30-day retention period.
Explanation:
Soft Delete for Recovery Services vaults is designed specifically for this purpose. When enabled, if a backup item is deleted, the data is retained in a 'soft-deleted' state for a configured period (14 to 180 days). During this period, the deletion can be undone. This protects against accidental deletion. While MUA adds another layer of protection for critical operations, Soft Delete is the primary feature for retention after deletion.
36. A team has an existing general-purpose v2 storage account configured with Locally-Redundant Storage (LRS). Due to new high-availability requirements, they need to change the redundancy to Zone-Redundant Storage (ZRS). What is the process for this change?
Data Redundancy Options (LRS, ZRS, GRS)
Medium
A.The redundancy option can be changed directly in the Azure Portal, but it will cause several hours of downtime.
B.Initiate a live migration from LRS to ZRS from the 'Redundancy' blade in the Azure Portal with no downtime.
C.This change is not possible; a new ZRS account must be created and the data migrated.
D.Submit a support ticket to Microsoft to perform a backend conversion.
Correct Answer: Initiate a live migration from LRS to ZRS from the 'Redundancy' blade in the Azure Portal with no downtime.
Explanation:
For many regions and account types, Azure supports a live, no-downtime migration from LRS to ZRS (and also to GRS/GZRS). This can be initiated directly from the 'Redundancy' configuration blade of the storage account in the Azure Portal. Azure handles the data replication in the background without impacting application access to the data.
37. An administrator creates a stored access policy on a blob container with read permissions. They then generate two SAS tokens that reference this policy. Later, they need to immediately revoke access for both SAS tokens simultaneously without having to track the individual tokens. What is the most effective action?
Control access to Azure Storage with shared access signatures
Medium
A.Modify the expiration time of the stored access policy to a time in the past.
B.Use the Azure CLI to revoke each SAS token individually by its signature.
C.Regenerate the storage account access keys.
D.Delete the blob container.
Correct Answer: Modify the expiration time of the stored access policy to a time in the past.
Explanation:
The primary benefit of using a stored access policy is centralized control. By modifying the policy (e.g., changing its permissions or setting its expiry time to the past), all SAS tokens that reference it are immediately invalidated. Regenerating account keys would also work but is a much more disruptive action that invalidates all access methods using that key, not just the SAS tokens tied to the policy.
38. A developer is using the Azure CLI to create a new storage account. They want to ensure that unencrypted HTTP requests to the storage account are automatically rejected. Which command-line flag should they include in their az storage account create command?
Azure storage accounts
Medium
A.--kind StorageV2
B.--https-only true
C.--allow-blob-public-access false
D.--encryption-services blob
Correct Answer: --https-only true
Explanation:
The --https-only true flag (or enabling 'Secure transfer required' in the portal) configures the storage account to reject any incoming requests that use the HTTP protocol. This enforces that all data is encrypted in transit using HTTPS/TLS. While the other options are valid security settings, only --https-only specifically addresses the rejection of unencrypted HTTP traffic.
39. A security audit reveals that several storage accounts allow anonymous public read access to some blob containers. The company wants to implement a preventative policy at the subscription level to block the creation of any new storage accounts that permit public access and flag existing ones. Which Azure service is best suited for this?
Azure Storage security
Medium
A.Network Security Groups (NSGs)
B.Azure Policy
C.Azure Active Directory Conditional Access
D.Azure Key Vault
Correct Answer: Azure Policy
Explanation:
Azure Policy is designed to enforce organizational standards and assess compliance at scale. There are built-in policies specifically to 'deny' or 'audit' storage accounts that have the allowBlobPublicAccess property set to true. This allows the company to enforce this rule across the entire subscription, preventing misconfigurations and auditing existing resources.
40. You are using Azure Storage Explorer to troubleshoot an application that writes to an Azure Queue. You need to view the contents of messages in the queue without removing them, as the application needs to process them later. Which Storage Explorer operation allows you to do this?
Azure Storage Explorer
Medium
A.Peek Messages
B.Dequeue Messages
C.Clear Queue
D.Get Messages
Correct Answer: Peek Messages
Explanation:
In Azure Queues, 'peeking' a message allows you to read its content without changing its visibility or removing it from the queue. The message remains available for other processes to dequeue and process. 'Dequeueing' reads the message and makes it invisible for a timeout period, effectively reserving it for processing. Azure Storage Explorer provides a 'Peek' function for this specific non-destructive read scenario.
41. A company deploys a critical application using an Azure Storage Account in the "East US" region. The requirements are:
1. Withstand a complete zonal failure within "East US".
2. Maintain a disaster recovery copy in "West US".
3. Allow read access to the secondary location for performance reasons without initiating a failover.
What is the most cost-effective storage redundancy option that meets all these requirements?
Data Redundancy Options (LRS, ZRS, GRS)
Hard
Correct Answer: Read-Access Geo-Zone-Redundant Storage (RA-GZRS)
Explanation:
To meet all requirements, break them down:
Withstand zonal failure: This requires a zone-redundant option (ZRS or GZRS) in the primary region.
Disaster recovery copy in another region: This requires a geo-redundant option (GRS, RA-GRS, GZRS, or RA-GZRS).
Read access to the secondary: This requires a read-access (RA) option (RA-GRS or RA-GZRS).
Combining these, RA-GZRS is the only option that provides zone redundancy in the primary region, a geo-replicated copy in a secondary region, and read-access to that secondary copy.
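The elimination logic above can be expressed as a small requirement-matching function. The capability table and ordering below are a sketch (ordered roughly from cheapest to most expensive), not official pricing data.

```python
# Hypothetical helper: pick the cheapest redundancy SKU that satisfies
# all stated availability requirements. Ordering approximates cost.
SKUS = [
    ("LRS",     {"zone": False, "geo": False, "read_secondary": False}),
    ("ZRS",     {"zone": True,  "geo": False, "read_secondary": False}),
    ("GRS",     {"zone": False, "geo": True,  "read_secondary": False}),
    ("RA-GRS",  {"zone": False, "geo": True,  "read_secondary": True}),
    ("GZRS",    {"zone": True,  "geo": True,  "read_secondary": False}),
    ("RA-GZRS", {"zone": True,  "geo": True,  "read_secondary": True}),
]

def cheapest_sku(zone, geo, read_secondary):
    """Return the first (cheapest) SKU meeting every requirement."""
    for name, caps in SKUS:
        if ((not zone or caps["zone"])
                and (not geo or caps["geo"])
                and (not read_secondary or caps["read_secondary"])):
            return name
    return None

print(cheapest_sku(zone=True, geo=True, read_secondary=True))  # RA-GZRS
```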
42A storage account has a VNet service endpoint enabled for SubnetA. It also has a firewall rule allowing traffic from a specific public IP address. A private endpoint for the same storage account is created in SubnetB within the same VNet. A VM in SubnetB (the subnet with the private endpoint) attempts to access the storage account using its public endpoint FQDN (mystorage.blob.core.windows.net). What is the expected outcome?
Azure Storage security
Hard
A.Access is granted if the VM's public IP matches the firewall rule.
B.Access is granted and routed through the private endpoint.
C.Access is granted through the VNet service endpoint in SubnetA.
D.Access is denied because network policies are not disabled for private endpoints on the subnet.
Correct Answer: Access is granted and routed through the private endpoint.
Explanation:
When a private endpoint is configured for a VNet, Azure DNS is updated to resolve the public FQDN of the storage account to the private IP address of the private endpoint for any client within that VNet. Therefore, when the VM in SubnetB resolves mystorage.blob.core.windows.net, it gets the private IP. The traffic is then routed privately over the Azure backbone to the storage account, bypassing the public endpoint, firewall rules, and service endpoints entirely. This is the primary function of a private endpoint.
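The DNS rewrite that makes this work can be sketched as follows. All names and IP addresses are made up for illustration; the key point is that inside the VNet the public FQDN resolves through the privatelink zone to the private IP.

```python
# Sketch of private endpoint DNS behaviour: inside a VNet with a
# private endpoint, the public FQDN resolves to the endpoint's
# private IP via the linked private DNS zone. Names/IPs are invented.
PUBLIC_DNS = {"mystorage.blob.core.windows.net": "20.60.1.1"}    # public IP
PRIVATE_ZONE = {"mystorage.privatelink.blob.core.windows.net": "10.0.2.4"}

def resolve(fqdn, in_vnet_with_private_endpoint):
    if in_vnet_with_private_endpoint:
        # Azure rewrites the query through the privatelink zone first.
        private_name = fqdn.replace(".blob.", ".privatelink.blob.")
        if private_name in PRIVATE_ZONE:
            return PRIVATE_ZONE[private_name]
    return PUBLIC_DNS[fqdn]

print(resolve("mystorage.blob.core.windows.net", True))   # 10.0.2.4
print(resolve("mystorage.blob.core.windows.net", False))  # 20.60.1.1
```

Because the VM receives the private IP, its traffic never reaches the public endpoint, so the firewall rules and service endpoint configuration are never consulted.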
43A stored access policy named read-policy is created for a blob container, granting read permissions with a 24-hour expiry. A developer generates a Service SAS token for a specific blob in that container, referencing read-policy. Two hours later, an administrator deletes read-policy from the container. What happens when the developer tries to use the original SAS token to read the blob?
Control access to Azure Storage with shared access signatures
Hard
A.The token is still valid for reading for the original 24-hour period.
B.The token will work, but a warning will be logged in Azure Monitor about the missing policy.
C.The token is immediately invalidated and access is denied.
D.The token's permissions are dynamically updated to be read-only at the container level.
Correct Answer: The token is immediately invalidated and access is denied.
Explanation:
The validity of a SAS token tied to a stored access policy is checked against the policy at the time of use. If the stored access policy is deleted, any SAS tokens associated with it are immediately invalidated. When the storage service receives the request with the SAS, it will fail to find the referenced policy (read-policy) and will deny the request with a 403 (Forbidden) error.
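The lookup-at-request-time behavior can be modeled in a few lines. This is a toy simulation of the validation logic, not the storage service's actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Toy model of Service-SAS validation against a stored access policy:
# the policy is resolved at request time, so deleting it immediately
# invalidates every SAS that references it.
container_policies = {
    "read-policy": {
        "permissions": "r",
        "expiry": datetime.now(timezone.utc) + timedelta(hours=24),
    }
}
sas_token = {"signed_identifier": "read-policy"}

def authorize(sas, policies):
    policy = policies.get(sas["signed_identifier"])
    if policy is None:
        return 403  # referenced policy no longer exists -> Forbidden
    if datetime.now(timezone.utc) > policy["expiry"]:
        return 403  # policy expired
    return 200

print(authorize(sas_token, container_policies))  # 200 while policy exists
del container_policies["read-policy"]
print(authorize(sas_token, container_policies))  # 403 after deletion
```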
44A legal department requires that certain financial documents in a blob container cannot be deleted or modified for 7 years. New documents must be continuously added. During the 7-year period, a specific document is part of a lawsuit and its retention must be extended indefinitely, without affecting other documents. Which combination of features achieves this with the most administrative efficiency?
Azure Blob Storage
Hard
A.Create a 7-year time-based immutability policy on the container, and then add a blob-level legal hold to the specific document.
B.Enable versioning and a 7-year time-based immutability policy.
C.Enable a container-level legal hold and manually track the 7-year retention for other documents.
D.Enable soft delete with a 7-year retention and use Azure AD RBAC to prevent deletion.
Correct Answer: Create a 7-year time-based immutability policy on the container, and then add a blob-level legal hold to the specific document.
Explanation:
This scenario requires two different types of retention. A time-based immutability policy is perfect for the fixed 7-year requirement for all documents. A legal hold is an override that prevents deletion or modification until it is explicitly removed, making it ideal for the indefinite retention required by the lawsuit. Importantly, a legal hold can be applied at the blob-version level even when a time-based policy exists on the container. The legal hold takes precedence for that specific blob, extending its retention indefinitely while other blobs are still governed by the 7-year policy.
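The precedence rule, that a legal hold overrides the time-based policy for the blob it is applied to, can be sketched as a simple check. Dates and file names below are illustrative assumptions.

```python
from datetime import date

# Sketch of immutability precedence: deletion succeeds only when the
# container's time-based policy has expired AND no legal hold applies.
RETENTION_END = date(2031, 1, 1)       # end of the 7-year container policy
legal_holds = {"lawsuit-doc.pdf"}       # blob-level legal hold

def can_delete(blob_name, today):
    if blob_name in legal_holds:
        return False                    # hold overrides everything
    return today >= RETENTION_END       # time-based policy governs the rest

print(can_delete("q3-report.pdf", date(2032, 1, 1)))    # True: policy expired
print(can_delete("lawsuit-doc.pdf", date(2032, 1, 1)))  # False: hold active
```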
45A company is migrating an HPC workload to Azure that requires extremely low-latency, high-throughput access to millions of small files. The data must be accessible via both the NFS v3 protocol and the native Blob REST API. Which storage account configuration is required to meet all these conditions?
Azure storage accounts
Hard
A.Premium File Shares account with NFS protocol enabled.
B.General-purpose v2, Standard performance, Hot tier, with hierarchical namespace enabled.
C.Premium Block Blobs account with hierarchical namespace enabled.
D.Premium Page Blobs account.
Correct Answer: Premium Block Blobs account with hierarchical namespace enabled.
Explanation:
This combination of requirements points to a very specific configuration. 'Extremely low-latency' suggests a Premium tier. 'Hierarchical namespace' enables file system semantics and is a key feature of Azure Data Lake Storage (ADLS) Gen2. The ability to support both the Blob REST API and NFS v3 protocol on the same data is a feature of ADLS Gen2. This functionality is enabled by creating a Premium Block Blobs account and enabling the hierarchical namespace feature during creation.
46A storage account is configured with Geo-Redundant Storage (GRS). A regional outage occurs in the primary region. An administrator successfully initiates a customer-managed failover to the secondary region. After the failover completes, what is the redundancy level of the storage account in the new primary region (the old secondary)?
Data Redundancy Options (LRS, ZRS, GRS)
Hard
A.The account is automatically converted to Locally-Redundant Storage (LRS) and must be manually reconfigured back to GRS.
B.The account remains GRS, automatically replicating back to the original primary region once it's available.
C.The account is automatically converted to Zone-Redundant Storage (ZRS) to protect against failures in the new primary.
D.The account enters a permanent read-only state until the original primary region is restored.
Correct Answer: The account is automatically converted to Locally-Redundant Storage (LRS) and must be manually reconfigured back to GRS.
Explanation:
After a customer-managed failover, the secondary region becomes the new primary. To immediately protect the data in this new primary region, Azure automatically configures the account with LRS. Replication to a new secondary region (including the original primary once it recovers) does not start automatically. The administrator is responsible for manually re-enabling GRS or GZRS to re-establish geo-replication, which will incur costs and take time to sync.
47A developer needs to provide a client application with temporary access to upload blobs to a container. For maximum security, the SAS must be tied to an Azure AD identity, be revokable through Azure AD permissions, and must not rely on storage account keys. The client application will use the Azure Identity library to authenticate. Which type of SAS must be used?
Control access to Azure Storage with shared access signatures
Hard
A.Account SAS signed with an account key.
B.User Delegation SAS.
C.A Service SAS combined with a stored access policy.
D.Service SAS signed with an account key.
Correct Answer: User Delegation SAS.
Explanation:
A User Delegation SAS is the only type signed with Azure AD credentials instead of the storage account key. This directly meets all the requirements: it is tied to an Azure AD identity, its permissions are derived from the RBAC roles assigned to that identity, and it can be revoked by revoking the user delegation key or the AAD identity's permissions. This approach avoids exposing the powerful account keys.
48A storage account is configured with a default network access rule of "Deny". A VNet service endpoint for Microsoft.Storage is enabled on SubnetA. A private endpoint for the storage account is created in SubnetB. A firewall rule allows access from the on-premises corporate IP range. A user is on-premises, connected via an ExpressRoute circuit with private peering to the VNet. From a VM in SubnetB, what is the primary mechanism through which the storage account will be accessed?
Azure Storage security
Hard
A.Access is denied.
B.Through the public endpoint via the ExpressRoute circuit.
C.Through the VNet service endpoint.
D.Through the private endpoint.
Correct Answer: Through the private endpoint.
Explanation:
Even though multiple network access methods are configured, the presence of a private endpoint in a subnet and the corresponding DNS configuration (private DNS zone) means that any resource within that VNet resolving the storage account's FQDN will be directed to the private IP address of the endpoint. Therefore, the VM in SubnetB will communicate with the storage account over this private connection. The service endpoint on SubnetA and the public firewall rules are irrelevant for traffic originating from SubnetB.
49A lifecycle management policy is configured with two rules on a container with versioning enabled: Rule 1: If blob index tag "status" = "archive_ready", move the current version to the Archive tier 30 days after creation. Rule 2: Delete previous versions 90 days after they become a previous version.
A blob is created with the tag status = "archive_ready". 45 days later, the blob is modified, creating a new current version. 100 days after the modification (145 days total), what is the state of the original (first) version of the blob?
Azure Blob Storage
Hard
A.It was archived at day 30 and then deleted at day 135.
B.It was moved to the Archive tier.
C.It has been deleted.
D.It exists in the Hot/Cool tier as a previous version.
Correct Answer: It has been deleted.
Explanation:
Let's trace the lifecycle of the original version:
Day 0: Original version created.
Day 45: Blob is modified. The original version becomes a 'previous version'. Rule 1 no longer applies to it, as it only targets the 'current version'.
Rule 2 Evaluation: Rule 2 deletes previous versions 90 days after they become a previous version. This occurs on Day 45 + 90 days = Day 135.
Day 145: The current time is past Day 135, so the condition for Rule 2 has been met and the original version has been deleted.
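The timeline from the explanation reduces to a small calculation, sketched here to make the day arithmetic explicit.

```python
# Walk the timeline: creation day, modification day, and the 90-day
# previous-version deletion rule from the scenario.
created_day = 0
modified_day = 45     # original version becomes a previous version here
delete_after = 90     # Rule 2: delete previous versions after 90 days

deletion_day = modified_day + delete_after
print(deletion_day)   # 135

def state_of_original(day):
    """State of the original version on a given day."""
    if day < modified_day:
        return "current version"   # Rule 1's 30-day archive never fires
                                   # before the modification in this trace
    if day < deletion_day:
        return "previous version"
    return "deleted"

print(state_of_original(145))      # deleted
```

Note that Rule 1 never archives the original version: once it becomes a previous version on Day 45, a rule targeting the current version no longer applies to it.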
50A storage account has operational backup configured via a Backup Vault with a 30-day retention policy. The storage account also has blob soft delete enabled for 14 days and versioning enabled. An attacker on Day 1 deletes a critical blob, report.docx. On Day 10, the attacker deletes the entire container. On Day 20, the security team discovers the breach. What is the most effective method to recover report.docx?
Backup Vaults
Hard
A.Use the versioning feature to restore the previous version of the blob.
B.Restore the container from the Backup Vault to a point in time before the Day 1 deletion.
C.Undelete the blob from the container using the soft delete feature.
D.Undelete the container using container soft delete, then undelete the blob.
Correct Answer: Restore the container from the Backup Vault to a point in time before the Day 1 deletion.
Explanation:
Blob soft delete is irrelevant because the blob's 14-day retention period has already expired by Day 20 (the blob was deleted 19 days earlier). Container soft delete (if enabled) could bring back the container, but the blob inside was already deleted and past its own soft delete window. Versioning doesn't help because the blob was deleted, not modified to create a new version. The Backup Vault is the definitive solution, as it provides point-in-time restore for the container. By restoring the container's state from before the Day 1 deletion, the team can recover the blob in its last known good state.
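The window arithmetic behind this answer is worth making explicit. A quick check of the retention windows against the discovery date:

```python
# Timeline check: by discovery, the 14-day soft delete window has
# lapsed, but the 30-day backup retention still covers the deletion.
blob_deleted_on = 1
discovered_on = 20
soft_delete_days = 14
backup_retention_days = 30

days_elapsed = discovered_on - blob_deleted_on          # 19
soft_delete_recoverable = days_elapsed <= soft_delete_days
backup_recoverable = days_elapsed <= backup_retention_days

print(soft_delete_recoverable)  # False: 19 days > 14-day window
print(backup_recoverable)       # True: vault still holds a restore point
```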
51You have a General-purpose v2 (GPv2) storage account with LRS redundancy containing 100 TB of data. You need to change the account's redundancy to GZRS to meet new compliance requirements. What is the most accurate description of the process?
Azure storage accounts
Hard
A.You must first convert to ZRS, then to GZRS. This is a live migration that can take hours or days.
B.The change can be initiated in the Azure portal and happens instantly with no downtime.
C.The change is not possible because a storage account with existing data cannot be converted to a zone-redundant option.
D.You must create a new GZRS account and manually copy the data using a tool like AzCopy, which will incur data transfer and transaction costs.
Correct Answer: You must create a new GZRS account and manually copy the data using a tool like AzCopy, which will incur data transfer and transaction costs.
Explanation:
This question highlights a critical limitation in Azure Storage redundancy changes. You cannot perform a live migration from a non-zone-redundant option (LRS, GRS) to a zone-redundant option (ZRS, GZRS) on an existing account with data. The only supported method is to create a new storage account with the desired GZRS setting and perform a manual, server-side data migration using tools like AzCopy. This process involves downtime for the application and incurs costs for the data transfer and write operations on the new account.
52A developer is using Azure Storage Explorer to manage a storage account that is secured with a private endpoint and has public network access disabled. The developer's machine is in an on-premises network connected to the Azure VNet via a Site-to-Site VPN. They can ping the private IP of the storage endpoint successfully but cannot browse it in Storage Explorer, receiving an authentication error. What is the most likely misconfiguration?
Azure Storage Explorer
Hard
A.The developer is trying to connect using the standard account.blob.core.windows.net FQDN without proper on-premises DNS forwarding.
B.The developer's Azure AD account lacks the Storage Blob Data Contributor role.
C.The VPN gateway's Network Security Group is blocking HTTPS traffic on port 443.
D.Azure Storage Explorer does not support connections over VPN.
Correct Answer: The developer is trying to connect using the standard account.blob.core.windows.net FQDN without proper on-premises DNS forwarding.
Explanation:
Private endpoints work by overriding DNS. For an on-premises machine to use a private endpoint, its DNS queries for account.blob.core.windows.net must resolve to the private IP address in Azure. This typically requires a DNS forwarder on-premises that directs these specific queries to Azure's internal DNS. If this is not set up, the developer's machine will resolve the FQDN to its public IP, try to connect publicly (which is disabled), and fail. Pinging the IP directly works because it bypasses the DNS resolution step, but Storage Explorer uses the FQDN for authentication and certificate validation, causing the failure.
53An Account SAS is generated with the following parameters: Allowed Services: Blob, File; Allowed Resource Types: Service, Container; Allowed Permissions: Read, List; Expiry: 24 hours.
A user attempts to use this SAS to: 1) List all blobs in a specific container, and 2) Get the properties of the File service. Which statement is correct?
Control access to Azure Storage with shared access signatures
Hard
A.Operation 1 will fail, but operation 2 will succeed.
B.Both operations will succeed.
C.Operation 1 will succeed, but operation 2 will fail.
D.Both operations will fail.
Correct Answer: Operation 1 will fail, but operation 2 will succeed.
Explanation:
The Allowed Resource Types parameter is critical. It is set to Service and Container.
List blobs in a container: This operation requires access to the objects inside the container. Since Object (o) is not included in the Allowed Resource Types, this operation will fail. The SAS only allows container-level operations, like listing containers within the service.
Get File service properties: This is a service-level operation. Since Service (s) is an allowed resource type and File is an allowed service, this operation is permitted by the SAS token.
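The resource-type gate can be sketched as a simple membership check, mirroring the single-letter srt codes (s = Service, c = Container, o = Object) used in Account SAS tokens.

```python
# Toy check of the Account SAS 'signed resource types' parameter:
# listing blobs is an Object-level ('o') operation, while reading
# service properties is Service-level ('s').
ALLOWED_SERVICES = {"b", "f"}        # Blob, File
ALLOWED_RESOURCE_TYPES = {"s", "c"}  # Service, Container -- no Object

def sas_permits(service, resource_type):
    return (service in ALLOWED_SERVICES
            and resource_type in ALLOWED_RESOURCE_TYPES)

print(sas_permits("b", "o"))  # False: list blobs needs Object access
print(sas_permits("f", "s"))  # True: Get File service properties
```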
54A financial services company needs to deploy a solution on Azure using Premium File Shares (FileStorage account type). Their primary goal is the highest possible availability within a single region, capable of surviving a datacenter-level failure. Cost is a secondary concern. Which redundancy option must they choose for their FileStorage account?
Data Redundancy Options (LRS, ZRS, GRS)
Hard
A.Zone-Redundant Storage (ZRS)
B.Geo-Zone-Redundant Storage (GZRS)
C.Locally-Redundant Storage (LRS)
D.Read-Access Geo-Redundant Storage (RA-GRS)
Correct Answer: Zone-Redundant Storage (ZRS)
Explanation:
This question tests knowledge of service limitations. Premium tier storage accounts, which include Premium File Shares (FileStorage), Premium Block Blobs, and Premium Page Blobs, only support LRS and ZRS. They do not support any form of geo-redundancy (GRS, GZRS, RA-GRS, RA-GZRS). To achieve the highest availability within a single region (the stated goal), ZRS is the only and best option, as it synchronously replicates data across three availability zones, protecting against datacenter (zonal) failures.
55A storage account is configured for encryption using a Customer-Managed Key (CMK) from an Azure Key Vault. The Key Vault is protected by a VNet service endpoint and has a default 'Deny' network rule. The storage account's system-assigned managed identity has been given Get, Wrap Key, and Unwrap Key permissions on the Key Vault. However, the storage account itself is not in the Key Vault's VNet. What additional configuration is required for the storage account to access the key for encryption/decryption?
Azure Storage security
Hard
A.Add the storage account's public IP address to the Key Vault's firewall.
B.Grant the managed identity the Key Vault Crypto Service Encryption User role.
C.Enable the 'Allow trusted Microsoft services to bypass this firewall' option on the Key Vault's networking settings.
D.Create a private endpoint for the Key Vault in the storage account's VNet.
Correct Answer: Enable the 'Allow trusted Microsoft services to bypass this firewall' option on the Key Vault's networking settings.
Explanation:
When a storage account needs to access a Key Vault for CMK operations, it does so through its managed identity over the Microsoft backbone, not from within a VNet. If the Key Vault's firewall is enabled, this traffic will be blocked by default. The 'Allow trusted Microsoft services' exception is designed for this specific scenario. It creates a secure channel for services like Azure Storage to bypass the VNet firewall rules, provided the service instance (the storage account in this case) is properly authenticated via its managed identity.
56A company uses an Azure Storage Account with the hierarchical namespace enabled (ADLS Gen2). They have a directory structure /raw/YYYY/MM/DD/. An analyst needs to process all data for June 2023. They are using an application that leverages the Blob REST API (blob.core.windows.net), not the ADLS Gen2 DFS endpoint (dfs.core.windows.net). What is the most significant performance challenge they will face when trying to list the necessary files?
Azure Blob Storage
Hard
A.The Blob API's flat namespace emulation will require an expensive, non-hierarchical listing of all blobs with the prefix /raw/2023/06/.
B.The hierarchical namespace is completely inaccessible via the Blob REST API, so the operation will fail.
C.The Blob REST API cannot handle path depths greater than 3, so the query is invalid.
D.Throttling will occur because list operations are more intensive on ADLS Gen2 accounts via the Blob API.
Correct Answer: The Blob API's flat namespace emulation will require an expensive, non-hierarchical listing of all blobs with the prefix /raw/2023/06/.
Explanation:
While ADLS Gen2 provides a true hierarchical namespace, this is primarily exposed through the DFS endpoint. When you access the same data via the Blob endpoint, it emulates directories over a flat namespace using prefixes. A 'List Blobs' operation with a prefix of /raw/2023/06/ will enumerate every single blob under that prefix, regardless of the 'subdirectories' (like /01/, /02/, etc.). This can be a very slow and expensive operation compared to the DFS endpoint, which can perform efficient, atomic directory-level listings.
57A Backup Vault is configured to protect a storage account. The backup policy has a 30-day retention. A legal hold (a type of immutability policy) is applied to a container within the storage account. An administrator attempts to perform a point-in-time restore from the Backup Vault to a time before the legal hold was applied, choosing the "overwrite existing blobs" option. What will be the outcome?
Backup Vaults
Hard
A.The restore will partially succeed, skipping any blobs that are protected by the legal hold.
B.The restore operation will succeed, and the blobs under legal hold will be overwritten.
C.The restore operation will succeed, but it will create new versions of the blobs instead of overwriting them.
D.The entire restore operation will fail because the container has an active immutability policy.
Correct Answer: The entire restore operation will fail because the container has an active immutability policy.
Explanation:
Azure Backup for blobs respects WORM (Write Once, Read Many) immutability policies like legal holds. Because the restore operation involves writing to the container (either overwriting or creating new blobs), and the container is locked, the operation is blocked. Azure Backup cannot bypass a legal hold. The entire restore job will fail with an error indicating a conflict with the immutability policy on the target container.
58A workload requires writing large amounts of telemetry data as a continuous stream from thousands of devices. Data must be added to existing blobs without modifying previous writes. The primary access pattern is appending new data blocks; full-blob reads are secondary. Which Storage Account type and Blob type combination provides the most optimized performance and API support for this specific workload?
Azure storage accounts
Hard
A.General-purpose v2 account; using Page Blobs with the Put Page operation.
B.Premium BlockBlobStorage account; using Append Blobs with the Append Block operation.
C.Premium FileStorage account; using Files with the Put Range operation.
D.General-purpose v2 account; using Block Blobs with Put Block and Put Block List.
Correct Answer: Premium BlockBlobStorage account; using Append Blobs with the Append Block operation.
Explanation:
The scenario describes a classic append-only workload, for which the Append Blob type is specifically designed. The Append Block operation is optimized for this pattern. To get the best performance (high transaction rates, low latency), the Premium BlockBlobStorage account type is the ideal choice, as it's a specialized account tier optimized for block and append blobs. This combination directly and most efficiently addresses the workload's requirements.
59A security audit requires that all SAS tokens used to access a storage account must be logged with the principal's Azure AD Object ID (OID). An application running on an Azure VM with a system-assigned managed identity needs to generate such a SAS to grant a client read access to a blob for one hour. Which sequence of actions must the application perform?
Control access to Azure Storage with shared access signatures
Hard
A.Request an OAuth 2.0 token for the managed identity, use it to get a user delegation key from the storage service, then create a User Delegation SAS.
B.Use the managed identity to get the storage account key, then create a Service SAS.
C.Use the managed identity to create a stored access policy, then generate a Service SAS referring to that policy.
D.Use an Account SAS signed with the storage account key, and add the managed identity's OID as a custom parameter.
Correct Answer: Request an OAuth 2.0 token for the managed identity, use it to get a user delegation key from the storage service, then create a User Delegation SAS.
Explanation:
The core requirement is to log the Azure AD OID of the SAS creator. This is a built-in feature of User Delegation SAS. The process for an identity like a managed identity is to first authenticate with Azure AD to get an OAuth 2.0 token, then present that token to the Azure Storage service to request a short-lived 'user delegation key'. This key is then used to sign the SAS token. Because this process is initiated by an AAD principal, the storage diagnostic logs will record the OID of that principal, fulfilling the audit requirement.
60A container has blob versioning and a lifecycle management rule to transition previous versions to the Archive tier 30 days after they become a previous version. A blob named config.json is in the Hot tier. Day 1: config.json (Version A) is created. Day 10: config.json is updated, creating a new current version (Version B) and making Version A a previous version.
On which day will Version A complete its transition to the Archive tier?
Azure Blob Storage
Hard
A.Day 30
B.Day 10
C.Day 40
D.Day 31
Correct Answer: Day 40
Explanation:
This question tests the precise timing of lifecycle management rules for versions. Version A becomes a 'previous version' on Day 10. The rule's timer (30 days after they become a previous version) starts from this point. Therefore, the transition to the Archive tier for Version A is scheduled to occur 30 days after Day 10, which is Day 40 (10 + 30 = 40).