1. After a fresh OS installation, what is one of the first properties you should configure to easily identify the server on a network?
configure local server properties
Easy
A.Configure power saving mode
B.Change the server name
C.Install a web browser
D.Set the screen resolution
Correct Answer: Change the server name
Explanation:
Changing the default, often generic, server name to a meaningful one is a fundamental first step for easy identification and management on a network.
Incorrect! Try again.
2. Which local server property is critical for ensuring accurate log file timestamps and proper functioning of time-sensitive authentication protocols?
configure local server properties
Easy
A.Workgroup
B.Time Zone
C.Computer Name
D.Display Language
Correct Answer: Time Zone
Explanation:
Setting the correct time zone, date, and time is crucial for log correlation, scheduled tasks, and authentication services like Kerberos, which can fail if time skew is too great.
Incorrect! Try again.
3. In a server operating system, what is the primary purpose of a 'role'?
configure server roles
Easy
A.To define a user's permission level
B.To be a software update or patch
C.To define the main function or service the server will provide to the network
D.To act as a hardware driver for a specific device
Correct Answer: To define the main function or service the server will provide to the network
Explanation:
A server role is a collection of software components that allows a server to perform a specific function, such as serving web pages (Web Server role) or managing user accounts (Active Directory role).
Incorrect! Try again.
4. If you wanted a server to host a company website, which server role would you install?
configure server roles
Easy
A.Print Server
B.File Server
C.Web Server (IIS)
D.DHCP Server
Correct Answer: Web Server (IIS)
Explanation:
The Web Server (IIS) role provides the necessary services to host websites and web applications. IIS stands for Internet Information Services.
Incorrect! Try again.
5. Which server role automatically assigns IP addresses to client computers on a network?
set up IP addressing service roles
Easy
A.DNS Server
B.Web Server (IIS)
C.DHCP Server
D.File Server
Correct Answer: DHCP Server
Explanation:
DHCP (Dynamic Host Configuration Protocol) is used to automate the process of configuring devices on IP networks, allowing them to use network services without requiring manual configuration.
Incorrect! Try again.
6. What is the primary function of the DNS (Domain Name System) server role?
set up IP addressing service roles
Easy
A.To store and manage user login information
B.To translate human-readable domain names (e.g., www.example.com) to IP addresses
C.To host and serve web pages
D.To automatically distribute IP addresses to network devices
Correct Answer: To translate human-readable domain names (e.g., www.example.com) to IP addresses
Explanation:
DNS acts like the phonebook of the internet, resolving domain names into the numerical IP addresses necessary for locating and identifying computer services and devices.
Incorrect! Try again.
7. What is the most important reason for regularly applying updates (patches) to a server?
update the server
Easy
A.To change the background wallpaper
B.To free up disk space
C.To get a new user interface
D.To fix security vulnerabilities
Correct Answer: To fix security vulnerabilities
Explanation:
While updates can also fix bugs and add features, their most critical function is to patch security holes that could be exploited by malicious actors.
Incorrect! Try again.
8. What is a 'hotfix' or 'patch'?
update the server
Easy
A.A complete reinstallation of the operating system
B.A hardware upgrade for the server
C.A small piece of software designed to fix a specific problem or bug
D.A new license key for an application
Correct Answer: A small piece of software designed to fix a specific problem or bug
Explanation:
A patch is a software update specifically created to correct issues, improve performance, or, most commonly, fix a security vulnerability in an existing program or operating system.
Incorrect! Try again.
9. Which protocol provides a secure, encrypted command-line interface for remotely managing a Linux server?
server administration access and control methods
Easy
A.SSH (Secure Shell)
B.HTTP (Hypertext Transfer Protocol)
C.FTP (File Transfer Protocol)
D.Telnet
Correct Answer: SSH (Secure Shell)
Explanation:
SSH is the standard for secure remote administration, providing an encrypted connection to prevent eavesdropping. Telnet is an older, insecure alternative.
Incorrect! Try again.
10. For managing a Windows Server with a full graphical interface, which remote access method is most commonly used?
server administration access and control methods
Easy
A.VNC (Virtual Network Computing)
B.RDP (Remote Desktop Protocol)
C.SSH (Secure Shell)
D.Telnet
Correct Answer: RDP (Remote Desktop Protocol)
Explanation:
RDP is a proprietary protocol from Microsoft that allows a user to connect to another computer over a network and interact with its graphical desktop.
Incorrect! Try again.
11. What is a Service Level Agreement (SLA)?
create service level agreements
Easy
A.A software installation guide
B.A document listing all the hardware components in a server
C.A contract defining the expected level of service, availability, and responsibilities
D.A daily checklist for server administrators
Correct Answer: A contract defining the expected level of service, availability, and responsibilities
Explanation:
An SLA is a formal agreement between a service provider and a client that specifies metrics like uptime (e.g., 99.9% availability), response times, and penalties for failure to meet the targets.
Incorrect! Try again.
12. Which of the following is a primary indicator of a server's processing load?
monitor server performance
Easy
A.Available Disk Space
B.Network Speed
C.Server Uptime
D.CPU Utilization
Correct Answer: CPU Utilization
Explanation:
CPU Utilization measures how busy the central processing unit is. Consistently high utilization can indicate that the server is overworked and may need an upgrade or load balancing.
Incorrect! Try again.
13. If applications on a server are slow and the hard disk activity light is constantly flashing, which resource is most likely the bottleneck?
monitor server performance
Easy
A.Network Card
B.Power Supply
C.RAM (Memory)
D.CPU
Correct Answer: RAM (Memory)
Explanation:
When a server runs out of physical RAM, it starts using the much slower hard disk as 'virtual memory' (a process called swapping or paging), causing high disk activity and slowing down the system.
Incorrect! Try again.
14. What is the main goal of server capacity planning?
perform capacity planning
Easy
A.To choose a new color for the server rack
B.To forecast future resource needs and ensure the server can handle future workloads
C.To minimize the physical size of the server
D.To delete old files to create more space
Correct Answer: To forecast future resource needs and ensure the server can handle future workloads
Explanation:
Capacity planning involves analyzing current usage trends to predict future requirements for CPU, memory, storage, and network bandwidth to avoid performance issues.
Incorrect! Try again.
15. Which of these is considered a primary storage device in a modern server?
deploy primary storage devices
Easy
A.Tape Drive
B.Solid-State Drive (SSD)
C.DVD-ROM Drive
D.External USB Drive
Correct Answer: Solid-State Drive (SSD)
Explanation:
Primary storage is where the operating system, applications, and frequently accessed data reside. SSDs and HDDs are the most common forms of primary storage in servers.
Incorrect! Try again.
16. What is a key difference between NAS (Network Attached Storage) and a standard file server?
storage technologies
Easy
A.NAS can only be used by one person at a time
B.NAS is much slower than a file server
C.NAS cannot be accessed over a network
D.NAS is a specialized device optimized specifically for file storage and serving
Correct Answer: NAS is a specialized device optimized specifically for file storage and serving
Explanation:
While both serve files, a NAS is a purpose-built appliance with a streamlined operating system designed solely for providing file-level storage services on a network.
Incorrect! Try again.
17. A SAN (Storage Area Network) provides servers with what type of access to storage?
storage technologies
Easy
A.Read-only access
B.Block-level access
C.File-level access
D.Web-based access
Correct Answer: Block-level access
Explanation:
A SAN presents storage to servers as if it were locally attached disks (raw blocks). This is different from NAS, which presents storage as network file shares.
Incorrect! Try again.
18. What does the acronym RAID stand for?
configure RAID
Easy
A.Remote Access and Integrated Drive
B.Real-time Array of Integrated Drives
C.Rapid Access to Internal Data
D.Redundant Array of Independent Disks
Correct Answer: Redundant Array of Independent Disks
Explanation:
RAID is a data storage virtualization technology that combines multiple physical disk drives into one or more logical units for the purposes of data redundancy, performance improvement, or both.
Incorrect! Try again.
19. Which RAID level provides data protection by writing the exact same data to two separate drives (mirroring)?
configure RAID
Easy
A.RAID 1
B.JBOD
C.RAID 0
D.RAID 5
Correct Answer: RAID 1
Explanation:
RAID 1, or mirroring, creates an exact duplicate of data on a second drive. If one drive fails, the system can continue operating using the mirrored copy, ensuring data redundancy.
Incorrect! Try again.
20. What is the primary benefit of RAID 0 (striping)?
configure RAID
Easy
A.Increased performance
B.Automatic backups
C.Lower cost
D.Data redundancy
Correct Answer: Increased performance
Explanation:
RAID 0 splits data across multiple disks, allowing for faster read and write operations because multiple drives work in parallel. However, it offers no fault tolerance; if any disk fails, all data is lost.
Incorrect! Try again.
21. A newly deployed server in a branch office needs to synchronize its system clock with the company's primary domain controller, which acts as the authoritative time source. Which of the following commands is the most direct way to configure the server to use a specific NTP source?
configure local server properties
Medium
A.Set-TimeZone
B.w32tm /config /manualpeerlist:"<NTP server>" /syncfromflags:manual /update
C.Rename-Computer
D.New-NetIPAddress
Correct Answer: w32tm /config /manualpeerlist:"<NTP server>" /syncfromflags:manual /update
Explanation:
The w32tm command is the primary tool for configuring the Windows Time service. This specific command sets the manual peer list to the specified NTP server and updates the configuration, forcing the server to synchronize with that source. The other options configure the time zone, computer name, and IP address, which are different local properties.
Incorrect! Try again.
22. A company wants to host several internal websites (e.g., intranet.corp.local, hr.corp.local) on a single Windows Server using one IP address. Which feature within the Web Server (IIS) role must be configured to differentiate traffic and direct it to the correct website?
configure server roles
Medium
A.Host Header Bindings
B.IP Address and Domain Restrictions
C.SSL Bindings
D.Application Pools
Correct Answer: Host Header Bindings
Explanation:
Host Header Bindings (or Host Names) allow a single web server with one IP address and port (like port 80) to serve multiple websites. IIS inspects the HTTP host header in the client's request to determine which specific site to serve the content from.
Incorrect! Try again.
23. A network administrator is configuring a DHCP scope for a new VLAN, 192.168.50.0/24. The network contains several devices with static IP addresses, including printers and switches, in the range 192.168.50.1 to 192.168.50.20. What DHCP feature should be configured to prevent the DHCP server from assigning these addresses to other clients?
set up IP addressing service roles
Medium
A.DHCP Reservations
B.Lease Duration
C.Scope Exclusion Range
D.DHCP Filters
Correct Answer: Scope Exclusion Range
Explanation:
An Exclusion Range is the correct feature to prevent the DHCP server from handing out a specific, contiguous block of IP addresses from within a defined scope. This is used specifically for addresses that are already assigned statically. Reservations are for assigning a specific IP to a specific MAC address.
Incorrect! Try again.
24. An organization wants to implement a patch management strategy that allows them to first test updates on a set of non-production servers before deploying them to critical production systems. Which of the following solutions best supports this staged rollout requirement?
update the server
Medium
A.Manually downloading every update from the Microsoft Catalog and installing via a script.
B.Deploying Windows Server Update Services (WSUS) and creating separate computer groups for testing and production.
C.Configuring each server's Windows Update to "Download but let me choose to install".
D.Using Group Policy to set the "Active Hours" on all production servers.
Correct Answer: Deploying Windows Server Update Services (WSUS) and creating separate computer groups for testing and production.
Explanation:
WSUS is designed for centralized update management. By creating different computer groups (e.g., 'Test Servers', 'Production Servers'), an administrator can approve updates for the test group first. After verifying stability, the same updates can then be approved for the production group, fulfilling the requirement for a staged and controlled rollout.
Incorrect! Try again.
25. An administrator needs to manage a fleet of Linux servers and Windows Servers (with PowerShell Core installed) from a single Linux-based management workstation. Which remote administration protocol is standard, secure, and ideal for this cross-platform command-line management scenario?
server administration access and control methods
Medium
A.Remote Desktop Protocol (RDP)
B.Secure Shell (SSH)
C.Telnet
D.Virtual Network Computing (VNC)
Correct Answer: Secure Shell (SSH)
Explanation:
SSH is the industry standard for secure, encrypted command-line access. It is native to Linux and fully supported on modern Windows Servers, making it the perfect choice for managing heterogeneous environments from a single workstation. RDP and VNC are GUI-based, and Telnet is insecure as it transmits data in plain text.
Incorrect! Try again.
26. An SLA for a critical web server guarantees 99.9% uptime per month. Approximately how much total downtime is permissible within a 30-day month without violating the SLA?
create service level agreements
Medium
A.~7.2 hours
B.~4 minutes
C.~43 minutes
D.~1.5 hours
Correct Answer: ~43 minutes
Explanation:
To calculate the permissible downtime, first find the total minutes in a 30-day month: 30 × 24 × 60 = 43,200 minutes. The allowed downtime is 0.1% of that total (100% − 99.9%). Therefore, the total permissible downtime is 43,200 × 0.001 = 43.2 minutes, or approximately 43 minutes.
Incorrect! Try again.
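The downtime arithmetic behind this answer can be checked with a few lines of Python (a minimal sketch using only the figures from the question):

```python
# Permissible downtime under a 99.9% uptime SLA over a 30-day month.
total_minutes = 30 * 24 * 60          # 43,200 minutes in the month
allowed_fraction = 1 - 0.999          # 0.1% of the time may be down
allowed_downtime = total_minutes * allowed_fraction
print(round(allowed_downtime, 1))     # about 43 minutes
```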
27. A system administrator is investigating slow query responses on a database server. The \Processor\% Processor Time counter is consistently below 30%, but the \PhysicalDisk\Avg. Disk Queue Length counter is consistently above 3 for the data drive. What is the most likely performance bottleneck?
monitor server performance
Medium
A.Network Bandwidth
B.Storage I/O Subsystem
C.CPU Contention
D.Insufficient RAM
Correct Answer: Storage I/O Subsystem
Explanation:
A high Avg. Disk Queue Length (generally, a sustained value over 2 per spindle is a concern) indicates that I/O requests are waiting to be processed by the disk subsystem. Since CPU utilization is low, the processor is not the bottleneck. The disk cannot keep up with the demands from the application, pointing directly to a storage I/O bottleneck.
Incorrect! Try again.
28. An administrator is conducting capacity planning for a file server. After establishing a performance baseline, they find that storage usage grows at a predictable rate of 250 GB per month. The server currently has 2 TB of free space. Approximately how many months until the server's storage is full if no action is taken?
perform capacity planning
Medium
A.6 months
B.4 months
C.10 months
D.8 months
Correct Answer: 8 months
Explanation:
First, convert the free space to a consistent unit: 2 TB is equal to 2048 GB. To find the number of months until the storage is full, divide the total free space by the monthly growth rate: 2048 GB ÷ 250 GB/month ≈ 8.2 months. The closest answer is 8 months.
Incorrect! Try again.
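This growth projection is a one-line division; sketched in Python with the question's figures:

```python
# Months until a volume with 2 TB free fills at 250 GB/month growth.
free_space_gb = 2 * 1024            # 2 TB expressed as 2048 GB
growth_gb_per_month = 250
months_until_full = free_space_gb / growth_gb_per_month
print(round(months_until_full, 2))  # roughly 8 months
```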
29. A company needs to implement a storage solution for a high-performance database cluster. A key requirement is that the storage must be presented to the servers as block-level devices and be accessible by multiple servers simultaneously over a dedicated, high-speed network. Which storage architecture is the best fit?
deploy primary storage devices
Medium
A.Storage Area Network (SAN)
B.Network Attached Storage (NAS)
C.Direct Attached Storage (DAS)
D.Cloud Object Storage
Correct Answer: Storage Area Network (SAN)
Explanation:
A SAN is designed specifically for this use case. It provides block-level storage access over a dedicated network (like Fibre Channel or iSCSI), which is ideal for clustered applications like databases that require high-speed, shared access to raw storage volumes. NAS provides file-level access, and DAS is not shared.
Incorrect! Try again.
30. A company is setting up a new SAN and wants to leverage its existing 10GbE Ethernet network infrastructure to minimize costs, instead of investing in specialized switches and HBAs. Which block storage protocol should they implement?
storage technologies
Medium
A.Internet SCSI (iSCSI)
B.Serial Attached SCSI (SAS)
C.Fibre Channel (FC)
D.InfiniBand
Correct Answer: Internet SCSI (iSCSI)
Explanation:
iSCSI is a storage networking protocol that works on top of TCP/IP. Its primary advantage is the ability to run on standard Ethernet hardware (switches, NICs, and cabling), making it a cost-effective alternative to Fibre Channel, which requires its own specialized and more expensive infrastructure.
Incorrect! Try again.
31. An administrator is configuring a new server with six identical 2 TB drives. The primary requirement is to provide protection against up to two simultaneous drive failures while maximizing capacity. Which RAID level should be chosen?
configure RAID
Medium
A.RAID 5
B.RAID 6
C.RAID 10
D.RAID 1
Correct Answer: RAID 6
Explanation:
RAID 6 meets the requirement perfectly. It uses double parity (or an equivalent mechanism) to provide fault tolerance for up to two simultaneous drive failures. RAID 5 can only handle a single drive failure. RAID 10 (with 6 disks) could only survive two failures if they occurred in different mirrored pairs. RAID 1 would not maximize capacity.
Incorrect! Try again.
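The capacity trade-off in this explanation can be made concrete with the standard usable-capacity formulas (a sketch, not exam content):

```python
# Usable capacity with six 2 TB drives under the two dual-failure candidates.
disks, size_tb = 6, 2
raid6_usable = (disks - 2) * size_tb    # double parity: survives ANY two failures
raid10_usable = (disks // 2) * size_tb  # mirrors: two failures fatal if in one pair
print(raid6_usable, raid10_usable)      # RAID 6 keeps 2 TB more usable space
```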
32. A client device holding a DHCP lease for an IP address successfully renews its lease at the T1 timer. What percentage of the original lease duration has passed at this point?
set up IP addressing service roles
Medium
A.25%
B.75%
C.87.5%
D.50%
Correct Answer: 50%
Explanation:
The DHCP renewal process begins at the T1 timer, which by default is set to 50% of the lease duration. At this point, the client attempts to contact the DHCP server that issued the lease to renew it. If that fails, it will try again at the T2 timer (87.5% of the lease duration).
Incorrect! Try again.
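The T1/T2 timing can be illustrated with an example lease (the 8-hour lease length here is an assumption for illustration, not from the question):

```python
# Default DHCP renewal timers as fractions of the lease duration.
lease_hours = 8                       # assumed example lease
t1_renewal = lease_hours * 0.5        # 50%: client unicasts the issuing server
t2_rebinding = lease_hours * 0.875    # 87.5%: client broadcasts to any server
print(t1_renewal, t2_rebinding)
```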
33. A junior administrator needs to perform a limited set of tasks on a Windows Server, such as restarting a specific service and checking disk space, but must not be granted full administrative rights. What is the most secure method to delegate this limited access using modern Windows Server features?
server administration access and control methods
Medium
A.Giving the user the credentials for the local Administrator account and trusting them.
B.Adding the user to the local Remote Desktop Users group.
C.Adding the user to the Server Operators built-in group.
D.Configuring PowerShell Just Enough Administration (JEA) with a role-specific endpoint.
Correct Answer: Configuring PowerShell Just Enough Administration (JEA) with a role-specific endpoint.
Explanation:
JEA is a security technology designed for the principle of least privilege. It allows you to create constrained, role-based administration endpoints where non-administrators can run specific commands, scripts, and executables as a temporary, privileged virtual account, without giving them broad permissions.
Incorrect! Try again.
34. An SLA for a database service defines two key metrics: a Mean Time Between Failures (MTBF) of 198 hours and a Mean Time To Repair (MTTR) of 2 hours. Using these metrics, what is the calculated availability of the service?
create service level agreements
Medium
A.98.0%
B.99.5%
C.99.9%
D.99.0%
Correct Answer: 99.0%
Explanation:
Availability is calculated using the formula: Availability = MTBF / (MTBF + MTTR). Plugging in the values: 198 / (198 + 2) = 198 / 200 = 0.99. This corresponds to 99.0% availability.
Incorrect! Try again.
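The availability formula from this explanation, evaluated with the question's values:

```python
# Availability = MTBF / (MTBF + MTTR), using the metrics from the question.
mtbf_hours, mttr_hours = 198, 2
availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"{availability:.1%}")  # 99.0%
```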
35. A RAID 5 array is built using five 4 TB disks. What is the total usable storage capacity of this array?
configure RAID
Medium
A.12 TB
B.20 TB
C.8 TB
D.16 TB
Correct Answer: 16 TB
Explanation:
In a RAID 5 configuration, the capacity of one of the disks is used for parity information, providing single-disk fault tolerance. The formula for usable capacity is (N-1) * S, where N is the number of disks and S is the size of the smallest disk. In this case, it is (5 - 1) × 4 TB = 16 TB.
Incorrect! Try again.
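The (N-1) * S formula from this explanation as a quick check:

```python
# RAID 5 usable capacity: one disk's worth of space goes to parity.
disks, size_tb = 5, 4
usable_tb = (disks - 1) * size_tb
print(usable_tb)  # 16
```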
36. When conducting capacity planning for a virtual machine host, what is the primary purpose of establishing a performance baseline?
perform capacity planning
Medium
A.To document the server's initial configuration for disaster recovery.
B.To determine the typical resource utilization patterns, which are essential for forecasting future needs and identifying anomalies.
C.To satisfy the requirements for a security audit.
D.To immediately justify the purchase of more hardware.
Correct Answer: To determine the typical resource utilization patterns, which are essential for forecasting future needs and identifying anomalies.
Explanation:
A performance baseline captures the metrics of a system during its normal operational state. This baseline is crucial for capacity planning because it provides the foundation for trend analysis (forecasting when resources will be exhausted) and anomaly detection (identifying when current performance deviates from the norm).
Incorrect! Try again.
37. To proactively manage server health, an administrator wants to receive a notification before a problem causes an outage. Which of the following alert configurations is the best example of proactive monitoring?
monitor server performance
Medium
A.An alert that triggers when a user reports that an application is unavailable.
B.An alert that triggers when a server is no longer responding to ping requests.
C.An alert that triggers when the free space on the OS drive falls below 15%.
D.An alert that triggers when a critical service stops running.
Correct Answer: An alert that triggers when the free space on the OS drive falls below 15%.
Explanation:
This is a proactive alert because it warns the administrator of a potential issue (running out of disk space) well before it causes a system failure or service disruption. This allows them time to intervene and correct the problem. The other options are reactive, as they trigger after a failure has already occurred.
Incorrect! Try again.
38. An administrator needs to apply a critical security patch to a production database server within a very brief overnight maintenance window. What is the most crucial step to perform to mitigate risk?
update the server
Medium
A.Ensure a verified backup or a system snapshot has been taken immediately before patching and a rollback plan is in place.
B.Apply the patch as quickly as possible to stay within the window.
C.Read online forums to see if other users have had issues with the patch.
D.Reboot the server multiple times after the patch is applied to ensure it is stable.
Correct Answer: Ensure a verified backup or a system snapshot has been taken immediately before patching and a rollback plan is in place.
Explanation:
For any change to a critical production system, having a reliable and tested rollback plan is paramount. A pre-patch backup or snapshot allows the administrator to quickly revert the system to its previous state if the patch causes unexpected issues, thereby minimizing downtime and risk.
Incorrect! Try again.
39. A server is being configured to handle a high-transactional online database (OLTP) workload, which involves numerous small, random read and write operations. Which storage technology would provide the best performance for this specific use case?
storage technologies
Medium
A.Network Attached Storage (NAS)
B.10K RPM Hard Disk Drive (HDD)
C.Solid-State Drive (SSD)
D.Tape Backup Drive
Correct Answer: Solid-State Drive (SSD)
Explanation:
SSDs excel at random I/O operations because they have no moving parts and offer extremely low latency for accessing data, regardless of its physical location on the drive. This makes them ideal for OLTP database workloads, which are characterized by high rates of random reads and writes. HDDs suffer from seek time latency, which significantly slows down random I/O.
Incorrect! Try again.
40. A company wants to simplify the deployment of new client operating systems on their network. They need a service that allows client computers to boot from the network and receive an OS installation image. Which Windows Server role should be installed and configured to provide this functionality?
configure server roles
Medium
A.Active Directory Domain Services (AD DS)
B.Windows Server Update Services (WSUS)
C.Hyper-V
D.Windows Deployment Services (WDS)
Correct Answer: Windows Deployment Services (WDS)
Explanation:
Windows Deployment Services (WDS) is the server role specifically designed for network-based deployment of Windows operating systems. It works with Pre-boot Execution Environment (PXE) to allow client machines to boot from a network adapter and then install an OS image stored on the WDS server.
Incorrect! Try again.
41. A database server is configured with eight 1 TB SAS drives. The primary requirement is the absolute lowest write latency for frequent, small, random I/O operations, with a secondary requirement for redundancy. The budget does not allow for an all-flash array. Which of the following RAID configurations provides the best performance profile for this specific workload, and why?
configure RAID
Hard
A.RAID 10 (4 pairs mirrored, then striped): It has no parity calculations, resulting in a significantly lower write penalty (2) compared to RAID 5 (4) or RAID 6 (6), making it ideal for write-intensive databases.
B.RAID 5 (7+1): It offers a good balance of capacity and performance with a single parity block, making it faster than RAID 6.
C.Two separate RAID 1 (mirrored) arrays of 4 drives each: This configuration isolates I/O but doesn't aggregate performance across all eight spindles for a single database volume.
D.RAID 6 (6+2): It provides dual-parity protection, which is essential for an 8-drive array, and the write penalty is manageable.
Correct Answer: RAID 10 (4 pairs mirrored, then striped): It has no parity calculations, resulting in a significantly lower write penalty (2) compared to RAID 5 (4) or RAID 6 (6), making it ideal for write-intensive databases.
Explanation:
RAID 10 (or 1+0) is the optimal choice for write-intensive database applications. For every write operation, the data is written twice (once to each disk in a mirrored pair), resulting in a write penalty of 2. In contrast, RAID 5 requires a read-modify-write sequence for each write (read data, read parity, write new data, write new parity), resulting in a write penalty of 4. RAID 6 is even worse with a write penalty of 6. Given the requirement for the lowest write latency, RAID 10's lack of parity calculation overhead makes it superior to both RAID 5 and RAID 6, despite having lower usable capacity (4TB vs. 7TB for RAID 5 or 6TB for RAID 6).
Incorrect! Try again.
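The write-penalty and capacity comparison in this explanation can be tabulated (a sketch using the conventional write-penalty values the explanation cites):

```python
# Write penalty = physical I/Os generated per logical random write.
disks, size_tb = 8, 1
layouts = {
    # name: (write penalty, usable TB)
    "RAID 10": (2, (disks // 2) * size_tb),  # mirror then stripe: 4 TB
    "RAID 5":  (4, (disks - 1) * size_tb),   # read-modify-write parity: 7 TB
    "RAID 6":  (6, (disks - 2) * size_tb),   # double parity: 6 TB
}
best = min(layouts, key=lambda name: layouts[name][0])
print(best, layouts[best])  # lowest write penalty wins for this workload
```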
42. A system administrator is analyzing a virtualized web server's performance. They observe the following sustained counter values: \Processor\% Processor Time is consistently low (~15%), \Memory\Available MBytes is high, and \PhysicalDisk\Avg. Disk Queue Length is also low (< 1). However, the \System\Processor Queue Length is consistently greater than 3 per core, and users are reporting slow page load times. What is the most likely bottleneck?
monitor server performance
Hard
A.There is a network bandwidth limitation between the web server and the database server.
B.The physical host CPU is overloaded, and the guest VM is experiencing high CPU Ready time.
C.The web application is memory-starved, leading to excessive paging.
D.The storage subsystem is too slow, causing I/O waits.
Correct Answer: The physical host CPU is overloaded, and the guest VM is experiencing high CPU Ready time.
Explanation:
This is a classic virtualization performance problem. The guest OS reports low CPU usage (% Processor Time) because it is not actually getting scheduled onto a physical CPU core. The high Processor Queue Length inside the VM indicates that threads are ready to run but cannot. This discrepancy occurs when the hypervisor host is CPU-constrained and cannot service the VM's requests for CPU time in a timely manner. This wait state is often measured by a hypervisor-level metric called 'CPU Ready' or 'CPU Steal'. The other options are contradicted by the provided counters: low disk queue negates a storage bottleneck, high available memory negates a memory issue, and while a network issue is possible, the direct evidence points strongly to CPU scheduling contention on the host.
Incorrect! Try again.
43. You have configured a DHCP failover relationship in hot standby mode between a primary server (DHCP-A) and a secondary server (DHCP-B). The primary server, DHCP-A, experiences a catastrophic hardware failure. DHCP-B takes over the scope as expected. Later, DHCP-A is restored from a backup that was taken before the failover relationship was created. What is the most likely outcome when DHCP-A is brought back online?
set up IP addressing service roles
Hard
A.A manual reconciliation of scopes is required, but no immediate IP conflicts will occur due to DHCPDECLINE messages from clients.
B.DHCP-A will automatically re-establish the hot standby relationship with DHCP-B and sync its database.
C.Both servers will begin issuing conflicting IP addresses, as DHCP-A's database is outdated and unaware of the failover state or leases issued by DHCP-B.
D.DHCP-B will automatically place DHCP-A into a 'Partner Down' state and continue to service all clients.
Correct Answer: Both servers will begin issuing conflicting IP addresses, as DHCP-A's database is outdated and unaware of the failover state or leases issued by DHCP-B.
Explanation:
This scenario highlights a critical disaster recovery mistake. Restoring DHCP-A from a backup taken before the failover configuration means it has no knowledge of the partnership with DHCP-B. When it comes online, it will believe it is the sole authoritative DHCP server for the scope and will begin issuing leases from its old, outdated database. Since DHCP-B is also actively managing the scope, this will inevitably lead to IP address conflicts on the network. The correct recovery procedure involves rebuilding the server and re-establishing the failover partnership from the active server (DHCP-B), not restoring an old backup.
Incorrect! Try again.
44. A company wants to consolidate its storage onto a SAN but wishes to leverage its existing 10GbE network infrastructure, including switches and NICs, to minimize costs. They require block-level storage with performance comparable to native Fibre Channel. However, their network switches do not support Data Center Bridging (DCB). Which storage protocol is the most suitable choice under these specific constraints?
storage technologies
Hard
A.FCoE (Fibre Channel over Ethernet): This is unsuitable because FCoE requires DCB (also known as CEE) on the network switches to provide a lossless fabric.
B.Fibre Channel (FC): This is not possible as it requires dedicated FC switches and HBAs.
C.NFS (Network File System): This provides file-level access, not the required block-level storage.
D.iSCSI (Internet Small Computer System Interface): This protocol encapsulates SCSI commands in TCP/IP packets, runs over standard Ethernet, and provides block-level access without requiring specialized hardware like DCB-enabled switches.
Correct Answer: iSCSI (Internet Small Computer System Interface): This protocol encapsulates SCSI commands in TCP/IP packets, runs over standard Ethernet, and provides block-level access without requiring specialized hardware like DCB-enabled switches.
Explanation:
The key constraints are leveraging existing 10GbE infrastructure and the lack of Data Center Bridging (DCB) support. Fibre Channel is eliminated as it needs a separate, dedicated network. FCoE is a strong contender for running block storage over Ethernet, but its requirement for a lossless network, typically provided by DCB, makes it unsuitable for the existing infrastructure. NFS is file-level, not block-level. iSCSI is the perfect fit because it is designed to run over standard TCP/IP Ethernet networks, provides the required block-level access, and does not depend on DCB, making it the most cost-effective and technically compatible solution.
Incorrect! Try again.
45An SLA for a critical application defines an SLO of 99.9% uptime and an MTTR (Mean Time to Recovery) of 15 minutes. During a 30-day month, the system experienced two outages. Outage 1 lasted 12 minutes. Outage 2 lasted 20 minutes. Which statement accurately reflects the SLA compliance for the month?
create service level agreements
Hard
A.Both the uptime SLO and the MTTR were breached.
B.The uptime SLO was breached, but the MTTR was met.
C.The uptime SLO was met, but the MTTR was breached.
D.Both the uptime SLO and the MTTR were met.
Correct Answer: The uptime SLO was met, but the MTTR was breached.
Explanation:
This question requires calculating two separate metrics. First, calculate the total downtime against the SLO. A 30-day month has 30 x 24 x 60 = 43,200 minutes. The allowed downtime for 99.9% uptime is 0.1% of 43,200, or 43.2 minutes. The total actual downtime was 12 + 20 = 32 minutes. Since 32 < 43.2, the uptime SLO was met. Second, evaluate the MTTR. MTTR is the average time to recover from failures. The average recovery time for the two outages is (12 + 20) / 2 = 16 minutes. Since 16 minutes is greater than the 15-minute MTTR defined in the SLA, the MTTR was breached. Therefore, the uptime SLO was met, but the MTTR was breached.
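The arithmetic can be checked with a few lines of Python (a minimal sketch using only the figures given in the question):

```python
# Sanity check of the SLO/MTTR math: 30-day month, two outages of 12 and 20 minutes.

minutes_in_month = 30 * 24 * 60              # 43,200 minutes
allowed_downtime = minutes_in_month * 0.001  # 0.1% of the month = 43.2 minutes

outages = [12, 20]                           # outage durations in minutes
total_downtime = sum(outages)                # 32 minutes
mttr = total_downtime / len(outages)         # 16 minutes

uptime_slo_met = total_downtime <= allowed_downtime  # True: 32 <= 43.2
mttr_met = mttr <= 15                                # False: 16 > 15

print(uptime_slo_met, mttr_met)  # True False
```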
Incorrect! Try again.
46A company uses a hierarchical WSUS setup with one upstream server and multiple downstream replica servers in branch offices. You notice that client computers in one specific branch office are failing to install a newly approved critical security update, reporting error 0x80244019. Other branches are updating correctly. The affected clients can successfully contact their local downstream WSUS replica. What is the most probable cause of this issue?
update the server
Hard
A.The BITS service on the clients is stopped or misconfigured.
B.A firewall is blocking communication between the clients and the downstream WSUS server on port 8530.
C.The client computers have a corrupted Windows Update agent.
D.The downstream WSUS replica server has not finished downloading the update content from the upstream server, even though the update metadata (approval) has been replicated.
Correct Answer: The downstream WSUS replica server has not finished downloading the update content from the upstream server, even though the update metadata (approval) has been replicated.
Explanation:
The error code 0x80244019 corresponds to an HTTP 404 'Not Found' error. In a WSUS context, this means the client successfully contacted the WSUS server and received the list of approved updates (metadata), but when it tried to download the actual update files from the content directory, the files were not present. Since other branches are working, this points to an issue with the specific downstream replica server. The most common cause for this is that the metadata (the approval for the update) has replicated from the upstream server, but the much larger update binaries have not yet been downloaded and stored locally on the downstream server's content store. The other options are less likely: a corrupted agent or BITS issue would likely affect all updates, not just a new one, and a firewall issue would prevent contact entirely, not cause a 404 error.
Incorrect! Try again.
47You are creating a PowerShell Just Enough Administration (JEA) role capability file for junior database administrators. You want to allow them to restart a specific service, SQLSERVERAGENT, but not any other service. You also want to allow them to view the status of any service using Get-Service. Which of the following configurations in the .psrc file correctly and most securely implements this requirement?
server administration access and control methods
Hard
A.{ VisibleCmdlets = 'Get-Service', 'Restart-Service' }
B.{ VisibleCmdlets = @{ Name = 'Get-Service' }, @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'SQLSERVERAGENT' } } }
C.{ VisibleCmdlets = 'Get-Service'; VisibleFunctions = 'Restart-SQLService' } # Assuming Restart-SQLService is a custom function
D.{ VisibleCmdlets = 'Get-Service'; AliasDefinitions = @{ Name = 'Restart-SQL'; Value = 'Restart-Service -Name SQLSERVERAGENT'} }
Correct Answer: { VisibleCmdlets = @{ Name = 'Get-Service' }, @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'SQLSERVERAGENT' } } }
Explanation:
This question tests deep knowledge of JEA configuration. The goal is to constrain a standard cmdlet (Restart-Service) to a specific parameter value. Option A is too permissive; it allows restarting any service. Option C relies on a custom function which is a valid approach, but not the most direct way to constrain a built-in cmdlet. Option D uses an alias, but aliases in JEA do not provide a security boundary; the user could still call Restart-Service directly with other parameters. Option B is the correct and most secure method. It makes the Restart-Service cmdlet visible but constrains its -Name parameter to a ValidateSet containing only 'SQLSERVERAGENT'. This prevents the user from restarting any other service while still using the standard, familiar cmdlet.
Incorrect! Try again.
48You are performing capacity planning for a file server. You have collected LogicalDisk\% Free Space data for the primary data volume over the past 12 months. The data shows an average decline of 2% per month. The current free space is 30%. The company has a policy to upgrade storage when free space drops below 15%. However, you also observe that for the last 3 months, the decline has accelerated to 4% per month due to a new project. Based on this trend analysis, when should you schedule the storage upgrade?
perform capacity planning
Hard
A.In approximately 4 months, by extrapolating the recent, accelerated decline of 4%.
B.Immediately, as the trend is accelerating and unpredictable.
C.In approximately 7-8 months, based on the long-term average decline of 2%.
D.In 15 months, as you only need to address the 15% drop from 30% to the 15% threshold.
Correct Answer: In approximately 4 months, by extrapolating the recent, accelerated decline of 4%.
Explanation:
Effective capacity planning requires not just looking at long-term averages but also identifying and weighting recent trends. The long-term average of 2% per month would suggest the 15% buffer (from 30% down to 15%) would be consumed in 15 / 2 = 7.5 months. However, the recent, more relevant data shows a new usage pattern of 4% per month. This is a more accurate predictor of future behavior. Using this rate, the 15% buffer will be consumed in 15 / 4 = 3.75 months. Therefore, scheduling the upgrade in approximately 4 months is the most prudent action based on a proper analysis of the changing trend.
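A quick sketch of the extrapolation, assuming linear consumption at each observed rate:

```python
# Capacity-planning extrapolation: months until free space hits the upgrade threshold.

current_free = 30.0   # % free space today
threshold = 15.0      # % free space that triggers an upgrade
buffer = current_free - threshold   # 15 percentage points of headroom

long_term_rate = 2.0  # % consumed per month (12-month average)
recent_rate = 4.0     # % consumed per month (last 3 months)

months_long_term = buffer / long_term_rate  # 7.5 months
months_recent = buffer / recent_rate        # 3.75 months, i.e. roughly 4

print(months_long_term, months_recent)
```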
Incorrect! Try again.
49A server with two 10GbE NICs is configured with a NIC Team in Switch Independent mode with Dynamic load balancing. The server is connected to two different, unmanaged switches for redundancy. It hosts a single, large file transfer application that communicates with multiple clients simultaneously. Administrators notice that while the aggregate network throughput from the server often reaches 12-14 Gbps, no single client can download from the server at a rate faster than ~9.5 Gbps. What is the correct explanation for this behavior?
configure local server properties
Hard
A.This is the expected behavior of Switch Independent teaming; outbound traffic can be balanced across NICs, but a single TCP conversation is always bound to a single NIC.
B.One of the physical NICs or switches is malfunctioning and operating at a lower speed.
C.The Dynamic load balancing algorithm is faulty and should be changed to Hyper-V Port.
D.The server's PCI Express bus is saturated and cannot handle the full 20 Gbps from both NICs.
Correct Answer: This is the expected behavior of Switch Independent teaming; outbound traffic can be balanced across NICs, but a single TCP conversation is always bound to a single NIC.
Explanation:
This question probes the specific behavior of NIC Teaming modes. In Switch Independent mode, the server's teaming software makes load balancing decisions without coordination from the switch. The Dynamic algorithm (or Address Hash in older versions) balances outbound traffic by distributing different TCP streams (conversations) across the available NICs. However, to prevent out-of-order packet delivery which would cripple TCP performance, it ensures that all packets for a single conversation (defined by source/destination IP/port) always exit through the same physical NIC. Therefore, the aggregate throughput can exceed one NIC's capacity by serving multiple clients, but any single client's stream is limited to the bandwidth of the one NIC it's been assigned to (~10 Gbps).
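As an illustrative sketch (not Windows' actual teaming algorithm), flow-based distribution works by hashing a conversation's 4-tuple so that every packet of one flow lands on the same team member, preserving in-order delivery:

```python
# Hypothetical flow-to-NIC assignment: a deterministic hash of the 4-tuple
# pins each TCP conversation to one NIC, so a single flow never exceeds
# one NIC's bandwidth even though different flows spread across the team.

def pick_nic(src_ip: str, src_port: int, dst_ip: str, dst_port: int, nic_count: int = 2) -> int:
    return hash((src_ip, src_port, dst_ip, dst_port)) % nic_count

# All packets of one client's flow map to the same NIC within a run.
flow = ("10.0.0.5", 50123, "10.0.0.1", 445)
nic = pick_nic(*flow)
assert all(pick_nic(*flow) == nic for _ in range(100))
print("flow pinned to NIC", nic)
```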
Incorrect! Try again.
50A system administrator attempts to install the Hyper-V role on a physical Windows Server that is already configured as a primary domain controller and is also running a specialized, third-party hardware monitoring service that requires direct access to CPU performance counters via low-level MSRs (Model-Specific Registers). After the Hyper-V role is installed and the server reboots, the monitoring service begins to fail with access violation errors. Why is this occurring?
configure server roles
Hard
A.Hyper-V and Active Directory Domain Services are mutually exclusive roles and cannot be installed on the same server.
B.The installation of Hyper-V converted the host OS into a privileged parent partition running on top of the hypervisor, which now controls and virtualizes hardware access, blocking the service's direct MSR access.
C.The server does not have enough RAM to run both the hypervisor and the domain controller simultaneously.
D.The network drivers for the monitoring service are incompatible with the Hyper-V Virtual Switch.
Correct Answer: The installation of Hyper-V converted the host OS into a privileged parent partition running on top of the hypervisor, which now controls and virtualizes hardware access, blocking the service's direct MSR access.
Explanation:
This is a complex interaction between server roles. When the Hyper-V role is installed, the server's architecture fundamentally changes. The hypervisor (a Type-1, bare-metal hypervisor) loads first and takes direct control of the hardware. The original Windows Server OS is then loaded into a special virtual machine called the parent partition (or root partition). While the parent partition has privileged access, the hypervisor still abstracts and controls direct hardware access. A service that expects to read low-level CPU registers directly will fail because the hypervisor is now intercepting these calls. This is a key reason why it's a best practice not to install other roles, especially those with low-level hardware interaction, on a Hyper-V host. While running a DC on a Hyper-V host is also not recommended for other reasons, it is not technically impossible; the direct cause of the failure is the hardware abstraction layer introduced by the hypervisor.
Incorrect! Try again.
51In a VDI (Virtual Desktop Infrastructure) environment using non-persistent desktops, a large number of desktops are created and destroyed daily. The SAN administrator has used thin-provisioned LUNs to host the desktop images to save space. However, they notice that even after a peak usage period ends and most desktops are deleted, the SAN reports that the LUNs are still nearly full. What is the most likely reason for this, and what action is required?
deploy primary storage devices
Hard
A.The hypervisor's storage driver is caching the block information and has not released it to the SAN.
B.Thick provisioning should have been used, as thin provisioning is unsuitable for VDI workloads.
C.The SAN is malfunctioning and requires a firmware update to report space correctly.
D.Deleted blocks within the guest OS are not being communicated to the hypervisor and the SAN, requiring a manual space reclamation process like TRIM/UNMAP to be run.
Correct Answer: Deleted blocks within the guest OS are not being communicated to the hypervisor and the SAN, requiring a manual space reclamation process like TRIM/UNMAP to be run.
Explanation:
This scenario describes a common challenge with thin provisioning. When a file is deleted inside a guest OS, it simply marks the blocks as free in its own file system table; it does not typically inform the underlying storage that these blocks are no longer needed. From the SAN's perspective, the blocks are still allocated. To reclaim this space, a command must be issued that tells the storage array which blocks are free. This is achieved through the SCSI UNMAP command (or ATA TRIM command for SATA/NVMe). Most modern hypervisors and guest OSes support this, but it often needs to be enabled or run periodically as a scheduled task to 'punch zeros' or send UNMAP commands for the freed blocks, allowing the thin-provisioned LUN to shrink.
Incorrect! Try again.
52A server has a RAID 5 array consisting of 5 x 4TB drives. One drive fails and the array enters a degraded state. A replacement 4TB drive is inserted, and the rebuild process begins. During the rebuild, the RAID controller encounters an Unrecoverable Read Error (URE) on one of the remaining, non-failed drives while trying to read a block needed to reconstruct the data for the new drive. What is the most probable outcome?
configure RAID
Hard
A.The controller will automatically convert the array to a RAID 4 to isolate the faulty drive and complete the rebuild.
B.The rebuild process will pause, and the array will remain in a degraded state until the drive with the URE is also replaced.
C.The rebuild will fail, and the entire RAID 5 array will be lost, resulting in data loss.
D.The controller will successfully rebuild the array by using parity data from the other three healthy drives, flagging the single block as bad.
Correct Answer: The rebuild will fail, and the entire RAID 5 array will be lost, resulting in data loss.
Explanation:
This scenario illustrates the infamous 'RAID 5 write hole' or, more accurately, the risk of UREs during a rebuild. In a degraded RAID 5, the data on the failed drive is reconstructed by reading data from all other surviving drives and using the parity information. If a read error occurs on any of these surviving drives during the rebuild, the controller cannot reconstruct the data for that specific stripe. Since RAID 5 has only a single parity block, it cannot recover from a second failure event (the original failed drive + the URE on another drive). This leads to a catastrophic failure of the rebuild process and the loss of the entire array. This high-risk scenario is a primary reason why RAID 6 (with its dual parity) is recommended for large arrays of high-capacity drives.
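The rebuild risk can be estimated with a back-of-envelope calculation. Note the URE rate of 1 per 10^14 bits read is an assumed consumer-class spec, not a figure from the question; enterprise drives are often rated at 1 per 10^15:

```python
import math

# Probability of hitting at least one URE while rebuilding the degraded
# 5 x 4TB RAID 5: all four surviving drives must be read end to end.

drive_bytes = 4e12        # 4 TB per drive
surviving_drives = 4
bits_read = drive_bytes * surviving_drives * 8   # 1.28e14 bits total

ure_rate = 1e-14          # assumed: 1 error per 1e14 bits read
expected_ures = bits_read * ure_rate             # ~1.28 expected errors

# Poisson approximation for P(at least one URE during the rebuild)
p_failure = 1 - math.exp(-expected_ures)
print(f"{p_failure:.0%}")  # roughly a 72% chance the rebuild hits a URE
```

Even at the more optimistic 10^-15 rate, the expected error count only drops tenfold, which is why dual-parity RAID 6 is preferred for large arrays.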
Incorrect! Try again.
53You are monitoring a SQL server and see the following behavior: Memory\Available MBytes is very low, Memory\Pages/sec is near zero, and the SQL Server:Buffer Manager\Page life expectancy counter is very high and stable. Users are not reporting any performance issues. A junior admin suggests adding more RAM to the server because the available memory is so low. What is your assessment?
monitor server performance
Hard
A.The low available memory and high page life expectancy indicate that the SQL database is heavily fragmented and needs to be re-indexed.
B.The junior admin is correct; low available memory is always a sign of a bottleneck and more RAM is needed.
C.The server has a memory leak in a non-SQL process that is consuming all available RAM.
D.This is normal and healthy behavior for a properly configured SQL Server, which is designed to cache as much data in RAM as possible to improve performance.
Correct Answer: This is normal and healthy behavior for a properly configured SQL Server, which is designed to cache as much data in RAM as possible to improve performance.
Explanation:
This question requires a nuanced understanding of application-specific performance monitoring. SQL Server is designed to be memory-intensive. It will intentionally use as much RAM as it is allocated to cache data pages from the database, which minimizes slow disk I/O. Therefore, seeing low Available MBytes on a SQL server is expected. The key confirming counters are Pages/sec (which indicates hard paging to disk) being near zero, and Page life expectancy (how long data pages stay in the cache) being high and stable. These two counters together confirm that SQL Server is using its memory cache effectively and is not under memory pressure. Adding more RAM would likely just result in SQL Server using that new RAM for its cache as well, without necessarily improving performance if the cache is already effective.
Incorrect! Try again.
54After several years of service and numerous in-place upgrades and monthly patch cycles, a Windows Server's C: drive is running low on space. Analysis with the Dism.exe tool shows that the Component Store (WinSxS folder) is consuming over 20 GB. Which command sequence is the most effective and appropriate for safely reducing the size of the WinSxS folder?
update the server
Hard
A.Using the built-in Disk Cleanup utility and selecting 'Windows Update Cleanup'.
B.Running sfc /scannow followed by chkdsk /f.
C.Manually deleting files from the C:\Windows\WinSxS directory using File Explorer.
D.Running Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase.
Correct Answer: Running Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase.
Explanation:
The WinSxS folder contains components that allow for Windows features to be added and updates to be uninstalled. Manually deleting files from it will corrupt the operating system. sfc and chkdsk are for system file integrity and disk errors, not cleanup. The Disk Cleanup utility is a valid method, but the DISM command offers more power and control. The command Dism.exe /Online /Cleanup-Image /StartComponentCleanup removes superseded versions of components. Adding the /ResetBase switch is a more aggressive step that removes all superseded versions of every component in the component store, and, critically, it makes all existing updates and service packs permanent, meaning they can no longer be uninstalled. In a scenario where space is critical on a stable, long-running server, this is the most effective command to maximize space reclamation from the component store.
Incorrect! Try again.
55A three-tier application consists of a web front-end, a middle-tier application server, and a back-end SQL database server. A user authenticates to the web front-end using their Windows credentials. The web server needs to pass these credentials to the middle-tier server, which then needs to query the SQL server as the original user. This process is failing. All servers are in the same domain. What is the most likely cause of this authentication failure?
server administration access and control methods
Hard
A.The SQL Server's firewall is blocking the connection from the middle-tier server.
B.The web server and middle-tier server are in different Active Directory sites, causing replication latency.
C.The application pool identity on the web server lacks the 'Log on as a batch job' user right.
D.Kerberos Constrained Delegation has not been configured for the middle-tier server's computer account in Active Directory to allow it to delegate credentials to the SQL service.
Correct Answer: Kerberos Constrained Delegation has not been configured for the middle-tier server's computer account in Active Directory to allow it to delegate credentials to the SQL service.
Explanation:
This scenario describes the classic 'Kerberos double-hop' problem. Standard Kerberos authentication prevents a service from forwarding a user's credentials to another service to prevent credential theft. To allow this for multi-tier applications, Kerberos Delegation must be configured. The middle-tier server needs to be trusted in Active Directory to impersonate the user and present delegated credentials to the SQL server. Specifically, Kerberos Constrained Delegation (KCD) is the modern, secure method where you configure the middle-tier computer object to be trusted to delegate to specific Service Principal Names (SPNs), such as the MSSQLSvc SPN on the database server. Without this configuration, the second hop (middle-tier to SQL) will fail authentication.
Incorrect! Try again.
56You are designing a storage solution for a new data warehousing application. The primary workload consists of running complex queries against very large datasets. This results in I/O patterns that are predominantly large-block (64KB or greater) sequential reads. Writes are infrequent and occur in large batches overnight. Which storage configuration would provide the most cost-effective performance for this specific workload?
storage technologies
Hard
A.A cloud-based object storage solution with a local caching gateway.
B.A SAN built with a large number of 10K or 15K RPM SAS HDDs in a RAID 6 array.
C.A hybrid array with a small SSD cache tier and a large HDD capacity tier.
D.An all-flash array (AFA) using NVMe SSDs configured in RAID 10.
Correct Answer: A SAN built with a large number of 10K or 15K RPM SAS HDDs in a RAID 6 array.
Explanation:
The key to this question is matching the I/O pattern to the storage technology. The workload is large-block sequential reads. Traditional spinning disks (HDDs) excel at this type of workload because the slowness of seek time is minimized; once the read head is in position, it can read long, contiguous streams of data very quickly. A large number of spindles (drives) in a RAID array will provide very high aggregate sequential throughput. An all-flash array would offer superior performance, but its primary benefit is for small-block, random I/O, making it an unnecessarily expensive solution for this specific workload (not cost-effective). A hybrid array's cache would be less effective for scanning massive datasets that are much larger than the cache size. Object storage is not suitable for the performance demands of a data warehouse query engine.
Incorrect! Try again.
57A company is upgrading a server that runs a critical, but old, single-threaded application. They are replacing an older server that has a 4-core CPU running at 3.5 GHz with a new server that has a 16-core CPU running at 2.5 GHz. After the migration, users complain that the application is now running significantly slower. What is the most likely explanation for this performance degradation?
perform capacity planning
Hard
A.The new server has a misconfigured BIOS power management setting, throttling the CPU.
B.The application is single-threaded and is now bottlenecked by the lower per-core clock speed of the new CPU, as it cannot utilize the additional cores.
C.The new server has insufficient RAM, causing the application to page to disk.
D.The application is not compatible with the new server's operating system version.
Correct Answer: The application is single-threaded and is now bottlenecked by the lower per-core clock speed of the new CPU, as it cannot utilize the additional cores.
Explanation:
This is a common capacity planning pitfall. Not all workloads scale with more cores. A single-threaded application can only execute on one CPU core at a time. Therefore, its performance is almost entirely dependent on the single-core performance (instructions per clock x clock frequency) of that core. The upgrade from a 3.5 GHz core to a 2.5 GHz core represents a significant decrease in single-threaded performance, even though the new CPU's total aggregate computing power (16 x 2.5 GHz) is much higher. The application simply cannot use the other 15 cores, so the 'upgrade' was actually a downgrade for this specific workload.
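A minimal sketch of the trade-off, assuming performance scales linearly with clock speed at equal IPC (a simplification that ignores generational IPC gains):

```python
# Single-threaded vs aggregate throughput comparison for the CPU swap.

old_core_ghz, old_cores = 3.5, 4
new_core_ghz, new_cores = 2.5, 16

# The single-threaded app only sees one core's speed.
single_thread_ratio = new_core_ghz / old_core_ghz   # ~0.71: about 29% slower

# Total capacity went up almost 3x, but this app cannot use it.
aggregate_ratio = (new_core_ghz * new_cores) / (old_core_ghz * old_cores)

print(f"single-thread: {single_thread_ratio:.2f}x, aggregate: {aggregate_ratio:.2f}x")
```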
Incorrect! Try again.
58An SLA includes two key metrics: a 99.9% availability Service Level Objective (SLO) and a 95th percentile API response time of under 200ms. In the last month, the service had 100% uptime. However, monitoring shows that for a 3-hour period during a marketing campaign, the API response time for all users peaked at 450ms. For the rest of the month, response times were consistently 50ms. Which statement is true?
create service level agreements
Hard
A.The SLA was fully met because the 99.9% uptime SLO was achieved.
B.The SLA was not breached because the high response time was temporary and the average response time for the month was low.
C.It is impossible to determine SLA compliance without knowing the Mean Time Between Failures (MTBF).
D.The SLA was breached because the 95th percentile response time metric was violated.
Correct Answer: The SLA was breached because the 95th percentile response time metric was violated.
Explanation:
This question highlights that SLAs are often multi-faceted. Focusing only on uptime is a mistake. The 95th percentile metric means that 95% of requests must be faster than the threshold. The 3-hour peak period constitutes only about 0.4% of the 720-hour month, and for the remaining 99.6% of the month responses were fast. However, a percentile SLO is measured against requests, not wall-clock time: a marketing campaign drives a traffic spike, so those 3 hours can easily account for more than 5% of the month's total requests, and response-time SLOs are in practice often evaluated over much shorter reporting windows, every one of which was breached during the campaign. Either way, the 95th percentile response time exceeded 200ms. Therefore, even with 100% uptime, the performance aspect of the SLA was breached.
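To make the requests-versus-time distinction concrete, here is an illustrative calculation; the request rates are hypothetical assumptions, not figures from the question:

```python
# A percentile SLO counts requests, not hours. If the campaign drove, say,
# 20x the normal request rate for 3 hours, slow requests can exceed 5% of
# the month's total even though the slow window was only ~0.4% of the month.

normal_hours, campaign_hours = 717, 3
normal_rate, campaign_rate = 1_000, 20_000   # requests/hour (assumed)

fast = normal_hours * normal_rate     # requests answered in ~50ms
slow = campaign_hours * campaign_rate # requests answered in ~450ms

slow_fraction = slow / (fast + slow)
print(f"{slow_fraction:.1%} of requests slow")  # ~7.7% -> p95 is above 200ms
```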
Incorrect! Try again.
59You are configuring a storage array for a virtual machine host that will run a write-heavy online transaction processing (OLTP) database. The array is populated entirely with enterprise-grade MLC SSDs. You are concerned about drive endurance and write amplification. Which RAID level would be the most detrimental to the lifespan of the SSDs in this specific scenario?
configure RAID
Hard
A.RAID 10: It also has a write amplification factor of 2, mirroring each write.
B.RAID 0: It has a write amplification factor of 1, but offers no redundancy.
C.RAID 1: It has a write amplification factor of 2, which is predictable.
D.RAID 5: Its read-modify-write operation significantly increases write amplification beyond the data and parity writes, causing excessive wear on the SSDs.
Correct Answer: RAID 5: Its read-modify-write operation significantly increases write amplification beyond the data and parity writes, causing excessive wear on the SSDs.
Explanation:
Write amplification is a critical concept for SSDs. It's the ratio of physical writes to the SSD versus logical writes from the host. RAID levels that use parity, especially with small, random write workloads typical of OLTP, suffer from high write amplification at the array level. For a small write in RAID 5, the controller must read the old data block, read the old parity block, calculate the new parity, write the new data block, and write the new parity block. This turns a single host write into two reads and two writes at the drive level, significantly increasing the total write I/O and wear on the SSDs. RAID 1 and 10 have a simple, fixed write penalty of 2 (the data is just written to two drives). While RAID 0 has the lowest amplification, it's unsuitable due to lack of redundancy. Therefore, RAID 5 is the most damaging choice for SSD endurance in a write-heavy OLTP environment.
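The per-level write penalties described in the explanation can be tabulated in a short sketch (RAID 6 is added for comparison; these are the standard small-random-write penalty figures):

```python
# Physical drive I/Os generated per small random host write, by RAID level.

write_penalty = {
    "RAID 0": 1,   # one write, no redundancy
    "RAID 1": 2,   # mirrored write
    "RAID 10": 2,  # mirrored write within a stripe
    "RAID 5": 4,   # read old data + read old parity + write data + write parity
    "RAID 6": 6,   # as RAID 5, but with two parity blocks to read and write
}

host_writes = 10_000  # hypothetical small random writes from the OLTP workload
for level, penalty in sorted(write_penalty.items(), key=lambda kv: kv[1]):
    print(f"{level}: {host_writes * penalty} physical I/Os")
```

For a write-heavy OLTP workload on SSDs, the factor-of-4 penalty of RAID 5 translates directly into extra flash wear, which is the point of the question.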