Unit2 - Subjective Questions
INT249 • Practice Questions with Detailed Answers
Explain the significance of configuring local server properties immediately after installation. What are the key properties that must be configured?
Configuring local server properties is the foundational step in system administration to ensure the server is identifiable, secure, and communicable within a network. Skipping this step can lead to naming conflicts, time synchronization errors, and security vulnerabilities.
Key properties to configure include:
- Computer Name: Assigning a unique, descriptive name (e.g., WEB-SRV-01) to identify the server on the network.
- Network Settings: Configuring static IP addresses, subnet masks, gateways, and DNS servers to ensure stable connectivity.
- Time Zone: Setting the correct time zone and synchronizing with an NTP server (like time.windows.com) is crucial for log timestamps and authentication protocols (like Kerberos).
- Remote Management: Enabling Remote Desktop or Remote Management to allow administration from other workstations.
- Windows Update Settings: Configuring how and when the server receives security patches.
Distinguish between Server Roles and Features in the context of Windows Server administration.
Server Roles and Features serve different purposes in the architecture of a server operating system:
- Server Roles:
- Definition: Roles describe the primary function of the server. They are collections of software programs that allow a computer to perform specific functions for users or other computers on a network.
- Examples: Active Directory Domain Services (AD DS), DNS Server, DHCP Server, Web Server (IIS).
- Impact: A server is often dedicated to one or two specific roles (e.g., a dedicated Domain Controller).
- Features:
- Definition: Features are software programs that support or augment the functionality of existing roles or the operating system itself, but they do not constitute the primary function of the server.
- Examples: .NET Framework, BitLocker Drive Encryption, Telnet Client, Failover Clustering.
- Impact: Features are auxiliary tools installed to help the server perform its roles more effectively.
Describe the DHCP DORA process and explain why a server needs to be authorized to assign IP addresses.
DHCP (Dynamic Host Configuration Protocol) assigns IP addresses automatically using the DORA process:
- Discover: The client broadcasts a message on the network segment looking for a DHCP server.
- Offer: A DHCP server receives the Discover and responds with an offer of an available IP address to the client.
- Request: The client selects the offer and sends a request to the server to lease that specific IP.
- Acknowledge: The server acknowledges the request, finalizing the lease and sending configuration details (subnet, gateway, DNS).
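The four-step exchange above can be sketched as a simple message sequence. This is a minimal illustrative model, not a real DHCP implementation; the class and field names (`DhcpServer`, `handle_discover`, the subnet/gateway/DNS values) are all hypothetical.

```python
# Minimal sketch of the DHCP DORA exchange. Illustrative only: real DHCP
# uses UDP broadcasts, transaction IDs, and lease timers.

class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)   # addresses available for lease
        self.leases = {}         # client_id -> leased ip

    def handle_discover(self, client_id):
        # Offer: respond to a broadcast Discover with a free address
        return self.pool[0] if self.pool else None

    def handle_request(self, client_id, ip):
        # Acknowledge: finalize the lease and return configuration details
        if ip in self.pool:
            self.pool.remove(ip)
            self.leases[client_id] = ip
            return {"ip": ip, "subnet": "255.255.255.0",
                    "gateway": "192.168.1.1", "dns": "192.168.1.10"}
        return None  # a real server would send a NAK here

def dora(client_id, server):
    offered = server.handle_discover(client_id)       # Discover -> Offer
    return server.handle_request(client_id, offered)  # Request -> Acknowledge

server = DhcpServer(pool=["192.168.1.100", "192.168.1.101"])
config = dora("client-A", server)
print(config)
```

Note how the client only receives the full configuration (subnet, gateway, DNS) in the final Acknowledge step, exactly as described above.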
DHCP Authorization:
In an Active Directory environment, a DHCP server must be authorized in AD to prevent 'rogue' DHCP servers. A rogue server could hand out incorrect IP addresses or malicious DNS settings, disrupting network connectivity or creating security risks. Only authorized servers are permitted to service client requests.
What is WSUS (Windows Server Update Services), and how does it assist in Server Administration?
WSUS (Windows Server Update Services) is a server role that enables administrators to deploy the latest Microsoft product updates to computers running Windows operating systems.
Benefits for Administration:
- Bandwidth Management: Instead of every client downloading updates from the internet, the WSUS server downloads updates once and distributes them locally.
- Approval Control: Administrators can test updates in a staging environment before approving them for production servers, preventing buggy updates from crashing critical systems.
- Reporting: It provides detailed reports on the compliance status of servers, showing which machines have successfully installed patches and which have failed.
- Scheduling: Administrators can force updates to install during specific maintenance windows to minimize downtime.
Compare Remote Desktop Protocol (RDP) and Remote Server Administration Tools (RSAT) as methods for server access and control.
Both RDP and RSAT are used to manage servers, but they function differently:
Remote Desktop Protocol (RDP):
- Method: Provides a full graphical user interface (GUI) session of the remote server. It is like sitting physically in front of the server.
- Usage: Best for initial configuration, troubleshooting complex issues, or managing applications that require a GUI on the server itself.
- Drawback: It consumes more server resources (GUI rendering) and only allows a limited number of concurrent admin sessions.
Remote Server Administration Tools (RSAT):
- Method: A collection of tools (Server Manager, MMC snap-ins, PowerShell cmdlets) installed on a client workstation (e.g., Windows 10/11) that connects to the server remotely.
- Usage: The preferred method for daily administration. The processing happens on the client, managing the server via APIs.
- Benefit: More secure (less exposure of the server GUI), consumes fewer server resources, and allows managing multiple servers from a single pane of glass.
Define a Service Level Agreement (SLA). What are the critical metrics usually included in a server SLA?
A Service Level Agreement (SLA) is a formal contract between a service provider (which can be an internal IT department) and a customer (end-users or business units) that defines the level of service expected.
Critical Metrics in an SLA:
- Uptime/Availability: Usually expressed as a percentage (e.g., 99.9% or 'three nines'). It guarantees how long the server will be operational.
- Calculation: Availability (%) = ((Total Time - Downtime) / Total Time) × 100.
- MTTR (Mean Time to Repair): The average time required to fix a failed component or service.
- RPO (Recovery Point Objective): The maximum acceptable amount of data loss measured in time (e.g., data must be recoverable to within 1 hour of the crash).
- RTO (Recovery Time Objective): The target time to restore a service after a disaster.
- Throughput/Performance: Minimum guaranteed speed or transaction processing capabilities.
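The availability percentages in an SLA translate directly into a downtime budget, which is how the numbers are usually reasoned about in practice. A minimal sketch of that arithmetic:

```python
# Convert an SLA availability percentage into the maximum downtime
# budget per year (the standard 'nines' arithmetic).

def downtime_per_year(availability_pct):
    """Return the maximum allowed downtime in hours per year."""
    hours_per_year = 365 * 24  # 8760 hours
    return hours_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_per_year(pct):.2f} h downtime/year")
```

For example, 'three nines' (99.9%) allows roughly 8.76 hours of downtime per year; each additional nine cuts the budget by a factor of ten.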
Explain the four key hardware subsystems that must be monitored when analyzing server performance.
To ensure optimal server performance, administrators must monitor the four primary hardware subsystems, often referred to as the 'Four Food Groups' of performance monitoring:
- Processor (CPU):
- Key Counter: % Processor Time.
- Issue: High utilization (>80% sustained) indicates the CPU is the bottleneck.
- Memory (RAM):
- Key Counter: Available MBytes or Pages/sec.
- Issue: High paging (moving data between RAM and Disk) indicates insufficient physical memory, severely slowing down the server.
- Disk (Storage I/O):
- Key Counter: Avg. Disk Queue Length.
- Issue: If the queue length is consistently high (e.g., >2 per spindle), the storage system cannot keep up with read/write requests.
- Network Interface:
- Key Counter: Bytes Total/sec.
- Issue: Saturation of network bandwidth results in packet loss and latency.
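The threshold checks described for the four subsystems can be sketched as a simple rule table. The counter names match those above, but the threshold values and sample data here are illustrative; real readings would come from Performance Monitor or `Get-Counter`.

```python
# Sketch: flag which of the four subsystem counters exceed their
# thresholds. Thresholds are illustrative rules of thumb, not fixed limits.

THRESHOLDS = {
    "% Processor Time": lambda v: v > 80,       # sustained CPU saturation
    "Pages/sec": lambda v: v > 1000,            # heavy paging -> RAM pressure
    "Avg. Disk Queue Length": lambda v: v > 2,  # per-spindle backlog
    "Bytes Total/sec": lambda v: v > 100e6,     # near-saturated 1 Gbps NIC
}

def find_bottlenecks(sample):
    """Return the counters in `sample` that exceed their thresholds."""
    return [name for name, exceeded in THRESHOLDS.items()
            if name in sample and exceeded(sample[name])]

# Hypothetical one-moment snapshot of a server:
sample = {"% Processor Time": 92, "Pages/sec": 40,
          "Avg. Disk Queue Length": 1.2, "Bytes Total/sec": 5e6}
print(find_bottlenecks(sample))  # only the CPU counter is over threshold
```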
What is Capacity Planning? Describe the steps involved in an effective capacity planning process.
Capacity Planning is the proactive process of determining the production capacity needed by an organization to meet changing demands for its IT infrastructure. It ensures that resources (CPU, RAM, Storage) are available before they are actually needed.
Steps in Capacity Planning:
- Establish Baselines: Measure the current performance of the server under normal load to understand standard resource consumption.
- Analyze Trends: Use historical data to identify growth patterns (e.g., database size grows by 10% per month).
- Predict Future Requirements: Apply trends to business goals (e.g., 'We are hiring 50 new users next month') to forecast resource needs.
- Simulate Load: Use stress-testing tools to verify if the proposed hardware can handle the predicted load.
- Plan and Acquire: Budget for and purchase necessary upgrades (scale-up) or additional servers (scale-out) before the bottleneck occurs.
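Steps 2 and 3 above (trend analysis and prediction) amount to compound-growth arithmetic. A minimal sketch, with illustrative numbers matching the 10%-per-month database example:

```python
# Sketch: project compound growth from a baseline to estimate when a
# resource will be exhausted. All figures are illustrative.
import math

def months_until_full(current_gb, capacity_gb, monthly_growth_pct):
    """Months until `current_gb`, growing at a compound monthly rate,
    reaches `capacity_gb`."""
    rate = 1 + monthly_growth_pct / 100
    return math.log(capacity_gb / current_gb) / math.log(rate)

# A database at 400 GB on a 1 TB volume, growing 10% per month:
print(f"{months_until_full(400, 1000, 10):.1f} months of headroom")
```

This kind of projection is what lets the administrator budget and acquire upgrades (step 5) before the bottleneck actually occurs.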
Differentiate between MBR (Master Boot Record) and GPT (GUID Partition Table) partitioning styles.
MBR and GPT are two different ways of storing partitioning information on a drive.
| Feature | MBR (Master Boot Record) | GPT (GUID Partition Table) |
|---|---|---|
| Maximum Drive Size | Supports up to 2 TB. | Supports up to 18 EB (Exabytes). |
| Partition Limit | Maximum 4 primary partitions (or 3 primary + 1 extended). | Supports 128 primary partitions on Windows. |
| Resilience | Partition data is stored in one place. If corrupted, data is lost. | Stores redundant copies of partition headers (start and end of disk). |
| Boot Mode | Uses Legacy BIOS. | Requires UEFI to boot the OS. |
| Data Integrity | No built-in check. | Uses CRC (Cyclic Redundancy Check) to verify the integrity of the partition table. |
Conclusion: GPT is the modern standard and is required for drives larger than 2TB.
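MBR's 2 TB ceiling follows directly from its on-disk format: sector addresses are stored in a 32-bit field, and the classic sector size is 512 bytes. The arithmetic:

```python
# Where MBR's 2 TB ceiling comes from: a 32-bit Logical Block Address
# (sector count) multiplied by the classic 512-byte sector size.
sectors = 2**32          # maximum addressable sectors in MBR
sector_size = 512        # bytes per sector
max_bytes = sectors * sector_size
print(max_bytes / 2**40, "TiB")  # 2.0 TiB
```

GPT uses 64-bit sector addresses, which pushes the limit far beyond any current drive.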
Compare DAS (Direct Attached Storage), NAS (Network Attached Storage), and SAN (Storage Area Network).
1. DAS (Direct Attached Storage):
- Definition: Storage physically connected directly to the server (e.g., internal hard drives, external USB/SAS drives).
- Access: Block-level access.
- Pros: Inexpensive, simple to configure, high speed for that specific server.
- Cons: Cannot be easily shared between servers; creates silos of data.
2. NAS (Network Attached Storage):
- Definition: A dedicated file-level storage device connected to the network (LAN).
- Access: File-level access (using protocols like SMB/CIFS or NFS).
- Pros: Easy to share files across different OS types, easier management.
- Cons: Performance depends on LAN traffic; generally slower than SAN for database applications.
3. SAN (Storage Area Network):
- Definition: A dedicated high-speed network that provides access to consolidated, block-level storage.
- Access: Block-level access (servers see the storage as local disks).
- Pros: High performance, high redundancy, supports advanced features like clustering and replication.
- Cons: Expensive and complex to set up (requires Fibre Channel or iSCSI infrastructure).
Explain RAID 0, RAID 1, and RAID 5. Discuss the trade-off between performance and redundancy in each.
RAID (Redundant Array of Independent Disks) combines multiple physical disks into a single logical unit.
- RAID 0 (Striping):
- Mechanism: Data is split evenly across two or more disks.
- Redundancy: None. If one drive fails, all data is lost.
- Performance: Excellent (Read and Write) because operations are parallelized.
- Storage Efficiency: 100% (all disk capacity is usable).
- RAID 1 (Mirroring):
- Mechanism: Data is written identically to two drives.
- Redundancy: High. Can survive the loss of one drive.
- Performance: Good Read speed (can read from both), slightly slower Write speed (must write to both).
- Storage Efficiency: 50% (usable capacity equals one drive).
- RAID 5 (Striping with Parity):
- Mechanism: Stripes data and parity information across three or more disks.
- Redundancy: Good. Can survive the loss of one drive. Data is reconstructed using parity.
- Performance: Excellent Read speed; Slower Write speed due to parity calculation overhead.
- Storage Efficiency: (n - 1)/n (where n is the number of disks).
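The usable-capacity rules for the three levels can be collected into one small calculator. A minimal sketch (function name and units are illustrative):

```python
# Sketch: usable capacity for RAID 0, 1, and 5 with n identical drives
# of `drive_tb` terabytes each.

def usable_capacity(level, n, drive_tb):
    if level == 0:                  # striping: all capacity is usable
        return n * drive_tb
    if level == 1:                  # mirroring: one drive's worth (n = 2)
        return drive_tb
    if level == 5:                  # one drive's worth lost to parity
        return (n - 1) * drive_tb
    raise ValueError("unsupported RAID level")

print(usable_capacity(0, 4, 2))   # 8 TB: four 2 TB drives striped
print(usable_capacity(1, 2, 2))   # 2 TB: two 2 TB drives mirrored
print(usable_capacity(5, 4, 2))   # 6 TB: one drive's worth of parity
```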
What is the difference between Software RAID and Hardware RAID?
Hardware RAID:
- Processing: Handled by a dedicated controller card (RAID card) with its own processor and cache memory.
- Performance: Does not use the server's CPU, resulting in better overall system performance.
- Features: Often supports hot-swapping drives and battery-backed cache for data protection during power loss.
- Cost: Expensive.
Software RAID:
- Processing: Managed by the Operating System (e.g., Windows Disk Management).
- Performance: Consumes the server's host CPU cycles to calculate parity and manage data flow, which can slow down the server under heavy load.
- Flexibility: Does not require specific hardware; works with any drives connected to the system.
- Cost: Free (included in the OS).
Derive the usable storage capacity for a RAID 10 array consisting of 6 drives, each with 2TB capacity.
Understanding RAID 10 (1+0):
RAID 10 is a nested RAID level that combines Mirroring (RAID 1) and Striping (RAID 0). It requires a minimum of 4 drives.
Configuration:
- The drives are paired into mirrors (RAID 1 sets).
- These mirrored sets are then striped (RAID 0).
Calculation:
For a RAID 10 array with n drives of size S each:
- The formula for usable capacity is: Usable Capacity = (n / 2) × S.
- This is because half the drives are used for mirroring (redundancy).
Applying the values:
- Number of drives (n) = 6
- Size per drive (S) = 2 TB
- Usable Capacity = (6 / 2) × 2 TB = 6 TB
Conclusion:
The usable storage is 6 TB. The array can tolerate up to 3 drive failures (at most one from each mirrored pair), but losing both drives of the same pair destroys the array.
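The fault-tolerance claim can be checked by enumeration: with six drives mirrored as pairs, the array dies only when both members of the same pair fail. A small sketch (the pair layout is the assumed configuration from the example above):

```python
# Sketch: enumerate all two-drive failures of a 6-drive RAID 10 and
# count which ones the array survives. Pairs (0,1), (2,3), (4,5) are
# the assumed mirror layout.
from itertools import combinations

pairs = [(0, 1), (2, 3), (4, 5)]

def survives(failed):
    """The array survives unless both members of some mirror pair fail."""
    return not any(a in failed and b in failed for a, b in pairs)

two_drive_failures = list(combinations(range(6), 2))
survivable = [f for f in two_drive_failures if survives(set(f))]
print(f"{len(survivable)} of {len(two_drive_failures)} "
      f"two-drive failures are survivable")  # 12 of 15
```

This is why RAID 10's tolerance is quoted as "up to" three failures: three single-sided failures are fine, but one unlucky pair of failures is fatal.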
What is iSCSI? Explain the function of the iSCSI Initiator and iSCSI Target.
iSCSI (Internet Small Computer System Interface) is a storage networking standard that runs over TCP/IP. It allows block-level SCSI commands to be sent over a regular local area network (LAN), enabling SAN implementation without expensive Fibre Channel hardware.
Components:
- iSCSI Initiator (The Client):
- This is the server or endpoint that consumes the storage.
- It initiates the connection and sends SCSI commands over the IP network.
- It can be software (built into Windows Server) or hardware (a specialized network card).
- iSCSI Target (The Storage Server):
- This is the storage device or server that houses the disk drives.
- It receives the SCSI commands, executes them on the physical storage, and returns the data.
- To the Initiator, the Target looks like a local hard drive.
Discuss the Principle of Least Privilege in the context of Server Administration Access.
The Principle of Least Privilege (PoLP) is a security concept requiring that a user, system, or process be granted only the minimum levels of access—or permissions—needed to perform its assigned function, and nothing more.
Application in Server Admin:
- Delegation: Instead of giving every IT staff member 'Domain Admin' rights, use the 'Delegation of Control' wizard to grant specific permissions (e.g., only resetting passwords or managing a specific Organizational Unit).
- Role-Based Access Control (RBAC): Assign permissions to groups based on job roles rather than individual users.
- Just-in-Time (JIT) Administration: Grant administrative privileges only when needed and revoke them immediately after the task is complete.
- Impact: This minimizes the attack surface. If an admin account is compromised, the damage is limited to the scope of that account's specific privileges.
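The RBAC idea above reduces to a default-deny lookup: permissions attach to roles, and an action is allowed only if some assigned role explicitly grants it. A minimal sketch, with purely illustrative role and permission names:

```python
# Sketch of role-based least privilege: default deny, with permissions
# granted per role rather than per user. Names are illustrative.

ROLE_PERMISSIONS = {
    "helpdesk":     {"reset_password", "unlock_account"},
    "server_ops":   {"restart_service", "view_logs"},
    "domain_admin": {"reset_password", "unlock_account",
                     "restart_service", "view_logs", "modify_gpo"},
}

def is_allowed(roles, permission):
    """Default deny: allowed only if some assigned role grants it."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_allowed(["helpdesk"], "reset_password"))  # True
print(is_allowed(["helpdesk"], "modify_gpo"))      # False: not granted
```

Compromising a "helpdesk" account then exposes only password resets and unlocks, never GPO changes, which is exactly the damage-limiting effect PoLP aims for.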
Explain the purpose of DNS (Domain Name System) in a server environment and define A Records and CNAME Records.
Purpose of DNS:
DNS acts as the phonebook of the internet and local networks. It translates human-readable hostnames (like www.company.com or FILESRV01) into machine-readable IP addresses (like 192.168.1.50). In a Windows Server environment (Active Directory), DNS is critical for locating domain controllers and services.
Record Types:
- A Record (Address Record):
- Maps a hostname to a 32-bit IPv4 address.
- Example: Server1 → 192.168.1.10.
- This is the most fundamental record type.
- CNAME Record (Canonical Name Record):
- Maps an alias name to a true (canonical) domain name.
- Example: ftp.company.com → server1.company.com.
- It does not point to an IP address directly but to another name. If the IP of server1 changes, the alias ftp remains valid automatically.
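The A-record and CNAME examples above can be sketched as a tiny resolver that chases aliases until it reaches an address. The dict stands in for a zone file; the names mirror the examples in the text:

```python
# Sketch: resolve a name by following CNAME records to an A record.
# A plain dict stands in for the DNS zone data.

ZONE = {
    "server1.company.com": ("A", "192.168.1.10"),
    "ftp.company.com": ("CNAME", "server1.company.com"),
}

def resolve(name, max_hops=8):
    """Follow CNAME records until an A record (an IP address) is reached."""
    for _ in range(max_hops):
        rtype, value = ZONE[name]
        if rtype == "A":
            return value
        name = value          # CNAME: chase the canonical name
    raise RuntimeError("CNAME chain too long")

print(resolve("ftp.company.com"))  # 192.168.1.10
```

Note that changing only the A record for server1 would automatically "fix" the ftp alias too, which is the maintenance benefit described above.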
What are the benefits of using Server Core installation option over the Desktop Experience?
Server Core is a minimal installation option for Windows Server that does not include the standard Graphical User Interface (GUI).
Benefits:
- Reduced Attack Surface: Because there are fewer components (no GUI, fewer running services), there are fewer potential vulnerabilities for attackers to exploit.
- Reduced Maintenance/Patching: Fewer installed components mean fewer software updates are required, leading to fewer reboots and less downtime.
- Lower Resource Consumption: Server Core uses significantly less RAM and CPU and requires less disk space, allowing for higher density in virtualized environments.
- Stability: Less code running generally translates to fewer system crashes/bugs.
Management is typically performed remotely using RSAT, Windows Admin Center, or PowerShell.
Describe the NTFS (New Technology File System) permissions and how they differ from Share permissions.
File server security relies on two layers of permissions:
1. NTFS Permissions:
- Scope: Applied at the file system level. They protect data regardless of how a user accesses it (locally at the console or over the network).
- Granularity: Very detailed (Read, Write, Modify, Full Control, List Folder Contents, etc.).
- Inheritance: Permissions flow down from parent folders to child files/folders unless inheritance is broken.
2. Share Permissions:
- Scope: Applied only to the network share point. They essentially open the 'gateway' to the folder over the network.
- Limitation: If a user logs in locally to the server, share permissions do not apply.
- Granularity: Limited to Read, Change, and Full Control.
Combination Rule: When accessed over a network, the most restrictive permission between NTFS and Share permissions wins.
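The combination rule lends itself to a one-line check if permission levels are ordered from least to most permissive. A minimal sketch (the level ordering is the assumption here, and it models only the simple Read/Change/Full Control share levels):

```python
# Sketch of the 'most restrictive wins' rule for network access.
# Levels are ordered least to most permissive; the effective permission
# is the lower of the NTFS and Share levels.

LEVELS = ["None", "Read", "Change", "Full Control"]

def effective_permission(ntfs, share):
    """Most restrictive of the two layers wins over the network."""
    return min(ntfs, share, key=LEVELS.index)

print(effective_permission("Full Control", "Read"))    # Read
print(effective_permission("Change", "Full Control"))  # Change
```

A common practical pattern follows from this: set the Share permission loosely (e.g., Full Control for Authenticated Users) and do the fine-grained control with NTFS permissions, since NTFS also covers local access.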
Explain the concept of Storage Spaces in Windows Server.
Storage Spaces is a storage virtualization technology built into Windows Server that allows administrators to protect data from drive failures and extend storage over time.
How it works:
- Storage Pools: You group physical drives (of different sizes or types, like SSDs and HDDs) into a single logical pool.
- Virtual Disks (Spaces): You create virtual disks from the available capacity in the pool. These act like physical disks to the OS.
- Resiliency: You can define layout types for these virtual disks:
- Simple: No redundancy (like RAID 0).
- Mirror: Data is duplicated (like RAID 1).
- Parity: Data and parity info are striped (like RAID 5).
Advantage: It allows for Thin Provisioning, where you can create a virtual disk larger than the physical storage currently available (e.g., a 10TB virtual disk on 5TB of physical drives) and add more physical drives later as the data grows.
What is a Baseline in performance monitoring, and why is it essential for troubleshooting?
Definition:
A Baseline is a set of data collected over a period of time (e.g., one week) representing the server's performance under normal operating conditions. It acts as a reference point for the 'health' of the system.
Importance for Troubleshooting:
- Anomaly Detection: Without a baseline, looking at a CPU spike of 80% is meaningless. Is 80% normal for this time of day? If the baseline shows the average is usually 20%, then 80% indicates a problem.
- Capacity Planning: Baselines show the rate of growth over time, allowing administrators to predict when resources will run out.
- Verification: After making a configuration change or hardware upgrade, comparing new data against the baseline helps verify if performance has actually improved.
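The anomaly-detection use of a baseline can be sketched with a simple statistical rule: flag a new sample when it deviates from the baseline mean by more than a few standard deviations. The sample data and the 3-sigma threshold are illustrative choices:

```python
# Sketch: flag a counter reading as anomalous when it sits more than
# k standard deviations away from the baseline mean.
import statistics

# Illustrative week of CPU samples averaging about 20%:
baseline_cpu = [18, 22, 20, 25, 19, 21, 23, 20]

def is_anomaly(value, baseline, k=3):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) > k * stdev

print(is_anomaly(80, baseline_cpu))  # True: far above the normal ~20%
print(is_anomaly(24, baseline_cpu))  # False: within normal variation
```

This makes the point in the text concrete: the same 80% reading is either an emergency or business as usual depending entirely on what the baseline says is normal.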