B.To automate the process of installing, updating, configuring, and removing software packages
C.To write and compile source code for new applications
D.To manage user accounts and permissions
Correct Answer: To automate the process of installing, updating, configuring, and removing software packages
Explanation:
A package manager is a tool that automates the management of software packages, handling tasks like installation, upgrades, and dependency resolution, which simplifies software administration.
2. In the context of software management, what is a 'dependency'?
A.An optional plugin that adds extra features to a program
B.A backup copy of an application
C.The end-user license agreement for a piece of software
D.A software library or package that another program requires to function correctly
Correct Answer: A software library or package that another program requires to function correctly
Explanation:
A dependency is a prerequisite piece of software that must be installed for another program to work. Package managers are crucial for automatically identifying and installing these dependencies.
3. Which command is commonly used to install a package on an RPM-based Linux distribution like CentOS or Fedora?
Managing Software: Manage RPM software packages and repositories
Easy
A.dnf install package_name
B.apt-get install package_name
C.install-package package_name
D.pacman -S package_name
Correct Answer: dnf install package_name
Explanation:
dnf (and its predecessor yum) is the standard high-level package manager for RPM-based systems. apt-get is for Debian-based systems, and pacman is for Arch Linux.
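A side-by-side sketch of the equivalent install commands on each family of distributions (the package names are illustrative):

```shell
# RPM-based distributions (Fedora, CentOS, RHEL)
sudo dnf install httpd        # older releases: sudo yum install httpd
# Debian-based distributions (Debian, Ubuntu)
sudo apt-get install apache2
# Arch Linux
sudo pacman -S apache
```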
4. What is the standard file extension for a Red Hat Package Manager file?
Managing Software: Manage RPM software packages and repositories
Easy
A..tar.gz
B..deb
C..rpm
D..exe
Correct Answer: .rpm
Explanation:
Files used by the Red Hat Package Manager are packaged with the .rpm extension. .deb is for Debian packages, .tar.gz is a compressed archive, and .exe is for Windows executables.
5. On a Debian-based system like Ubuntu, which command should you run to update the local list of available packages?
Managing Software: Manage Debian-based software packages and repositories
Easy
A.sudo dnf refresh
B.sudo yum update
C.sudo apt update
D.sudo apt upgrade
Correct Answer: sudo apt update
Explanation:
The sudo apt update command synchronizes the local package index files from the repositories. apt upgrade is used to install the newest versions of all packages currently installed.
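The typical routine on a Debian-based system, sketched as commands:

```shell
sudo apt update        # refresh the local package index from the repositories
apt list --upgradable  # optional: preview which packages have newer versions
sudo apt upgrade       # install the newest versions of installed packages
```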
6. What is the low-level tool used to manage .deb packages on Debian and its derivatives?
Managing Software: Manage Debian-based software packages and repositories
Easy
A.make
B.dpkg
C.yum
D.rpm
Correct Answer: dpkg
Explanation:
dpkg is the underlying package manager for Debian-based systems. Higher-level tools like apt and apt-get use dpkg to perform package installation, removal, and management.
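A few common dpkg invocations (the .deb filename and package name are placeholders):

```shell
sudo dpkg -i mytool_1.0_amd64.deb  # install a local .deb; does NOT fetch dependencies
dpkg -l                            # list all installed packages
dpkg -L mytool                     # list the files a package installed
sudo apt-get install -f            # ask apt to fix dependencies dpkg left unresolved
```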
7. In the common three-step process for compiling software from source (configure, make, make install), what is the purpose of the first step, ./configure?
Managing Software: Compile from source code
Easy
A.To compile the source code into binary files
B.To copy the compiled files to their final destination
C.To check the system for required dependencies and create a Makefile
D.To download the source code from the internet
Correct Answer: To check the system for required dependencies and create a Makefile
Explanation:
The ./configure script prepares the source code for compilation on the specific system by checking for necessary libraries, tools, and system features, and then generating a customized Makefile.
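The full three-step build, sketched with a placeholder tarball name:

```shell
tar -xzf program-1.0.tar.gz
cd program-1.0
./configure --prefix=/usr/local  # check dependencies and generate the Makefile
make                             # compile the source code
sudo make install                # copy the results into /usr/local
```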
8. What does the make command do when you are building software from source?
Managing Software: Compile from source code
Easy
A.It configures the build environment
B.It reads the Makefile and executes the commands to compile the program
C.It installs the program onto the system
D.It deletes the source code after compilation
Correct Answer: It reads the Makefile and executes the commands to compile the program
Explanation:
The make utility follows the instructions in the Makefile (created by the ./configure script) to convert the human-readable source code into machine-executable binary files.
9. What is a software 'repository'?
Managing Software: Acquire software
Easy
A.A type of computer virus that replicates software
B.A centralized server or location where software packages are stored and maintained for distribution
C.A list of all software currently installed on a computer
D.A text file containing source code
Correct Answer: A centralized server or location where software packages are stored and maintained for distribution
Explanation:
A repository (or 'repo') is a storage location from which your system retrieves and installs software updates and applications. Package managers are configured to use one or more repositories.
10. What is generally the safest way to acquire and install software on a Linux system?
Managing Software: Acquire software
Easy
A.Using the official package manager and repositories provided by the distribution
B.Disabling security features before installation
C.Compiling source code found on a random forum
D.Downloading and running scripts from untrusted websites
Correct Answer: Using the official package manager and repositories provided by the distribution
Explanation:
The official repositories contain software that has been tested and packaged specifically for the distribution, making it the most secure and reliable method for acquiring software.
11. What is the primary security benefit of running an application in a sandbox?
Managing Software: Run software in a sandbox
Easy
A.It makes the application run significantly faster
B.It automatically compresses the application to save disk space
C.It provides a better user interface for the application
D.It isolates the application, restricting its access to the host operating system and user data
Correct Answer: It isolates the application, restricting its access to the host operating system and user data
Explanation:
Sandboxing creates a controlled, isolated environment. If the sandboxed application is compromised, the isolation helps prevent it from harming the rest of the system.
12. Which of the following technologies is commonly used to implement application sandboxing?
Explanation:
Containerization technologies are a popular way to create isolated (sandboxed) environments where applications can run with their own dependencies and limited access to the host system.
13. What does the acronym RAID stand for?
Administering Storage: Understand storage
Easy
A.Rapid Application Internet Deployment
B.Read-Only Archived Information Directory
C.Redundant Array of Independent Disks
D.Random Access Integrated Drive
Correct Answer: Redundant Array of Independent Disks
Explanation:
RAID is a storage technology that combines multiple physical disk drives into one or more logical units for the purposes of data redundancy, performance improvement, or both.
14. What is a major difference between a Solid-State Drive (SSD) and a Hard Disk Drive (HDD)?
Administering Storage: Understand storage
Easy
A.SSDs have no moving parts, while HDDs use spinning platters
B.HDDs are significantly faster than SSDs
C.SSDs are always larger in capacity than HDDs
D.SSDs can only store data temporarily
Correct Answer: SSDs have no moving parts, while HDDs use spinning platters
Explanation:
The primary physical difference is that SSDs use flash memory chips for data storage, resulting in faster access times and better durability, whereas HDDs rely on mechanical spinning disks and read/write heads.
15. What is the process of creating a filesystem on a storage partition called?
Administering Storage: Deploy storage
Easy
A.Partitioning
B.Mounting
C.Swapping
D.Formatting
Correct Answer: Formatting
Explanation:
Formatting (or 'making a filesystem') is the process of preparing a data storage device such as a hard disk drive or partition for initial use by writing a new filesystem to it.
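The usual sequence with illustrative device names: partition, then format, then mount.

```shell
sudo fdisk /dev/sdb              # 1. partition the disk (e.g., create /dev/sdb1)
sudo mkfs.ext4 /dev/sdb1         # 2. format: write an ext4 filesystem to the partition
sudo mount /dev/sdb1 /mnt/data   # 3. mount it so the filesystem can be used
```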
16. Which of the following is a common, modern filesystem used by default in many Linux distributions?
Administering Storage: Deploy storage
Easy
A.FAT32
B.APFS
C.ext4
D.NTFS
Correct Answer: ext4
Explanation:
The Fourth Extended Filesystem (ext4) is a journaling file system for Linux and the successor to ext3. It is the default for many popular distributions like Debian and Ubuntu. NTFS is for Windows, and APFS is for macOS.
17. What does NAS stand for in the context of computer storage?
Administering Storage: Manage other storage options
Easy
A.New Age Software
B.Network Attached Storage
C.Native Archiving System
D.Network Access Server
Correct Answer: Network Attached Storage
Explanation:
NAS is a file-level computer data storage server connected to a computer network, providing data access to a diverse group of clients.
18. What is a primary advantage of using Logical Volume Management (LVM) in Linux?
Administering Storage: Manage other storage options
Easy
A.It increases the physical read/write speed of the disk
B.It is the only way to create partitions
C.It provides more flexible disk space management, like easy volume resizing
D.It automatically encrypts all data written to the disk
Correct Answer: It provides more flexible disk space management, like easy volume resizing
Explanation:
LVM adds a layer of abstraction over physical storage, allowing administrators to create logical volumes that can be easily resized, moved, and managed without repartitioning disks.
19. Which Linux command is typically used to check the amount of free disk space on mounted filesystems?
Administering Storage: Troubleshoot storage
Easy
A.df -h
B.ls -l
C.du -h
D.free
Correct Answer: df -h
Explanation:
The df (disk free) command reports filesystem disk space usage. The -h flag makes the output 'human-readable' (e.g., showing MB and GB instead of just blocks).
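A quick illustration of df versus du (the exact output varies by system):

```shell
df -h        # free space per mounted filesystem, human-readable sizes
df -h /tmp   # restrict the report to the filesystem containing /tmp
du -sh /tmp  # contrast: du sums the sizes of the files under a path
```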
20. What is the main purpose of S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) for hard drives?
Administering Storage: Troubleshoot storage
Easy
A.To organize files more efficiently
B.To increase the speed of data transfers
C.To monitor drive health and predict potential failures
D.To encrypt the data on the drive
Correct Answer: To monitor drive health and predict potential failures
Explanation:
S.M.A.R.T. is a monitoring system included in storage devices that detects and reports on various indicators of reliability, with the intent of enabling the anticipation of hardware failures.
21. An administrator on a RHEL-based system needs to find out which installed package provides the /etc/ssh/sshd_config file. Which of the following commands will accomplish this?
Managing Software: Manage RPM software packages and repositories
Medium
A.rpm -qf /etc/ssh/sshd_config
B.yum whatprovides /etc/ssh/sshd_config
C.dnf search /etc/ssh/sshd_config
D.rpm -ql sshd_config
Correct Answer: rpm -qf /etc/ssh/sshd_config
Explanation:
The command rpm -qf (query file) is used to determine which installed package owns a specific file. yum whatprovides (or dnf whatprovides) is similar but queries configured repositories to find which package provides a file, regardless of whether it's installed. rpm -ql lists all files within an installed package, but you need to know the package name first. dnf search searches package names and descriptions, not file paths.
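The relevant query commands side by side (the package name is illustrative):

```shell
rpm -qf /etc/ssh/sshd_config       # which installed package owns this file?
rpm -ql openssh-server             # list every file in an installed package
dnf whatprovides '*/sshd_config'   # search the repositories for a file's provider
```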
22. A system administrator on a Debian system has just run apt update. They now want to see a list of all installed packages that have available upgrades, without actually performing the upgrade. Which command should they use?
Managing Software: Manage Debian-based software packages and repositories
Medium
A.apt-get upgrade --simulate
B.apt-cache policy
C.apt list --upgradable
D.dpkg --get-selections | grep hold
Correct Answer: apt list --upgradable
Explanation:
apt list --upgradable is the modern and direct command to display a list of all packages that can be upgraded. apt-get upgrade --simulate (or -s) will show what would happen during an upgrade but is more verbose. dpkg --get-selections | grep hold lists packages that are explicitly held back from upgrades. apt-cache policy shows repository priority information but doesn't list upgradable packages directly.
23. An administrator is setting up a new Linux server and needs to prepare a new, unformatted disk /dev/sdc for use with LVM. What is the correct first step in the LVM setup process for this disk?
Administering Storage: Deploy storage
Medium
A.Run vgcreate my_vg /dev/sdc to create a volume group directly.
B.Run lvcreate -n my_lv -L 10G my_vg to create a logical volume on the disk.
C.Run mkfs.ext4 /dev/sdc to format the disk before adding it to LVM.
D.Run pvcreate /dev/sdc to initialize it as a physical volume.
Correct Answer: Run pvcreate /dev/sdc to initialize it as a physical volume.
Explanation:
The first step in using a block device with LVM is to initialize it as a Physical Volume (PV) using the pvcreate command. Only after a device is a PV can it be added to a Volume Group (VG) with vgcreate or vgextend. Creating a filesystem with mkfs or a logical volume with lvcreate are later steps in the process and cannot be done on a raw, unprepared block device.
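The full bottom-up LVM stack, in order (device and names are illustrative):

```shell
sudo pvcreate /dev/sdc               # 1. initialize the disk as a Physical Volume
sudo vgcreate my_vg /dev/sdc         # 2. create a Volume Group from the PV
sudo lvcreate -n my_lv -L 10G my_vg  # 3. carve a Logical Volume out of the VG
sudo mkfs.ext4 /dev/my_vg/my_lv      # 4. create a filesystem on the LV
```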
24. An administrator is compiling software from source on a multi-core server. The ./configure script has been run successfully. To significantly speed up the compilation process, which command should be used to utilize 8 CPU cores?
Managing Software: Compile from source code
Medium
A.make -j 8
B.make && make install -j 8
C.make --threads=8
D.compile --parallel=8
Correct Answer: make -j 8
Explanation:
The make utility uses the -j or --jobs flag to specify the number of commands to run simultaneously, which parallelizes the compilation process across multiple CPU cores. make -j 8 will run up to 8 jobs in parallel. The other options use incorrect syntax or commands for the standard make utility.
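A minimal, runnable demonstration using a throwaway Makefile (the /tmp path and targets are invented for the demo):

```shell
mkdir -p /tmp/make-demo
printf 'all: a b\na:\n\ttouch a\nb:\n\ttouch b\n' > /tmp/make-demo/Makefile
make -C /tmp/make-demo -j 2   # the two independent targets can build in parallel
# On a real build, match the job count to the machine:
# make -j"$(nproc)"
```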
25. A user reports that a filesystem mounted at /data is full. df -h shows the partition is at 100% usage. However, du -sh /data reports a total size significantly less than the partition's capacity. What is the most common cause for this discrepancy?
Administering Storage: Troubleshoot storage
Medium
A.A running process has an open file descriptor to a large, deleted file.
B.The du command does not have permission to read all subdirectories.
C.The filesystem is thinly provisioned and has over-allocated space.
D.The filesystem has run out of available inodes.
Correct Answer: A running process has an open file descriptor to a large, deleted file.
Explanation:
This is a classic issue. When a file is deleted, its name is removed from the directory structure, so du (which sums file sizes) no longer sees it. However, if a process still has the file open, the kernel will not release the blocks on the disk. df reports block usage from the filesystem's perspective and correctly shows the space as still occupied. The space is only truly freed when the process closes the file handle (or terminates).
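A typical way to confirm and resolve this, assuming lsof is installed (/data and the service name are illustrative):

```shell
# List open files with a link count of 0 (deleted but still held open)
sudo lsof +L1 /data
# Restarting the owning process releases the blocks, e.g.:
# sudo systemctl restart some-logging-service
```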
26. What is the primary security benefit of running an application inside a container (like Docker) or using a sandboxing tool like Firejail?
Managing Software: Run software in a sandbox
Medium
A.It isolates the application's processes and filesystem access from the host system.
B.It encrypts the application's binary code to prevent reverse engineering.
C.It automatically resolves all software dependencies for the application.
D.It guarantees the application will run faster due to kernel optimizations.
Correct Answer: It isolates the application's processes and filesystem access from the host system.
Explanation:
The main goal of sandboxing is isolation. It uses kernel features like namespaces and cgroups to create a restricted environment for an application. If the sandboxed application is compromised, the attacker's access is limited to the sandbox, preventing them from easily affecting the host operating system or other applications. While containers help with dependencies, their primary security benefit is isolation.
27. In the context of storage technologies, what is the key difference between Network Attached Storage (NAS) and a Storage Area Network (SAN)?
Administering Storage: Understand storage
Medium
A.NAS provides block-level access to clients, while SAN provides file-level access.
B.NAS presents storage as a file-based share (e.g., NFS, SMB), while SAN presents storage as block-level devices.
C.NAS uses fiber optic cables exclusively, while SAN uses standard Ethernet networks.
D.NAS is used for backups only, while SAN is used for primary application storage.
Correct Answer: NAS presents storage as a file-based share (e.g., NFS, SMB), while SAN presents storage as block-level devices.
Explanation:
The fundamental difference lies in the access protocol and how the storage is presented to the client. A NAS device serves files over the network (file-level access). A client operating system sees it as a network share. A SAN provides block-level access (like iSCSI or Fibre Channel), which makes the remote storage appear as a local disk to the client operating system, which can then be partitioned and formatted with a filesystem.
28. A developer asks you to install a specific version of a Node.js library for their project, but they want it isolated from the system's global packages. Which of the following approaches best satisfies this requirement?
Managing Software: Acquire software
Medium
A.Use yum install nodejs-library to install it from the system's repository.
B.Compile the library from source and install it into /usr/local/bin.
C.Use Node Version Manager (nvm) or a local node_modules directory within the project.
D.Download a binary RPM and install it using rpm -ivh --force.
Correct Answer: Use Node Version Manager (nvm) or a local node_modules directory within the project.
Explanation:
Language-specific package managers and version managers (like nvm for Node.js, pyenv for Python, rbenv for Ruby) are designed specifically for this purpose. They allow developers to manage project-specific dependencies in an isolated environment (node_modules folder) without affecting system-wide installations or other projects. The other methods all involve system-wide installation, which is what needs to be avoided.
29. An administrator tries to install a local RPM file with dnf install myapp.rpm, but the installation fails due to a missing dependency, libexample.so.2. This dependency is available in a newly added, but disabled, repository called extra-tools. Which command will successfully install the package and its dependency?
Managing Software: Manage RPM software packages and repositories
Medium
Correct Answer: A) dnf --enablerepo=extra-tools install myapp.rpm
Explanation:
The dnf command can install local RPM files while resolving their dependencies from configured repositories. The --enablerepo flag allows you to temporarily enable a disabled repository for a single transaction. This is the most direct way to solve the problem. Option B is incorrect as rpm has no --resolve-deps flag. Option C would fail on the first command. Option D uses rpm for the final install, which would not resolve dependencies from the newly enabled repo.
30. An administrator needs to create a 2GB file that will be used as a loop device for a virtual disk. The file should be pre-allocated and filled with zeros. Which command is most appropriate for this task?
Administering Storage: Manage other storage options
Medium
Correct Answer: A) dd if=/dev/zero of=/path/to/disk.img bs=1M count=2048
Explanation:
The dd command is the classic tool for this task. if=/dev/zero provides a stream of null bytes, and bs=1M count=2048 specifies writing 2048 blocks of 1 Megabyte each, resulting in a 2GB file filled with zeros. While fallocate and truncate can create a file of a specific size much faster, they create a sparse file, which is not pre-allocated with zeros. mkfs is used to create a filesystem inside a file or on a device, not to create the file itself.
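A scaled-down, runnable version of the same idea (4 MiB instead of 2 GB so it finishes instantly; the path is illustrative):

```shell
dd if=/dev/zero of=/tmp/disk.img bs=1M count=4  # four 1 MiB blocks of zeros, pre-allocated
ls -lh /tmp/disk.img
# fallocate -l 4M /tmp/disk.img would be faster but may produce a sparse file
```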
31. A critical security update for the openssl package has been released. An administrator on an Ubuntu server wants to install only this specific update and its required dependencies, without upgrading any other packages on the system. Which command should be used?
Managing Software: Manage Debian-based software packages and repositories
Medium
A.apt-get install openssl
B.dpkg -i openssl_latest.deb
C.apt-get upgrade openssl
D.apt-get install --only-upgrade openssl
Correct Answer: apt-get install openssl
Explanation:
On Debian-based systems, running apt-get install <package_name> for a package that is already installed will cause apt to check the repositories for a newer version. If one is found, it will upgrade that specific package and any dependencies it requires. apt-get upgrade would upgrade all upgradable packages. The --only-upgrade flag is a valid flag for apt-get install, but simply running apt-get install openssl achieves the same desired outcome and is more common practice. Using dpkg directly would not resolve any new dependencies the updated package might have.
32. Which of the following best describes the concept of a 'dependency hell' in software management?
Managing Software: Understand software management
Medium
A.A situation where multiple applications require different and conflicting versions of the same shared library.
B.The process of manually compiling every single dependency a program needs from source code.
C.When a package manager's repository server is offline and cannot be reached.
D.A software package that has been digitally signed by an untrusted source.
Correct Answer: A situation where multiple applications require different and conflicting versions of the same shared library.
Explanation:
'Dependency hell' is the classic problem where installing or updating one piece of software breaks another. This typically occurs because Application A needs libfoo.so.1, while Application B needs libfoo.so.2, and the system can only have one version easily accessible, or installing one overwrites the other. Modern package managers and containerization are designed to mitigate this problem.
33. An administrator has a Volume Group named vg_data with 50GB of free space. They need to create a new 20GB Logical Volume named lv_apps formatted with the XFS filesystem and mount it persistently at /apps. Which sequence of commands is correct?
Administering Storage: Deploy storage
Medium
A.lvcreate -n lv_apps -L 20G vg_data, then mkfs.xfs /dev/vg_data/lv_apps, then add to /etc/fstab and mount.
B.lvcreate -n lv_apps -l 100%FREE vg_data, then mkfs.xfs /dev/vg_data/lv_apps, then add to /etc/fstab and mount.
C.mkfs.xfs /dev/vg_data -L 20G, then lvcreate -n lv_apps /dev/vg_data, then add to /etc/fstab and mount.
D.lvextend -L +20G /dev/vg_data/lv_apps, then mkfs.xfs /dev/vg_data/lv_apps, then add to /etc/fstab and mount.
Correct Answer: lvcreate -n lv_apps -L 20G vg_data, then mkfs.xfs /dev/vg_data/lv_apps, then add to /etc/fstab and mount.
Explanation:
The correct process is to first create the Logical Volume (LV) from the Volume Group (VG) using lvcreate. Then, a filesystem is created on the newly created LV device (/dev/vg_name/lv_name). Finally, an entry is added to /etc/fstab for persistent mounting, and the filesystem is mounted. The other options perform these steps in the wrong order or use incorrect commands (lvextend is for resizing, not creating).
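The same sequence as commands, mirroring the steps in the correct answer:

```shell
sudo lvcreate -n lv_apps -L 20G vg_data   # 1. create the LV from the VG's free space
sudo mkfs.xfs /dev/vg_data/lv_apps        # 2. write an XFS filesystem onto it
sudo mkdir -p /apps
echo '/dev/vg_data/lv_apps /apps xfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /apps                          # 3. mount via the new fstab entry
```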
34. After successfully running ./configure and make, an administrator runs make install. By default, where does this command typically place the compiled binaries and associated files?
Managing Software: Compile from source code
Medium
A.Within the user's home directory (e.g., ~/bin).
B.Directly into /usr/bin and /usr/lib, overwriting system packages.
C.In subdirectories of /usr/local (e.g., /usr/local/bin, /usr/local/lib).
D.In subdirectories of /opt.
Correct Answer: In subdirectories of /usr/local (e.g., /usr/local/bin, /usr/local/lib).
Explanation:
The standard convention for software compiled manually by a local administrator is to install it into the /usr/local hierarchy. This isolates it from the software managed by the system's package manager (which typically uses /usr/bin, /usr/lib, etc.), preventing conflicts and making it easier to manage or remove manually installed software. While this default can be changed with a ./configure --prefix=/path option, /usr/local is the standard default.
35. A Linux server's root filesystem is nearly full. An administrator needs to find the largest files and directories within /var/log to identify what can be cleaned up. Which of the following commands is most effective for this specific task?
Administering Storage: Troubleshoot storage
Medium
A.find /var/log -type f -size +100M
B.ls -lSh /var/log
C.df -ih /var/log
D.du -ah /var/log | sort -rh | head -n 20
Correct Answer: du -ah /var/log | sort -rh | head -n 20
Explanation:
This command pipeline is a powerful way to find large disk space consumers. du -ah calculates disk usage for all files and directories in a human-readable format. This output is then piped to sort -rh, which sorts the lines in reverse order (largest first, -r) while correctly interpreting the human-readable suffixes like M and G (-h). Finally, head -n 20 displays the top 20 largest items. ls -lSh only shows file sizes in the current directory, not subdirectories. find is good but requires you to guess a size (+100M). df shows overall filesystem usage, not individual file/directory sizes.
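A runnable illustration of the pipeline against a throwaway directory (the paths and sizes are invented for the demo):

```shell
mkdir -p /tmp/du-demo
dd if=/dev/zero of=/tmp/du-demo/big.log bs=1K count=300 2>/dev/null
dd if=/dev/zero of=/tmp/du-demo/small.log bs=1K count=10 2>/dev/null
du -ah /tmp/du-demo | sort -rh | head -n 20   # largest entries first
```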
36. When using a tool like AppArmor or SELinux, what is the primary mechanism used to confine an application and restrict its capabilities?
Managing Software: Run software in a sandbox
Medium
A.Attaching a security policy or profile to the application's executable that defines allowed actions (e.g., file access, network ports).
B.Running the application as a special, unprivileged user with a restricted home directory.
C.Intercepting all system calls made by the application and requiring user approval for each one.
D.Encapsulating the application within a lightweight virtual machine with its own kernel.
Correct Answer: Attaching a security policy or profile to the application's executable that defines allowed actions (e.g., file access, network ports).
Explanation:
AppArmor and SELinux are Mandatory Access Control (MAC) systems. They work by enforcing a detailed security policy that is independent of standard Linux user/group permissions. A profile (AppArmor) or context (SELinux) is associated with an application, and the kernel's Linux Security Module (LSM) framework enforces the rules in that policy, such as what files it can read/write or what network capabilities it has. This is a more granular and powerful control than just running as an unprivileged user.
37. Which of the following RAID levels provides redundancy through disk mirroring but does not offer any performance improvement for write operations?
Administering Storage: Understand storage
Medium
A.RAID 0
B.RAID 5
C.RAID 1
D.RAID 10
Correct Answer: RAID 1
Explanation:
RAID 1 (Mirroring) writes identical data to two or more disks simultaneously. This provides excellent redundancy, as the array can survive the failure of any single disk (in a 2-disk array). However, since every piece of data must be written to every disk in the mirror, the write performance is typically limited to the speed of the slowest disk in the set and offers no improvement over a single disk. Read performance, however, can be improved as data can be read from any disk in the mirror.
38. A system administrator needs to install software that is not available in their distribution's official repositories. The vendor provides the software as an AppImage file. What is the correct procedure to run this software?
Managing Software: Acquire software
Medium
A.Mount the AppImage file as a loop device and copy the contents to /opt.
B.Extract the contents of the AppImage file using tar and run the binary inside.
C.Use apt install ./software.AppImage to register it with the package manager.
D.Make the AppImage file executable (chmod +x) and then run it directly (./software.AppImage).
Correct Answer: Make the AppImage file executable (chmod +x) and then run it directly (./software.AppImage).
Explanation:
AppImage is a format for distributing portable software on Linux that does not require superuser permissions to install. The entire application and its dependencies are contained within a single file. The standard procedure is to download the file, make it executable using chmod, and then simply execute it. It does not integrate with the system package manager like apt or yum.
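The whole procedure is just two commands (the filename is a placeholder):

```shell
chmod +x software.AppImage   # mark the downloaded file as executable
./software.AppImage          # run it in place; no installation or root needed
```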
39. What is the primary use case for creating a swap partition or a swap file on a Linux system?
Administering Storage: Manage other storage options
Medium
A.To create a temporary filesystem that resides entirely in RAM for high-speed file operations.
B.To act as a cache for the package manager to speed up software installation.
C.To store a backup copy of the master boot record (MBR) for system recovery.
D.To provide virtual memory, allowing the system to move inactive memory pages from RAM to disk when physical memory is low.
Correct Answer: To provide virtual memory, allowing the system to move inactive memory pages from RAM to disk when physical memory is low.
Explanation:
Swap space (either a partition or a file) is used by the kernel as virtual memory. When the system's physical RAM is full, the kernel's memory manager can 'swap out' memory pages that are not currently in active use to the swap space on the disk. This frees up RAM for active processes. It also enables hibernation, where the entire contents of RAM are written to swap before shutdown.
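Creating and enabling a swap file, sketched with an illustrative size (some filesystems require dd rather than fallocate to allocate the backing file):

```shell
sudo fallocate -l 1G /swapfile   # pre-allocate the backing file
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile            # write the swap signature
sudo swapon /swapfile            # activate it
swapon --show                    # verify the new swap space
```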
40. What is the role of a package repository in a Linux distribution's software management system?
Managing Software: Understand software management
Medium
A.The source code archive for a single application before it is compiled.
B.A centralized server that stores and manages a collection of software packages, metadata, and cryptographic keys for a distribution.
C.A version control system like Git used to manage changes to system configuration files.
D.A local database file on a client system that tracks all currently installed software and their versions.
Correct Answer: A centralized server that stores and manages a collection of software packages, metadata, and cryptographic keys for a distribution.
Explanation:
A repository (or 'repo') is a remote storage location from which package managers like apt and dnf retrieve and install software. It contains the actual package files (e.g., .deb, .rpm), along with metadata files that list available packages, their versions, and their dependencies. It also typically holds GPG keys to verify the authenticity and integrity of the packages.
41. A system administrator runs dnf update on a critical RHEL 8 server, but the transaction fails due to a file conflict between an updated package foo-2.0 and a manually installed file from another package bar-1.0. The error is /usr/bin/foobar conflicts with file from package bar-1.0-1.x86_64. The administrator cannot remove bar-1.0 as it's a legacy dependency. Which dnf command is the most appropriate and safest way to proceed with the update while resolving the conflict?
Managing Software: Manage RPM software packages and repositories
Hard
A.dnf update --skip-broken
B.dnf update --allowerasing
C.rpm -e --nodeps bar-1.0 followed by dnf update
D.dnf update --setopt=tsflags=noscripts
Correct Answer: dnf update --allowerasing
Explanation:
dnf update --allowerasing is the correct choice because it allows DNF to resolve the conflict by removing the package that owns the conflicting file (bar-1.0) to complete the transaction for foo-2.0. This is superior to --skip-broken, which would simply not update foo and its dependents. Forcing removal with rpm -e --nodeps is risky and can leave the system in an inconsistent state. --setopt=tsflags=noscripts would not resolve a file conflict.
42. You are managing a Debian server with repositories for stable, testing, and backports. You need to install the latest version of nginx from testing (1.20) but keep all other packages from stable (which has nginx 1.18). However, you also want to ensure that any security updates for nginx from stable-security (e.g., 1.18.0-1+deb11u1) are given higher priority than the version from testing if they are released. Which of the following /etc/apt/preferences.d/nginx-pin configurations correctly achieves this complex pinning strategy?
Managing Software: Manage Debian-based software packages and repositories
Hard
Explanation:
This configuration correctly sets the priorities. The default priority for the currently installed version of a package is 100, and the default for versions from an enabled repository is 500. A priority between 500 and 990 makes a version preferred over other repositories but never forces a downgrade; only a priority of 1000 or higher will install a version even if it constitutes a downgrade. By setting the testing version to 900, apt will prefer it over stable (default 500) for installation. Crucially, by setting the Debian-Security label to 1001 for nginx, any package from a security repository will be prioritized over all other versions, ensuring security patches are always applied first.
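A sketch of an /etc/apt/preferences.d/nginx-pin file implementing the strategy described above (release labels are the conventional Debian ones; verify them on your system with apt-cache policy):

```
Package: nginx
Pin: release a=testing
Pin-Priority: 900

Package: nginx
Pin: release l=Debian-Security
Pin-Priority: 1001
```

In practice nginx pulls in related binary packages (nginx-common and friends), so a glob such as Package: nginx* may be needed to pin the whole set consistently.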
43. You are compiling a complex piece of software on a hardened system where standard library paths are not used. The configure script fails, unable to find the libcrypto library, which is located in /opt/custom/ssl/lib64, with its headers in /opt/custom/ssl/include. You have already set export LD_LIBRARY_PATH=/opt/custom/ssl/lib64. Why is the configure script still failing, and what is the most robust solution?
Managing Software: Compile from source code
Hard
A.The LD_LIBRARY_PATH is only used at runtime, not compile time. The solution is to use configure --with-ssl-dir=/opt/custom/ssl.
B.The LD_LIBRARY_PATH variable is incorrect; it should be LD_PRELOAD. The solution is to use export LD_PRELOAD=/opt/custom/ssl/lib64/libcrypto.so.
C.LD_LIBRARY_PATH is for runtime linking. The configure script needs compile-time flags. The solution is to set environment variables: LDFLAGS="-L/opt/custom/ssl/lib64" CPPFLAGS="-I/opt/custom/ssl/include" ./configure.
D.LD_LIBRARY_PATH is correct but the linker cache is stale. The solution is to run ldconfig before ./configure.
Correct Answer: LD_LIBRARY_PATH is for runtime linking. The configure script needs compile-time flags. The solution is to set environment variables: LDFLAGS="-L/opt/custom/ssl/lib64" CPPFLAGS="-I/opt/custom/ssl/include" ./configure.
Explanation:
LD_LIBRARY_PATH is a mechanism to tell the dynamic linker where to find shared libraries at runtime. The ./configure script and the subsequent compilation process need to know where to find libraries and header files at compile time. LDFLAGS is used to pass options to the linker (like -L to add a library search path), and CPPFLAGS is used to pass options to the C preprocessor (like -I to add an include file search path). This is the standard, most portable way to solve the issue when a specific --with-... flag is not available or doesn't work.
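As a sketch, the compile-time flags from the correct answer can be exported before running configure (paths are the ones from the question; the rpath line is an optional extra beyond the quoted answer):

```shell
# Tell the linker and preprocessor about the non-standard prefix.
export LDFLAGS="-L/opt/custom/ssl/lib64"     # link-time library search path
export CPPFLAGS="-I/opt/custom/ssl/include"  # header search path
echo "configure will see: $LDFLAGS $CPPFLAGS"
# ./configure && make && sudo make install
# Optionally embed the runtime search path as well, removing the need for
# LD_LIBRARY_PATH when the binary is executed:
# export LDFLAGS="$LDFLAGS -Wl,-rpath,/opt/custom/ssl/lib64"
```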
44. A database server with an XFS filesystem on /dev/sdb1 experiences a power failure. Upon reboot, the system hangs during the mount process for /dev/sdb1. dmesg shows messages indicating a corrupt log: XFS (sdb1): Log recovery failed. You cannot afford to lose data by reformatting. Which command sequence represents the safest and most appropriate first attempt at recovery?
A.xfs_repair -n /dev/sdb1; mount /dev/sdb1
B.mount /dev/sdb1 (simply retry the mount)
C.xfs_repair -L /dev/sdb1
D.xfs_logprint /dev/sdb1; xfs_admin -z /dev/sdb1; mount /dev/sdb1
Correct Answer: xfs_repair -L /dev/sdb1
Explanation:
When an XFS log recovery fails, the primary recovery mechanism is broken. The -L option for xfs_repair tells it to zero out (reset) the log. This is a last-resort option that may lose metadata for transactions that were in flight during the crash, but it is often the only way to make the filesystem mountable again without a full mkfs. The other options are incorrect: xfs_repair -n only checks and modifies nothing; mount will likely fail again without addressing the log issue; and xfs_admin -z is not a standard command for this purpose, since zeroing the log is exactly what xfs_repair -L does.
45. You are sandboxing a legacy network service using systemd's sandboxing features. The service needs to bind to port 2049 on the loopback interface (127.0.0.1) and have read-only access to /etc and read-write access to its state directory in /var/lib/legacy-svc. All other filesystem access, network capabilities (beyond loopback), and privileges must be denied. Which combination of directives in the [Service] section of the systemd unit file achieves the most secure configuration meeting these requirements?
Correct Answer: ProtectSystem=strict, PrivateNetwork=yes, ReadWritePaths=/var/lib/legacy-svc, BindReadOnlyPaths=/etc
Explanation:
ProtectSystem=strict mounts the entire file system hierarchy (apart from /dev, /proc, and /sys) read-only for the service, which covers /etc. PrivateNetwork=yes sets up a new network namespace containing only a loopback device (lo), satisfying the network requirement. ReadWritePaths=/var/lib/legacy-svc creates a writable bind mount for the state directory. BindReadOnlyPaths=/etc is redundant with ProtectSystem=strict but reinforces the read-only requirement for /etc. This combination is the most direct and secure way to achieve the stated goals with modern systemd features. The other options are either less secure or fail to meet all requirements (RootDirectory is too restrictive; InaccessiblePaths=/ would block needed access; NetworkNamespacePath is for joining existing namespaces).
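A minimal unit-file fragment combining these directives might look like the following (the ExecStart path is a placeholder; systemd treats trailing text as part of the value, so comments sit on their own lines):

```ini
[Service]
# Placeholder for the real daemon binary.
ExecStart=/usr/local/sbin/legacy-svc
# Whole hierarchy read-only for the service (covers /etc).
ProtectSystem=strict
# Writable bind mount for the state directory.
ReadWritePaths=/var/lib/legacy-svc
# Redundant with ProtectSystem=strict, but makes the /etc requirement explicit.
BindReadOnlyPaths=/etc
# New network namespace with only lo; the service binds 127.0.0.1:2049 inside it.
PrivateNetwork=yes
```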
46. An administrator needs to create a 100GiB thin-provisioned LVM logical volume named lv_thin within the volume group vg_data, backed by a 20GiB thin pool named thin_pool. They then need to format it with ext4. What is the correct sequence of commands to accomplish this?
Correct Answer: lvcreate -L 20G -T vg_data/thin_pool; lvcreate -V 100G -T vg_data/thin_pool -n lv_thin; mkfs.ext4 /dev/vg_data/lv_thin
Explanation:
This sequence is correct and demonstrates a nuanced understanding of LVM thin provisioning.
lvcreate -L 20G -T vg_data/thin_pool: This command creates the thin pool. The -T or --thinpool flag is key. The size (-L 20G) specifies the actual physical space allocated for the pool.
lvcreate -V 100G -T vg_data/thin_pool -n lv_thin: This creates the thin logical volume. -V or --virtualsize specifies the apparent size of the volume (100GiB), which can be larger than the pool itself. -T specifies which pool it belongs to, and -n gives it a name.
mkfs.ext4 /dev/vg_data/lv_thin: Formats the newly created thin volume. The other options use incorrect syntax or misunderstand the relationship between the thin pool and the thin volume.
47. You are managing a ZFS pool zpool1 with a dataset zpool1/data containing critical information. You need to create a point-in-time, read-only copy for backup verification and simultaneously create a writable clone of that copy for development testing, without duplicating the data blocks on disk initially. Which set of commands correctly and most efficiently accomplishes this?
Administering Storage: Manage other storage options
Hard
Correct Answer: zfs snapshot zpool1/data@backup; zfs clone zpool1/data@backup zpool1/dev_clone
Explanation:
This is the canonical ZFS workflow for this task. zfs snapshot zpool1/data@backup creates an instantaneous, read-only, space-efficient snapshot of the dataset. The second command, zfs clone zpool1/data@backup zpool1/dev_clone, then creates a new writable dataset (zpool1/dev_clone) whose initial contents are identical to the snapshot. This clone is also space-efficient, as it only stores new or modified blocks (Copy-on-Write). send/receive is for replication, not local cloning. Cloning a dataset directly is not possible; you must clone a snapshot. promote is used for swapping a clone with its origin dataset.
48. A user downloads a binary app.tar.gz, a signature file app.tar.gz.asc, and the developer's public key dev.key. The user has never interacted with this developer before. What is the correct and most secure sequence of gpg commands to verify the integrity and authenticity of the downloaded binary?
Correct Answer: gpg --import dev.key; gpg --verify app.tar.gz.asc app.tar.gz
Explanation:
This sequence is correct for a basic verification. gpg --import dev.key adds the developer's public key to the user's keyring. gpg --verify app.tar.gz.asc app.tar.gz then uses the public key to check whether the signature in app.tar.gz.asc is valid for the file app.tar.gz. GPG will output a 'Good signature' message but also a warning like 'This key is not certified with a trusted signature!'. This is expected and correct behavior, as the user has not established a trust path to the key. The option with --edit-key and setting ultimate trust (5) is insecure without out-of-band verification of the key's fingerprint. The other options use incorrect commands or logic.
49. A system is being designed for a high-performance video editing workload, which involves sequential reads and writes of very large files (50-200 GB each) and requires protection against a single disk failure. The system has four 8TB NVMe drives available. To maximize sequential throughput and provide redundancy, which of the following storage configurations is optimal?
Administering Storage: Understand storage
Hard
A.A ZFS pool configured as a single RAID-Z1 vdev.
B.An XFS filesystem on an LVM-managed RAID 5 array.
C.An XFS filesystem on a Linux software RAID 0 array, with nightly backups.
D.A Btrfs filesystem on a RAID 10 array.
Correct Answer: A ZFS pool configured as a single RAID-Z1 vdev.
Explanation:
For large sequential workloads, striped configurations excel. RAID-Z1 is ZFS's equivalent of RAID 5, offering striping with single-parity protection. ZFS is particularly well-suited for this task because its integrated nature (volume manager + filesystem) avoids the abstraction penalties of LVM+MD+FS. Its Copy-on-Write architecture and advanced caching (ARC) are highly beneficial for large file I/O. RAID 10 would sacrifice half the capacity for mirroring and doesn't offer as much sequential read performance as a wide stripe. LVM RAID 5 suffers from the 'write hole' and generally has poorer performance than ZFS's RAID-Z. RAID 0 provides no redundancy, making it unsuitable for a critical workload.
50. You are creating a custom RPM package. You need a script to run before the transaction begins, to check for a specific kernel module. If the module is not loaded, the entire transaction (including this package and any others being installed with it) should fail. Where in the RPM .spec file should this script be placed?
Managing Software: Manage RPM software packages and repositories
Hard
A.%verifyscript
B.%pre
C.%triggerin
D.%pretrans
Correct Answer: %pretrans
Explanation:
The %pretrans scriptlet is executed at the very beginning of a transaction, before any packages are installed or erased. A non-zero exit code from this scriptlet will cause the entire DNF/YUM transaction to abort. This is the correct place for a global pre-flight check that must pass for the transaction to even begin. %pre runs just before the specific package is installed but after the transaction has been validated and started. %verifyscript runs during rpm -V. %triggerin runs when another package is installed.
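A sketch of such a scriptlet in a .spec file (my_module is a placeholder for the real module name; note that packaging guidelines often require %pretrans to be written in embedded Lua, -p <lua>, so it can run during image-based installs where no shell exists yet):

```
%pretrans
# Runs before the whole transaction; a non-zero exit aborts everything.
if ! grep -qw my_module /proc/modules; then
    echo "required kernel module my_module is not loaded" >&2
    exit 1
fi
```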
51. A server's root ext4 filesystem (/dev/sda2) is full. After deleting 10GB of log files, df -h still shows 100% usage. lsof | grep deleted reveals that a running daemon, app-daemon, still has an open file descriptor to a large, now-deleted log file. The daemon cannot be restarted due to a critical ongoing process. What is the most effective command to reclaim the disk space without restarting the daemon?
Correct Answer: truncate -s 0 /proc/<pid>/fd/<fd_num>
Explanation:
When a file is deleted on a Unix-like system, the disk space is not freed until all processes that have the file open close their file descriptors. The truncate command can be used to shrink a file to a specified size. By targeting the file descriptor in the /proc filesystem (/proc/<pid>/fd/<fd_num>), we can command the kernel to truncate the underlying file data to zero bytes, which immediately frees the disk space, even while the process keeps the (now empty) file descriptor open. Restarting or killing the process would also work but is forbidden by the prompt. drop_caches affects memory caches, not disk allocation for deleted files. Sending SIGHUP might cause the daemon to re-read its config, but it's not guaranteed to close and reopen log files.
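The mechanism can be demonstrated in a standalone shell sketch, with the script's own shell ($$) standing in for the daemon's PID and fd 9 for the daemon's open log descriptor:

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/app.log" bs=1M count=8 status=none
exec 9<>"$tmp/app.log"               # keep a descriptor open, like app-daemon
rm "$tmp/app.log"                    # unlink the name; blocks stay allocated
before=$(stat -Lc %s "/proc/$$/fd/9")
truncate -s 0 "/proc/$$/fd/9"        # reclaim the space via the /proc fd link
after=$(stat -Lc %s "/proc/$$/fd/9")
echo "$before -> $after"             # 8388608 -> 0
exec 9>&-
rm -rf "$tmp"
```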
52. A system update via apt upgrade fails with a message about a package libfoo1 being held back. Running apt install libfoo1 reveals a complex dependency issue: libfoo1 requires libbar2 (>= 2.1), but an essential application critical-app depends specifically on libbar2 (= 2.0). Uninstalling critical-app is not an option. What is the most appropriate first step to diagnose the exact dependency chain causing the conflict?
Managing Software: Manage Debian-based software packages and repositories
Hard
A.aptitude
B.apt-cache policy libfoo1 libbar2
C.apt-get -f install
D.dpkg --get-selections | grep hold
Correct Answer: aptitude
Explanation:
While apt-cache policy is useful for seeing available versions, aptitude in its interactive mode is the most powerful tool for diagnosing and resolving complex dependency conflicts. When you try to install libfoo1 within aptitude, it will present one or more potential solutions to the conflict, clearly showing the dependency chains and trade-offs (e.g., "keep libfoo1 at current version", "downgrade package X to satisfy Y"). This analytical capability is far superior to the often-terse output of apt or apt-get for such intricate problems. dpkg --get-selections only shows the hold state, not the reason. apt-get -f install is for fixing broken dependencies, not resolving holds due to version conflicts.
53. You are using firejail to sandbox a graphical application that needs to access a user's ~/Pictures directory but nothing else in their home directory. It also needs access to the X11 server and D-Bus. Which firejail command provides the most restrictive sandbox that still allows the application to function correctly?
Explanation:
The --private=<dir> option is the most effective and concise way to achieve this. It uses the specified directory (~/Pictures in this case) as the sandbox's home directory, so the application sees that folder and nothing else from the user's real home. This automatically denies access to all other files in the real home directory (such as ~/.ssh and ~/.config) while still providing access to the required folder. The default firejail profile handles X11 and D-Bus access appropriately. --private alone creates a completely empty temporary home. --whitelist exposes selected paths from the real home directory and is less secure than starting from a private one. --blacklist is prone to errors, as it is easy to miss a directory.
54. You are setting up an iSCSI target on a server and an initiator on a client. The discovery via iscsiadm -m discovery -t sendtargets -p <target_ip> works perfectly. However, the login iscsiadm -m node -l fails with iscsiadm: initiator reported error (8 - connection timed out). A firewall check shows the port (3260) is open. The initiator and target are on the same subnet. What is the most likely cause of this specific failure mode?
Administering Storage: Deploy storage
Hard
A.The underlying storage LUN on the target has not been properly exported or is offline.
B.The initiator name in /etc/iscsi/initiatorname.iscsi on the client does not match any ACL on the iSCSI target.
C.The client and server have a mutual CHAP secret mismatch.
D.The target portal IP address is configured incorrectly in the target's configuration daemon.
Correct Answer: The initiator name in /etc/iscsi/initiatorname.iscsi on the client does not match any ACL on the iSCSI target.
Explanation:
This is a classic iSCSI setup issue. The discovery process works because a SendTargets discovery request typically does not require the same authorization as a login. The login process, however, is where the target authorizes the initiator. A common security practice is to configure an Access Control List (ACL) on the target that specifies which initiator IQNs (iSCSI Qualified Names) are allowed to log in. If the client's IQN (from /etc/iscsi/initiatorname.iscsi) is not in the target's ACL, the target will simply refuse or drop the connection, often resulting in a timeout on the client side. A CHAP mismatch would produce a specific authentication failure message (code 24), not a timeout. An incorrect portal IP would cause discovery to fail. An offline LUN would typically allow the login to succeed but fail I/O later.
55. After successfully compiling and installing an application from source using the standard ./configure && make && sudo make install procedure, you discover that the system's package manager (e.g., dnf or apt) is now unaware of the installed files. This could lead to conflicts later. What tool could have been used in place of sudo make install to integrate the build into the system's package manager, and what is its primary mechanism?
Managing Software: Compile from source code
Hard
A.alien, which converts an existing package from one format to another (e.g., .rpm to .deb).
B.stow, which uses symbolic links to manage files installed in a separate prefix.
C.checkinstall, which runs the make install process but intercepts the file installation to build a native package (.deb or .rpm).
D.make package, which is a standard Makefile target that creates a native package if the developer included it.
Correct Answer: checkinstall, which runs the make install process but intercepts the file installation to build a native package (.deb or .rpm).
Explanation:
checkinstall is specifically designed for this scenario. It monitors the make install step to see what files are being placed where. Instead of letting them be copied directly into the filesystem, it packages them into a proper .deb, .rpm, or Slackware package. This package can then be installed with the system's package manager (dpkg -i or rpm -i), making the system aware of the software and allowing for clean uninstallation. stow is an alternative but works differently (symlinks from /usr/local/stow/). make package is not a universal standard. alien is for converting existing packages, not creating them from source.
56. When configuring an ext4 filesystem for a server that hosts millions of very small (<4KB) files, which mkfs.ext4 option would have the most significant positive impact on storage efficiency by reducing metadata overhead?
Correct Answer: -O inline_data
Explanation:
The inline_data filesystem feature allows ext4 to store the contents of very small files directly within the inode table itself, rather than allocating a separate data block. This dramatically reduces storage overhead for workloads with a vast number of tiny files because it saves the space of a data block (typically 4KB) and the pointer to it for each file that fits. Changing the bytes-per-inode ratio (-i) would allow for more files but wouldn't make the storage of each file more efficient. A smaller block size (-b 1024) helps but has performance trade-offs and doesn't eliminate the block allocation entirely like inline_data does. large_file is for supporting files >2TB and is irrelevant here.
57. You are managing a Btrfs filesystem and have created several snapshots of a subvolume. Over time, many files have been deleted from the active subvolume, but df shows that no space has been freed. What is the reason for this, and what is the proper way to reclaim the space?
Administering Storage: Manage other storage options
Hard
A.You need to run btrfs balance start /mountpoint to force the filesystem to re-evaluate free space.
B.The autodefrag mount option must be enabled to reclaim space from deleted files in a CoW filesystem.
C.The Btrfs cleaner kernel thread is stuck; a reboot is required to reclaim space.
D.The deleted data blocks are still referenced by the old snapshots. The space will be freed only after all snapshots referencing those blocks are deleted.
Correct Answer: The deleted data blocks are still referenced by the old snapshots. The space will be freed only after all snapshots referencing those blocks are deleted.
Explanation:
This is a fundamental concept of Copy-on-Write (CoW) filesystems with snapshots. When a file is "deleted" from the active subvolume, the filesystem only removes the link to it from the current filesystem tree. The actual data blocks on disk are not freed as long as at least one snapshot still references them. The space is only truly reclaimed when the last reference (from both the active filesystem and all snapshots) to a data block is removed. Therefore, to free the space, the administrator must delete the old snapshots that are holding onto the data.
58. What is the primary difference in how a statically linked binary and a dynamically linked binary handle external library dependencies, and what is a major security implication of this difference?
Managing Software: Understand software management
Hard
A.Statically linked binaries are placed in /usr/local/bin while dynamically linked binaries are in /usr/bin. This affects the system's PATH and can be exploited.
B.Dynamic linking uses the LD_PRELOAD environment variable to load libraries, which is a security risk. Statically linked binaries do not use this variable.
C.Static linking includes all library code in the final executable, making it larger but self-contained. A security flaw in a library requires recompiling the application. Dynamic linking references shared system libraries at runtime.
D.Static linking is faster because it resolves symbols at compile time. A security flaw in a static library cannot be patched. Dynamic linking is slower but more secure.
Correct Answer: Static linking includes all library code in the final executable, making it larger but self-contained. A security flaw in a library requires recompiling the application. Dynamic linking references shared system libraries at runtime.
Explanation:
This option correctly identifies the core difference and its security consequence. A statically linked binary is a monolithic file containing all necessary code from its libraries. If a vulnerability (like Heartbleed in OpenSSL) is found in a library, every single statically linked application that used it must be individually recompiled and redeployed. With dynamic linking, the binary only contains references to shared libraries (.so files) on the system. To patch the vulnerability, an administrator only needs to update the single shared library file (e.g., libssl.so), and all dynamically linked applications will automatically and immediately use the patched version upon their next launch.
59. At boot, a system with LVM on top of software RAID 1 drops to an initramfs emergency shell. The error message indicates that a volume group (vg_root) cannot be found. Running cat /proc/mdstat shows the RAID array (/dev/md0) is active and clean. Running pvscan shows no results. What is the most likely cause and the correct command to run from the emergency shell to resolve it?
Administering Storage: Troubleshoot storage
Hard
A.The LVM cache is out of sync. The solution is vgscan --mknodes followed by vgchange -ay.
B.The LVM physical volume metadata on the RAID array is corrupt. The solution is pvcreate --restorefile /etc/lvm/backup/vg_root /dev/md0.
C.The initramfs is missing the LVM tools. The solution is to reboot with a rescue disk and rebuild the initramfs using dracut -f.
D.The RAID array was assembled after LVM tried to scan for devices. The solution is to manually trigger a scan with lvm vgscan and then lvm vgchange -ay.
Correct Answer: The LVM cache is out of sync. The solution is vgscan --mknodes followed by vgchange -ay.
Explanation:
In an initramfs environment, device discovery can be complex. Even if the RAID array is assembled, LVM might have already scanned for physical volumes (PVs) and found nothing, populating its cache with this negative result. pvscan uses this cache and thus also finds nothing. The command vgscan (or lvm vgscan) forces a re-scan of all block devices for LVM metadata, ignoring the potentially stale cache. The --mknodes option ensures that the device nodes in /dev are created if they are missing. Once the scan successfully identifies the PV on /dev/md0 and the vg_root volume group, vgchange -ay can activate the logical volumes within it, allowing the boot process to continue.
60. A company policy requires that all third-party software be run as a Flatpak application. A developer needs to use a proprietary GUI application that is only distributed as a .deb package. What is the most viable and self-contained method for the developer to use this application while adhering to the Flatpak-only policy?
Managing Software: Acquire software
Hard
A.Extract the .deb file using ar and tar, and manually place the binaries and libraries in /usr/local, then create a .desktop file.
B.Install the .deb package inside a Docker container and run the GUI application by forwarding the X11 socket to the container.
C.Create a custom Flatpak manifest (.json or .yaml file) that details how to fetch the .deb file, extract its contents, and package them into a Flatpak application using flatpak-builder.
D.Use the alien tool to convert the .deb package to an .rpm and install it with rpm -i.
Correct Answer: Create a custom Flatpak manifest (.json or .yaml file) that details how to fetch the .deb file, extract its contents, and package them into a Flatpak application using flatpak-builder.
Explanation:
This is the correct approach for integrating non-Flatpak software into a Flatpak ecosystem. The Flatpak manifest file is a blueprint that tells flatpak-builder what to do. It can be configured to download a specific .deb file as a source, run commands to extract it, and then place the resulting files in the correct locations within the Flatpak's sandbox filesystem (/app). This results in a fully self-contained Flatpak package that can be installed and managed like any other, perfectly adhering to the company policy. The other methods violate the 'Flatpak-only' rule or, in the case of Docker, introduce a different containerization technology that is less suited for desktop GUI applications.
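A skeletal flatpak-builder manifest along these lines (the app id, runtime version, URL, and checksum are placeholders, and the extraction commands depend on the layout of the particular .deb, which may ship data.tar.zst instead of data.tar.xz):

```json
{
  "app-id": "com.example.ProprietaryApp",
  "runtime": "org.freedesktop.Platform",
  "runtime-version": "23.08",
  "sdk": "org.freedesktop.Sdk",
  "command": "proprietary-app",
  "modules": [
    {
      "name": "proprietary-app",
      "buildsystem": "simple",
      "build-commands": [
        "ar x app.deb",
        "tar -xf data.tar.xz",
        "cp -r usr/* /app/"
      ],
      "sources": [
        {
          "type": "file",
          "url": "https://example.com/app.deb",
          "sha256": "<checksum>",
          "dest-filename": "app.deb"
        }
      ]
    }
  ]
}
```

Building with flatpak-builder against this manifest yields an installable Flatpak whose sandbox contains the extracted .deb payload under /app.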