1. What is the primary concept behind Infrastructure as Code (IaC)?
Using Infrastructure as Code: Understand infrastructure as code
Easy
A.Managing and provisioning computer data centers through machine-readable definition files.
B.Manually configuring servers one by one using a command-line interface.
C.A type of physical hardware used for network routing.
D.Writing application code in a specific infrastructure-focused language.
Correct Answer: Managing and provisioning computer data centers through machine-readable definition files.
Explanation:
Infrastructure as Code (IaC) is the practice of managing infrastructure (networks, virtual machines, etc.) in a descriptive model, using files similar to source code, rather than through manual configuration.
2. Which of the following is a major benefit of using IaC?
Using Infrastructure as Code: Understand infrastructure as code
Easy
A.Elimination of the need for version control
B.Increased consistency and reduced human error
C.Slower deployment speeds
D.The requirement for more physical hardware
Correct Answer: Increased consistency and reduced human error
Explanation:
By defining infrastructure in code, you create a single source of truth that can be applied repeatedly, ensuring environments are consistent and reducing the risk of mistakes from manual configuration.
3. In the context of IaC, what does "idempotency" mean?
Using Infrastructure as Code: Understand infrastructure as code
Easy
A.Running the same code multiple times results in the same system state.
B.The infrastructure configuration changes every time the code is run.
C.The code can only be executed a single time.
D.The code is written in an idempotent programming language.
Correct Answer: Running the same code multiple times results in the same system state.
Explanation:
Idempotency is a key principle of IaC. It ensures that applying a configuration brings the system to the desired state regardless of its starting state, and that no further changes are made if the system is already in that state.
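The principle can be sketched in plain shell. This is a minimal illustration under an assumed throwaway path (/tmp/idempotent-demo), not tied to any particular IaC tool: each step checks state before acting, so repeated runs converge on the same result.

```shell
#!/bin/sh
# Minimal sketch of idempotent provisioning steps (paths are hypothetical).
set -e
rm -rf /tmp/idempotent-demo   # reset so this demo is itself repeatable

provision() {
    # mkdir -p succeeds whether or not the directory already exists
    mkdir -p /tmp/idempotent-demo
    # append the setting only if it is not already present
    grep -qx 'max_clients=10' /tmp/idempotent-demo/app.conf 2>/dev/null \
        || echo 'max_clients=10' >> /tmp/idempotent-demo/app.conf
}

provision
provision   # the second run finds the desired state and changes nothing
```

Running the script any number of times leaves exactly one max_clients line in place, which is the behaviour the question describes.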
4. Which of the following tools is a popular choice for implementing Infrastructure as Code?
Using Infrastructure as Code: Understand infrastructure as code
Easy
A.Microsoft Excel
B.Terraform
C.GIMP
D.Wireshark
Correct Answer: Terraform
Explanation:
Terraform is a widely-used open-source IaC tool that allows users to define and provision infrastructure across various cloud providers and on-premises solutions.
5. Which Git command is used to create a copy of a remote repository on your local machine?
Using Infrastructure as Code: Manage version control with Git
Easy
A.git init
B.git commit
C.git clone
D.git push
Correct Answer: git clone
Explanation:
The git clone command is used to create a local working copy of an existing remote repository.
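Because git clone also accepts local paths, the behaviour can be sketched without any network access (repository names below are made up):

```shell
#!/bin/sh
set -e
rm -rf /tmp/clone-demo && mkdir -p /tmp/clone-demo && cd /tmp/clone-demo

# Build a small stand-in for the "remote" repository
git init -q origin-repo
git -C origin-repo -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial commit"

# git clone creates a full local working copy, history included
git clone -q origin-repo working-copy
git -C working-copy log --oneline
```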
6. What is the purpose of the git commit command?
Using Infrastructure as Code: Manage version control with Git
Easy
A.To upload changes to the remote repository.
B.To discard all local changes.
C.To record changes to the local repository.
D.To view the status of your changes.
Correct Answer: To record changes to the local repository.
Explanation:
git commit takes a snapshot of the staged changes and saves it to the project's history in the local repository. It does not send the changes to a remote server.
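A sketch of the stage-then-commit cycle in a throwaway repository (paths and messages are illustrative); note that nothing here contacts a remote server:

```shell
#!/bin/sh
set -e
rm -rf /tmp/commit-demo && mkdir -p /tmp/commit-demo && cd /tmp/commit-demo
git init -q

echo "hello" > notes.txt
git add notes.txt                      # stage the change
git -c user.email=demo@example.com -c user.name=Demo \
    commit -q -m "Add notes file"      # record it in the local history

git log --oneline                      # the snapshot now exists locally
```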
7. Which command initializes a new, empty Git repository in the current directory?
Using Infrastructure as Code: Manage version control with Git
Easy
A.git init
B.git start
C.git new
D.git create
Correct Answer: git init
Explanation:
The git init command creates a new Git repository. It can be used to convert an existing, unversioned project to a Git repository or initialize a new, empty one.
8. What is the function of a .gitignore file?
Using Infrastructure as Code: Manage version control with Git
Easy
A.To list the history of all commits.
B.To configure global Git settings.
C.To specify intentionally untracked files that Git should ignore.
D.To store the user's Git credentials.
Correct Answer: To specify intentionally untracked files that Git should ignore.
Explanation:
The .gitignore file tells Git which files or directories to ignore in a project, which is useful for avoiding committing temporary files, build artifacts, or sensitive information.
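A short sketch (file and directory names are illustrative) showing .gitignore in action; git check-ignore reports which rule matches a path:

```shell
#!/bin/sh
set -e
rm -rf /tmp/ignore-demo && mkdir -p /tmp/ignore-demo && cd /tmp/ignore-demo
git init -q

printf '%s\n' '*.log' 'build/' > .gitignore   # patterns to ignore
touch debug.log
mkdir build && touch build/artifact.bin

git check-ignore -v debug.log build/artifact.bin  # shows the matching rules
git status --porcelain    # lists only .gitignore itself as untracked
```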
9. What is a hypervisor?
Managing Containers in Linux: Understand virtualization concepts
Easy
A.A networking protocol for VMs.
B.Software that creates and runs virtual machines.
C.A physical hardware component for virtualization.
D.An operating system that runs inside a virtual machine.
Correct Answer: Software that creates and runs virtual machines.
Explanation:
A hypervisor, or virtual machine monitor (VMM), is software, firmware, or hardware that creates and runs virtual machines (VMs) by separating the computer's OS and applications from the underlying physical hardware.
10. What is a "Guest OS" in the context of virtualization?
Managing Containers in Linux: Understand virtualization concepts
Easy
A.An operating system running inside a virtual machine.
B.A type of hypervisor.
C.A lightweight version of an operating system.
D.The main operating system installed on the physical server.
Correct Answer: An operating system running inside a virtual machine.
Explanation:
The "Host OS" is the operating system of the physical machine, while the "Guest OS" is any operating system installed and running on top of the hypervisor within a virtual machine.
11. A Type 1 hypervisor runs directly on the host's physical hardware. What is it also known as?
Managing Containers in Linux: Understand virtualization concepts
Easy
A.Guest hypervisor
B.Bare-metal hypervisor
C.Hosted hypervisor
D.Container engine
Correct Answer: Bare-metal hypervisor
Explanation:
Type 1 hypervisors are called 'bare-metal' because they are installed directly on the physical hardware, without needing a host operating system to run on.
12. What is the primary function of virtualization?
Managing Containers in Linux: Understand virtualization concepts
Easy
A.To package an application and its dependencies into a single object.
B.To create a virtual version of a device or resource, such as a server, storage device, network or even an operating system.
C.To increase the physical speed of a CPU.
D.To manage source code for infrastructure.
Correct Answer: To create a virtual version of a device or resource, such as a server, storage device, network or even an operating system.
Explanation:
Virtualization uses software to allow a piece of physical hardware to host multiple virtual machines, each with its own operating system and resources, thereby improving efficiency and flexibility.
13. What is a key difference between containers and virtual machines (VMs)?
Managing Containers in Linux: Understand containers
Easy
A.VMs start up faster than containers.
B.Containers provide stronger hardware-level isolation than VMs.
C.Containers are much larger in size than VMs.
D.Containers share the host OS kernel, while VMs have their own full guest OS.
Correct Answer: Containers share the host OS kernel, while VMs have their own full guest OS.
Explanation:
This is the fundamental difference. Because containers share the host kernel, they are more lightweight and have faster startup times compared to VMs, which need to boot a complete operating system.
14. Which of the following is a well-known containerization platform?
Managing Containers in Linux: Understand containers
Easy
A.KVM
B.Docker
C.VMware vSphere
D.Oracle VirtualBox
Correct Answer: Docker
Explanation:
Docker is the most popular and widely used platform for developing, shipping, and running applications inside containers. The other options are virtualization technologies for running full virtual machines, not containers.
15. What is a container image?
Managing Containers in Linux: Understand containers
Easy
A.A read-only template with instructions for creating a container.
B.A backup file for a virtual machine.
C.A file system for the host operating system.
D.A running instance of a container.
Correct Answer: A read-only template with instructions for creating a container.
Explanation:
A container image is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files. A container is a running instance of an image.
16. How do containers achieve process and resource isolation on a Linux host?
Managing Containers in Linux: Understand containers
Easy
A.By using different physical hardware for each container.
B.By encrypting all of their network traffic.
C.By running a separate hypervisor for each container.
D.Using kernel features like namespaces and cgroups.
Correct Answer: Using kernel features like namespaces and cgroups.
Explanation:
Linux containers are made possible by kernel features. Namespaces provide process isolation (making a container think it has its own private system), and cgroups (control groups) manage and limit resource usage (CPU, memory, etc.).
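These kernel facilities can be observed directly on any Linux machine, with no container runtime involved; every process belongs to a set of namespaces and to a cgroup (a sketch, assuming a Linux host):

```shell
#!/bin/sh
set -e
# Each symlink here is one namespace this shell belongs to
# (mnt, pid, net, uts, ipc, user, cgroup, ...)
ls -l /proc/self/ns

# Control-group membership, which the kernel uses to limit CPU/memory
cat /proc/self/cgroup
```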
17. Which Docker command is used to download an image from a container registry like Docker Hub?
Managing Containers in Linux: Deploy containers
Easy
A.docker save [image_name]
B.docker pull [image_name]
C.docker download [image_name]
D.docker get [image_name]
Correct Answer: docker pull [image_name]
Explanation:
The docker pull command is used to fetch a container image from a remote registry (Docker Hub by default) and save it to your local machine.
18. What is the basic command to create and start a new container from an image called nginx?
Managing Containers in Linux: Deploy containers
Easy
A.docker build nginx
B.docker run nginx
C.docker start nginx
D.docker create nginx
Correct Answer: docker run nginx
Explanation:
The docker run command is a shortcut that first creates a writable container layer over the specified image, and then starts it.
19. Which command allows you to see a list of all currently running containers?
Managing Containers in Linux: Deploy containers
Easy
A.docker info
B.docker list
C.docker ps
D.docker images
Correct Answer: docker ps
Explanation:
The docker ps (process status) command lists all containers that are currently running on the host system.
20. What is the purpose of a Dockerfile?
Managing Containers in Linux: Deploy containers
Easy
A.It's a log file generated by a running container.
B.It's a text file with instructions on how to build a custom container image.
C.It's a configuration file for the Docker daemon.
D.It's a script for managing multiple containers at once.
Correct Answer: It's a text file with instructions on how to build a custom container image.
Explanation:
A Dockerfile is a script containing a series of commands that Docker uses to automatically build a specific container image. It defines the base image, commands to run, files to copy, and other configuration details.
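As a sketch, a hypothetical Dockerfile for a small Node.js app (the base image tag, file names, and port are illustrative, not taken from the questions):

```dockerfile
FROM node:20-slim            # base image the build starts from
WORKDIR /app                 # working directory inside the image
COPY package.json app.js ./  # copy files in from the build context
RUN npm install --omit=dev   # runs at build time, producing a new layer
EXPOSE 3000                  # documents the port the app listens on
CMD ["node", "app.js"]       # default command when a container starts
```

Building it would be `docker build -t my-app .`, run in the directory containing this file.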
21. An Infrastructure as Code (IaC) script is designed to ensure a web server is running and correctly configured. If the script is run multiple times, it makes no further changes after the first successful execution. This property is known as:
Using Infrastructure as Code: Understand infrastructure as code
Medium
A.Convergence
B.Idempotence
C.Procedural Deployment
D.Imperative Configuration
Correct Answer: Idempotence
Explanation:
Idempotence is a core principle of declarative IaC tools. It means that an operation can be applied multiple times without changing the result beyond the initial application. This ensures that running the same script repeatedly will not cause errors or unwanted changes, but will simply enforce the desired state.
22. A developer working on a feature-xyz branch needs to incorporate the latest updates from the main branch. To maintain a clean, linear project history without creating a merge commit, which Git command is the most appropriate choice?
Using Infrastructure as Code: Manage version control with Git
Medium
A.git cherry-pick main
B.git pull origin main --no-commit
C.git merge main
D.git rebase main
Correct Answer: git rebase main
Explanation:
git rebase main takes the commits from the current branch (feature-xyz) and reapplies them on top of the latest commit from main. This results in a straight, linear history. In contrast, git merge would create a merge commit, which can make the history more complex to read.
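The scenario can be reproduced in a throwaway repository (branch and file names are illustrative; `git init -b` needs Git 2.28+):

```shell
#!/bin/sh
set -e
rm -rf /tmp/rebase-demo && mkdir -p /tmp/rebase-demo && cd /tmp/rebase-demo
git init -q -b main
gitc() { git -c user.email=demo@example.com -c user.name=Demo "$@"; }

echo base > base.txt && git add base.txt && gitc commit -q -m "base"
git checkout -q -b feature-xyz
echo feat > feat.txt && git add feat.txt && gitc commit -q -m "feature work"

git checkout -q main        # meanwhile, main moves ahead
echo more > more.txt && git add more.txt && gitc commit -q -m "main moves on"

git checkout -q feature-xyz
gitc rebase main            # replay the feature commits on top of main
git log --oneline           # linear history, no merge commit
```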
23. An organization wants to run a virtualization platform directly on their bare-metal servers for maximum performance and security, without an underlying host operating system. Which type of hypervisor should they choose?
Managing Containers in Linux: Understand virtualization concepts
Medium
A.Type 1 Hypervisor
B.Container-based Hypervisor
C.Hybrid Hypervisor
D.Type 2 Hypervisor
Correct Answer: Type 1 Hypervisor
Explanation:
A Type 1 hypervisor, also known as a bare-metal hypervisor, runs directly on the host's hardware to control the hardware and manage guest operating systems. Examples include VMware ESXi and Xen. This is in contrast to a Type 2 hypervisor, which runs as an application on top of a conventional host OS.
24. Which of the following statements best describes a key difference in resource utilization between a container and a full virtual machine on the same host?
Managing Containers in Linux: Understand containers
Medium
A.A container shares the host OS kernel and its binaries/libraries, leading to lower overhead than a VM which runs a full guest OS.
B.Containers require a dedicated block of RAM pre-allocated at boot, whereas VMs use dynamic memory.
C.A VM shares the host OS kernel, while each container must load its own kernel into memory.
D.Both containers and VMs have identical resource footprints, with the main difference being startup time.
Correct Answer: A container shares the host OS kernel and its binaries/libraries, leading to lower overhead than a VM which runs a full guest OS.
Explanation:
The fundamental difference is that containers virtualize the operating system, not the hardware. They share the host's kernel, which significantly reduces the memory and disk space footprint compared to a VM, which must bundle a complete guest operating system, including its own kernel.
25. A system administrator needs to run a temporary container for a quick test and wants the container's filesystem to be automatically removed once the container exits. Which flag should be added to the docker run command?
Managing Containers in Linux: Deploy containers
Medium
A.--no-persist
B.--delete
C.--transient
D.--rm
Correct Answer: --rm
Explanation:
The --rm flag tells the Docker daemon to automatically clean up the container and remove its filesystem when the container exits. This is useful for short-lived or temporary containers to prevent the accumulation of stopped containers on the host system.
26. When comparing declarative and imperative approaches to Infrastructure as Code, which statement is most accurate?
Using Infrastructure as Code: Understand infrastructure as code
Medium
A.Imperative tools like Terraform automatically handle state management and dependency resolution.
B.Declarative tools like shell scripts are easier to maintain for complex infrastructure.
C.Imperative requires a step-by-step sequence of commands, while declarative defines the desired final state.
D.Declarative focuses on how to achieve the end state, while imperative focuses on what the end state should be.
Correct Answer: Imperative requires a step-by-step sequence of commands, while declarative defines the desired final state.
Explanation:
The core distinction is the paradigm. An imperative approach (e.g., a shell script) specifies the exact commands to run in sequence to reach a state. A declarative approach (e.g., Terraform, Ansible) specifies the desired end state, and the tool is responsible for figuring out the necessary steps to achieve it.
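As a sketch of the declarative style, a hypothetical Terraform fragment (the resource type, AMI ID, and names are illustrative): it states what should exist, and the tool works out the steps to get there.

```hcl
# Desired state only -- no sequencing of commands, no "how"
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # illustrative ID
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

An imperative equivalent would be a script of sequenced CLI calls that create the instance, wait for it, then tag it, with the author responsible for ordering and error handling.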
27. A developer accidentally committed a file with sensitive credentials to a local branch that has not been pushed to a remote repository. To completely remove the most recent commit from the branch's history as if it never happened, which command should be used?
Using Infrastructure as Code: Manage version control with Git
Medium
A.git clean -fd
B.git reset --hard HEAD~1
C.git revert HEAD
D.git checkout HEAD~1
Correct Answer: git reset --hard HEAD~1
Explanation:
git reset --hard HEAD~1 moves the branch pointer back by one commit (HEAD~1) and discards the changes from both the staging area and the working directory. This effectively erases the last commit. git revert would create a new commit that undoes the changes, leaving the original, sensitive commit in the history.
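A reproducible sketch of the recovery (file names are illustrative; this is only safe because the commit was never pushed):

```shell
#!/bin/sh
set -e
rm -rf /tmp/reset-demo && mkdir -p /tmp/reset-demo && cd /tmp/reset-demo
git init -q
gitc() { git -c user.email=demo@example.com -c user.name=Demo "$@"; }

echo "app code" > app.py
git add app.py && gitc commit -q -m "Add application"

echo "password=hunter2" > secrets.txt      # the mistaken commit
git add secrets.txt && gitc commit -q -m "Oops: credentials"

git reset -q --hard HEAD~1   # drop the last commit entirely
git log --oneline            # only "Add application" remains
```

After the reset, the sensitive file is gone from both the history and the working tree.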
28. Which of the following Linux kernel features is primarily responsible for isolating the filesystem view for each container, making it seem like it has its own root filesystem?
Managing Containers in Linux: Understand containers
Medium
A.cgroups (Control Groups)
B.SELinux
C.AppArmor
D.Namespaces (specifically, the mount namespace)
Correct Answer: Namespaces (specifically, the mount namespace)
Explanation:
Linux namespaces are the core technology for container isolation. The mount namespace, in particular, allows each container to have its own set of filesystem mount points, completely separate from the host and other containers. cgroups are used for resource limiting (CPU, memory), not for filesystem isolation.
29. A sysadmin needs to run a containerized database that requires its data to persist even if the container is removed and recreated. What is the recommended Docker feature to achieve this?
Managing Containers in Linux: Deploy containers
Medium
A.Mounting a Docker volume to a data directory inside the container.
B.Committing the container's state to a new image after every transaction.
C.Using the --persist flag with docker run.
D.Storing the data inside the container's default writable layer.
Correct Answer: Mounting a Docker volume to a data directory inside the container.
Explanation:
Docker volumes are the preferred mechanism for persisting data generated by and used by Docker containers. Volumes are managed by Docker and exist on the host filesystem, outside the container's lifecycle. This allows data to persist even when the container is removed and enables sharing data between containers.
30. What is the primary advantage of paravirtualization (PV) compared to full hardware virtualization (HVM)?
Managing Containers in Linux: Understand virtualization concepts
Medium
A.It provides stronger security isolation between guest VMs by using hardware extensions.
B.It can offer better I/O performance by allowing the guest OS to communicate directly with the hypervisor.
C.It requires no modifications to the guest operating system.
D.It is the only method compatible with Type 2 hypervisors like VirtualBox.
Correct Answer: It can offer better I/O performance by allowing the guest OS to communicate directly with the hypervisor.
Explanation:
In paravirtualization, the guest OS is modified to be 'aware' that it is being virtualized. This allows it to make direct calls (hypercalls) to the hypervisor, bypassing the performance overhead of emulating hardware devices, which is particularly beneficial for I/O-intensive operations.
31. Your team uses an IaC tool like Terraform, which maintains a file to track the current state of the managed infrastructure. What is the primary purpose of this 'state file'?
Using Infrastructure as Code: Understand infrastructure as code
Medium
A.To map the real-world resources to your configuration, track metadata, and improve performance.
B.To serve as a human-readable log of all API calls made.
C.To act as a backup of the infrastructure configuration code.
D.To store sensitive credentials required to access cloud providers in an encrypted format.
Correct Answer: To map the real-world resources to your configuration, track metadata, and improve performance.
Explanation:
The state file is crucial for IaC tools like Terraform. It stores the mapping between the resources defined in your code and the actual resources provisioned in your cloud environment. This allows the tool to plan what to create, update, or destroy by comparing the desired state (code) with the last known actual state (state file).
32. While reviewing a pull request, you find a small typo in the commit message of an older commit (not the most recent one) in the branch. What is the most effective Git command to fix the message of that specific, older commit?
Using Infrastructure as Code: Manage version control with Git
Medium
A.git rebase -i <base-commit-hash>
B.git filter-branch --msg-filter
C.git commit --amend
D.git revert <commit-hash>
Correct Answer: git rebase -i <base-commit-hash>
Explanation:
git rebase -i (interactive rebase) is the standard tool for modifying a series of recent, local commits. It allows you to reorder, squash, edit, or reword commits. You would start the interactive rebase from a commit before the one you want to change, and then mark the target commit with reword.
33. You need to expose a web server running on port 3000 inside a Docker container to port 8080 on the host machine. Which docker run option correctly maps these ports?
Managing Containers in Linux: Deploy containers
Medium
A.-p 8080:3000
B.--port 8080,3000
C.-p 3000:8080
D.--expose 8080:3000
Correct Answer: -p 8080:3000
Explanation:
The -p or --publish flag in Docker uses the format <host_port>:<container_port>. To map port 8080 on the host to port 3000 in the container, the correct syntax is -p 8080:3000. The --expose flag only documents which ports the container listens on, but does not actually publish them.
34. A company needs to migrate a legacy application that is tightly coupled to a specific, older kernel version onto a modern server. They cannot containerize the application due to this strict kernel dependency. Which technology is the most suitable solution?
Managing Containers in Linux: Understand virtualization concepts
Medium
A.Application Containerization
B.Process-based Sandboxing
C.OS-level Virtualization
D.Hardware Virtualization (VM)
Correct Answer: Hardware Virtualization (VM)
Explanation:
Hardware Virtualization, which creates a Virtual Machine (VM), is the ideal solution. A VM allows you to install a complete, isolated guest operating system with its own specific kernel. Containers, a form of OS-level virtualization, are unsuitable because they share the host's kernel.
35. In the context of Docker, what is the primary function of a container image?
Managing Containers in Linux: Understand containers
Medium
A.It is a live snapshot of a virtual machine's memory and disk state.
B.It is a configuration file, typically docker-compose.yml, that defines a container's runtime state.
C.It is a read-only template with instructions for creating a container, including code, runtime, libraries, and environment variables.
D.It is a running instance of an application with a dedicated writable layer.
Correct Answer: It is a read-only template with instructions for creating a container, including code, runtime, libraries, and environment variables.
Explanation:
An image is the blueprint for a container. It's an immutable, standalone, executable package that contains everything needed to run a piece of software. A container is a runnable instance of an image. When a container is launched, a thin writable layer is added on top of the read-only image layers.
36. A team follows a branching model where they want to combine all commits from a feature branch into a single, cohesive commit before merging it into the main branch. This process is best described as:
Using Infrastructure as Code: Manage version control with Git
Medium
A.Forking
B.Stashing
C.Squashing
D.Cherry-picking
Correct Answer: Squashing
Explanation:
Squashing is the act of combining multiple commits into a single one. This is commonly done using an interactive rebase (git rebase -i) before merging a feature branch. The goal is to keep the history of the main branch clean and concise, where each commit represents a complete feature or fix.
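One way to reproduce the effect without an interactive editor is git merge --squash (branch and file names are illustrative; `git init -b` needs Git 2.28+):

```shell
#!/bin/sh
set -e
rm -rf /tmp/squash-demo && mkdir -p /tmp/squash-demo && cd /tmp/squash-demo
git init -q -b main
gitc() { git -c user.email=demo@example.com -c user.name=Demo "$@"; }

gitc commit -q --allow-empty -m "initial"
git checkout -q -b feature
echo one >  f.txt && git add f.txt && gitc commit -q -m "wip 1"
echo two >> f.txt && git add f.txt && gitc commit -q -m "wip 2"

git checkout -q main
git merge --squash -q feature          # stage the combined changes only
gitc commit -q -m "Add feature (squashed)"
git log --oneline                      # initial + one cohesive commit
```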
37. Consider the following Dockerfile instructions:
RUN apt-get update && apt-get install -y nodejs
CMD ["node", "app.js"]
What is the fundamental difference between the RUN and CMD instructions?
Managing Containers in Linux: Deploy containers
Medium
A.RUN executes when building the image, while CMD specifies the default command to execute when a container starts.
B.CMD executes when building the image to set up the environment, while RUN is the primary command for the running container.
C.RUN is used for installing packages, while CMD is used for running application processes.
D.Both are executed when the container starts, but RUN is executed as the root user while CMD is executed as a non-root user.
Correct Answer: RUN executes when building the image, while CMD specifies the default command to execute when a container starts.
Explanation:
The RUN instruction executes commands during the image build process (docker build) to create new layers in the image, such as installing software. The CMD instruction provides the default command and parameters for an executing container (docker run), which can be overridden by the user at runtime.
38. A system administrator wrote a shell script to provision a new server. The script works, but if a step fails midway, the server is left in an inconsistent, partially configured state. This is a common problem associated with which IaC approach?
Using Infrastructure as Code: Understand infrastructure as code
Medium
A.Agent-based
B.Imperative/Procedural
C.Stateful
D.Declarative
Correct Answer: Imperative/Procedural
Explanation:
Imperative or procedural scripts, such as shell scripts, execute a sequence of commands. They typically lack built-in state management and error handling for partial failures. If a command fails, the script often halts, leaving the system in an unknown, intermediate state. Declarative tools are designed to converge on a target state and can often self-correct from partial failures on subsequent runs.
39. When comparing OS-level virtualization (containers) with hardware virtualization (VMs), which statement about security isolation is most accurate?
Managing Containers in Linux: Understand virtualization concepts
Medium
A.Security isolation is managed by the application layer in both technologies, not the virtualization layer.
B.Both offer identical levels of security isolation, as they both use cgroups and namespaces for containment.
C.VMs generally offer stronger security isolation due to the complete separation of kernels provided by the hypervisor.
D.Containers provide superior security isolation because they don't have a guest kernel to attack.
Correct Answer: VMs generally offer stronger security isolation due to the complete separation of kernels provided by the hypervisor.
Explanation:
Because VMs run a full guest OS with its own kernel, the attack surface between a VM and the host (or other VMs) is much smaller and hardened by the hypervisor. In contrast, all containers on a host share the same host kernel. A kernel-level vulnerability could potentially allow a process in one container to escape and affect the host or other containers.
40. A container needs access to a configuration file located on the host machine at /opt/configs/app.conf. The administrator wants to make this file available inside the container at /etc/app.conf without copying it into the image. Which docker run option achieves this?
Managing Containers in Linux: Deploy containers
Medium
Correct Answer: -v /opt/configs/app.conf:/etc/app.conf
Explanation:
The --volume (or -v) flag is used to mount a host file or directory into a container. The correct syntax is -v <host_path>:<container_path>. This makes the host file appear at the specified path inside the container's filesystem, allowing for dynamic configuration without rebuilding the image.
41. An idempotent Terraform script designed to manage a web server, a load balancer, and a database fails after successfully provisioning the server and load balancer, but before the database. The failure was due to a temporary cloud provider API outage. Before re-running the script, a sysadmin manually deletes the load balancer via the cloud console. What is the expected outcome when the exact same Terraform script is executed again?
Using Infrastructure as Code: Understand infrastructure as code
Hard
A.Terraform will destroy the existing web server to start from a clean slate, then provision all three resources.
B.The script will ignore the manual deletion, fail at the database step again, and leave the infrastructure in its current state (one server, no load balancer).
C.Terraform will detect the discrepancy, re-create the missing load balancer, and then create the missing database, leaving the existing server untouched.
D.The script will error out, reporting a state mismatch for the manually deleted load balancer.
Correct Answer: Terraform will detect the discrepancy, re-create the missing load balancer, and then create the missing database, leaving the existing server untouched.
Explanation:
Declarative IaC tools like Terraform use a state file to track resources. On execution, Terraform performs a 'refresh' operation, comparing its state file with the actual infrastructure. It will notice the web server exists as expected, see that the load balancer recorded in the state file is missing in reality, and that the database is missing from both. It will then create a plan to reconcile the differences: create the load balancer and the database, while taking no action on the server. This demonstrates the power of state management and drift detection.
42. A developer working on a feature branch needs to incorporate updates from the main branch. However, the main branch has a commit that reverts a change the developer's feature branch depends on. If the developer runs git rebase main, what is the most likely outcome during the rebase process?
Using Infrastructure as Code: Manage version control with Git
Hard
A.The rebase will fail immediately, and git will recommend using git merge instead.
B.The rebase will complete automatically, but the feature will be broken in the newly rebased commits.
C.Git will encounter a merge conflict on the commit in the feature branch that depends on the reverted code, pausing the rebase until the developer resolves it.
D.Git will automatically skip the problematic commit from the feature branch, leading to an incomplete feature.
Correct Answer: Git will encounter a merge conflict on the commit in the feature branch that depends on the reverted code, pausing the rebase until the developer resolves it.
Explanation:
git rebase works by replaying each commit from the current branch on top of the target branch. When it tries to apply the commit from the feature branch that relies on the now-missing code (due to the revert on main), the patch will not apply cleanly. This results in a merge conflict. The rebase process will halt, requiring the developer to manually fix the conflicting files, stage them, and then continue the rebase with git rebase --continue.
43. When comparing a Type-1 (bare-metal) hypervisor like KVM with a Type-2 (hosted) hypervisor like VirtualBox for a high-throughput, low-latency database server workload, what is the most significant performance disadvantage of the Type-2 hypervisor?
Managing Containers in Linux: Understand virtualization concepts
Hard
A.Inability to utilize hardware virtualization extensions like Intel VT-x or AMD-V.
B.Higher memory overhead because the host OS and hypervisor must share the same memory space.
C.Lack of support for paravirtualized drivers (e.g., virtio) for network and storage devices.
D.Increased I/O latency due to an additional layer of system calls passing through the host operating system's kernel and scheduler.
Correct Answer: Increased I/O latency due to an additional layer of system calls passing through the host operating system's kernel and scheduler.
Explanation:
The key architectural difference is that a Type-2 hypervisor runs as an application on top of a conventional host OS. Every I/O operation from the guest VM must traverse the hypervisor process, then the host OS kernel's scheduling and I/O subsystems, before reaching the physical hardware. This extra layer introduces significant latency and jitter, which is detrimental to performance-sensitive workloads like databases. Type-1 hypervisors have more direct access to hardware, minimizing this overhead.
Incorrect! Try again.
44You are deploying a containerized application that writes sensitive logs. You want the logs to persist even if the container is destroyed, but you need to ensure the log data is encrypted on the host filesystem. What is the most effective container deployment strategy on a standard Linux host with filesystem-level encryption (like eCryptfs or fscrypt) enabled?
Managing Containers in Linux: Deploy containers
Hard
A.Mount an in-memory tmpfs volume, as it is isolated from the host filesystem and more secure.
B.Use a Docker named volume (-v my-logs:/path/in/container), as volumes are automatically encrypted by the container runtime.
C.Use a Docker bind mount (-v /path/on/host:/path/in/container) to a directory that is encrypted on the host.
D.Run an encryption process inside the container that writes to a standard volume, encrypting the data before it is written to the host.
Correct Answer: Use a Docker bind mount (-v /path/on/host:/path/in/container) to a directory that is encrypted on the host.
Explanation:
Docker named volumes are typically stored in a directory managed by Docker (e.g., /var/lib/docker/volumes), but the Docker engine itself does not perform encryption. To leverage host-level filesystem encryption, the most direct method is to use a bind mount that points to a specific directory that has been configured for encryption by the host OS (e.g., within an encrypted home directory). Option D adds complexity and overhead inside the container. Option A is not persistent, since tmpfs lives only in memory. Option B is incorrect because Docker volumes are not inherently encrypted.
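As a sketch, assuming the host has an encrypted directory at /home/svc/secure-logs (the path and image name are illustrative):

```shell
# The host directory is encrypted by the host OS (e.g. fscrypt);
# Docker just bind-mounts it -- the engine performs no encryption itself.
docker run -d --name app \
    -v /home/svc/secure-logs:/var/log/app \
    my-app:latest
```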
Incorrect! Try again.
45A container is running a process as UID 1000. The host system does not have a user with UID 1000. The container is configured with a bind mount to a host directory owned by root:root with permissions 755 (rwxr-xr-x). What operations can the containerized process perform on the files within the mounted directory?
Managing Containers in Linux: Understand containers
Hard
A.It can read existing files but cannot write to them or create new files.
B.It cannot read or write any files due to a UID mismatch.
C.It can read, write, and create new files because container UIDs are mapped to the host's root user.
D.It can create new files, which will be owned by UID 1000 on the host, but cannot modify existing files owned by root.
Correct Answer: It can read existing files but cannot write to them or create new files.
Explanation:
By default, without user namespace remapping, the UID inside the container is the same UID used for permission checks on the host. The directory has permissions 755. The '5' for 'others' means 'read' and 'execute' permissions. Since the container's UID 1000 does not match the owner (root, UID 0) or the group (root, GID 0), it is treated as 'other'. Therefore, the process can read existing files, list the directory contents (the read bit), and traverse into it (the execute bit), but it lacks the write bit, so it cannot modify existing files or create new ones.
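A quick sketch of the behaviour (image name and paths are illustrative; assumes no user namespace remapping):

```shell
# Host: /srv/shared owned by root:root, mode 755
docker run --rm -u 1000 -v /srv/shared:/data alpine \
    sh -c 'ls /data && touch /data/new.txt'
# ls succeeds (read+execute for "others"); touch fails: Permission denied
```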
Incorrect! Try again.
46You have accidentally committed a large binary file to your Git history and pushed it to a remote develop branch. You have since added more commits on top of it. You need to completely remove the binary file from the entire history of the develop branch to reduce the repository size, without losing any of the subsequent commits. Which Git command is the most appropriate and effective tool for this specific task?
Using Infrastructure as Code: Manage version control with Git
Hard
A.git rebase -i on the affected commits, edit the commit with the binary, remove the file, amend the commit, and continue.
B.git filter-branch --tree-filter 'rm -f path/to/binary' HEAD followed by a force push.
C.git revert on the commit that added the binary file.
D.git reset --hard <commit_before_binary> and then cherry-pick the subsequent commits.
Correct Answer: git filter-branch --tree-filter 'rm -f path/to/binary' HEAD followed by a force push.
Explanation:
git filter-branch (or its more modern replacement, git-filter-repo) is designed specifically for rewriting entire repository history based on certain criteria. The --tree-filter option allows you to run a command on the checked-out files of every single commit, effectively removing the file from each one. This purges it from the history. rebase -i is tedious for many commits. revert only creates a new commit that undoes the change; it doesn't remove the file from history. reset --hard is destructive and requires manually re-applying subsequent work, which is error-prone. After rewriting history, a force push is required to update the remote branch.
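A minimal demonstration in a throwaway repository (file names are illustrative; newer Git versions want the squelch variable set to suppress the deprecation warning):

```shell
# Purge big.bin from every commit on the branch without losing later commits.
git init -qb main purge-demo && cd purge-demo
git config user.email dev@example.com && git config user.name dev
printf 'junk' > big.bin
echo one > code.txt
git add . && git commit -qm "add code and binary"
echo two >> code.txt
git commit -qam "more work"
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --tree-filter 'rm -f big.bin' HEAD
git rev-list HEAD -- big.bin     # prints nothing: no commit touches the file anymore
# then: git push --force origin develop   (the history was rewritten)
```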
Incorrect! Try again.
47In the context of Infrastructure as Code, what is the fundamental difference between 'configuration drift' and a 'failed idempotent execution'?
Using Infrastructure as Code: Understand infrastructure as code
Hard
A.A failed idempotent execution always leads to drift, but drift can occur without a failed execution.
B.Drift is a divergence between the live infrastructure and the code-defined state caused by out-of-band changes, while a failed execution is an unsuccessful run of the IaC tool itself.
C.Drift can only be detected by imperative IaC tools, while failed executions are a problem for declarative tools.
D.Drift refers to changes in the IaC code that haven't been applied, while a failed execution is when those changes cannot be applied due to an error.
Correct Answer: Drift is a divergence between the live infrastructure and the code-defined state caused by out-of-band changes, while a failed execution is an unsuccessful run of the IaC tool itself.
Explanation:
Configuration drift occurs when manual changes (e.g., through a web console) are made to infrastructure that is managed by IaC, making the actual state different from what the code declares. A failed idempotent execution is an event where the IaC tool (e.g., Ansible, Terraform) starts running but terminates due to an error (like an API failure, syntax error, or permission issue). While a partially completed run can be a source of drift if not handled correctly, the core distinction is the cause: drift is caused by external, manual changes, whereas a failed execution is an internal failure of the automation tool's run.
Incorrect! Try again.
48A guest virtual machine is running a legacy operating system that has no awareness of virtualization and lacks paravirtualized (e.g., virtio) drivers. Which I/O virtualization technique must the hypervisor primarily rely on to provide network and disk access to this guest, and what is its main drawback?
Managing Containers in Linux: Understand virtualization concepts
Hard
A.Split device driver model; it is efficient but requires the guest OS to be modified with a 'frontend' driver.
B.Shared virtual memory; it is fast for memory access but not applicable for disk or network I/O.
C.Full device emulation; it is flexible but suffers from high CPU overhead due to trapping and emulating hardware device interactions.
D.I/O passthrough (VT-d/AMD-Vi); it is fast but requires dedicated hardware for the guest and is inflexible.
Correct Answer: Full device emulation; it is flexible but suffers from high CPU overhead due to trapping and emulating hardware device interactions.
Explanation:
When a guest OS is not 'virtualization-aware', the hypervisor must present it with virtual hardware that looks and acts exactly like real physical hardware (e.g., an Intel e1000 network card or an IDE disk controller). This is called full device emulation. The hypervisor must 'trap' every attempt by the guest OS to interact with this emulated hardware and translate it into an action on the host. This process of trapping and emulating is computationally expensive, leading to significant CPU overhead and lower I/O performance compared to paravirtualization, where the guest OS knows it's virtualized and can communicate more efficiently with the hypervisor.
Incorrect! Try again.
49Consider the following Docker Compose configuration. Service api needs a fully initialized database in db before it can start. What is the critical flaw in this configuration for ensuring the api service starts correctly?
Managing Containers in Linux: Deploy containers
Hard
A.The api service is missing a command directive, so it will not start.
B.The db service does not expose any ports, preventing the api service from connecting to it.
C.The services are not on a shared network, so api will be unable to resolve the hostname db.
D.The depends_on directive only waits for the db container to be started, not for the PostgreSQL service within it to be fully initialized and ready to accept connections.
Correct Answer: The depends_on directive only waits for the db container to be started, not for the PostgreSQL service within it to be fully initialized and ready to accept connections.
Explanation:
A common and critical misconception is that depends_on checks for service readiness. It does not. It only ensures that the db container has been started before the api container is started. The PostgreSQL server inside the db container takes several seconds to initialize its database cluster on first run. The api container will likely start immediately and try to connect to a database that isn't ready, causing it to crash. The proper solution involves implementing a healthcheck in the db service and using the long-form depends_on syntax, or building retry logic into the api application itself.
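A hedged sketch of the fix described above, using a healthcheck plus the long-form depends_on (service names, image tags, and credentials are illustrative):

```yaml
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy   # waits for the healthcheck, not just container start
```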
Incorrect! Try again.
50To enhance container security, a system administrator wants to prevent a container from making any network calls except for DNS lookups and connections to a specific internal API server at 10.0.5.10 on port 8080. Which combination of Linux kernel features, commonly leveraged by container runtimes, would be most effective for enforcing this specific policy?
Managing Containers in Linux: Understand containers
Hard
A.A custom seccomp profile to block socket syscalls combined with a restrictive AppArmor profile.
B.Network namespaces to isolate the container's network stack and cgroups to limit bandwidth.
C.A restrictive cgroup v2 network controller policy and iptables rules within the container's network namespace.
D.A default Docker bridge network with an egress firewall implemented using host-level iptables rules that match the container's source IP address.
Correct Answer: A default Docker bridge network with an egress firewall implemented using host-level iptables rules that match the container's source IP address.
Explanation:
This requires fine-grained network filtering. Seccomp (A) operates at the syscall level and is too coarse to filter by IP address and port. Network namespaces (B) provide isolation but not filtering. Cgroups (C) are primarily for resource limiting (bandwidth, CPU), not for firewalling. The most direct and powerful method is to use the host's iptables firewall. The administrator can create rules in the DOCKER-USER chain (or FORWARD chain) that specifically match traffic originating from the container's assigned IP address, allowing traffic to the DNS server (port 53) and 10.0.5.10:8080, while dropping all other outbound traffic.
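Such rules might look like the following sketch (the container IP 172.17.0.5 is an assumption; run as root on the host):

```shell
# -I prepends to the chain, so insert the DROP first; the ACCEPTs end up above it.
iptables -I DOCKER-USER -s 172.17.0.5 -j DROP
iptables -I DOCKER-USER -s 172.17.0.5 -d 10.0.5.10 -p tcp --dport 8080 -j ACCEPT
iptables -I DOCKER-USER -s 172.17.0.5 -p tcp --dport 53 -j ACCEPT
iptables -I DOCKER-USER -s 172.17.0.5 -p udp --dport 53 -j ACCEPT
# Final evaluation order: DNS accepts, API accept, then DROP everything else.
```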
Incorrect! Try again.
51A project uses git submodules to include a shared library. A developer clones the main repository using git clone --recursive, makes changes inside the submodule directory, commits them, and pushes them from within that directory. They then go to the main project's root, see a change for the submodule, and commit it with the message "Update library". What exactly is stored in this new commit in the main repository?
Using Infrastructure as Code: Manage version control with Git
Hard
A.A complete copy of all the files from the submodule.
B.A symbolic link pointing to the submodule's directory.
C.The new commit SHA-1 of the submodule's HEAD.
D.A patch file (diff) of the changes made in the submodule.
Correct Answer: The new commit SHA-1 of the submodule's HEAD.
Explanation:
A git submodule is not a copy of the files; it's a pointer. The main (super)project does not track the submodule's content. Instead, it tracks a specific commit SHA-1 from the submodule's repository. When the developer commits the submodule change in the main project, the only thing being recorded is that the main project should now point to this new commit hash from the submodule's history. This is why it's referred to as a 'gitlink' entry in the tree object.
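The gitlink entry can be inspected directly in a toy setup (repository names are illustrative; recent Git versions require explicitly allowing the file transport for local submodules):

```shell
# Build a library repo, add it as a submodule, and inspect the tree entry.
git init -qb main lib && cd lib
git config user.email dev@example.com && git config user.name dev
echo 'lib v1' > lib.txt
git add . && git commit -qm "lib v1"
cd .. && git init -qb main app && cd app
git config user.email dev@example.com && git config user.name dev
git -c protocol.file.allow=always submodule add -q "$PWD/../lib" lib
git commit -qm "add library submodule"
git ls-tree HEAD
# The 'lib' entry has mode 160000 and type 'commit': a pointer to a commit
# SHA-1 in the submodule's history, not a copy of its files.
```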
Incorrect! Try again.
52In the context of KVM, what is the primary role of Extended Page Tables (EPT) by Intel or Rapid Virtualization Indexing (RVI) by AMD, and what specific performance bottleneck does it address?
Managing Containers in Linux: Understand virtualization concepts
Hard
A.They provide a mechanism for efficient live migration by tracking dirty memory pages in hardware.
B.They enable direct assignment of PCIe devices (passthrough) to guest VMs, bypassing the hypervisor for I/O operations.
C.They accelerate memory virtualization by allowing the guest OS to directly manage its page tables, eliminating the overhead of synchronizing with shadow page tables maintained by the hypervisor.
D.They are used for CPU virtualization, allowing the hypervisor to trap and emulate privileged instructions with lower overhead.
Correct Answer: They accelerate memory virtualization by allowing the guest OS to directly manage its page tables, eliminating the overhead of synchronizing with shadow page tables maintained by the hypervisor.
Explanation:
Without EPT/RVI, the hypervisor uses a technique called shadow page tables. It maintains a separate set of page tables that map guest virtual addresses to host physical addresses and must keep them synchronized with the guest's own page tables. This synchronization process involves costly 'VM exits' (context switches from guest to hypervisor) on every page fault. EPT/RVI is a hardware feature that allows the processor's Memory Management Unit (MMU) to handle two levels of address translation (Guest Virtual -> Guest Physical -> Host Physical) directly in hardware. This drastically reduces the number of VM exits related to memory management, significantly improving performance by eliminating the overhead of shadow page table maintenance.
Incorrect! Try again.
53You are managing a large-scale cloud environment with an IaC tool like Terraform. Your team decides to refactor the code, moving a managed resource (e.g., a VM) from one module to another. If you simply move the code block and run terraform apply, what will happen and what command should be used to prevent this undesirable outcome?
Using Infrastructure as Code: Understand infrastructure as code
Hard
A.Terraform will abandon the old resource, leaving it orphaned, and create a new one; terraform import should be used to manage the old resource.
B.The plan will fail with a 'duplicate resource' error; terraform refresh should be run first to resolve it.
C.Terraform will plan to destroy the existing VM and create a new one at the new code location; terraform state mv should be used beforehand.
D.Terraform will automatically detect the move and update its state file without any infrastructure changes.
Correct Answer: Terraform will plan to destroy the existing VM and create a new one at the new code location; terraform state mv should be used beforehand.
Explanation:
Terraform identifies resources by their address in the code (e.g., module.old_module.my_vm). When you move the code, the old address is no longer present, and a new address (module.new_module.my_vm) appears. Terraform's plan will interpret this as 'the old resource should be destroyed, and a new one should be created'. To prevent this destructive action and simply tell Terraform that the existing resource is now managed by the new code block, you must use the terraform state mv 'module.old_module.my_vm' 'module.new_module.my_vm' command to update the address in the state file before running apply.
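Besides running terraform state mv, Terraform 1.1 and later also let you declare the refactor in code with a moved block, which the next plan applies as a pure state change rather than a destroy/create (the addresses below reuse the illustrative names from the explanation):

```hcl
moved {
  from = module.old_module.my_vm
  to   = module.new_module.my_vm
}
```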
Incorrect! Try again.
54A multi-stage Dockerfile is used to build a Go application. The builder stage compiles the binary, and the final stage copies it from the builder into a scratch image. The resulting container fails to start with an error like /app/my-app: not found or an execution format error. The binary exists at the correct path inside the container. What is the most likely cause of this failure?
Managing Containers in Linux: Deploy containers
Hard
A.The COPY --from=builder command failed to preserve the executable permissions of the binary.
B.The scratch image has no shell (/bin/sh), so the CMD ["/app/my-app"] exec form cannot be processed.
C.The Go application was not compiled as a static binary, and the scratch image lacks the required dynamic libraries (like libc).
D.The builder stage and final stage are based on different CPU architectures (e.g., amd64 vs. arm64).
Correct Answer: The Go application was not compiled as a static binary, and the scratch image lacks the required dynamic libraries (like libc).
Explanation:
The scratch image is a completely empty base image; it contains no libraries, no shell, no utilities, nothing. By default, Go compiles binaries that are dynamically linked against standard C libraries (libc). When you copy such a binary into scratch, it cannot run because its dependencies are missing. The OS loader tries to find them and fails, resulting in a 'not found' error. The solution is to compile the Go application as a static binary, which includes all dependencies within the executable itself. This is typically done with CGO_ENABLED=0 and ldflags like -s -w.
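A sketch of a multi-stage build that avoids this failure (the Go version tag and paths are assumptions):

```dockerfile
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# CGO_ENABLED=0 forces a fully static binary with no libc dependency
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app/my-app .

FROM scratch
COPY --from=builder /app/my-app /app/my-app
ENTRYPOINT ["/app/my-app"]
```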
Incorrect! Try again.
55A security audit requires that all containers must not be able to gain new privileges via setuid or setgid binaries. Which Docker security feature, when enabled, most directly and effectively enforces this policy?
Managing Containers in Linux: Understand containers
Hard
A.Applying the no-new-privileges security option (--security-opt no-new-privileges).
B.Dropping the CAP_SETUID and CAP_SETGID Linux capabilities (--cap-drop=SETUID --cap-drop=SETGID).
C.Enabling a user namespace (--userns-remap=default).
D.Running the container with a read-only root filesystem (--read-only).
Correct Answer: Applying the no-new-privileges security option (--security-opt no-new-privileges).
Explanation:
The no-new-privileges flag sets the PR_SET_NO_NEW_PRIVS attribute on the container's processes. This is a specific kernel security feature that ensures a process (and its children) cannot gain more privileges than its parent. It directly blocks the effects of setuid/setgid binaries, which is exactly what the policy requires. While dropping capabilities (B) is a good security practice, it doesn't prevent a setuid binary owned by root from running as root. A read-only filesystem (D) prevents modification but not execution of existing setuid binaries. User namespaces (C) are a powerful isolation tool, but no-new-privileges is the most direct control for this specific requirement.
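A minimal sketch of applying the flag and verifying it from inside the container:

```shell
docker run --rm --security-opt no-new-privileges alpine \
    grep NoNewPrivs /proc/self/status    # NoNewPrivs: 1
```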
Incorrect! Try again.
56A developer has been working on a feature branch and has made several commits. They realize that a bug was introduced in the very first commit on their branch, but it is only detectable by a test that runs against the third commit. If they use git bisect to find the bug, what potential complication will they face and how is it typically handled?
Using Infrastructure as Code: Manage version control with Git
Hard
A.The bisect process will fail because the 'good' starting commit (the base of the branch) will not have the feature code necessary for the test to run at all.
B.The bisect will correctly identify the first commit as the source, as it checks out each commit and runs the test script against the full codebase at that point.
C.The bisect will incorrectly identify the third commit as 'bad' because that's where the test fails, even though the bug originated earlier. The developer must use git bisect skip on commits that cannot be tested.
D.git bisect cannot be used in this scenario because the bug and the test are in different commits.
Correct Answer: The bisect will incorrectly identify the third commit as 'bad' because that's where the test fails, even though the bug originated earlier. The developer must use git bisect skip on commits that cannot be tested.
Explanation:
git bisect assumes that if a commit is 'good', all its ancestors are also 'good'. In this scenario, the commits between the introduction of the bug (commit 1) and the introduction of the test (commit 3) are untestable or will report 'good' because the test doesn't exist yet. When git bisect lands on one of these commits and the test passes (or can't run), it will incorrectly assume the bug was introduced later. The correct way to handle this is to have the test script return a special exit code (125) that tells git bisect to skip this commit and try a different one nearby, allowing the bisection to narrow down the correct range.
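The skip mechanism can be demonstrated end to end in a toy history (commit contents and script names are illustrative):

```shell
# The bug lands in commit 2, but the test only exists from commit 4 onward.
git init -qb main bisect-demo && cd bisect-demo
git config user.email dev@example.com && git config user.name dev
echo good > value.txt
git add . && git commit -qm "c1: base (good)"
echo bad > value.txt
git commit -qam "c2: introduces the bug"
echo note > note.txt
git add . && git commit -qm "c3: unrelated work"
printf 'grep -q good value.txt\n' > test.sh
git add . && git commit -qm "c4: adds the test"
# check.sh stays untracked, so it survives bisect's checkouts;
# exit code 125 tells bisect "this commit cannot be tested, skip it".
printf '[ -f test.sh ] || exit 125\nsh test.sh\n' > check.sh
git bisect start HEAD HEAD~3          # HEAD is bad, c1 is good
git bisect run sh check.sh || true    # untestable commits get skipped
git bisect log > bisect.log           # the log records the skipped commits
git bisect reset
```

Because every commit between the good base and the first testable commit is skipped, bisect ends by reporting that the first bad commit "could be any of" the skipped range rather than pinpointing one.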
Incorrect! Try again.
57What is a primary distinction between OS-level virtualization (containers) and hypervisor-based virtualization (VMs) regarding the handling of kernel-level system calls?
Managing Containers in Linux: Understand virtualization concepts
Hard
A.Both containers and VMs use a hypervisor to trap all system calls, but the hypervisor for containers is much more lightweight.
B.Containers do not make system calls; they use a higher-level API provided by the runtime, while VMs make standard system calls.
C.In containers, all processes make system calls directly to the single, shared host OS kernel. In VMs, processes make system calls to their own guest OS kernel, which then uses privileged instructions or hypercalls that are handled by the hypervisor.
D.In containers, system calls are emulated by the container runtime (e.g., Docker engine), while in VMs, they are passed through directly to the host kernel.
Correct Answer: In containers, all processes make system calls directly to the single, shared host OS kernel. In VMs, processes make system calls to their own guest OS kernel, which then uses privileged instructions or hypercalls that are handled by the hypervisor.
Explanation:
This is the fundamental architectural difference. Containers are isolated processes on a single host kernel, leveraging features like namespaces and cgroups. A read() syscall from a containerized process is handled directly by the host kernel. In a VM, the same process makes a read() syscall to its guest kernel. The guest kernel then executes privileged instructions to interact with what it thinks is hardware. The hypervisor traps these instructions and translates them into actions on the host, or the guest kernel uses efficient hypercalls to ask the hypervisor to perform the action. This extra layer of indirection (guest kernel + hypervisor) is why VMs have higher overhead but also stronger isolation.
Incorrect! Try again.
58You are trying to optimize the build time of a large Docker image for a Node.js application. The package.json and package-lock.json files change infrequently, while the application source code (.js files) changes very frequently. Which of the following Dockerfile snippets demonstrates the most effective layer caching strategy?
Managing Containers in Linux: Deploy containers
Hard
A.dockerfile
FROM node:16
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]
B.dockerfile
FROM node:16
WORKDIR /app
RUN npm ci
COPY . .
CMD ["node", "server.js"]
C.dockerfile
FROM node:16
WORKDIR /app
COPY . .
CMD ["node", "server.js"]
RUN npm ci
D.dockerfile
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
Correct Answer: dockerfile
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
Explanation:
Docker builds images in layers, and it caches each layer. A layer is invalidated if its command changes or if the files it depends on (e.g., via COPY) change. The most effective strategy is to copy and install the dependencies (which change rarely) before copying the application source code (which changes frequently). In the correct option, the RUN npm ci layer only depends on package*.json. As long as those files don't change, this time-consuming layer will be cached. Subsequent builds where only .js files change will only invalidate the COPY . . layer and rebuild from there, which is much faster than re-running npm ci every time, as would happen in the other examples.
Incorrect! Try again.
59When comparing imperative IaC (e.g., a custom bash script using AWS CLI) with declarative IaC (e.g., Terraform), what is the key challenge an imperative approach faces when attempting to achieve idempotency for resource updates?
Using Infrastructure as Code: Understand infrastructure as code
Hard
A.Imperative scripts lack the ability to perform a 'dry run' to preview changes before they are made.
B.The script must explicitly check the current state of every single attribute of a resource before deciding whether to issue an 'update' command, making the logic complex and brittle.
C.Imperative tools cannot store state, so they must query the cloud provider for all resources on every run.
D.It is impossible for an imperative script to be idempotent; they are by nature procedural.
Correct Answer: The script must explicitly check the current state of every single attribute of a resource before deciding whether to issue an 'update' command, making the logic complex and brittle.
Explanation:
In a declarative tool, you define the desired end state. The tool's engine is responsible for figuring out the current state and calculating the minimal set of actions to get there. In an imperative script, you define the actions. To make an update idempotent, your script can't just blindly run aws ec2 modify-instance-attribute.... It must first run aws ec2 describe-instance-attribute..., parse the result, compare it to the desired value, and only if they differ, run the modify command. This explicit checking logic must be written for every single property of every resource, which is incredibly complex, error-prone, and difficult to maintain.
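The describe-compare-modify dance can be sketched with a toy stand-in for the cloud API (the file, attribute name, and values are illustrative):

```shell
# A local file stands in for the resource attribute an imperative script
# would have to query via the cloud CLI before every update.
desired="t3.large"
echo "t2.micro" > instance_type.txt            # current "cloud" state
apply() {
  current=$(cat instance_type.txt)             # describe step
  if [ "$current" != "$desired" ]; then        # compare step
    echo "$desired" > instance_type.txt        # modify step, only when needed
    echo changed
  else
    echo unchanged
  fi
}
apply    # first run: changed
apply    # second run: unchanged -- the update is now idempotent
```

Multiplying this check-then-act logic across every attribute of every resource is exactly the complexity that declarative tools absorb for you.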
Incorrect! Try again.
60A team uses a Git workflow where feature branches are merged into main using --no-ff (no-fast-forward) to preserve branch history. A bug is discovered in production, and git bisect is used on the main branch to find the faulty commit. The bisect process identifies a merge commit as the first 'bad' commit. What does this result signify?
Using Infrastructure as Code: Manage version control with Git
Hard
A.The git bisect process failed because it cannot properly analyze merge commits.
B.The bug was caused by a faulty resolution of a merge conflict during the merge.
C.The bug was introduced by the combination of the feature branch and the main branch state at the time of the merge; the bug does not exist on the feature branch itself when viewed in isolation.
D.The bug exists in one of the commits on the feature branch that was merged.
Correct Answer: The bug was introduced by the combination of the feature branch and the main branch state at the time of the merge; the bug does not exist on the feature branch itself when viewed in isolation.
Explanation:
When git bisect identifies a merge commit as the culprit, it means the parent commits of that merge were 'good', but the state after the merge is 'bad'. This points to an integration issue. The code on the feature branch worked fine on its own, and the code on the main branch worked fine on its own, but when they were combined, an unforeseen negative interaction occurred. This is a classic example of an integration bug, and it highlights the importance of --no-ff merges, as they create a single point in history that represents the act of integration, making such bugs easier to pinpoint with tools like bisect.