Unit 3 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Define the Critical Section Problem. What are the three primary requirements that a solution to the critical section problem must satisfy?
The Critical Section Problem is to design a protocol that processes can use to cooperate. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, etc. The system must ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
A solution must satisfy the following three requirements:
- Mutual Exclusion: If a process is executing in its critical section, then no other process can be executing in its critical section.
- Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded Waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Distinguish between Co-operating Processes and Independent Processes. List two benefits of allowing process co-operation.
Independent Processes:
- A process is independent if it cannot affect or be affected by the other processes executing in the system.
- It does not share data with any other process.
Co-operating Processes:
- A process is co-operating if it can affect or be affected by the other processes executing in the system.
- Clearly, any process that shares data with other processes is a co-operating process.
Benefits of Process Co-operation:
- Information Sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
- Computation Speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others.
Explain the concept of Semaphores. Differentiate between Binary Semaphores and Counting Semaphores.
A Semaphore is a synchronization tool used to manage concurrent processes. It is essentially an integer variable (say, S) that, apart from initialization, is accessed only through two standard atomic operations: wait() (historically called P) and signal() (historically called V).
Operations:
- wait() (P): Decrements the semaphore value. If the value becomes negative, the process blocks.
- signal() (V): Increments the semaphore value. If the resulting value is still not positive (meaning processes are waiting), it wakes up one blocked process.
Types of Semaphores:
- Counting Semaphore:
  - The value can range over an unrestricted domain.
  - It is used to control access to a given resource consisting of a finite number of instances (e.g., 3 printers).
- Binary Semaphore:
  - The value can range only between 0 and 1.
  - Binary semaphores are often known as mutex locks (mutual exclusion locks), as they are used to solve the critical section problem.
Describe Peterson's Solution for the critical section problem for two processes. Provide the algorithmic structure.
Peterson's Solution is a classic software-based solution to the critical section problem for two processes. It provides mutual exclusion, progress, and bounded waiting. It uses two shared variables:
```c
int turn;        // Indicates whose turn it is to enter the critical section.
boolean flag[2]; // Indicates if a process is ready to enter the critical section.
```

Structure for Process Pi (where the other process is Pj):
```c
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;  // busy wait

    /* CRITICAL SECTION */

    flag[i] = false;

    /* REMAINDER SECTION */
} while (true);
```
Explanation: To enter the critical section, process Pi sets flag[i] to true and turn to j. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time, but only one of these assignments will last. The eventual value of turn determines which of the two processes is allowed to enter its critical section first.
Explain the Producer-Consumer Problem. Provide a solution using Semaphores for a bounded buffer.
The Producer-Consumer Problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer generates data and puts it into the buffer, while the consumer removes data from the buffer.
Synchronization constraints:
- Producer must not insert data if the buffer is full.
- Consumer must not remove data if the buffer is empty.
- Access to the buffer pool requires mutual exclusion.
Solution using Semaphores:
We use three semaphores:
- mutex (initialized to 1): Provides mutual exclusion for buffer access.
- empty (initialized to n, the number of buffer slots): Counts empty slots.
- full (initialized to 0): Counts filled slots.
Producer Process:
```c
do {
    // produce an item in next_produced
    wait(empty);
    wait(mutex);
    // add next_produced to the buffer
    signal(mutex);
    signal(full);
} while (true);
```
Consumer Process:
```c
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer into next_consumed
    signal(mutex);
    signal(empty);
    // consume the item in next_consumed
} while (true);
```
What is a Race Condition? Explain with a suitable example involving a shared variable.
A Race Condition is a situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place. The final value of the shared data may become incorrect or inconsistent.
Example:
Consider a shared kernel variable counter initialized to 5. Two processes, P1 (the Producer) and P2 (the Consumer), run concurrently:
- P1 executes counter++
- P2 executes counter--

At a low level (assembly), counter++ might look like:

```
register1 = counter        // instruction 1
register1 = register1 + 1  // instruction 2
counter = register1        // instruction 3
```

and counter-- might look like:

```
register2 = counter        // instruction 4
register2 = register2 - 1  // instruction 5
counter = register2        // instruction 6
```

Race Scenario:
If the OS schedules the instructions in the order 1, 2, 4, 5, 3, 6:
1. P1 loads 5 into register1.
2. P1 increments register1 to 6.
3. (Context switch) P2 loads 5 into register2.
4. P2 decrements register2 to 4.
5. (Context switch) P1 stores 6 to counter.
6. P2 stores 4 to counter.
Result: The final value is 4, but it should have been 5. If the order was 1, 2, 4, 5, 6, 3, the result would be 6. This inconsistency is a race condition.
Explain the Dining Philosophers Problem. Why is it considered a classic synchronization problem?
The Problem:
Consider 5 philosophers sitting at a round table. There is a bowl of rice in the middle and 5 chopsticks, one between each pair of philosophers. A philosopher alternates between thinking and eating. To eat, a philosopher needs two chopsticks (left and right). A philosopher can only pick up one chopstick at a time. After eating, they put down both chopsticks and think.
Synchronization Issues:
- Deadlock: If every philosopher picks up their left chopstick simultaneously, they will all wait forever for the right chopstick.
- Starvation: A philosopher might never get both chopsticks if the neighbors are eating alternately and continuously.
Significance:
It is considered a classic problem not because of the practical importance of philosophers eating rice, but because it represents a large class of concurrency control problems where multiple processes need to allocate several resources among themselves in a deadlock-free and starvation-free manner.
Describe the Readers-Writers Problem. Provide the structure of the solution that gives priority to Readers.
The Readers-Writers Problem involves a shared database accessed by two types of processes: Readers (who only read data) and Writers (who can read and write). The constraints are:
- Multiple readers can read the file simultaneously.
- Only one writer can access the file at a time (exclusive access).
- If a writer is writing, no other reader or writer can access the file.
Readers-Preference Solution:
Variables:
- mutex (semaphore, init 1): Protects read_count.
- wrt (semaphore, init 1): Ensures mutual exclusion for writers.
- read_count (integer, init 0): Tracks the number of active readers.
Writer Process:
```c
wait(wrt);
// writing is performed
signal(wrt);
```
Reader Process:
```c
wait(mutex);
read_count++;
if (read_count == 1)
    wait(wrt);   // first reader locks out writers
signal(mutex);

// reading is performed

wait(mutex);
read_count--;
if (read_count == 0)
    signal(wrt); // last reader releases the writer lock
signal(mutex);
```
In this solution, no reader is kept waiting unless a writer has already obtained permission to use the shared object.
What is a Monitor in process synchronization? How does it differ from a Semaphore?
Monitor:
A monitor is a high-level synchronization construct (an abstract data type) that ensures that only one process can be active within the monitor at any time. It encapsulates:
- Shared data structures (variables).
- Procedures (methods) that operate on the shared data.
- Initialization code.
Processes cannot access the internal variables of the monitor directly; they must call the monitor procedures.
Differences from Semaphores:
- Level of Abstraction: Monitors are high-level programming language constructs (supported in Java, C#), whereas semaphores are low-level OS or hardware primitives.
- Ease of Use: Monitors are generally easier to use and less error-prone. With semaphores, forgetting a signal() or reversing wait() and signal() can cause deadlocks or race conditions; monitors handle mutual exclusion automatically (implicit locking).
- Control: Semaphores are extremely flexible but dangerous; monitors provide a structured approach to synchronization.
Explain the TestAndSet hardware instruction. How can it be used to implement mutual exclusion?
TestAndSet Instruction:
It is a special atomic (indivisible) hardware instruction used for synchronization. It reads a boolean variable and sets it to true in a single atomic step.
Definition:
```c
// Executed atomically by the hardware.
boolean TestAndSet(boolean *target) {
    boolean rv = *target;  // read the old value
    *target = true;        // set the lock flag
    return rv;
}
```
Implementing Mutual Exclusion:
We can use a shared boolean variable lock, initialized to false.
```c
do {
    while (TestAndSet(&lock))
        ;  // do nothing (busy waiting)

    // critical section

    lock = false;

    // remainder section
} while (true);
```
Logic: When a process tries to enter, it calls TestAndSet. If lock was false, it returns false (breaking the while loop) and sets lock to true atomically, locking out others. If lock was already true, it returns true, keeping the process in the while loop until the holder sets lock to false.
Differentiate between a Process and a Thread. Why are threads often called lightweight processes?
Process vs. Thread:
| Feature | Process | Thread |
|---|---|---|
| Definition | A program in execution. Heavyweight unit. | A basic unit of CPU utilization. Lightweight unit. |
| Memory | Has its own independent address space (Code, Data, Heap, Stack). | Shares address space (Code, Data, Heap) with other threads of the same process. Has its own Stack and Registers. |
| Communication | Inter-process communication (IPC) is slow and complex (OS intervention required). | Thread communication is fast (via shared memory/variables). |
| Context Switch | High overhead (switching memory maps, cache flushing). | Low overhead (switching registers/stack only). |
| Isolation | Processes are isolated; crash in one doesn't affect others. | Threads share memory; one crashing thread can crash the whole process. |
Why Lightweight?
Threads are called lightweight processes because creating a thread requires less time and fewer resources than creating a process. Since they share the memory space and open files of the parent process, there is no need to allocate a new address space, making context switching significantly faster.
Compare User-Level Threads and Kernel-Level Threads with respect to management and performance.
User-Level Threads (ULT):
- Management: Managed by a thread library in user space (e.g., POSIX Pthreads, Java threads on some older JVMs) without kernel awareness.
- Performance: Thread creation and switching are fast because no system calls are required.
- Blocking: If one thread performs a blocking system call, the entire process (and all other threads) is blocked because the kernel sees the process as a single unit.
- Multiprocessing: Cannot take advantage of multiple processors (the kernel only schedules the process on one CPU).
Kernel-Level Threads (KLT):
- Management: Managed directly by the Operating System (e.g., Windows, Linux).
- Performance: Creation and switching are slower than ULTs because they involve system calls and kernel intervention.
- Blocking: If one thread blocks, the kernel can schedule another thread from the same process.
- Multiprocessing: Can run on different processors simultaneously in a multiprocessor environment.
Describe the three common Multithreading Models used to map user threads to kernel threads.
- Many-to-One Model:
  - Maps many user-level threads to one kernel thread.
  - Pros: Thread management is done in user space (efficient).
  - Cons: If one thread makes a blocking system call, the entire process blocks. Multiple threads cannot run in parallel on multicore systems.
  - Example: Green threads in Solaris.
- One-to-One Model:
  - Maps each user thread to a kernel thread.
  - Pros: Provides more concurrency; if one thread blocks, others can run. Allows parallelism on multiprocessors.
  - Cons: Creating a user thread requires creating a corresponding kernel thread (overhead), which may limit the number of threads supported.
  - Example: Windows, Linux.
- Many-to-Many Model:
  - Multiplexes many user-level threads onto a smaller or equal number of kernel threads.
  - Pros: Developers can create as many user threads as needed, and the corresponding kernel threads can run in parallel on a multiprocessor. Blocking issues are mitigated.
  - Cons: Complex to implement.
  - Example: Older versions of Solaris (two-level model).
What are Scheduler Activations? Explain their role in thread management.
Scheduler Activations provide a communication mechanism between the kernel and the user-level thread library to improve the performance of the Many-to-Many model.
The Problem: In Many-to-Many models, the kernel knows little about user threads. If a user thread blocks (e.g., page fault), the kernel might block the whole kernel thread, stopping other ready user threads mapped to it.
The Solution:
- The kernel provides an LWP (Lightweight Process) to the application.
- Upcall: When a kernel event occurs (like a thread blocking), the kernel notifies the user-level thread library via an "upcall."
- The library can then save the state of the blocked thread and schedule another user thread on a new or existing LWP.
- Essentially, it mimics the functionality of kernel threads while maintaining the performance of user threads.
What is a Precedence Graph? How is it used to represent the execution relationships between concurrent processes?
A Precedence Graph is a directed acyclic graph (DAG) used to model the dependencies and execution order among concurrent processes or statements.
- Nodes: Represent individual processes or tasks (e.g., P1, P2, ...).
- Edges: A directed edge from node Pi to Pj (Pi → Pj) signifies that process Pi must complete execution before process Pj can begin.
Usage:
It is used to detect potential parallelism. If there is no path between two nodes (processes), they are independent and can be executed concurrently. If there is a path, they must be executed sequentially.
Example:
If we have two independent statements S1 and S2, followed by a statement S3 that uses both of their results:
- S1 and S2 have no dependencies on each other; they can run in parallel.
- S3 depends on both. The graph would show edges S1 → S3 and S2 → S3.
Using the concept of Condition Variables within a Monitor, explain the x.wait() and x.signal() operations.
Monitors often need to allow a process to wait until a specific condition becomes true. This is handled via Condition Variables (e.g., condition x;).
Operations:
- x.wait():
  - The process invoking this operation is suspended (blocked) and placed in a queue associated with the condition variable x.
  - Crucially, the process releases the monitor lock, allowing other processes to enter the monitor (otherwise, no one could enter to change the condition).
- x.signal():
  - This operation resumes exactly one of the processes (if any) that suspended itself on condition x.
  - If no process is waiting on x, the signal operation has no effect (unlike semaphores, where signals are remembered/counted).
Signal Semantics:
- Signal-and-Wait (Hoare style): The signaler waits until the resumed process leaves the monitor.
- Signal-and-Continue (Mesa style): The signaler continues, and the resumed process waits for the lock to become available.
Explain the Dining Philosophers solution using Monitors. How does it prevent deadlock?
A monitor solution avoids deadlock by imposing the restriction that a philosopher may pick up chopsticks only if both of them are available.
Monitor Structure:
- State array: enum {THINKING, HUNGRY, EATING} state[5];
- Condition variables: condition self[5]; (used to delay a hungry philosopher when the chopsticks are not available).

Key Procedures:
- pickup(i):
  - Set state[i] = HUNGRY.
  - Call test(i).
  - If state[i] is not EATING, call self[i].wait().
- putdown(i):
  - Set state[i] = THINKING.
  - Call test((i+4)%5) (check left neighbor).
  - Call test((i+1)%5) (check right neighbor).
- test(i):
  - Checks whether philosopher i is HUNGRY and neighbors (i+4)%5 and (i+1)%5 are NOT EATING.
  - If so, set state[i] = EATING and call self[i].signal().
Prevention of Deadlock:
Deadlock is prevented because the state change to EATING and the acquisition of chopsticks is atomic within the monitor. A philosopher waits unless both resources are free, ensuring no circular wait where everyone holds one and waits for another.
Implementation of Semaphores often leads to Busy Waiting. How can this be resolved? Provide the structure of a semaphore with a waiting queue.
Busy waiting (spinlocks) wastes CPU cycles checking a condition. To resolve this, we can modify the semaphore implementation. Instead of spinning, a process blocks itself.
Structure:
Define a semaphore as a structure:
```c
typedef struct {
    int value;             // semaphore value (may become negative)
    struct process *list;  // queue of PCBs blocked on this semaphore
} semaphore;
```
Operations:
1. wait(semaphore *S):

```c
S->value--;
if (S->value < 0) {
    // add this process to S->list
    block();  // suspend the process and invoke the scheduler
}
```
2. signal(semaphore *S):

```c
S->value++;
if (S->value <= 0) {
    // remove a process P from S->list
    wakeup(P);  // move P to the ready queue
}
```
In this implementation, semaphore values can be negative. If S->value is -n, there are n processes waiting in the queue.
What are the primary benefits of Multithreading in application development?
The benefits of multithreading can be categorized into four major areas:
- Responsiveness: Multithreading allows an interactive application to continue running even if part of it is blocked or is performing a lengthy operation. For example, a web browser can render a page in one thread while downloading an image in another.
- Resource Sharing: Threads share the memory and resources of the process they belong to by default. This allows several different threads of activity within the same address space efficiently.
- Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process, it is more economical to create and context-switch threads.
- Scalability (Utilization of Multiprocessor Architectures): In a multiprocessor architecture, threads can run in parallel on different processors. A single-threaded process can only run on one CPU, regardless of how many are available.
Describe the Swap() hardware instruction and how it facilitates process synchronization.
Swap() Instruction:
Like TestAndSet, Swap (or Exchange) is an atomic hardware instruction. It swaps the contents of two memory words.
Definition:
```c
// Executed atomically by the hardware.
void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
```
Implementing Mutual Exclusion:
We use a global boolean variable lock initialized to false. Each process has a local boolean variable key.
```c
do {
    key = true;
    while (key == true)
        Swap(&lock, &key);

    // critical section

    lock = false;

    // remainder section
} while (true);
```
Logic:
The process sets its local key to true and then attempts to swap key with lock.
- If lock is currently false (free), the swap makes lock true (locked) and key false. The loop terminates, and the process enters the critical section.
- If lock is currently true (held by another process), the swap simply exchanges true for true, and the loop continues (busy wait).