Unit3 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Define the Critical Section Problem and explain the three necessary conditions that a solution to the critical section problem must satisfy.
The Critical Section Problem is to design a protocol that processes use to cooperate. A Critical Section is a segment of code in a process where the process may be changing common variables, updating a table, writing a file, etc. The system must ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
A solution to the critical section problem must satisfy the following three requirements:
- Mutual Exclusion: If process is executing in its critical section, then no other processes can be executing in their critical sections.
- Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded Waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Explain Semaphores. Distinguish between Counting Semaphores and Binary Semaphores. Write the pseudo-code for wait() and signal() operations.
A Semaphore is a synchronization tool (an integer variable) that is accessed only through two standard atomic operations: wait() and signal() (originally called P() and V(), respectively).
Types of Semaphores:
- Counting Semaphore: The value can range over an unrestricted domain. It can be used to control access to a given resource consisting of a finite number of instances.
- Binary Semaphore: The value can range only between 0 and 1. On some systems, these are known as mutex locks because they provide mutual exclusion.
Pseudo-code:

wait(S):

```c
wait(S) {
    while (S <= 0)
        ;  // busy wait
    S--;
}
```

signal(S):

```c
signal(S) {
    S++;
}
```
Note: To avoid busy waiting, operating systems often implement semaphores using a waiting queue where processes block instead of spinning.
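The blocking idea in the note above can be sketched with POSIX threads: a mutex protects the counter and a condition variable lets waiters sleep instead of spinning. This is an illustrative implementation, not how any particular OS does it; the names (`demo_sem`, `demo_wait`, `demo_signal`) are invented for the example:

```c
#include <pthread.h>

typedef struct {
    int value;                 /* the semaphore's integer value */
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} demo_sem;

void demo_sem_init(demo_sem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void demo_wait(demo_sem *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                      /* sleep instead of spinning */
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void demo_signal(demo_sem *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);             /* wake one sleeping waiter */
    pthread_mutex_unlock(&s->lock);
}
```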
Describe Peterson's Solution for the critical section problem for two processes. Prove how it satisfies Mutual Exclusion.
Peterson's Solution is a classic software-based solution to the critical section problem restricted to two processes. It uses two shared data items:
- `int turn;` — indicates whose turn it is to enter the critical section.
- `boolean flag[2];` — indicates whether a process is ready to enter.

Structure for process Pi (where the other process is Pj, with j = 1 - i):
```c
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;  // busy wait
    // CRITICAL SECTION
    flag[i] = false;
    // REMAINDER SECTION
} while (true);
```
Proof of Mutual Exclusion:
Each process Pi enters its critical section only if either flag[j] == false or turn == i. If both processes were executing in their critical sections at the same time, then flag[0] == true and flag[1] == true. These two facts imply that P0 and P1 could not both have successfully exited their while loops, since the value of turn can be either 0 or 1 but not both. Hence one of the processes must have remained waiting in its while loop, preventing simultaneous access.
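Peterson's algorithm can be exercised on a real machine, but only if the flag/turn accesses are protected from compiler and CPU reordering; the sketch below does this with C11 sequentially consistent atomics (an addition not part of the original algorithm, whose names `peterson_lock`/`peterson_unlock` are invented here):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];        /* flag[i]: process i wants to enter */
atomic_int  turn;           /* whose turn it is to defer         */
int shared_counter = 0;     /* the resource being protected      */

void peterson_lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;  /* busy wait */
}

void peterson_unlock(int i) {
    atomic_store(&flag[i], false);
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        peterson_lock(i);
        shared_counter++;   /* critical section */
        peterson_unlock(i);
    }
    return NULL;
}
```

Without the mutual exclusion the two workers would lose updates; with it, the final count is exactly the sum of both loops.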
Explain the Producer-Consumer Problem. Provide a solution using Semaphores.
The Producer-Consumer Problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer. The producer generates data and puts it into the buffer; the consumer consumes the data from the buffer. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
Solution using Semaphores:
We need three semaphores:
- `mutex` (binary, initialized to 1): provides mutual exclusion for buffer access.
- `empty` (counting, initialized to n, the buffer size): counts empty slots.
- `full` (counting, initialized to 0): counts filled slots.
Producer Process:
```c
do {
    // produce an item in next_produced
    wait(empty);
    wait(mutex);
    // add next_produced to the buffer
    signal(mutex);
    signal(full);
} while (true);
```
Consumer Process:
```c
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer into next_consumed
    signal(mutex);
    signal(empty);
    // consume the item in next_consumed
} while (true);
```
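The pseudo-code above maps almost directly onto POSIX semaphores. Below is a self-contained sketch; the buffer size, item count, and names (`empty_slots`, `full_slots`) are arbitrary choices for the example:

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 5
#define N_ITEMS  100

int buffer[BUF_SIZE];
int in = 0, out = 0;         /* next write / next read positions */
long consumed_sum = 0;

sem_t empty_slots;           /* counts empty slots, starts at BUF_SIZE */
sem_t full_slots;            /* counts filled slots, starts at 0       */
sem_t mutex;                 /* binary semaphore guarding the buffer   */

void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= N_ITEMS; item++) {
        sem_wait(&empty_slots);
        sem_wait(&mutex);
        buffer[in] = item;               /* add item to buffer */
        in = (in + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&full_slots);
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int k = 0; k < N_ITEMS; k++) {
        sem_wait(&full_slots);
        sem_wait(&mutex);
        int item = buffer[out];          /* remove item from buffer */
        out = (out + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);
        consumed_sum += item;
    }
    return NULL;
}
```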
What are Monitors in the context of process synchronization? How do they differ from semaphores?
Monitors are a high-level synchronization construct (an abstract data type) that provides a convenient and effective mechanism for process synchronization. The monitor construct ensures that only one process at a time can be active within the monitor.
Structure:
A monitor consists of shared data structures and a set of procedures (operations) that operate on them. Processes cannot directly access the internal data.
Condition Variables:
To allow a process to wait within the monitor, a condition variable type is used (e.g., condition x, y;). Two operations are allowed on condition variables:
- `x.wait()`: the invoking process is suspended until another process invokes `x.signal()`.
- `x.signal()`: resumes exactly one suspended process; if no process is suspended, the operation has no effect.
Difference from Semaphores:
- Abstraction: Monitors are a higher-level abstraction (often language-supported like in Java synchronized blocks) compared to semaphores (low-level OS primitives).
- Ease of Use: Monitors encapsulate data and synchronization, making them less error-prone than semaphores where a misplaced wait/signal can cause deadlocks easily.
- Automatic Mutex: Mutual exclusion is provided automatically by the monitor construct, whereas semaphores require the programmer to explicitly manage the mutex.
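A monitor can be emulated in C with a mutex (the automatic mutual exclusion) plus a condition variable (playing the role of the `x.wait()`/`x.signal()` pair). The account example below is invented for illustration:

```c
#include <pthread.h>

/* A hand-built "monitor" guarding a bank balance. */
typedef struct {
    pthread_mutex_t lock;    /* one thread active in the monitor at a time */
    pthread_cond_t  funds;   /* condition: balance may have grown          */
    int balance;
} account_monitor;

void account_init(account_monitor *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->funds, NULL);
    m->balance = 0;
}

void deposit(account_monitor *m, int amount) {
    pthread_mutex_lock(&m->lock);        /* enter the monitor */
    m->balance += amount;
    pthread_cond_signal(&m->funds);      /* like x.signal()   */
    pthread_mutex_unlock(&m->lock);      /* leave the monitor */
}

int withdraw(account_monitor *m, int amount) {
    pthread_mutex_lock(&m->lock);
    while (m->balance < amount)          /* like x.wait(): sleep until funds */
        pthread_cond_wait(&m->funds, &m->lock);
    m->balance -= amount;
    int remaining = m->balance;
    pthread_mutex_unlock(&m->lock);
    return remaining;
}
```

Note how the caller never touches `balance` directly; all access goes through monitor procedures, exactly the encapsulation the text describes.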
Discuss the Dining Philosophers Problem. Explain the issues involved and provide a strategy to avoid deadlock.
The Problem:
Five philosophers sit at a circular table with a bowl of rice. There are five chopsticks, one between each pair of philosophers. A philosopher needs two chopsticks (left and right) to eat. They cycle between thinking and eating.
Issues:
- Deadlock: If all philosophers become hungry simultaneously and pick up their left chopstick, they will all wait forever for the right chopstick.
- Starvation: A philosopher might never get both chopsticks if the neighbors keep eating alternately.
Deadlock-Free Strategy (Resource Hierarchy Solution):
One solution is to impose a partial order on the resources (chopsticks). Number the chopsticks 0 through 4.
Algorithm:
- Each philosopher picks up the lower-numbered of their two chopsticks first, then the higher-numbered one.
- For philosophers 0 through 3 this means picking up chopstick i before chopstick i+1; the last philosopher (4), whose chopsticks are numbered 4 and 0, therefore picks up chopstick 0 first and chopstick 4 second.
This asymmetry breaks the circular wait condition necessary for deadlock.
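The resource-hierarchy rule can be sketched with one mutex per chopstick; every philosopher locks the lower-numbered mutex first, so a cycle of waiting can never form. The iteration counts and names are arbitrary:

```c
#include <pthread.h>

#define N 5
pthread_mutex_t chopstick[N];   /* one mutex per chopstick */
int meals_eaten[N];

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    /* resource hierarchy: always acquire the lower-numbered one first */
    int first  = left < right ? left : right;
    int second = left < right ? right : left;
    for (int round = 0; round < 100; round++) {
        pthread_mutex_lock(&chopstick[first]);
        pthread_mutex_lock(&chopstick[second]);
        meals_eaten[i]++;                 /* eat */
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
        /* think */
    }
    return NULL;
}
```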
Compare User-level Threads and Kernel-level Threads. Highlight their advantages and disadvantages.
| Feature | User-Level Threads (ULT) | Kernel-Level Threads (KLT) |
|---|---|---|
| Management | Managed by user-level thread libraries without kernel support. | Managed directly by the Operating System. |
| Speed | Creation and context switching are faster (no system calls required). | Creation and context switching are slower due to system call overhead. |
| Blocking | If one thread executes a blocking system call, the entire process (all threads) blocks. | If one thread blocks, the kernel can schedule another thread from the same process. |
| Multiprocessing | Cannot utilize multiprocessor architectures easily (kernel sees 1 process). | Can run on different processors simultaneously in a multiprocessor environment. |
| Dependency | OS independent. | OS dependent. |
Advantages:
- ULT: Fast, portable.
- KLT: Supports true parallelism, non-blocking system calls.
Disadvantages:
- ULT: No true concurrency on multiple CPUs, blocking issue.
- KLT: High overhead.
Explain the Readers-Writers Problem. Distinguish between the 'First' and 'Second' variations of this problem.
The Readers-Writers Problem models access to a database shared among several concurrent processes. Some processes (Readers) only read the database; others (Writers) update the database.
Constraints:
- Multiple readers can read the file simultaneously.
- Only one writer can write at a time.
- If a writer is writing, no other process (reader or writer) can access the database.
Variations:
- First Readers-Writers Problem (Reader Priority):
  - No reader should be kept waiting unless a writer has already obtained permission to use the shared object.
  - Result: writers may starve if new readers keep arriving.
- Second Readers-Writers Problem (Writer Priority):
  - Once a writer is ready, it performs its write as soon as possible; if a writer is waiting, no new readers may start reading.
  - Result: readers may starve.
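The first (reader-priority) variation is commonly written with two semaphores and a reader count; the sketch below follows that classic structure (variable names are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>

sem_t rw_mutex;       /* held by writers, and by the first/last reader */
sem_t count_mutex;    /* protects read_count                           */
int read_count = 0;   /* number of readers currently reading           */
int shared_data = 0;
long observed = 0;

void reader(void) {
    sem_wait(&count_mutex);
    if (++read_count == 1)
        sem_wait(&rw_mutex);     /* first reader locks out writers */
    sem_post(&count_mutex);

    observed += shared_data;     /* reading happens here */

    sem_wait(&count_mutex);
    if (--read_count == 0)
        sem_post(&rw_mutex);     /* last reader admits writers again */
    sem_post(&count_mutex);
}

void writer(int value) {
    sem_wait(&rw_mutex);         /* exclusive access */
    shared_data = value;
    sem_post(&rw_mutex);
}
```

Because readers only take `rw_mutex` at the boundaries, any number of them can overlap, which is exactly why a steady stream of readers can starve a waiting writer.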
Define Cooperating Processes and explain how they communicate.
Cooperating Processes are processes that can affect or be affected by other processes executing in the system. They may share a logical address space (threads) or share data through files or messages.
Reasons for Cooperation:
- Information Sharing: Access to the same file (e.g., database).
- Computation Speedup: Breaking a task into subtasks to run in parallel.
- Modularity: Constructing a system in modular fashion.
Communication Mechanisms:
Cooperating processes require an Inter-Process Communication (IPC) mechanism. The two fundamental models are:
- Shared Memory: A region of memory in the address space of the creating process is shared with cooperating processes. Communication is fast (memory speeds).
- Message Passing: Communication takes place by exchanging messages between cooperating processes. Useful for distributed environments. Operations: send(message) and receive(message).
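On POSIX systems a pipe gives a minimal message-passing channel between parent and child: `write()` plays the role of send() and `read()` of receive(). A small sketch (the function name and message are invented for the example):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <string.h>
#include <unistd.h>

/* Child sends "hello" through a pipe; parent receives it into buf. */
int demo_message_pass(char *buf, size_t buflen) {
    int fd[2];
    if (pipe(fd) == -1) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                           /* child: the sender   */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);   /* send(message)       */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                             /* parent: the receiver */
    read(fd[0], buf, buflen);                 /* receive(message)     */
    close(fd[0]);
    wait(NULL);
    return 0;
}
```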
Discuss Hardware Primitives for synchronization. Explain TestAndSet and Swap instructions.
Hardware primitives are special machine instructions that operate atomically (indivisibly). They are used to implement mutual exclusion and lock mechanisms.
1. TestAndSet Instruction:
It atomically tests a memory word and sets it to true.

Definition (executed atomically):

```c
boolean TestAndSet(boolean *target) {
    boolean rv = *target;   // remember the old value
    *target = true;         // set the lock
    return rv;
}
```

Usage: each process loops calling `TestAndSet(&lock)`. If it returns `false`, the process enters the critical section (and the lock is now `true`); if it returns `true`, the process keeps waiting.
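C11 exposes a TestAndSet-style primitive as `atomic_flag`, which is guaranteed lock-free and typically compiles to such an instruction. A spin lock built on it, following the usage just described (names invented for the example):

```c
#include <pthread.h>
#include <stdatomic.h>

atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* the shared lock, initially false */
int ts_counter = 0;                         /* protected shared data            */

void spin_lock(void) {
    /* atomic_flag_test_and_set returns the OLD value: false means we won
     * the lock (it is now true); true means someone else holds it. */
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* busy wait */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_flag);
}

void *ts_worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        spin_lock();
        ts_counter++;       /* critical section */
        spin_unlock();
    }
    return NULL;
}
```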
2. Swap Instruction:
It atomically swaps the contents of two memory words.

Definition (executed atomically):

```c
void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
```

Usage: a global `lock` is initialized to `false`. Each process has a local `key`, sets `key = true`, and loops calling `Swap(&lock, &key)` until `key` becomes `false`, at which point it holds the lock.
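C11's `atomic_exchange` performs the same indivisible swap, so the loop just described can be written directly (names are invented for the example):

```c
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool swap_lock_var = false;   /* the global lock, initially false */

void swap_acquire(void) {
    bool key = true;
    /* atomic_exchange stores true and returns the old lock value in one
     * indivisible step - the hardware Swap of lock and key. */
    while (key)
        key = atomic_exchange(&swap_lock_var, true);
}

void swap_release(void) {
    atomic_store(&swap_lock_var, false);
}
```

When the exchange returns `false`, the caller both observed the lock free and claimed it in the same step, which is what makes the primitive sufficient for mutual exclusion.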
What are Multithreading Models? Explain Many-to-One, One-to-One, and Many-to-Many models.
Multithreading models map user-level threads (ULT) to kernel-level threads (KLT).
-
Many-to-One Model:
- Maps many user-level threads to one kernel thread.
- Pros: Thread management is efficient (user space).
- Cons: The entire process blocks if one thread makes a blocking system call. No parallel execution on multicore systems.
- Example: Solaris Green Threads (the early Java green-threads library).
-
One-to-One Model:
- Maps each user thread to a kernel thread.
- Pros: More concurrency; blocking one thread doesn't block others; runs on multiple processors.
- Cons: Creating a user thread requires creating a corresponding kernel thread, which adds overhead, so systems often restrict the number of threads.
- Example: Windows, Linux.
-
Many-to-Many Model:
- Multiplexes many user-level threads to a smaller or equal number of kernel threads.
- Pros: Developers can create as many user threads as needed, and kernel threads can run in parallel on a multiprocessor.
- Example: Older versions of Solaris.
Explain the concept of Scheduler Activations.
Scheduler Activations is a mechanism that attempts to combine the best properties of user-level and kernel-level threads. It addresses the communication gap between the kernel and the user-level thread library.
Mechanism:
- The kernel provides a set of Lightweight Processes (LWPs) (virtual processors) to the application.
- The application can schedule user threads onto these LWPs.
- Upcalls: When a kernel event occurs (like a thread blocking), the kernel notifies the application via an upcall. The library can then save the state of the blocking thread and schedule another user thread on a different LWP.
- This allows the user library to maintain control over scheduling while maintaining kernel awareness, avoiding the issue where a blocking system call in a Many-to-One model blocks the whole process.
What is a Race Condition? Give an example.
Race Condition:
A race condition occurs when several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place. It results in inconsistent data.
Example:
Consider two processes, P1 and P2, sharing a variable `counter` initialized to 5.
- P1 executes `counter++`
- P2 executes `counter--`

At the machine level, each of these operations is three steps (load, update, store).
Scenario:
1. P1 loads `counter` (5) into register R1.
2. P1 increments R1 to 6.
3. (Context switch) P2 loads `counter` (still 5) into register R2.
4. P2 decrements R2 to 4.
5. P2 stores R2 (4) into `counter`.
6. (Context switch) P1 stores R1 (6) into `counter`.

The final value is 6, but it should be 5. The data is inconsistent because synchronization was lacking.
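The bad interleaving can be reproduced deterministically by writing out the load/update/store steps by hand, with local variables standing in for the registers:

```c
int counter = 5;   /* the shared variable */

/* Simulates the interleaving from the scenario above: each "register"
 * holds a stale loaded copy, and P1's late store clobbers P2's update. */
int race_demo(void) {
    int r1 = counter;   /* step 1: P1 loads 5                     */
    r1 = r1 + 1;        /* step 2: P1 computes 6                  */
    int r2 = counter;   /* step 3: context switch, P2 loads 5     */
    r2 = r2 - 1;        /* step 4: P2 computes 4                  */
    counter = r2;       /* step 5: P2 stores 4                    */
    counter = r1;       /* step 6: P1 stores 6, losing P2's write */
    return counter;
}
```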
Differentiate between a Process and a Thread.
| Feature | Process | Thread |
|---|---|---|
| Definition | A program in execution. Heavyweight unit. | A flow of control within a process. Lightweight unit. |
| Resource Sharing | Processes are independent and do not share memory/resources by default. | Threads share the address space, code, data, and files of the process they belong to. |
| Context Switching | High overhead (switching memory maps, flushing TLB). | Low overhead (switching only registers/stack). |
| Isolation | If one process crashes, it doesn't affect others. | If one thread crashes, the entire process (and other threads) may terminate. |
| Communication | Requires IPC (pipes, messages, shared memory). | Can communicate directly via shared variables. |
Explain the concept of a Precedence Graph with a diagrammatic representation explanation.
A Precedence Graph is a directed acyclic graph (DAG) used to model the dependencies between concurrent processes or statements. It helps in determining which parts of a program can be executed in parallel and which must be executed sequentially.
Components:
- Nodes: Represent individual statements or processes (S1, S2, ..., Sn).
- Edges: Directed edges represent execution order constraints.
Interpretation:
An edge from node Si to node Sj (written Si → Sj) means that Si must complete execution before Sj can begin.
Example:
Consider three statements S1, S2, and S3, where S1 and S2 are independent and S3 depends on the results of both S1 and S2.
Graph: S1 → S3 and S2 → S3.
This implies S1 and S2 can run concurrently, but S3 must wait for both to finish.
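The precedence constraints can be enforced with threads: S1 and S2 run concurrently, and `pthread_join` realizes the edges into S3. The statement bodies (a = 2, b = 3, c = a + b) are invented for illustration:

```c
#include <pthread.h>

int a, b, c_result;

void *s1(void *arg) { (void)arg; a = 2; return NULL; }   /* S1 */
void *s2(void *arg) { (void)arg; b = 3; return NULL; }   /* S2 */

int run_graph(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, s1, NULL);   /* S1 and S2 start concurrently */
    pthread_create(&t2, NULL, s2, NULL);
    pthread_join(t1, NULL);   /* edge S1 -> S3: S3 waits for S1 */
    pthread_join(t2, NULL);   /* edge S2 -> S3: S3 waits for S2 */
    c_result = a + b;         /* S3 runs only after both complete */
    return c_result;
}
```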
Describe the Hierarchy of Processes.
The Hierarchy of Processes refers to the parent-child relationship created during process creation in an operating system.
- Creation: When a process executes a process-creation system call (like `fork()` in Unix), it creates a new process.
  - The creator is the Parent Process.
  - The new process is the Child Process.
- Tree Structure: The child process can, in turn, create other processes, forming a tree of processes.
- Process Identifier (PID): The OS identifies each process by a unique PID. The root of the tree is usually a special init process (e.g., `systemd` or `init` in Linux).
- Resource Sharing Options:
- Parent and child share all resources.
- Child shares a subset of parent's resources.
- Parent and child share no resources.
- Execution Options:
- Parent and child execute concurrently.
- Parent waits until children terminate.
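The parent-child relationship can be sketched with `fork()` and `waitpid()`; here the parent waits until the child terminates (one of the execution options listed above). The exit status 42 is an arbitrary choice for the example:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent creates a child, waits for it, and returns the child's status. */
int parent_child_demo(void) {
    pid_t pid = fork();
    if (pid < 0) return -1;   /* fork failed */
    if (pid == 0) {
        /* child branch: do some work, then terminate */
        _exit(42);
    }
    /* parent branch: wait until the child terminates */
    int status;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
}
```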
Provide an example of a Threaded Program (conceptual or pseudo-code) showing the creation and joining of threads.
Below is a conceptual example using the POSIX Pthreads API style:
Objective: Create a separate thread to calculate the sum of numbers.
```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;  /* this data is shared by the thread(s) */

/* The thread will execute in this function */
void *runner(void *param) {
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);
}

int main(int argc, char *argv[]) {
    pthread_t tid;        /* the thread identifier */
    pthread_attr_t attr;  /* set of thread attributes */

    /* set the default attributes of the thread */
    pthread_attr_init(&attr);
    /* create the thread, passing the runner function and argument argv[1] */
    pthread_create(&tid, &attr, runner, argv[1]);
    /* wait for the thread to exit */
    pthread_join(tid, NULL);

    printf("sum = %d\n", sum);
    return 0;
}
```
Explanation:
- `pthread_create` spawns a new thread executing `runner`.
- `main` continues running concurrently.
- `pthread_join` blocks the main thread until the child thread finishes, ensuring `sum` is calculated before printing.
What is Bounded Waiting in critical section solutions? Why is it important?
Bounded Waiting is one of the three requirements for a valid Critical Section solution.
Definition:
There must exist a limit (or bound) on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Importance:
- Prevents Starvation: Without bounded waiting, a process could theoretically wait forever while other processes continuously enter and exit the critical section.
- Fairness: It ensures that every process gets a fair chance to execute its critical section within a finite timeframe.
Example: In a poorly designed semaphore system, if a process waits in a queue and the queue is serviced in LIFO (Last In First Out) order, the first process might starve. FIFO (First In First Out) ensures bounded waiting.
What are the drawbacks of using Semaphores?
While semaphores are a powerful synchronization tool, they have several drawbacks:
- Complexity: They are low-level primitives. Using them requires careful programming.
- Timing Errors: They are prone to human error. A simple mistake in the sequence of `wait()` and `signal()` operations can lead to severe system issues.
  - Example: `wait(mutex) ... wait(mutex)` causes deadlock.
  - Example: `signal(mutex) ... wait(mutex)` violates mutual exclusion.
- Deadlock: Improper ordering of semaphore operations across multiple processes can easily lead to deadlock (e.g., Dining Philosophers).
- Priority Inversion: A high-priority process waiting for a semaphore held by a low-priority process can be delayed by a medium-priority process (solved by priority inheritance).
- Unstructured: The `wait` and `signal` calls are scattered throughout the code, making maintenance and debugging difficult.
Explain the Classical Two-Process Solution (like Dekker's or Peterson's) vs N-Process Solutions (like Bakery Algorithm).
Two-Process Solutions:
- Algorithms like Peterson's Algorithm are designed specifically for two processes (n = 2).
- They use simple shared variables (flags and turn).
- They are computationally inexpensive but not directly scalable to n processes without modification.
N-Process Solutions (e.g., Lamport's Bakery Algorithm):
- Designed to handle n processes competing for a critical section.
- Concept: Based on a scheduling system like a bakery. A process receives a number. The process with the lowest number is served next.
- Mechanism (for process i entering):
  1. `choosing[i] = true`
  2. `number[i] = max(number[0], ..., number[n-1]) + 1`
  3. `choosing[i] = false`
  4. Wait for all processes with smaller numbers (or the same number but a smaller PID).
- Complexity: Higher overhead due to iterating through arrays of size n to determine the maximum number or check the waiting conditions.
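The mechanism above can be sketched for a small fixed n, using C11 sequentially consistent atomics to keep the ticket reads and writes well-defined (the process count, iteration counts, and names are arbitrary choices for the example):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NPROC 3
atomic_bool choosing[NPROC];    /* process k is picking a ticket */
atomic_int  number[NPROC];      /* ticket numbers; 0 = not competing */
int bakery_counter = 0;         /* the protected shared data */

void bakery_lock(int i) {
    atomic_store(&choosing[i], true);
    int max = 0;                              /* take a ticket one larger   */
    for (int k = 0; k < NPROC; k++) {         /* than any ticket seen so far */
        int v = atomic_load(&number[k]);
        if (v > max) max = v;
    }
    atomic_store(&number[i], max + 1);
    atomic_store(&choosing[i], false);

    for (int k = 0; k < NPROC; k++) {
        while (atomic_load(&choosing[k]))
            ;  /* wait while k is still picking its number */
        /* wait while k holds a smaller ticket (ties broken by id) */
        while (atomic_load(&number[k]) != 0 &&
               (atomic_load(&number[k]) < atomic_load(&number[i]) ||
                (atomic_load(&number[k]) == atomic_load(&number[i]) && k < i)))
            ;
    }
}

void bakery_unlock(int i) {
    atomic_store(&number[i], 0);   /* hand the ticket back */
}

void *bakery_worker(void *arg) {
    int i = *(int *)arg;
    for (int r = 0; r < 10000; r++) {
        bakery_lock(i);
        bakery_counter++;          /* critical section */
        bakery_unlock(i);
    }
    return NULL;
}
```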