Unit 5 - Subjective Questions
CSE325 • Practice Questions with Detailed Answers
Explain the concept of a thread within an Operating System and distinguish it from a process.
Concept of a Thread
A thread is a basic unit of CPU utilization, consisting of a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
Differences between Thread and Process
| Feature | Process | Thread |
|---|---|---|
| Definition | An instance of a program in execution. | A subset of a process; a path of execution within a process. |
| Resource Sharing | Processes are independent and do not share memory by default. | Threads share the memory address space of their parent process. |
| Context Switching | Context switching between processes is heavy (slow) due to invalidating cache/TLB. | Context switching between threads is lightweight (fast). |
| Communication | Requires Inter-Process Communication (IPC) mechanisms. | Can communicate directly via shared variables/memory. |
| Overhead | High creation and termination overhead. | Low creation and termination overhead. |
Describe the syntax and parameters of the pthread_create() function in the POSIX library.
The pthread_create() function is used to create a new thread. It starts execution in the function pointed to by start_routine.
Syntax
```c
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);
```
Parameters
- `thread`: A pointer to a `pthread_t` variable where the unique identifier of the newly created thread will be stored.
- `attr`: A pointer to a `pthread_attr_t` structure to specify thread attributes (e.g., stack size, scheduling policy). If passed as `NULL`, default attributes are used.
- `start_routine`: A pointer to the C function that the thread will execute once created. This function must accept a `void *` argument and return a `void *`.
- `arg`: The argument to be passed to the `start_routine`. If multiple arguments are needed, a pointer to a structure containing the data is passed. If no argument is needed, `NULL` is passed.
Return Value
On success, it returns 0. On error, it returns a non-zero error number.
Explain the significance of pthread_join() and pthread_exit() in thread management.
pthread_exit()
This function is used to terminate the calling thread.
- Usage: `void pthread_exit(void *retval);`
- Significance: It ensures that the thread cleans up its own stack and registers but does not terminate the entire process (unless it is the last thread). It allows the thread to return a value (`retval`) to the thread that joins it.
pthread_join()
This function is used to wait for the termination of a specific thread.
- Usage: `int pthread_join(pthread_t thread, void **retval);`
- Significance:
  - Synchronization: It acts as a barrier, suspending the execution of the calling thread until the target `thread` terminates.
  - Resource Release: In many implementations, a thread's resources are not fully reclaimed until it is joined (similar to `wait()` for processes, which prevents zombies).
  - Return Value Retrieval: It allows the calling thread to retrieve the exit status/value returned by the target thread via `pthread_exit`.
Define a Race Condition and provide a scenario where it might occur.
Definition
A Race Condition is a situation in concurrent programming where the system's behavior or output depends on the uncontrollable sequence or timing of unrelated events (like the scheduling order of threads). It occurs when two or more threads access shared data and try to change it at the same time.
Scenario: The Bank Account Problem
Consider a shared variable balance = 1000.
Thread A (Deposit 500):
- Read `balance` (1000)
- Compute `balance = balance + 500`
- Write `balance`

Thread B (Withdraw 200):
- Read `balance` (1000)
- Compute `balance = balance - 200`
- Write `balance`
Race Condition Execution:
- Thread A reads 1000.
- Context switch to Thread B.
- Thread B reads 1000.
- Thread B calculates 800 and writes 800.
- Context switch back to Thread A.
- Thread A (using its old register value of 1000) calculates 1500 and writes 1500.
Result: The withdrawal of 200 is lost. The final balance is 1500 instead of the correct 1300. This is a race condition.
What is the Critical Section Problem? List the three conditions that a solution to the critical section problem must satisfy.
Critical Section Problem
The Critical Section Problem involves designing a protocol that processes/threads use to cooperate. A Critical Section is a segment of code where a process accesses shared resources (common variables, files, tables). The problem is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Required Conditions
- Mutual Exclusion: If a process is executing in its critical section, then no other process can be executing in its critical section.
- Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded Waiting: There must be a limit (bound) on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This prevents starvation.
What is a Mutex Lock? Explain how it is used to solve the Critical Section problem using POSIX APIs.
Mutex Lock
A Mutex (Mutual Exclusion) is a synchronization primitive used to protect shared resources. It acts like a lock: a thread acquires the lock before entering a critical section and releases it upon exiting. If the lock is already held by another thread, the requesting thread blocks until the lock becomes available.
POSIX Usage
- Initialization: The mutex must be initialized before use.

```c
pthread_mutex_t lock;
pthread_mutex_init(&lock, NULL);
```

- Locking (Entry Section): Before entering the critical section, the thread attempts to lock the mutex.

```c
pthread_mutex_lock(&lock);
// Critical Section (access shared resource)
```

- Unlocking (Exit Section): After finishing, the thread releases the mutex.

```c
pthread_mutex_unlock(&lock);
// Remainder Section
```

- Destruction: When no longer needed.

```c
pthread_mutex_destroy(&lock);
```
This ensures Mutual Exclusion because only the thread holding the lock can execute the code between the lock and unlock calls.
Differentiate between Binary Semaphores and Counting Semaphores.
Semaphores
A semaphore is an integer variable accessed only through two standard atomic operations: wait() and signal().
| Feature | Binary Semaphore | Counting Semaphore |
|---|---|---|
| Value Range | Can range only between 0 and 1. | Can range over an unrestricted domain (0 to N). |
| Functionality | Behaves similarly to a Mutex lock. Used primarily for mutual exclusion. | Used to control access to a resource with a finite number of instances. |
| Initialization | Initialized to 1 (usually). | Initialized to the number of available resources (N). |
| Operation | If value is 1, wait() succeeds and sets it to 0. If 0, it blocks. signal() sets it to 1. | wait() decrements the count; if the count is 0, it blocks. signal() increments the count. |
| Use Case | Protecting a single critical section. | Managing a pool of resources (e.g., connection pool, buffer slots). |
Mathematically define the wait() and signal() operations of a Semaphore.
Semaphores are accessed via two atomic operations, historically called P (wait) and V (signal).
1. Wait Operation (sem_wait or P)
The wait() operation decrements the semaphore value. If the value becomes negative, the process executing the wait is blocked.
Definition: `wait(S): S = S - 1; if S < 0, block the calling process on S's waiting queue.`
Note: In modern OS implementations involving blocking queues, the logic is: decrement S; if S < 0, add the process to the waiting queue and block it.
2. Signal Operation (sem_post or V)
The signal() operation increments the semaphore value. If there are processes blocked waiting on this semaphore, one of them is woken up.
Definition: `signal(S): S = S + 1; if S <= 0, wake up one process from S's waiting queue.`
Note: If S <= 0 after incrementing, a process is removed from the waiting queue and placed in the ready queue.
Compare Mutexes and Semaphores. When would you choose one over the other?
Comparison
-
Ownership:
- Mutex: Has a concept of ownership. The thread that locks the mutex must be the one to unlock it.
- Semaphore: No ownership. One thread can wait (decrement) and another thread can signal (increment). This makes semaphores suitable for signaling between threads.
-
Nature:
- Mutex: Strictly a locking mechanism for mutual exclusion.
- Semaphore: A signaling mechanism (Counting semaphores handle resource counting).
Selection Criteria
- Choose Mutex: When you strictly need Mutual Exclusion for a critical section (e.g., modifying a shared variable). It is generally faster and lighter than a semaphore for this specific purpose.
- Choose Semaphore:
- When you need to synchronize the execution order of different threads (e.g., Thread A must finish before Thread B starts - Thread A signals, Thread B waits).
- When managing a pool of identical resources (Counting Semaphore).
- When solving Producer-Consumer problems (handling full/empty slots).
Explain the solution to the Producer-Consumer Problem using Semaphores. Provide the pseudocode logic.
The Producer-Consumer problem involves a fixed-size buffer. Producers add items, Consumers remove them. We must ensure Producers don't add to a full buffer and Consumers don't remove from an empty one, while maintaining mutual exclusion on the buffer indices.
Semaphores Required
- `mutex`: A binary semaphore (init 1) for mutual exclusion on buffer access.
- `empty`: A counting semaphore (init N, where N is the buffer size) tracking empty slots.
- `full`: A counting semaphore (init 0) tracking filled slots.
Producer Logic
```c
do {
    // produce an item in next_produced
    wait(empty);    // Wait for an empty slot
    wait(mutex);    // Lock buffer access
    // add next_produced to buffer
    signal(mutex);  // Unlock buffer
    signal(full);   // Increment count of full slots
} while (true);
```
Consumer Logic
```c
do {
    wait(full);     // Wait for a filled slot
    wait(mutex);    // Lock buffer access
    // remove an item from buffer to next_consumed
    signal(mutex);  // Unlock buffer
    signal(empty);  // Increment count of empty slots
    // consume the item in next_consumed
} while (true);
```
What are the arguments passed to sem_init() in the POSIX semaphore library?
The sem_init() function initializes an unnamed semaphore.
Syntax
```c
int sem_init(sem_t *sem, int pshared, unsigned int value);
```
Arguments
- `sem`: A pointer to the `sem_t` structure to be initialized.
- `pshared`: An integer indicating whether the semaphore is shared between threads of a process or between processes.
  - If `pshared == 0`: The semaphore is shared between threads of the calling process.
  - If `pshared != 0`: The semaphore is shared between processes (requires shared memory).
- `value`: The initial value to set the semaphore to.
  - For a binary semaphore/mutex equivalent, this is usually 1.
  - For a resource counter, this is the number of resources N.
Explain the concept of Deadlock in the context of Multithreading using Mutex locks.
Deadlock Definition
Deadlock is a state where a set of threads are blocked because each thread is holding a resource and waiting to acquire another resource held by some other thread in the same set. No thread can proceed.
Scenario with Mutex Locks
Suppose there are two threads (T1 and T2) and two mutex locks (M_A and M_B).
Thread 1 Execution:
```c
pthread_mutex_lock(&M_A); // Acquires A
// Context switch happens here
pthread_mutex_lock(&M_B); // Waits for B
```
Thread 2 Execution:
```c
pthread_mutex_lock(&M_B); // Acquires B
pthread_mutex_lock(&M_A); // Waits for A
```
The Deadlock:
- T1 holds M_A.
- T2 holds M_B.
- T1 requests M_B and blocks (because T2 has it).
- T2 requests M_A and blocks (because T1 has it).
Neither thread can release the lock they hold because they are stuck waiting for the other. This circular dependency creates a deadlock.
How can you pass arguments to a thread function during creation? Provide a code snippet.
Arguments are passed as the fourth parameter in pthread_create, which is of type void *. If a single variable is needed, its address is cast to void *. If multiple arguments are needed, a structure is defined.
Code Snippet (Using a Structure)
```c
#include <pthread.h>
#include <stdio.h>

// Define a structure for arguments
struct thread_args {
    int id;
    int value;
};

void *myThreadFun(void *arg) {
    // Cast back to struct pointer
    struct thread_args *my_data = (struct thread_args *)arg;
    printf("ID: %d, Value: %d\n", my_data->id, my_data->value);
    return NULL;
}

int main() {
    pthread_t tid;
    struct thread_args args;
    args.id = 1;
    args.value = 100;
    // Pass address of struct cast to void*
    pthread_create(&tid, NULL, myThreadFun, (void *)&args);
    pthread_join(tid, NULL);
    return 0;
}
```
Discuss the difference between pthread_mutex_lock() and pthread_mutex_trylock().
Both functions interact with a mutex object, but they behave differently when the mutex is already locked by another thread.
pthread_mutex_lock()
- Behavior: It is a blocking call.
- If the mutex is available, it locks it and returns immediately.
- If the mutex is already locked, the calling thread is suspended (put to sleep) until the mutex becomes available.
pthread_mutex_trylock()
- Behavior: It is a non-blocking call.
- If the mutex is available, it locks it and returns 0.
- If the mutex is already locked, it does not block. Instead, it returns immediately with an error code (usually `EBUSY`).
Use Case for trylock
Used when a thread wants to do alternative work if the resource is busy, rather than waiting idle, preventing potential deadlocks or improving responsiveness.
What are Condition Variables in Pthreads, and why are they used with Mutexes?
Condition Variables (pthread_cond_t)
A condition variable is a synchronization primitive that allows threads to suspend execution and relinquish the processor until some condition is true. They allow threads to synchronize based on the actual value of data.
Why used with Mutexes?
Condition variables are always associated with a mutex lock to avoid race conditions on the condition being checked.
The Pattern:
- Wait (`pthread_cond_wait`):
  - A thread acquires a mutex.
  - It checks a condition (e.g., `buffer_count > 0`).
  - If the condition is false, it calls `pthread_cond_wait(&cond, &mutex)`.
  - Crucially, this function atomically releases the mutex and blocks the thread. This allows other threads to acquire the mutex to change the condition.
  - When signaled, it re-acquires the mutex before returning.
- Signal (`pthread_cond_signal`):
  - Another thread acquires the mutex, changes the state (e.g., adds to the buffer), and signals the condition variable to wake up waiting threads.
What is Busy Waiting? How do Semaphores/Mutexes in modern OS avoid this?
Busy Waiting
Busy waiting (or spinning) occurs when a process/thread repeatedly checks a condition in a loop (e.g., while(locked);) to see if it can proceed.
- Disadvantage: It wastes CPU cycles that could be used by other processes. This is inefficient in multiprogramming systems.
How Synchronization Primitives Avoid It
Modern implementations of Mutexes and Semaphores use a Block/Sleep and Wakeup mechanism instead of busy waiting.
- Block: When a thread attempts to acquire a locked mutex or wait on a zero-valued semaphore, the OS kernel moves the thread from the Running state to the Waiting state.
- Context Switch: The CPU is yielded to another process.
- Wakeup: When the mutex is unlocked or the semaphore is signaled, the OS kernel moves the waiting thread from the Waiting state to the Ready queue.
This ensures the CPU is not idle checking a variable, thus solving the inefficiency of busy waiting.
Explain the Readers-Writers Problem and the challenges in solving it.
The Problem
Multiple threads (readers and writers) access a shared database.
- Readers: Only read the data (multiple readers can read simultaneously).
- Writers: Read and write data (writers require exclusive access).
Constraints
- If a Writer is active, no other Writer or Reader can be active.
- If a Reader is active, other Readers can join, but no Writer can enter.
Challenges (Variations)
- First Readers-Writers Problem (Reader Preference):
- Logic: No reader is kept waiting unless a writer has already obtained permission to use the shared object.
- Issue: Starvation of Writers. If new readers keep arriving, the writer may never get a chance to write.
- Second Readers-Writers Problem (Writer Preference):
- Logic: Once a writer is ready, it performs the write as soon as possible. No new readers may start reading if a writer is waiting.
- Issue: Starvation of Readers.
What is the function of pthread_self() and how is it useful?
Function
pthread_self() is a function in the POSIX thread library that returns the Thread ID of the calling thread.
Syntax: pthread_t pthread_self(void);
Utility
- Identification: It allows a thread to know its own identity. This is useful for logging or debugging purposes to trace which thread is performing an action.
- Comparison: It is used with `pthread_equal(tid1, tid2)` to check if two thread IDs correspond to the same thread.
- Detaching: A thread might use it to detach itself via `pthread_detach(pthread_self())`, ensuring its resources are released immediately upon termination without being joined.
Write a C program snippet to initialize a semaphore, wait on it, and then destroy it.
```c
#include <stdio.h>
#include <semaphore.h>
#include <pthread.h>

int main() {
    sem_t mySem;

    // 1. Initialize Semaphore
    // pshared = 0 (thread shared), value = 1 (binary)
    if (sem_init(&mySem, 0, 1) != 0) {
        perror("Semaphore init failed");
        return 1;
    }
    printf("Semaphore initialized.\n");

    // 2. Wait (P operation)
    // Decrements value. If value was 1, it proceeds. If 0, it blocks.
    sem_wait(&mySem);
    printf("Entered Critical Section.\n");

    // ... Critical Section Operations ...

    // Signal (V operation) to release
    sem_post(&mySem);
    printf("Exited Critical Section.\n");

    // 3. Destroy Semaphore
    sem_destroy(&mySem);
    return 0;
}
```
List the advantages of Multithreading over a single-threaded approach.
- Responsiveness: In an interactive application, multithreading allows a program to continue running even if part of it is blocked or is performing a lengthy operation (e.g., a GUI remains responsive while a background thread downloads a file).
- Resource Sharing: Threads share the memory and resources of the process they belong to by default. This avoids the overhead of setting up shared memory or message passing used in inter-process communication.
- Economy: Allocating memory and resources for process creation is costly. Because threads share resources, it is more economical to create and context-switch threads.
- Scalability (Utilization of MP Architectures): On multiprocessor or multicore architectures, multithreading can greatly increase parallelism. Different threads can run on different cores simultaneously, speeding up computation.