Unit5 - Subjective Questions
CSE325 • Practice Questions with Detailed Answers
Explain the fundamental differences between a Process and a Thread in the context of Operating Systems.
Difference between Process and Thread:
- Definition:
  - Process: A program in execution. It is the unit of resource allocation.
  - Thread: A segment of a process (lightweight process). It is the unit of CPU scheduling.
- Memory Sharing:
  - Process: Processes run in separate memory spaces. Communication between them requires Inter-Process Communication (IPC).
  - Thread: Threads within the same process share the same memory space (code segment, data segment, and open files), but have their own stack and register set.
- Overhead:
  - Process: Context switching between processes is heavy and time-consuming.
  - Thread: Context switching between threads is faster and requires fewer resources.
- Creation:
  - Process: Creating a new process (e.g., using `fork()`) duplicates the entire parent process.
  - Thread: Creating a thread (e.g., using `pthread_create`) is lightweight.
Describe the syntax and parameters of the pthread_create function in the POSIX thread library.
The pthread_create function is used to create a new thread. Its syntax is:
```c
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);
```
Parameters:
- `thread`: A pointer to a `pthread_t` variable where the unique identifier of the newly created thread will be stored.
- `attr`: A pointer to thread attributes (e.g., stack size, scheduling policy). If set to `NULL`, default attributes are used.
- `start_routine`: A pointer to the function that the thread will execute once created. This function must accept a `void *` and return a `void *`.
- `arg`: A pointer to the argument passed to the `start_routine`. If multiple arguments are needed, a structure pointer is typically passed. If no argument is needed, `NULL` is passed.

Return Value: Returns 0 on success; otherwise, it returns an error number.
What is a Race Condition? Explain with a conceptual example involving a shared variable.
Race Condition:
A race condition occurs when two or more threads (or processes) access and manipulate shared data concurrently, and the final outcome depends on the particular order (timing) in which the access takes place. It leads to inconsistent or incorrect data.
Example:
Consider a shared variable counter = 5 and two threads attempting to increment it (counter++).
The operation counter++ is not atomic; it consists of three steps in machine code:
- Load `counter` into a register.
- Increment the register.
- Store the register value back to `counter`.
Scenario:
- Thread A reads `counter` (value 5).
- Context switch occurs; Thread B reads `counter` (value 5).
- Thread A increments to 6 and writes 6 to memory.
- Thread B increments to 6 (based on its read value) and writes 6 to memory.
Result: The final value is 6, whereas the expected value was 7. The update from Thread A was lost.
Define the Critical Section Problem. What are the three conditions that a solution to the critical section problem must satisfy?
Critical Section Problem:
Consider a system of processes/threads where each has a segment of code, called a Critical Section (CS), in which the process may be changing common variables, updating a table, writing a file, etc. The problem is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Required Conditions:
- Mutual Exclusion: If a process is executing in its critical section, then no other process can be executing in its critical section.
- Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
- Bounded Waiting: There exists a bound, or limit, on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted (preventing starvation).
What is a Mutex Lock? Explain the functions used to initialize, lock, and unlock a mutex in Pthreads.
Mutex Lock (Mutual Exclusion Object):
A mutex is a synchronization primitive used to protect shared resources from race conditions. It acts like a lock; a thread acquires the lock before entering a critical section and releases it upon exiting.
Pthread Mutex Functions:
- Initialization:
  ```c
  int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
  // Or statically:
  pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  ```
  Initializes the mutex object.
- Locking:
  ```c
  int pthread_mutex_lock(pthread_mutex_t *mutex);
  ```
  If the mutex is available, the calling thread locks it and proceeds. If it is already locked by another thread, the calling thread blocks (sleeps) until the mutex becomes available.
- Unlocking:
  ```c
  int pthread_mutex_unlock(pthread_mutex_t *mutex);
  ```
  Releases the lock. If other threads are waiting for this mutex, one of them is unblocked and acquires the lock.
Explain the role of pthread_join and pthread_exit. Why is pthread_join necessary in the main thread?
pthread_exit:
- Function: `void pthread_exit(void *retval);`
- Role: Terminates the calling thread and makes the value `retval` available to any thread joining this thread. It performs cleanup (e.g., popping cancellation cleanup handlers) but does not release process resources like file descriptors unless it is the last thread.

pthread_join:
- Function: `int pthread_join(pthread_t thread, void **retval);`
- Role: Suspends the execution of the calling thread until the thread specified by `thread` terminates. It acts similarly to `wait()` for processes.
Necessity in Main Thread:
If the main thread (the initial thread of the process) finishes execution (e.g., returns from main()) without calling pthread_join on created worker threads, the entire process may terminate immediately. This kills all running worker threads abruptly before they complete their tasks. pthread_join ensures the main thread waits for workers to finish.
Differentiate between Binary Semaphores and Counting Semaphores.
Comparison:
- Value Range:
  - Binary Semaphore: The integer value can range only between 0 and 1.
  - Counting Semaphore: The integer value can range over an unrestricted domain (typically non-negative integers).
- Functionality:
  - Binary Semaphore: Primarily used for Mutual Exclusion (locking). It behaves similarly to a Mutex.
  - Counting Semaphore: Used to control access to a given resource consisting of a finite number of instances (resource counting).
- Initialization:
  - Binary: Initialized to 1.
  - Counting: Initialized to N, where N is the number of available resources.
- Behavior:
  - In a binary semaphore, if the value is 0, a `wait` operation blocks. If 1, it proceeds and sets the value to 0.
  - In a counting semaphore, `wait` decrements the count. If the count becomes negative (implementation dependent) or was 0, it blocks.
Describe the Producer-Consumer Problem and outline how Semaphores can be used to solve it.
Producer-Consumer Problem:
Also known as the Bounded-Buffer problem. Two processes, the producer and the consumer, share a common, fixed-size buffer. The producer generates data and puts it into the buffer. The consumer consumes data from the buffer. The problem is to ensure that the producer does not add data into the buffer if it is full and that the consumer does not remove data if the buffer is empty.
Semaphore Solution:
We use three semaphores:
- `mutex` (Binary, init = 1): Provides mutual exclusion for buffer access.
- `empty` (Counting, init = N): Counts empty slots in the buffer.
- `full` (Counting, init = 0): Counts filled slots in the buffer.

Logic:
- Producer:
  ```c
  wait(empty);   // Wait for empty slot
  wait(mutex);   // Lock buffer
  // ... add item to buffer ...
  signal(mutex); // Unlock buffer
  signal(full);  // Signal that a slot is full
  ```
- Consumer:
  ```c
  wait(full);    // Wait for filled slot
  wait(mutex);   // Lock buffer
  // ... remove item from buffer ...
  signal(mutex); // Unlock buffer
  signal(empty); // Signal that a slot is empty
  ```
What are the standard POSIX semaphore operations? Explain the difference between sem_wait and sem_post.
POSIX semaphores are defined in <semaphore.h>. The two atomic operations are:
**1. `sem_wait(sem_t *sem)` (The P operation):**
- Logic: Decrements the value of the semaphore pointed to by `sem`.
- Blocking: If the semaphore's value is greater than zero, the decrement proceeds, and the function returns immediately. If the semaphore currently has a value of zero, the call blocks (waits) until it becomes possible to perform the decrement (i.e., another thread increments it).

**2. `sem_post(sem_t *sem)` (The V operation):**
- Logic: Increments (unlocks) the semaphore pointed to by `sem`.
- Waking: If the semaphore's value consequently becomes greater than zero, another process or thread blocked in a `sem_wait` call will be woken up and proceed to lock the semaphore.
Compare Mutex Locks and Semaphores. When would you choose one over the other?
Comparison:
- Ownership:
  - Mutex: Has the concept of ownership. Only the thread that locked the mutex can unlock it.
  - Semaphore: No ownership. A thread can wait on a semaphore, and a different thread (or process) can post (signal) it.
- State:
  - Mutex: Only two states (Locked/Unlocked).
  - Semaphore: Can have a counting value (Counting Semaphore) or 0/1 (Binary Semaphore).
- Nature:
  - Mutex: Strictly a locking mechanism for mutual exclusion.
  - Semaphore: A signaling mechanism (can be used for ordering/synchronization as well as exclusion).
Usage Choice:
- Use Mutex for strictly protecting a critical section (Mutual Exclusion) where only one thread should access a resource at a time.
- Use Semaphores for signaling between threads (e.g., Thread A tells Thread B it's done), managing resources with multiple instances (Counting), or for synchronization problems like Producer-Consumer.
What is a Deadlock in the context of multithreading using Mutexes? Provide a scenario using two mutexes.
Deadlock Definition:
A situation where two or more threads are blocked forever, waiting for each other to release resources (locks) that they hold.
Scenario with Two Mutexes (M1 and M2):
- Thread A executes:
  ```c
  pthread_mutex_lock(&M1);
  // Context switch happens here
  pthread_mutex_lock(&M2); // Thread A waits for M2
  ```
- Thread B executes:
  ```c
  pthread_mutex_lock(&M2);
  // ...
  pthread_mutex_lock(&M1); // Thread B waits for M1
  ```
Explanation: Thread A holds M1 and waits for M2. Thread B holds M2 and waits for M1. Neither can proceed, resulting in a deadlock (Circular Wait).
How can arguments be passed to a thread function in POSIX? Give a code snippet example.
Arguments are passed to a thread function via the fourth parameter of pthread_create, which is a void *. If a single variable (like an int) is passed, it is cast to void *. If multiple arguments are needed, a structure is defined, initialized, and its pointer is passed.
Code Snippet (Passing multiple arguments):
```c
#include <pthread.h>
#include <stdio.h>

// 1. Define structure
struct thread_args {
    int id;
    int value;
};

void *myThread(void *arg) {
    // 3. Cast back inside thread
    struct thread_args *data = (struct thread_args *)arg;
    printf("Thread ID: %d, Value: %d\n", data->id, data->value);
    return NULL;
}

int main() {
    pthread_t tid;
    struct thread_args t_data;

    // 2. Initialize data
    t_data.id = 1;
    t_data.value = 100;

    pthread_create(&tid, NULL, myThread, (void *)&t_data);
    pthread_join(tid, NULL);
    return 0;
}
```
Explain the concept of Thread Attributes and specifically the distinction between Joinable and Detached threads.
Thread Attributes:
POSIX threads allow specifying attributes at creation time using the pthread_attr_t object. This includes stack size, scheduling policy, and detach state.
Joinable vs. Detached:
- Joinable Thread (Default):
  - The system keeps the thread's resources (like the stack and exit status) allocated even after the thread terminates.
  - These resources are only freed when another thread calls `pthread_join` on it.
  - If `pthread_join` is never called, it leads to a "zombie thread" resource leak.
- Detached Thread:
  - The thread's resources are automatically released back to the system immediately when the thread terminates.
  - Other threads cannot join or wait for a detached thread.
  - Useful for daemon threads or background tasks where the return value is not needed.
  - Created by setting the `PTHREAD_CREATE_DETACHED` attribute or calling `pthread_detach(tid)`.
What does pthread_self() return, and how is it useful in a multithreaded program?
Function: pthread_t pthread_self(void);
Description:
It returns the unique thread ID (pthread_t) of the calling thread.
Utility:
- Identification: Useful for logging or debugging to identify which thread is performing a specific action.
- Resource Management: Used when a thread needs to perform operations on itself, such as detaching itself (`pthread_detach(pthread_self())`) or modifying its own priority/attributes.
- Comparisons: Used with `pthread_equal(tid1, tid2)` to check if two thread IDs refer to the same thread.
Derive a solution for the Readers-Writers Problem (First variation: Readers preference) using Semaphores.
Problem Definition:
A database is shared among several concurrent processes. 'Readers' only read; 'Writers' can read and write. If two readers access shared data simultaneously, no error occurs. If a writer and some other process (reader or writer) access simultaneously, chaos ensues. We must ensure exclusive access for writers.
Variables:
```c
semaphore rw_mutex = 1;  // Common mutex for writers (and the first reader)
semaphore mutex = 1;     // Protects read_count
int read_count = 0;      // Tracks active readers
```
Writer Process:
```c
wait(rw_mutex);
// ... writing is performed ...
signal(rw_mutex);
```
Reader Process:
```c
wait(mutex);          // Lock to update read_count
read_count++;
if (read_count == 1)  // If first reader
    wait(rw_mutex);   // Lock out writers
signal(mutex);

// ... reading is performed ...

wait(mutex);
read_count--;
if (read_count == 0)  // If last reader
    signal(rw_mutex); // Allow writers to enter
signal(mutex);
```
Explain the purpose of pthread_mutex_trylock. How does it differ from standard pthread_mutex_lock?
Purpose:
pthread_mutex_trylock attempts to lock the mutex pointed to by mutex.
Difference from pthread_mutex_lock:
- Blocking Behavior:
  - `pthread_mutex_lock`: If the mutex is already locked, the calling thread is suspended (blocks) until the mutex becomes available.
  - `pthread_mutex_trylock`: If the mutex is already locked, the function returns immediately with an error code (`EBUSY`) instead of waiting.
- Usage:
  - `trylock` is useful in scenarios where a thread can perform alternative work if it cannot acquire the lock immediately, preventing it from getting stuck, and it can help avoid deadlocks.
What is the Dining Philosophers Problem? Briefly describe the synchronization challenge it represents.
Dining Philosophers Problem:
Five philosophers sit at a round table with a bowl of rice. There are five chopsticks, one placed between each pair of adjacent philosophers. A philosopher alternates between thinking and eating. To eat, a philosopher needs two chopsticks (left and right).
Synchronization Challenge:
- Resource Contention: Chopsticks are shared resources (Critical Sections). Only one philosopher can hold a specific chopstick at a time.
- Deadlock: If every philosopher picks up their left chopstick simultaneously, they will all wait forever for the right chopstick (Circular Wait).
- Starvation: A solution must ensure that every philosopher eventually gets to eat, preventing a scenario where a philosopher is perpetually denied access to chopsticks because neighbors are eating.
In the context of synchronization, what is a Spinlock? How does it differ from a Mutex?
Spinlock:
A spinlock is a lock where the thread simply waits in a loop ("spins") repeatedly checking if the lock is available.
Differences:
- Waiting Mechanism:
  - Mutex: If locked, the thread yields the CPU and goes to sleep (context switch). The OS wakes it up when the lock is free.
  - Spinlock: The thread keeps the CPU busy (Busy Waiting), repeatedly checking the lock condition.
- Use Case:
  - Mutex: Better for critical sections that take a long time to execute, as sleeping saves CPU cycles.
  - Spinlock: Better for very short critical sections (especially on multi-core systems) where the time spent spinning is less than the overhead of a context switch (sleep/wake).
Write a C code snippet using Mutex to safely increment a global counter variable accessed by multiple threads.
```c
#include <pthread.h>
#include <stdio.h>

// Shared resources
int counter = 0;
pthread_mutex_t lock;

void *increment_counter(void *arg) {
    for (int i = 0; i < 1000; i++) {
        // 1. Acquire Lock
        pthread_mutex_lock(&lock);
        // 2. Critical Section
        counter++;
        // 3. Release Lock
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main() {
    pthread_t t1, t2;

    // Initialize Mutex
    pthread_mutex_init(&lock, NULL);

    pthread_create(&t1, NULL, increment_counter, NULL);
    pthread_create(&t2, NULL, increment_counter, NULL);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("Final Counter: %d\n", counter);

    // Destroy Mutex
    pthread_mutex_destroy(&lock);
    return 0;
}
```
What are condition variables in Pthreads, and why are they usually used in conjunction with a mutex?
Condition Variables:
A synchronization primitive (pthread_cond_t) that allows threads to suspend execution and relinquish the processor until some condition is true (signaled by another thread).
Relationship with Mutex:
Condition variables are stateless; they do not store the signal. They must be used with a mutex to avoid a race condition called the "lost wake-up problem".
- Waiting (`pthread_cond_wait`): A thread holds the mutex, checks a shared variable (the condition), and if the condition is not met, it calls wait. The wait function atomically releases the mutex and sleeps. Upon waking (being signaled), it automatically re-acquires the mutex.
- Signaling: Another thread locks the mutex, changes the shared variable (making the condition true), and signals the waiting thread. The mutex ensures that checking and updating the condition are atomic.