Unit2 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Differentiate between Preemptive and Non-preemptive scheduling with examples.
The main differences between Preemptive and Non-preemptive scheduling are:
Basic Concept:
- Preemptive: The CPU can be taken away from a running process before it completes its execution if a higher-priority process arrives or a time quantum expires.
- Non-preemptive: Once the CPU is allocated to a process, the process keeps the CPU until it releases it voluntarily (by terminating or switching to the waiting state).
Interrupt Handling:
- Preemptive: Requires a clock interrupt or specific hardware support to pause the currently running process.
- Non-preemptive: Does not require interrupts for process switching; relies on the process to yield.
Overhead:
- Preemptive: Higher overhead due to frequent context switching.
- Non-preemptive: Lower overhead as context switching occurs less frequently.
Examples:
- Preemptive: Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (Preemptive).
- Non-preemptive: First-Come, First-Served (FCFS), Shortest Job First (SJF).
Define the role of the Dispatcher in CPU scheduling. What is dispatch latency?
Dispatcher:
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. It involves the following functions:
- Switching context: Saving the state of the old process and loading the state of the new process.
- Switching to user mode: Changing the mode bit from kernel mode to user mode.
- Jumping: Jumping to the proper location in the user program to restart that program.
Dispatch Latency:
Dispatch latency is the time it takes for the dispatcher to stop one process and start another running. It is a critical performance metric; generally, a lower dispatch latency is desired for better system responsiveness.
Explain the various Scheduling Criteria used to compare CPU scheduling algorithms.
The criteria used to compare CPU scheduling algorithms include:
- CPU Utilization: The percentage of time the CPU is busy executing processes. Ideally, this should be as high as possible (in real systems it ranges from about 40% on a lightly loaded system to 90% on a heavily loaded one).
- Throughput: The number of processes that complete their execution per unit of time.
- Turnaround Time (TAT): The interval from the time of submission of a process to the time of completion. TAT = Completion Time - Arrival Time.
- Waiting Time (WT): The sum of the periods spent waiting in the ready queue. WT = Turnaround Time - Burst Time.
- Response Time (RT): The time from the submission of a request until the first response is produced (important in interactive systems).
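These formulas can be captured in a tiny Python helper (the sample process, arriving at $t = 1$ with a 3 ms burst and completing at $t = 4$, is a made-up illustration):

```python
def metrics(arrival, burst, completion):
    """Turnaround and waiting time for one process.
    TAT = Completion - Arrival; WT = TAT - Burst."""
    tat = completion - arrival
    wt = tat - burst
    return tat, wt

# Hypothetical process: arrives at t=1, burst 3 ms, completes at t=4
print(metrics(1, 3, 4))  # (3, 0): turnaround 3 ms, waiting 0 ms
```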
Describe the Convoy Effect in the context of First-Come, First-Served (FCFS) scheduling.
The Convoy Effect is a phenomenon associated with the First-Come, First-Served (FCFS) algorithm.
- Scenario: It occurs when a CPU-bound process with a very long burst time arrives before many shorter I/O-bound processes.
- Mechanism: The long CPU-bound process hogs the CPU. The I/O-bound processes, which need the CPU only briefly to issue an I/O request, are stuck waiting in the ready queue (forming a "convoy" behind the big process).
- Consequence:
- Lower I/O Utilization: I/O devices sit idle while the I/O-bound processes wait for the CPU.
- Lower CPU Utilization: Once the CPU-bound process finally does I/O, the CPU sits idle while the convoy processes quickly move to I/O queues.
- Result: This leads to increased average waiting time and poor resource utilization compared to other algorithms like SJF or Round Robin.
Explain the Shortest Job First (SJF) scheduling algorithm. Why is it difficult to implement in short-term scheduling?
Shortest Job First (SJF):
- SJF associates with each process the length of its next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
- It is provably optimal, giving the minimum average waiting time for a given set of processes.
- It can be preemptive (Shortest Remaining Time First) or non-preemptive.
Difficulty in Implementation:
- The main difficulty is knowing the length of the next CPU burst before it happens.
- In long-term scheduling (batch systems), the user might specify the time limit. However, in short-term scheduling, there is no way to know the exact burst time.
- Solution: The value is approximated using exponential averaging of previous burst times: $\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$.
Discuss the mathematical formula for Exponential Averaging used to predict the next CPU burst in SJF. Explain the role of $\alpha$.
Since the exact length of the next CPU burst is unknown in interactive systems, it is predicted using Exponential Averaging: $\tau_{n+1} = \alpha t_n + (1 - \alpha)\tau_n$
Where:
- $\tau_{n+1}$ = Predicted value for the next CPU burst.
- $t_n$ = Actual length of the most recent ($n$th) CPU burst.
- $\tau_n$ = Predicted value for the past CPU burst.
- $\alpha$ = Weighting factor, where $0 \le \alpha \le 1$.
Role of $\alpha$:
- If $\alpha = 0$: $\tau_{n+1} = \tau_n$. Recent history has no effect; the prediction is constant.
- If $\alpha = 1$: $\tau_{n+1} = t_n$. Only the most recent actual burst matters; old history is ignored.
- Typically, $\alpha$ is set to $0.5$ so that recent history and past history are weighted equally.
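The recurrence and the two limiting cases of $\alpha$ can be checked with a short Python sketch; the burst sequence and the initial guess $\tau_0 = 10$ are made-up values:

```python
def predict_bursts(actual_bursts, tau0, alpha):
    """Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = tau0
    history = [tau0]          # tau_0 is the initial guess
    for t in actual_bursts:   # t is the actual burst just observed
        tau = alpha * t + (1 - alpha) * tau
        history.append(tau)
    return history

print(predict_bursts([6, 4, 6, 4], tau0=10, alpha=0.5))  # [10, 8.0, 6.0, 6.0, 5.0]
print(predict_bursts([6, 4], tau0=10, alpha=0))  # [10, 10, 10] -- history frozen
print(predict_bursts([6, 4], tau0=10, alpha=1))  # [10, 6, 4]   -- only last burst
```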
Explain the Round Robin (RR) scheduling algorithm. How does the size of the Time Quantum affect system performance?
Round Robin (RR) is a preemptive scheduling algorithm designed for time-sharing systems.
- It defines a small unit of time called a Time Quantum (or time slice), typically 10-100 milliseconds.
- The ready queue is treated as a circular queue. The scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum.
Impact of Time Quantum Size:
- Very Large Quantum: If the quantum is larger than the longest burst time, RR degenerates into FCFS (First-Come, First-Served).
- Very Small Quantum: This mimics processor sharing (creating the illusion that each of $n$ processes has its own processor running at $1/n$ the speed of the real processor). However, Context Switching Overhead becomes very high. If the quantum is too small, the CPU spends more time switching between processes than executing them.
- Rule of Thumb: The time quantum should be large with respect to the context switch time (e.g., 80% of CPU bursts should be shorter than the time quantum).
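A minimal Round Robin simulation (a sketch assuming all processes arrive at $t = 0$; the bursts 24, 3, 3 reuse the FCFS example from the last question in this unit) shows the degeneration into FCFS when the quantum is very large:

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """Average waiting time under Round Robin, all processes arriving at t = 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)   # not done: back of the circular queue
        else:
            finish[i] = clock
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits)

# Quantum larger than every burst -> RR degenerates into FCFS:
print(rr_avg_wait([24, 3, 3], quantum=100))  # 17.0 (same as FCFS)
print(rr_avg_wait([24, 3, 3], quantum=4))    # about 5.67
```

Note that the model ignores context-switch cost; with a real (nonzero) switch time, shrinking the quantum further would only add overhead.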
What is the Starvation problem in Priority Scheduling? How can Aging resolve it?
Starvation (Indefinite Blocking):
In a priority scheduling algorithm, the CPU is allocated to the process with the highest priority. If there is a constant stream of high-priority processes arriving in the system, a low-priority process may never get the CPU. This situation, where a runnable process is indefinitely denied service, is called starvation.
Solution: Aging:
- Aging is a technique to solve starvation.
- It involves gradually increasing the priority of processes that wait in the system for a long time.
- Example: Assume priorities range from 0 (low) to 127 (high). If a process with priority 0 waits for 15 minutes, the scheduler could increase its priority to 1. Eventually, even the lowest priority process will become the highest priority process in the system and will be executed.
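A sketch of the aging rule from the example above (priorities 0 low to 127 high, +1 for every 15 minutes of waiting; the function name and data shape are illustrative, not a real OS API):

```python
def age_priorities(waiting, boost_interval=15, step=1):
    """waiting: {name: (priority, minutes_waited)}. Higher number = higher
    priority, as in the 0 (low) to 127 (high) example above."""
    return {name: min(127, prio + (mins // boost_interval) * step)
            for name, (prio, mins) in waiting.items()}

# P1 has waited 45 minutes at priority 0; P2 just arrived at priority 50
print(age_priorities({"P1": (0, 45), "P2": (50, 0)}))  # {'P1': 3, 'P2': 50}
```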
Describe the Multilevel Feedback Queue (MLFQ) scheduling algorithm. What parameters define an MLFQ scheduler?
Multilevel Feedback Queue (MLFQ) allows processes to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
Parameters defining MLFQ:
- The number of queues.
- The scheduling algorithm for each queue (e.g., RR for top queues, FCFS for bottom).
- Method used to determine when to upgrade a process to a higher-priority queue (e.g., aging).
- Method used to determine when to demote a process to a lower-priority queue (e.g., consuming the full time quantum).
- Method used to determine which queue a process will enter when that process needs service.
Consider the following set of processes with arrival times and burst times:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 5 |
| P2 | 1 | 3 |
| P3 | 2 | 8 |
| P4 | 3 | 6 |
Calculate the Average Waiting Time and Average Turnaround Time using Preemptive SJF (Shortest Remaining Time First).
Gantt Chart Construction:
- Time 0: P1 arrives (Burst 5). P1 starts.
- Time 1: P2 arrives (Burst 3). P1 remaining: 4. Since $3 < 4$, P2 preempts P1.
- Time 2: P3 arrives (Burst 8). P2 remaining: 2. Min(4, 2, 8) is P2. P2 continues.
- Time 3: P4 arrives (Burst 6). P2 remaining: 1. Min(4, 1, 8, 6) is P2. P2 continues.
- Time 4: P2 finishes. Remaining: P1(4), P3(8), P4(6). P1 is shortest. P1 resumes.
- Time 8: P1 finishes. Remaining: P3(8), P4(6). P4 is shortest. P4 starts.
- Time 14: P4 finishes. Remaining: P3(8). P3 starts.
- Time 22: P3 finishes.
Calculations:
Turnaround Time (TAT = Completion Time - Arrival Time):
- P1: $8 - 0 = 8$
- P2: $4 - 1 = 3$
- P3: $22 - 2 = 20$
- P4: $14 - 3 = 11$
- Avg = $(8 + 3 + 20 + 11) / 4 = 42 / 4 = 10.5$ ms
Waiting Time (WT = TAT - Burst Time):
- P1: $8 - 5 = 3$
- P2: $3 - 3 = 0$
- P3: $20 - 8 = 12$
- P4: $11 - 6 = 5$
- Avg = $(3 + 0 + 12 + 5) / 4 = 20 / 4 = 5$ ms
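The Gantt chart and both averages can be verified with a unit-step SRTF simulation (a sketch; ties in remaining time are broken arbitrarily):

```python
def srtf(processes):
    """processes: list of (name, arrival, burst).
    Returns {name: (turnaround, waiting)} under preemptive SJF,
    advancing the clock one time unit at a time."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: a for name, a, _ in processes}
    burst = {name: b for name, _, b in processes}
    finish = {}
    clock = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # shortest remaining time
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = clock
    return {n: (finish[n] - arrival[n], finish[n] - arrival[n] - burst[n])
            for n in finish}

res = srtf([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
tats = [t for t, _ in res.values()]
waits = [w for _, w in res.values()]
print(sum(tats) / 4, sum(waits) / 4)  # 10.5 5.0
```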
Distinguish between Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (AMP) scheduling.
Asymmetric Multiprocessing (AMP):
- Structure: Only one processor (the master server) accesses the system data structures, reducing the need for data sharing. The other processors execute only user code.
- Scheduling: The master server handles all scheduling decisions, I/O processing, and other system activities.
- Pros/Cons: Simple to implement, but the master server can become a bottleneck.
Symmetric Multiprocessing (SMP):
- Structure: Each processor is self-scheduling. All processors are peers; there is no master-slave relationship.
- Scheduling: The scheduler for each processor examines the ready queue and selects a process to execute. They may share a common ready queue or have private ready queues.
- Synchronization: Requires careful programming (locking mechanisms) to ensure two processors don't choose the same process or lose a process from the queue.
Explain the concept of Processor Affinity and Load Balancing in multiprocessor scheduling.
Processor Affinity:
- When a process runs on a specific processor, the cache memory of that processor gets populated with the process's data.
- If the process moves to another processor, the cache contents must be invalidated and repopulated, which is costly.
- Soft Affinity: The OS attempts to keep a process on the same processor but doesn't guarantee it.
- Hard Affinity: The OS strictly allows a process to run only on a subset of processors (via system calls).
Load Balancing:
- Required in SMP systems to keep the workload evenly distributed across all processors.
- Push Migration: A specific task checks the load on each processor and evenly distributes the load by moving (pushing) processes from overloaded to idle processors.
- Pull Migration: An idle processor pulls a waiting task from a busy processor's ready queue.
What is Real-Time Scheduling? Differentiate between Hard and Soft real-time systems.
Real-Time Scheduling involves systems where the correctness of the system depends not only on the logical result of computation but also on the time at which the results are produced. Tasks often have strict deadlines.
Hard Real-Time Systems:
- Definition: A task must be completed by its deadline. Missing a deadline is a total system failure.
- Characteristics: Often lacks secondary storage or virtual memory to ensure deterministic latency. Admission control is strictly used.
- Example: Pacemaker control, missile guidance systems.
Soft Real-Time Systems:
- Definition: Critical real-time tasks have priority over other tasks and retain that priority until they complete. Missing a deadline degrades performance but is not a catastrophic failure.
- Characteristics: Supported by most general-purpose OSs (like Linux, Windows) via priority scheduling and low dispatch latency.
- Example: Multimedia streaming, virtual reality.
Describe Rate Monotonic Scheduling (RMS). What are the conditions for its applicability?
Rate Monotonic Scheduling (RMS):
- It is a static priority scheduling algorithm for periodic tasks in real-time systems.
- Logic: Priorities are assigned based on the cycle period (inverse of the frequency). Shorter periods get higher priority; longer periods get lower priority.
- It is a preemptive algorithm.
Conditions/Assumptions:
- Processes must be periodic.
- Processes are independent.
- Processing time is constant for each burst.
- The deadline is the start of the next period.
Limitations: RMS is optimal among static-priority algorithms, but the guaranteed CPU utilization is bounded at approximately 69% as the number of tasks grows. If CPU utilization exceeds the bound, RMS cannot guarantee that deadlines will be met.
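The ~69% figure comes from the classic Liu-Layland schedulable-utilization bound $U = n(2^{1/n} - 1)$, which is easy to tabulate:

```python
def rms_bound(n):
    """RMS schedulable-utilization bound for n periodic tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

for n in (1, 2, 5, 100):
    print(n, round(rms_bound(n), 4))  # falls from 1.0 toward ln 2, about 0.693
```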
Explain Earliest Deadline First (EDF) scheduling. How does it differ from Rate Monotonic Scheduling?
Earliest Deadline First (EDF):
- EDF is a dynamic priority scheduling algorithm for real-time systems.
- Logic: Priorities are assigned dynamically according to deadlines. The process with the closest deadline gets the highest priority.
- When a process becomes runnable, it must announce its deadline to the system.
Differences from Rate Monotonic (RMS):
- Priority Type: RMS uses static priorities (fixed based on period); EDF uses dynamic priorities (change based on absolute deadline).
- Periodicity: RMS requires periodic processes; EDF does not require processes to be periodic or have a constant burst time per period.
- Utilization: Theoretically, EDF can reach 100% CPU utilization, whereas RMS is bounded (approximately 69% as $n \to \infty$).
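The EDF selection rule itself is a one-liner; a sketch with hypothetical (task, absolute deadline) pairs:

```python
def edf_pick(ready):
    """EDF: among runnable tasks, pick the one with the earliest absolute deadline."""
    return min(ready, key=lambda task: task[1])[0]

# Hypothetical ready list: (name, absolute deadline in ms)
print(edf_pick([("T1", 50), ("T2", 30), ("T3", 75)]))  # T2
```

Because the key is the absolute deadline, priorities change automatically as new deadlines arrive, which is exactly the dynamic behavior that distinguishes EDF from RMS.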
Discuss Thread Scheduling focusing on the distinction between Process-Contention Scope (PCS) and System-Contention Scope (SCS).
On operating systems that support kernel-level threads, it is the kernel-level threads—not processes—that are being scheduled. The distinction depends on the threading model (Many-to-One vs. One-to-One).
Process-Contention Scope (PCS):
- Used in Many-to-One and Many-to-Many models.
- The thread library schedules user-level threads to run on an available LWP (Lightweight Process).
- Competition for the CPU takes place among threads belonging to the same process.
- Typically uses priority scheduling set by the programmer.
System-Contention Scope (SCS):
- Used in One-to-One models (and Many-to-Many where the OS schedules the kernel thread).
- The kernel decides which kernel-level thread to schedule onto a CPU.
- Competition takes place among all threads in the system.
- Windows and Linux primarily use SCS.
Using Little's Law, explain the relationship between queue length, arrival rate, and waiting time in a stable system.
Little's Law is a fundamental theorem used to analyze scheduling algorithms analytically.
Formula: $L = \lambda \times W$
Where:
- $L$ = The long-term average number of processes in the system (Queue Length).
- $\lambda$ = The long-term average arrival rate of new processes (processes per second).
- $W$ = The long-term average time a process spends in the system (Waiting Time + Service Time).
Significance:
It implies that if we know two of the variables, we can calculate the third. For example, if we know the arrival rate ($\lambda$) and the average time a process spends in the system ($W$), we can determine how many processes are usually sitting in the queue ($L$). This is valid for any scheduling algorithm and arrival distribution, provided the system is in a stable state.
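Since any one variable follows from the other two, the law fits in a few lines of Python (the rates below are hypothetical):

```python
def littles_law(lam=None, L=None, W=None):
    """Solve L = lam * W for whichever of the three arguments is None."""
    if L is None:
        return lam * W
    if W is None:
        return L / lam
    return L / W  # solving for lam

# Hypothetical: 7 processes/s arrive, each spends 2 s in the system
print(littles_law(lam=7, W=2))   # L = 14 processes in the system
print(littles_law(L=14, lam=7))  # W = 2.0 seconds
```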
Compare First-Come, First-Served (FCFS), Shortest Job First (SJF), and Round Robin (RR) based on Average Waiting Time and Response Time.
| Feature | FCFS | SJF | Round Robin |
|---|---|---|---|
| Logic | Allocate CPU in order of arrival. | Allocate CPU to process with smallest burst. | Allocate CPU for a time quantum $q$. |
| Average Waiting Time | Generally High (suffers from Convoy Effect). | Minimum (Optimal). | Average (depends on quantum). |
| Response Time | High (bad for interactive systems). | Medium (good for short jobs, bad for long). | Low (best for time-sharing systems). |
| Preemption | Non-preemptive. | Can be Preemptive (SRTF) or Non-preemptive. | Preemptive. |
| Starvation | No. | Yes (Long jobs may starve). | No. |
Consider 3 processes P1, P2, P3 with burst times 24, 3, and 3 ms respectively. Compare the Average Waiting Time if they arrive in order P1, P2, P3 vs P2, P3, P1 using FCFS.
Case 1: Order P1 (24), P2 (3), P3 (3)
- Gantt Chart: [P1: 0-24] -> [P2: 24-27] -> [P3: 27-30]
- Waiting Times:
- P1: 0
- P2: 24
- P3: 27
- Average Waiting Time: $(0 + 24 + 27) / 3 = 17$ ms.
Case 2: Order P2 (3), P3 (3), P1 (24)
- Gantt Chart: [P2: 0-3] -> [P3: 3-6] -> [P1: 6-30]
- Waiting Times:
- P2: 0
- P3: 3
- P1: 6
- Average Waiting Time: $(0 + 3 + 6) / 3 = 3$ ms.
Conclusion: The order of arrival drastically affects FCFS performance. Case 1 demonstrates the Convoy Effect, where short processes wait behind a long process, significantly increasing the average waiting time.
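Both Gantt charts can be reproduced with a short FCFS helper (assuming all three processes arrive at $t = 0$):

```python
def fcfs_waits(bursts):
    """Waiting time of each process under FCFS, in arrival order (all at t = 0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)  # each process waits for everything ahead of it
        clock += burst
    return waits

print(fcfs_waits([24, 3, 3]))  # [0, 24, 27] -> average 17 ms (convoy effect)
print(fcfs_waits([3, 3, 24]))  # [0, 3, 6]  -> average 3 ms
```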
What are the three different types of schedulers in an Operating System? Briefly explain each.
Long-Term Scheduler (Job Scheduler):
- Selects processes from the pool (disk) and loads them into memory for execution.
- Controls the Degree of Multiprogramming.
- Executes infrequently (seconds/minutes).
Short-Term Scheduler (CPU Scheduler):
- Selects from the processes that are ready to execute (in memory) and allocates the CPU to one of them.
- Must be very fast as it executes frequently (milliseconds).
Medium-Term Scheduler:
- Part of the swapping function.
- Removes processes from memory (swaps out) to reduce the degree of multiprogramming and later reintroduces them (swaps in).
- Helps manage memory and improve process mix.