Unit 2 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Differentiate between Preemptive and Non-preemptive scheduling. Give an example algorithm for each.
Difference between Preemptive and Non-preemptive Scheduling:
| Feature | Preemptive Scheduling | Non-preemptive Scheduling |
|---|---|---|
| Basic Definition | The OS allocates the CPU to a process for a limited time and can take it back (preempt) before the process finishes. | Once the CPU is allocated to a process, the process keeps it until it releases it voluntarily (terminates or switches to waiting state). |
| Interrupts | Uses hardware timer interrupts to switch contexts. | Does not require timer interrupts for context switching. |
| Overhead | Higher overhead due to frequent context switching. | Lower overhead as context switching is less frequent. |
| Flexibility | More flexible; allows high-priority processes to run immediately. | Rigid; a short high-priority process must wait for a long low-priority process to finish. |
| Throughput | Generally higher for interactive systems. | Can be lower if a long process hogs the CPU. |
Examples:
- Preemptive: Round Robin (RR), Shortest Remaining Time First (SRTF).
- Non-preemptive: First-Come, First-Served (FCFS), Standard Shortest Job First (SJF).
What is a Dispatcher? Explain its functions and define Dispatch Latency.
Dispatcher:
The dispatcher is the module within the operating system that gives control of the CPU to the process selected by the short-term scheduler. It is the mechanism that actually performs the context switch.
Functions of the Dispatcher:
- Switching Context: Saving the state of the old process and loading the state of the new process.
- Switching to User Mode: Changing the processor mode from kernel mode (where the scheduler runs) back to user mode.
- Jumping to Location: Jumping to the proper location in the user program to restart that program.
Dispatch Latency:
Dispatch latency is the time it takes for the dispatcher to stop one process and start another running. Operating systems aim to keep this latency as low as possible to maximize CPU utilization.
List and explain the five criteria used to evaluate CPU scheduling algorithms.
The efficiency of scheduling algorithms is evaluated based on the following criteria:
- CPU Utilization: Ideally, the CPU should be as busy as possible. It ranges from 0% to 100%.
- Throughput: The number of processes that complete their execution per time unit. High throughput is desirable.
- Turnaround Time (TAT): The total time taken from the submission of a process to its completion. TAT = Completion Time - Arrival Time.
- Waiting Time (WT): The total time a process spends waiting in the ready queue. WT = TAT - Burst Time.
- Response Time (RT): The time from the submission of a request until the first response is produced (important for interactive systems).
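The relationships among these criteria can be sketched in a few lines of Python. This is a minimal illustration, not from the text; the sample numbers are hypothetical:

```python
# Hypothetical helper illustrating the scheduling criteria formulas
# (arrival, burst, completion, and first-run times are example inputs).
def metrics(arrival, burst, completion, first_run):
    """Return (turnaround, waiting, response) for one process."""
    turnaround = completion - arrival   # total time in the system
    waiting = turnaround - burst        # time spent in the ready queue
    response = first_run - arrival      # delay until the first CPU allocation
    return turnaround, waiting, response

# Example: arrives at 0, burst 3, first scheduled at 24, finishes at 27.
print(metrics(0, 3, 27, 24))  # (27, 24, 24)
```

Note that waiting time and response time coincide here only because the process keeps the CPU once scheduled; under a preemptive policy they generally differ.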
Explain the First-Come, First-Served (FCFS) scheduling algorithm. What is the Convoy Effect?
First-Come, First-Served (FCFS):
- This is the simplest CPU scheduling algorithm.
- Processes are assigned the CPU in the order they request it.
- It is managed using a FIFO (First-In, First-Out) queue.
- It is Non-preemptive: Once a process gets the CPU, it runs until completion or I/O request.
The Convoy Effect:
- The Convoy Effect occurs in FCFS when one CPU-intensive process (with a very long burst time) arrives before several I/O-intensive processes (with short burst times).
- Scenario: The long process hogs the CPU. The shorter processes wait in the ready queue for a long time. Once the long process finishes, the short processes execute quickly and go to I/O queues. The CPU sits idle while the long process does I/O.
- Result: This results in lower CPU and device utilization and significantly increases the average waiting time for the short processes.
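The convoy effect is easy to demonstrate numerically. Below is a small Python sketch (burst times are illustrative) computing the FCFS average waiting time for the same jobs in two arrival orders:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS for processes all arriving at t = 0."""
    total_wait, clock = 0, 0
    for b in bursts:
        total_wait += clock  # each process waits for everything queued before it
        clock += b
    return total_wait / len(bursts)

# Long job first (convoy effect) vs. long job last:
print(fcfs_avg_wait([24, 3, 3]))  # 17.0
print(fcfs_avg_wait([3, 3, 24]))  # 3.0
```

The same three bursts give an average wait of 17 ms when the long job leads the queue, but only 3 ms when it arrives last — the short jobs are stuck in the convoy.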
Describe the Shortest Job First (SJF) scheduling algorithm. Why is it considered optimal?
Shortest Job First (SJF):
- This algorithm associates with each process the length of its next CPU burst.
- When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
- If two processes have the same length next CPU burst, FCFS is used to break the tie.
- It can be Preemptive (Shortest Remaining Time First) or Non-preemptive.
Optimality:
- SJF is provably optimal regarding the Average Waiting Time.
- By moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases.
- Consequently, the average waiting time decreases compared to FCFS.
Challenge: The difficulty lies in knowing the length of the next CPU request before it happens.
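Since non-preemptive SJF with simultaneous arrivals is simply FCFS applied to the shortest-first ordering, its optimality can be checked with a short Python sketch (burst times illustrative):

```python
def avg_wait(order):
    """Average waiting time when bursts run in the given order (all arrive at t = 0)."""
    total_wait = clock = 0
    for b in order:
        total_wait += clock
        clock += b
    return total_wait / len(order)

bursts = [6, 8, 7, 3]
# Non-preemptive SJF = serve the bursts sorted shortest-first:
print(avg_wait(sorted(bursts)))  # 7.0   (SJF)
print(avg_wait(bursts))          # 10.25 (FCFS arrival order)
```

No other permutation of these bursts yields an average below 7.0, consistent with SJF's optimality for average waiting time.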
Explain the Round Robin (RR) scheduling algorithm. How does the size of the Time Quantum affect performance?
Round Robin (RR) Scheduling:
- RR is designed specifically for time-sharing systems.
- It is similar to FCFS but adds preemption.
- A small unit of time, called a Time Quantum (q) or Time Slice, is defined (typically 10-100 milliseconds).
- The ready queue is treated as a circular queue. The scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 quantum.
Impact of Time Quantum Size:
- Very Large Quantum (q → ∞):
  - RR degenerates into FCFS scheduling.
  - Response time increases.
- Very Small Quantum (q → 0):
  - The system behaves as if each of the n processes has its own processor running at 1/n of the real speed (Processor Sharing).
  - However, Context Switch Overhead becomes very high. If the quantum is smaller than the context switch time, the CPU spends more time switching than executing.
- Optimal Quantum:
  - It should be large enough that most processes (e.g., 80%) complete their CPU burst within one quantum.
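The effect of the quantum can be seen in a compact Python simulation of RR (all processes assumed to arrive at t = 0; burst times illustrative):

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """Average waiting time under Round Robin, all processes arriving at t = 0."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    ready = deque(range(n))   # circular ready queue of process indices
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for up to one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: back of the queue
        else:
            finish[i] = clock
    # waiting time = turnaround - burst (arrival = 0 for all)
    return sum(finish[i] - bursts[i] for i in range(n)) / n

bursts = [24, 3, 3]
print(rr_avg_wait(bursts, 4))    # ≈ 5.67: small quantum lets short jobs finish early
print(rr_avg_wait(bursts, 100))  # 17.0: huge quantum degenerates into FCFS
```

This simple model ignores context-switch cost; in reality a very small quantum would add one switch per quantum, which is exactly the overhead described above.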
Consider the following set of processes with the length of the CPU burst time given in milliseconds:
| Process | Burst Time |
|---|---|
| P1 | 24 |
| P2 | 3 |
| P3 | 3 |
Calculate the Average Waiting Time using FCFS scheduling if the processes arrive in the order P1, P2, P3.
Gantt Chart:
Since FCFS is non-preemptive and serves in order of arrival:
- P1 runs from 0 to 24.
- P2 runs from 24 to 27 (24 + 3).
- P3 runs from 27 to 30 (27 + 3).
[0 -- P1 -- 24 -- P2 -- 27 -- P3 -- 30]
Calculations:
Waiting Time (WT):
- Process P1: Starts at 0. Arrival 0. WT = 0 ms.
- Process P2: Starts at 24. Arrival 0. WT = 24 ms.
- Process P3: Starts at 27. Arrival 0. WT = 27 ms.
Average Waiting Time:
WT_avg = (0 + 24 + 27) / 3 = 51 / 3 = 17 ms
Consider the following processes arriving at time 0:
| Process | Burst Time |
|---|---|
| P1 | 6 |
| P2 | 8 |
| P3 | 7 |
| P4 | 3 |
Calculate the Average Waiting Time using Non-Preemptive SJF.
Logic:
In Non-Preemptive SJF, the process with the shortest burst time is selected first. Assuming all arrive at t = 0, we sort by burst time: P4 (3), P1 (6), P3 (7), P2 (8).
Execution Order (Gantt Chart):
- P4: Runs 0 to 3.
- P1: Runs 3 to 9 (3 + 6).
- P3: Runs 9 to 16 (9 + 7).
- P2: Runs 16 to 24 (16 + 8).
[0 -- P4 -- 3 -- P1 -- 9 -- P3 -- 16 -- P2 -- 24]
Waiting Time Calculations:
- P4: WT = 0 ms.
- P1: WT = 3 ms.
- P3: WT = 9 ms.
- P2: WT = 16 ms.
Average Waiting Time:
WT_avg = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 ms
Explain Priority Scheduling. Discuss the problem of Starvation (Indefinite Blocking) and how Aging solves it.
Priority Scheduling:
- A priority number (integer) is associated with each process.
- The CPU is allocated to the process with the highest priority (conventionally, smallest integer = highest priority, or vice versa depending on OS).
- It can be preemptive or non-preemptive.
Problem: Starvation (Indefinite Blocking):
- In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
- The low-priority process waits indefinitely, theoretically forever.
Solution: Aging:
- Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
- Example: Assume priorities range from 0 (low) to 127 (high). If a process waits for 15 minutes, increase its priority by 1. Eventually, even a process with initial priority 0 will become the highest priority process and will be executed.
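The aging rule from the example above can be sketched in Python. This is a minimal illustration assuming the 0-127 priority range and a +1 boost per waiting interval; the function name is hypothetical:

```python
def age(priority, intervals_waited, boost=1, max_priority=127):
    """Raise a waiting process's priority by `boost` per interval waited,
    capped at max_priority (here, higher number = higher priority)."""
    return min(priority + boost * intervals_waited, max_priority)

print(age(0, 15))    # 15  — priority after waiting 15 intervals
print(age(120, 50))  # 127 — capped at the maximum priority
```

However slowly the boost accumulates, every waiting process eventually reaches the top priority, which is exactly what rules out starvation.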
Describe Multilevel Queue scheduling. How is Multilevel Feedback Queue (MLFQ) different?
Multilevel Queue Scheduling:
- Partitions the ready queue into several separate queues (e.g., foreground/interactive and background/batch).
- Processes are permanently assigned to one queue based on properties like memory size or process type.
- Each queue has its own scheduling algorithm (e.g., RR for foreground, FCFS for background).
- Scheduling must also be done between queues (usually fixed-priority preemptive scheduling).
Multilevel Feedback Queue (MLFQ):
- Difference: Allows processes to move between queues.
- Logic: If a process uses too much CPU time, it is moved to a lower-priority queue (penalizing CPU-bound processes). If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue (aging).
- Parameters defining MLFQ:
- Number of queues.
- Scheduling algorithm for each queue.
- Method used to determine when to upgrade a process.
- Method used to determine when to demote a process.
Using Shortest Remaining Time First (SRTF) (Preemptive SJF), calculate the Average Turnaround Time for the following:
| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 8 |
| P2 | 1 | 4 |
| P3 | 2 | 9 |
| P4 | 3 | 5 |
Gantt Chart Construction:
- t=0: P1 arrives (Burst 8). P1 starts.
- t=1: P2 arrives (Burst 4). P1 remaining: 7. Since 4 < 7, P1 is preempted. P2 starts.
- t=2: P3 arrives (Burst 9). P2 remaining: 3. Since 3 < 9, P2 continues.
- t=3: P4 arrives (Burst 5). P2 remaining: 2. Since 2 < 5, P2 continues.
- t=5: P2 finishes. Remaining processes: P1(7), P3(9), P4(5). Shortest is P4. P4 starts.
- t=10: P4 finishes. Remaining: P1(7), P3(9). Shortest is P1. P1 starts.
- t=17: P1 finishes. Only P3(9) left. P3 starts.
- t=26: P3 finishes.
[0-P1-1], [1-P2-5], [5-P4-10], [10-P1-17], [17-P3-26]
Turnaround Time (TAT = Completion Time - Arrival Time):
- P1: 17 - 0 = 17 ms
- P2: 5 - 1 = 4 ms
- P3: 26 - 2 = 24 ms
- P4: 10 - 3 = 7 ms
Average Turnaround Time:
TAT_avg = (17 + 4 + 24 + 7) / 4 = 52 / 4 = 13 ms
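The trace above can be verified with a unit-by-unit SRTF simulation in Python (a sketch for this exercise, not a production scheduler):

```python
def srtf_avg_turnaround(procs):
    """procs: list of (arrival, burst). Simulates preemptive SJF (SRTF)
    one time unit at a time and returns the average turnaround time."""
    n = len(procs)
    remaining = [burst for _, burst in procs]
    finish = [None] * n
    clock = 0
    while any(r > 0 for r in remaining):
        # among arrived, unfinished processes, pick the shortest remaining time
        ready = [i for i in range(n) if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1          # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    return sum(finish[i] - procs[i][0] for i in range(n)) / n

# (arrival, burst) for P1..P4 from the table above:
print(srtf_avg_turnaround([(0, 8), (1, 4), (2, 9), (3, 5)]))  # 13.0
```

The simulation reproduces the Gantt chart: P2 completes at 5, P4 at 10, P1 at 17, and P3 at 26, giving the same 13 ms average.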
Discuss Multiprocessor Scheduling. Differentiate between Asymmetric and Symmetric (SMP) multiprocessing.
Multiprocessor Scheduling:
When multiple CPUs are available, load sharing becomes possible, but the scheduling problem becomes more complex.
1. Asymmetric Multiprocessing (AMP):
- One processor (the Master Server) handles all scheduling decisions, I/O processing, and other system activities.
- The other processors execute only user code.
- Pros: Simple, data sharing is reduced (master handles system data).
- Cons: The master server becomes a bottleneck.
2. Symmetric Multiprocessing (SMP):
- Each processor is self-scheduling.
- All processors are peers; there is no master.
- Either all processors examine a common ready queue, or each processor has its own private queue of ready processes.
- Pros: Efficient, scalable, no single bottleneck.
- Cons: Synchronization is critical to ensure two processors don't choose the same process or lose a process from the queue.
What is Load Balancing in multiprocessor systems? Explain Push Migration and Pull Migration.
Load Balancing:
On symmetric multiprocessing (SMP) systems, load balancing attempts to keep the workload evenly distributed across all processors, preventing scenarios where one processor is overloaded while others sit idle.
Mechanisms:
- Push Migration: A specific task (or process) periodically checks the load on each processor. If it finds an imbalance, it evenly distributes the load by moving (pushing) processes from overloaded processors to idle or less-busy processors.
- Pull Migration: Occurs when an idle processor "pulls" a waiting task from a busy processor's ready queue.
Note: These two are often implemented in parallel in OS schedulers (e.g., Linux).
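Push migration's balancing step can be illustrated with a toy Python sketch (run queues modeled as lists; CPU and task names are invented for illustration):

```python
def push_migrate(queues):
    """queues: dict mapping cpu name -> list of task names.
    Moves one task from the busiest queue to the idlest if imbalanced."""
    busiest = max(queues, key=lambda c: len(queues[c]))
    idlest = min(queues, key=lambda c: len(queues[c]))
    if len(queues[busiest]) - len(queues[idlest]) > 1:
        queues[idlest].append(queues[busiest].pop())  # push one task across
    return queues

q = {"cpu0": ["a", "b", "c"], "cpu1": []}
print(push_migrate(q))  # {'cpu0': ['a', 'b'], 'cpu1': ['c']}
```

Pull migration is the mirror image: the idle CPU itself would run this check and take a task, rather than waiting for a periodic balancer thread.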
Explain the concept of Processor Affinity in multiprocessor scheduling. Why is it used?
Processor Affinity:
Processor affinity allows a process to have a preference for running on a specific CPU (or set of CPUs) in a multiprocessor system.
Why it is used (Rationale):
- Cache Locality: When a process runs on a specific processor, the data accessed by that process populates the processor's cache memory. If the process moves to another processor, the cache contents must be invalidated and repopulated on the new processor, which is costly.
- Performance: Staying on the same processor (Warm Cache) improves execution speed compared to migrating (Cold Cache).
Types:
- Soft Affinity: The OS attempts to keep the process on the same processor but doesn't guarantee it (load balancing might move it).
- Hard Affinity: The application specifies exactly which subset of processors it can run on (via system calls).
What distinguishes Real-Time Scheduling from general purpose scheduling? Define Hard and Soft real-time systems.
Real-Time Scheduling:
The primary objective is not maximizing throughput or minimizing average response time, but meeting deadlines. The scheduler must guarantee that critical tasks complete within a specified time constraint.
1. Hard Real-Time Systems:
- Requires strict guarantees that critical tasks complete on time.
- A missed deadline is considered a total system failure.
- Example: Flight control systems, pacemakers, car airbag systems.
2. Soft Real-Time Systems:
- Critical real-time tasks have priority over other tasks and retain that priority until they complete.
- A missed deadline is undesirable and degrades performance/quality, but is not a catastrophic failure.
- Example: Video streaming (dropped frames), online gaming.
Explain Rate Monotonic Scheduling (RMS). What is the basis for assigning priorities in this algorithm?
Rate Monotonic Scheduling (RMS):
- RMS is a static priority scheduling algorithm used in real-time operating systems.
- It is designed for periodic tasks (tasks that occur at regular intervals).
- It uses preemptive scheduling with static priorities.
Basis for Priority Assignment:
- Priorities are assigned based on the Period (cycle duration) of the task.
- Inverse Relationship: The shorter the period (the more frequently the task occurs), the higher the priority.
- The longer the period, the lower the priority.
Rationale: Tasks that must run more frequently have tighter deadlines within their periods, so they need the CPU sooner.
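The RMS priority assignment reduces to sorting tasks by period. A minimal Python sketch (task names and periods are illustrative, not from the text):

```python
# RMS assigns static priorities inversely to task period:
# the shorter the period, the higher the priority.
tasks = {"sensor": 20, "control": 50, "logging": 100}  # period in ms

# Sorting shortest period first yields the RMS priority order:
rms_order = sorted(tasks, key=tasks.get)
print(rms_order)  # ['sensor', 'control', 'logging']
```

The ordering is computed once and never changes at run time, which is what makes RMS a static-priority algorithm.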
Describe Earliest Deadline First (EDF) scheduling. How does it differ from Rate Monotonic Scheduling?
Earliest Deadline First (EDF):
- EDF is a dynamic priority scheduling algorithm.
- Priorities are assigned according to the deadline of the current request.
- The process with the closest deadline is assigned the highest priority.
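EDF's scheduling decision is simply "pick the ready job with the earliest absolute deadline." A one-function Python sketch (job names and deadlines illustrative):

```python
def edf_pick(ready_jobs):
    """ready_jobs: list of (name, absolute_deadline) tuples.
    Returns the name of the job EDF would dispatch next."""
    return min(ready_jobs, key=lambda job: job[1])[0]

print(edf_pick([("A", 30), ("B", 12), ("C", 25)]))  # 'B'
```

Unlike RMS, this decision must be re-evaluated whenever a new job arrives or a job completes, since deadlines (and hence priorities) change from one job instance to the next.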
Difference from Rate Monotonic (RMS):
| Feature | Rate Monotonic (RMS) | Earliest Deadline First (EDF) |
|---|---|---|
| Priority Type | Static (Fixed per task). | Dynamic (Changes per job instance). |
| Assignment Basis | Based on Period (Cycle time). | Based on absolute Deadline. |
| CPU Utilization | Can guarantee schedulability only up to the bound n(2^(1/n) - 1), which falls toward approx 69% as the number of tasks grows. | Can theoretically achieve 100% CPU utilization and still meet deadlines. |
| Complexity | Simpler to implement. | More complex (requires frequent priority updates). |
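The RMS utilization bound in the table is the Liu-Layland formula; a short Python sketch shows how it shrinks toward ln 2 as tasks are added:

```python
def rms_bound(n):
    """Liu & Layland schedulability bound for n periodic tasks under RMS."""
    return n * (2 ** (1 / n) - 1)

print(round(rms_bound(1), 3))    # 1.0   — a single task can use the whole CPU
print(round(rms_bound(2), 3))    # 0.828
print(round(rms_bound(100), 3))  # 0.696 — approaching ln 2 ≈ 0.693
```

A task set whose total utilization stays below this bound is guaranteed schedulable by RMS; EDF needs only total utilization ≤ 1.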
In the context of Thread Scheduling, explain the difference between Process-Contention Scope (PCS) and System-Contention Scope (SCS).
On operating systems that support kernel-level threads, scheduling occurs at two levels:
1. Process-Contention Scope (PCS):
- Context: Used in systems implementing Many-to-One or Many-to-Many threading models.
- Function: The thread library schedules user-level threads to run on an available LWP (Lightweight Process/Kernel thread).
- Competition: Competition for the CPU takes place among threads belonging to the same process.
- Programmer Control: Typically done via priority set by the programmer.
2. System-Contention Scope (SCS):
- Context: Used in One-to-One models (like Windows, Linux) and by the OS kernel in Many-to-Many.
- Function: The kernel decides which kernel-level thread to schedule onto a physical CPU core.
- Competition: Competition takes place among all threads in the system.
- Scope: Global across the system.
Compare FCFS, SJF, and Round Robin algorithms based on average waiting time and response time.
Comparison of Scheduling Algorithms:
- FCFS (First-Come, First-Served):
  - Avg Waiting Time: Generally high, especially if the convoy effect occurs.
  - Response Time: Poor (variable). A short process might wait a long time.
  - Best for: Background batch systems where interactivity is not required.
- SJF (Shortest Job First):
  - Avg Waiting Time: Minimal (Optimal). It gives the lowest possible average waiting time.
  - Response Time: Good for short processes, but long processes may starve (infinite response time).
- Round Robin (RR):
  - Avg Waiting Time: Typically higher than SJF, often higher than FCFS (depending on quantum).
  - Response Time: Excellent. Designed for time-sharing; with n processes and quantum q, no process waits more than (n - 1) × q time units for its next turn.
  - Best for: Interactive/Time-sharing systems.
Explain the CPU-I/O Burst Cycle and how it influences the design of the CPU Scheduler.
CPU-I/O Burst Cycle:
- Process execution consists of a cycle of CPU execution and I/O wait.
- Processes alternate between these two states.
- CPU Burst: The process is actively executing instructions.
- I/O Burst: The process is waiting for data transfer (disk, network, user input).
Process Types:
- I/O-Bound Process: Spends more time doing I/O than computation. Has many short CPU bursts.
- CPU-Bound Process: Spends more time doing computation. Has a few very long CPU bursts.
Influence on Scheduler Design:
- The scheduler must handle a mix of these processes.
- If the system selects only CPU-bound processes, I/O devices remain idle (low utilization).
- If the system selects I/O-bound processes, they finish CPU bursts quickly and go to I/O, potentially leaving the CPU idle if not managed well.
- Ideally, the scheduler (like Multilevel Feedback Queue) prioritizes I/O-bound processes to keep I/O devices busy and maintain system responsiveness.