Unit1 - Subjective Questions
CSE357 • Practice Questions with Detailed Answers
Define an Operating System and explain its primary objectives and functions.
Definition: An Operating System (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs. It acts as an intermediary between the user of a computer and the computer hardware.
Primary Objectives:
- Convenience: Making the computer system easier to use for the user.
- Efficiency: Managing system resources (CPU, memory, I/O) in an efficient manner.
- Ability to Evolve: Permitting the effective development, testing, and introduction of new system functions without interfering with service.
Key Functions:
- Process Management: Creating, scheduling, and terminating processes.
- Memory Management: Allocation and deallocation of memory space.
- File Management: Creating, deleting, and manipulating files and directories.
- Device Management: Managing I/O devices via drivers.
- Security: Protecting data and resources from unauthorized access.
Differentiate between Batch Operating Systems and Time-Sharing Operating Systems.
Batch Operating Systems:
- Interaction: Users do not interact directly with the computer. Jobs are prepared offline (e.g., on punch cards) and submitted in a batch.
- Processing: The OS processes jobs one after another without user intervention.
- CPU Usage: CPU is often idle during I/O operations, leading to lower utilization.
- Example: Payroll systems, bank statement generation.
Time-Sharing Operating Systems:
- Interaction: Allows many users to share the computer simultaneously via terminals.
- Processing: The CPU switches rapidly between users (using time quantums), giving the impression that each user has their own dedicated machine.
- CPU Usage: Higher CPU utilization due to rapid switching and multiprogramming.
- Response Time: Minimized to provide immediate feedback to users.
Explain the concept of System Calls. List the major categories of system calls.
System Calls provide an interface to the services made available by the operating system. They are the only way a user program can request services from the kernel (privileged mode).
When a system call is executed, a software interrupt (trap) occurs, switching the system from User Mode to Kernel Mode.
Major Categories of System Calls:
- Process Control: fork(), exit(), wait().
- File Management: open(), read(), write(), close().
- Device Management: ioctl(), read(), write().
- Information Maintenance: getpid(), alarm(), time().
- Communications: pipe(), shmget(), mmap().
Describe the difference between Hard Real-Time and Soft Real-Time operating systems.
Real-Time Operating Systems (RTOS) are used when rigid time requirements have been placed on the operation of a processor or the flow of data.
Hard Real-Time Systems:
- Constraint: Critical tasks must complete within a guaranteed time frame.
- Consequence of Failure: Missing a deadline results in total system failure or catastrophic consequences.
- Storage: Often lacks secondary storage; data is typically stored in ROM.
- Examples: Flight control systems, medical pacemakers, industrial robotics.
Soft Real-Time Systems:
- Constraint: A critical real-time task gets priority over other tasks and retains that priority until it completes.
- Consequence of Failure: Missing a deadline results in degraded performance but not system failure.
- Examples: Multimedia streaming, virtual reality, underwater sonar.
What is a Process Control Block (PCB)? Detail the information stored in a PCB.
A Process Control Block (PCB) (also called a Task Control Block) is a data structure in the operating system kernel containing the information needed to manage a specific process.
Information stored in a PCB includes:
- Process State: The current state (New, Ready, Running, Waiting, Terminated).
- Program Counter: The address of the next instruction to be executed.
- CPU Registers: Contents of accumulators, index registers, stack pointers, etc., saved when an interrupt occurs.
- CPU Scheduling Information: Process priority, pointers to scheduling queues, and other scheduling parameters.
- Memory Management Information: Base and limit registers, or page tables/segment tables.
- Accounting Information: CPU time used, time limits, account numbers.
- I/O Status Information: List of I/O devices allocated to the process and open files.
Explain the Process State Transition Diagram with a neat description of each state.
As a process executes, it changes state. The standard five-state model includes:
- New: The process is being created. It resides in secondary memory.
- Ready: The process is waiting to be assigned to a processor. It is loaded in the main memory.
- Transition (New → Ready): Admitted by the long-term scheduler.
- Running: Instructions are being executed.
- Transition (Ready → Running): Dispatched by the short-term scheduler.
- Waiting (Blocked): The process is waiting for some event to occur (e.g., I/O completion or signal reception).
- Transition (Running → Waiting): Process requests I/O or an event.
- Transition (Waiting → Ready): I/O or event is completed.
- Terminated: The process has finished execution.
- Transition (Running → Terminated): Exit.
Explain the difference between Logical Address Space and Physical Address Space. How does the Memory Management Unit (MMU) function?
Logical Address (Virtual Address):
- Generated by the CPU during program execution.
- It refers to the address viewed by the user/process.
- The set of all logical addresses is the Logical Address Space.
Physical Address:
- The actual address seen by the memory unit (RAM).
- It identifies a specific location in the physical memory hardware.
- The set of all physical addresses corresponding to logical addresses is the Physical Address Space.
Memory Management Unit (MMU):
- The MMU is a hardware device that maps logical addresses to physical addresses at run-time.
- In a simple scheme (using a Relocation Register), the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
Define Fragmentation in memory management. Distinguish between Internal and External Fragmentation.
Fragmentation occurs when memory is broken into small, non-contiguous blocks, making it difficult to allocate memory to processes efficiently.
1. External Fragmentation:
- Definition: Total memory space exists to satisfy a request, but it is not contiguous.
- Cause: Frequent allocation and deallocation of variable-sized memory blocks (common in Segmentation).
- Solution: Compaction (shuffling memory to place free memory together) or Paging.
2. Internal Fragmentation:
- Definition: The memory block allocated to a process is slightly larger than the requested memory. The difference is unused memory internal to a partition.
- Cause: Allocating memory in fixed-sized blocks (common in Paging).
- Solution: Best-fit allocation strategies can minimize it, but it is inherent to fixed-partition schemes.
Detail the concept of Paging. How does it solve the problem of external fragmentation?
Paging is a memory management scheme that permits the physical address space of a process to be non-contiguous.
Mechanism:
- Frames: Physical memory is broken into fixed-sized blocks called frames.
- Pages: Logical memory is broken into blocks of the same size called pages.
- Page Table: A data structure used to map logical pages to physical frames. It contains the base address of each page in physical memory.
Address Translation:
Every address generated by the CPU is divided into two parts:
- Page Number (p): Used as an index into the page table.
- Page Offset (d): Displacement within the page/frame.
Solving External Fragmentation:
Since any logical page can be placed in any free physical frame, memory does not need to be contiguous. The OS keeps a list of free frames. When a process needs pages, the OS finds free frames (anywhere in memory) and loads the pages, eliminating external fragmentation completely. However, it may suffer from internal fragmentation in the last frame.
What is Virtual Memory? Explain the concept of Demand Paging.
Virtual Memory is a technique that allows the execution of processes that are not completely in memory. It separates user logical memory from physical memory, allowing the logical address space to be much larger than the physical address space.
Demand Paging:
Demand paging is the most common implementation of virtual memory (Lazy Swapper).
- Concept: Pages are only loaded into memory when they are actually required (demanded) by the program execution.
- Mechanism:
- When a process tries to access a page not currently in memory, the hardware traps to the OS (generating a Page Fault).
- The OS checks an internal table to verify if the reference was valid.
- The OS finds a free frame in physical memory.
- It schedules a disk operation to read the desired page into the newly allocated frame.
- The page table is updated to indicate the page is now in memory (Valid bit set).
- The instruction that caused the page fault is restarted.
Compare Paging and Segmentation memory management techniques.
| Feature | Paging | Segmentation |
|---|---|---|
| Basic Unit | Fixed-size block (Page). | Variable-size block (Segment). |
| View | Physical view of memory (system-centric). | Logical view of memory (user-centric). |
| Fragmentation | Suffers from Internal Fragmentation. No External Fragmentation. | Suffers from External Fragmentation. No Internal Fragmentation. |
| Address | CPU generates Page Number + Offset. | CPU generates Segment Number + Offset. |
| Size | Determined by hardware/architecture. | Determined by the user/compiler (e.g., Code, Stack, Data). |
| Protection | Difficult to protect specific data structures separately. | Easy to protect/share specific segments (e.g., Read-only code segment). |
Define the following Scheduling Criteria: CPU utilization, Throughput, Turnaround time, Waiting time, and Response time.
- CPU Utilization: The percentage of time the CPU is busy executing processes. Ideally, this should be maximized (approaching 100%).
- Throughput: The number of processes that complete their execution per time unit.
- Turnaround Time: The interval from the time of submission of a process to the time of completion.
- Waiting Time: The sum of periods spent waiting in the ready queue.
- Response Time: The time from the submission of a request until the first response is produced (important in time-sharing systems).
Explain the Round Robin (RR) scheduling algorithm with an example. Why is it suitable for time-sharing systems?
Round Robin (RR) is a preemptive scheduling algorithm designed for time-sharing systems.
Mechanism:
- A small unit of time called a Time Quantum (q) (or time slice) is defined.
- The ready queue is treated as a circular queue.
- The scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 quantum.
- If a process's burst time is less than q, it releases the CPU voluntarily.
- If the burst time is greater than q, the timer interrupts, the context is saved, and the process is put at the tail of the ready queue.
Example:
- Quantum: 4ms
- Process P1: Burst 20ms
- Process P2: Burst 3ms
- Execution: P1 runs for 4 ms → P2 runs for 3 ms (finishes) → P1 runs for 4 ms → ...
Suitability for Time-Sharing:
It is ideal because it ensures fairness (no starvation) and provides a fast response time for interactive users. No process monopolizes the CPU.
Distinguish between Preemptive and Non-Preemptive scheduling.
Non-Preemptive Scheduling:
- Once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state (e.g., for I/O).
- Pros: Low overhead (fewer context switches).
- Cons: Poor response time; bugs can freeze the machine.
- Algorithms: First-Come First-Served (FCFS), Shortest Job First (SJF - Non-preemptive version).
Preemptive Scheduling:
- The CPU can be taken away from a currently executing process if a higher priority process arrives or a time quantum expires.
- Pros: Better responsiveness; prevents CPU monopolization.
- Cons: Higher overhead due to frequent context switching; requires handling race conditions in shared data.
- Algorithms: Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (Preemptive).
What is the Critical Section Problem? List the three requirements that a solution to the critical section problem must satisfy.
Critical Section Problem:
Consider a system of n processes. Each process has a segment of code, called a Critical Section (CS), in which the process may be changing common variables, updating a table, writing a file, etc. The problem is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Requirements for a Solution:
- Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
- Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not in their remainder sections can participate in deciding which will enter the CS next, and this selection cannot be postponed indefinitely.
- Bounded Waiting: There exists a bound (limit) on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted (prevents starvation).
Explain Semaphores. Distinguish between Counting Semaphores and Binary Semaphores.
A Semaphore is a synchronization tool consisting of an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() (or P) and signal() (or V).
- wait(S): While S ≤ 0, do nothing (the caller waits); then decrement S by 1.
- signal(S): Increment S by 1 (potentially releasing a waiting process).
Binary Semaphore (Mutex):
- The integer value can range only between 0 and 1.
- Used primarily to solve the Critical Section problem (Mutual Exclusion).
- If initialized to 1, the first process calls wait() (setting the value to 0) and enters. A second process calling wait() blocks until the first calls signal().
Counting Semaphore:
- The integer value can range over an unrestricted domain.
- Used to control access to a given resource consisting of a finite number of instances.
- Initialized to the number of resources available.
- wait() decreases the count; signal() increases it.
Describe the Producer-Consumer Problem and how it can be solved using Semaphores.
The Problem:
A Producer process generates data and puts it into a shared fixed-size buffer. A Consumer process removes data from the buffer. The problem is to synchronize them so that the producer does not add data when the buffer is full, and the consumer does not remove data when the buffer is empty.
Solution using Semaphores:
We use three semaphores:
- mutex (initialized to 1): Provides mutual exclusion for buffer access.
- empty (initialized to buffer size n): Counts empty slots.
- full (initialized to 0): Counts filled slots.
Producer Logic:
```text
wait(empty);    // Wait for an empty slot
wait(mutex);    // Enter Critical Section
/* add item to buffer */
signal(mutex);  // Exit Critical Section
signal(full);   // Signal that a slot is full
```
Consumer Logic:
```text
wait(full);     // Wait for a filled slot
wait(mutex);    // Enter Critical Section
/* remove item from buffer */
signal(mutex);  // Exit Critical Section
signal(empty);  // Signal that a slot is empty
```
What is a Race Condition? Provide an example.
Race Condition:
A race condition occurs when several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place. The final value of the shared data depends on the scheduling order.
Example:
Imagine two processes, P1 and P2, trying to increment a shared variable counter (initially 5).
Code: counter++ is actually implemented as three machine steps:
1. Register = counter
2. Register = Register + 1
3. counter = Register
Scenario:
- P1 executes step 1 (Reg1 = 5).
- Context switch to P2.
- P2 executes steps 1, 2, 3 (Reg2 = 5, Reg2 = 6, counter = 6).
- Context switch back to P1.
- P1 executes steps 2, 3 (Reg1 = 6, counter = 6).
Result: The counter is 6, but it should be 7 (5 + 1 + 1). Data consistency is lost due to the race condition.
Explain the two fundamental models of Inter-Process Communication (IPC): Shared Memory and Message Passing.
IPC is a mechanism that allows processes to communicate and synchronize their actions.
1. Shared Memory:
- Concept: A region of memory is established that is shared by cooperating processes. Processes can exchange information by reading and writing data to the shared region.
- Speed: Faster, as memory access happens at bus speeds. No kernel intervention is required after setup.
- Synchronization: Major issue; processes must ensure they are not writing to the same location simultaneously (requires Semaphores/Mutex).
2. Message Passing:
- Concept: Communication takes place by exchanging messages between cooperating processes.
- Operations: send(message) and receive(message).
- Implementation: Can be direct (Process A sends to Process B) or indirect (via Mailboxes/Ports).
- Synchronization: Easier to implement (blocking or non-blocking calls).
- Overhead: Slower than shared memory due to kernel intervention (system calls) for every message.
List and explain the four necessary conditions for a Deadlock to occur (Coffman Conditions).
For a deadlock to arise, all four of the following conditions must hold simultaneously:
- Mutual Exclusion: At least one resource must be held in a non-sharable mode. If another process requests that resource, the requesting process must be delayed until the resource has been released.
- Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
- No Preemption: Resources cannot be preempted. A resource can be released only voluntarily by the process holding it, after that process has completed its task.
- Circular Wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn is waiting for a resource held by P0.