Unit 1 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Define an Operating System. Explain the primary functions performed by an Operating System.
Definition:
An Operating System (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs. It acts as an intermediary between the computer user and the computer hardware.
Primary Functions of an OS:
- Process Management: The OS manages process creation, deletion, suspension, and resumption. It handles synchronization and communication between processes.
- Memory Management: It keeps track of which parts of memory are currently being used and by whom. It decides which processes to load when memory space becomes available.
- File Management: The OS manages files, directories, and storage space. It handles file creation, deletion, and access control.
- Device Management (I/O): It manages device drivers and buffers, ensuring efficient communication between I/O devices and the CPU.
- Security and Protection: It protects data and resources from unauthorized access and ensures that one process does not interfere with another.
- User Interface: Provides a Command Line Interface (CLI) or Graphical User Interface (GUI) for users to interact with the system.
Differentiate between User Mode and Supervisor (Kernel) Mode. How does the hardware switch between these modes?
User Mode vs. Supervisor Mode:
| Feature | User Mode | Supervisor (Kernel) Mode |
|---|---|---|
| Definition | The mode in which user applications run. | The mode in which the OS kernel and privileged instructions run. |
| Access | Restricted access to hardware and memory. | Full access to all hardware and memory. |
| Privileged Instructions | Cannot execute privileged instructions (e.g., I/O control, interrupt mgmt). | Can execute all instructions, including privileged ones. |
| Mode Bit | Usually set to 1. | Usually set to 0. |
| Crash Impact | If a user program crashes, only that process is affected. | If the kernel crashes, the entire system halts. |
Mode Switching:
- User to Kernel: When a user program needs an OS service (System Call) or when an interrupt/trap occurs, the hardware switches the mode bit from 1 to 0.
- Kernel to User: After the OS handles the request or interrupt, it executes a return instruction, switching the mode bit back from 0 to 1 before passing control back to the user program.
Explain the concept of Multiprogramming. How does it improve CPU utilization compared to Simple Batch Systems?
Multiprogramming:
Multiprogramming is a technique where multiple programs reside in the main memory simultaneously. The Operating System keeps several jobs in memory and picks one of them to execute.
Working Mechanism:
- The OS picks a job from the job pool and starts executing it.
- When that job needs to wait for an I/O operation (e.g., reading from a disk), the CPU does not sit idle.
- Instead, the OS switches the CPU to another job that is ready to execute.
- This cycle continues, ensuring the CPU is always busy as long as there is at least one job to execute.
Comparison with Batch Systems:
- Batch Systems: The CPU is often idle because I/O speeds are much slower than CPU speeds. If the current job waits for I/O, the CPU stops.
- Improvement: Multiprogramming overlaps CPU and I/O operations. While one program waits for I/O, the CPU executes another, significantly increasing CPU Utilization and Throughput.
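The gain can be sketched with a classic probabilistic model (not stated above, but standard in OS textbooks): if each job spends a fraction p of its time waiting for I/O, then with n jobs in memory the CPU sits idle only when all n wait at once, so utilization is approximately 1 − pⁿ.

```python
# Classic multiprogramming utilization model (an approximation):
# if each job waits for I/O a fraction p of the time, the chance
# that all n resident jobs are waiting simultaneously is p**n,
# so the CPU is busy roughly 1 - p**n of the time.
def cpu_utilization(p: float, n: int) -> float:
    return 1 - p ** n

# Jobs that spend 80% of their time on I/O:
print(round(cpu_utilization(0.8, 1), 2))  # one job: CPU busy only 0.2
print(round(cpu_utilization(0.8, 4), 2))  # four jobs: roughly 0.59
```

The model ignores scheduling overhead, but it shows why adding resident jobs raises utilization sharply.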
What is a Time-Sharing (Multitasking) Operating System? How does it differ from a Multiprogramming System?
Time-Sharing (Multitasking) System:
Time-sharing is a logical extension of multiprogramming. The CPU switches jobs so frequently that users can interact with each program while it is running. The CPU is allocated to each process for a specific time slice (Time Quantum).
Differences:
| Feature | Multiprogramming | Time-Sharing (Multitasking) |
|---|---|---|
| Objective | Maximize CPU utilization (keep CPU busy). | Minimize response time (interactive computing). |
| Switching Condition | Switches only when the current process performs I/O or terminates. | Switches when the time quantum expires or I/O occurs. |
| User Interaction | Generally not interactive (Batch based). | Highly interactive. |
| Responsiveness | User waits for the entire batch to process. | User gets immediate feedback (feels like a dedicated CPU). |
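Time slicing can be illustrated with a minimal round-robin sketch (the process names and burst times below are made up for illustration):

```python
from collections import deque

# Minimal round-robin sketch of time-sharing: each process runs for
# at most one time quantum, then goes to the back of the ready queue.
def round_robin(bursts, quantum):
    """bursts: {name: remaining CPU time}. Returns completion order."""
    ready = deque(bursts.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:            # finishes within this slice
            order.append(name)
        else:                               # quantum expires: Running -> Ready
            ready.append((name, remaining - quantum))
    return order

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))  # ['B', 'C', 'A']
```

Short jobs (B, C) finish before the long job (A), which is exactly the responsiveness a time-sharing system aims for.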
Describe Real-Time Operating Systems (RTOS). Distinguish between Hard and Soft real-time systems.
Real-Time Operating System (RTOS):
An RTOS is designed for applications where data processing must complete within a fixed, rigid time constraint; if processing is not done within the defined time limit, the system fails.
Types of RTOS:
- Hard Real-Time Systems:
  - Constraint: Critical tasks must be completed on time. Missing a deadline results in total system failure.
  - Storage: Often lacks secondary storage; data is stored in ROM.
  - Examples: Flight control systems, pacemakers, industrial robotics.
- Soft Real-Time Systems:
  - Constraint: A critical real-time task gets priority over other tasks and retains that priority until it completes. Missing a deadline is undesirable but not catastrophic; it results in degraded quality.
  - Examples: Multimedia streaming (lag causes a video drop, not a crash), Virtual Reality, scientific projects.
Explain the architecture and advantages of Distributed Operating Systems.
Distributed Operating Systems:
A distributed OS manages a group of distinct computers and makes them appear to be a single computer. The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are often referred to as loosely coupled systems.
Architecture:
- Nodes: Multiple independent CPUs with their own memory and clock.
- Communication: Nodes communicate via message passing.
- Transparency: The user does not need to know which node provides the resource.
Advantages:
- Resource Sharing: A user at one site can use resources (printers, files) at another site.
- Computation Speedup: A large computation can be partitioned into sub-computations that run concurrently across various sites (Load Balancing).
- Reliability: If one site fails, the remaining sites can continue operating.
- Communication: Provides a mechanism for human-to-human communication (email, etc.).
What are System Calls? Explain the sequence of operations involved when a user program invokes a system call.
System Calls:
A system call provides the interface between a running program (process) and the Operating System. It allows the user-level process to request services from the kernel (like reading a file or creating a process).
Sequence of Operations:
- Parameters: The user program places arguments for the system call in registers or a stack.
- Trap: The program executes a special instruction (trap/interrupt) that switches the system from User Mode to Kernel Mode.
- Dispatch: The hardware transfers control to a specific location in the OS (Interrupt Vector table), which identifies the type of system call.
- Execution: The OS executes the requested service in Kernel Mode.
- Return: The OS places the return value (status/result) in a register.
- Switch Back: The `return from system call` instruction is executed, switching the mode back to User Mode and returning control to the user program.
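The sequence above is normally hidden behind library wrappers. A Unix-flavored sketch using Python's `os` module, whose functions are thin wrappers over the corresponding system calls:

```python
import os

# Each os.* call below wraps one kernel system call: the library
# places the arguments, executes a trap into kernel mode, the kernel
# does the work, and the result comes back as the return value.
r, w = os.pipe()               # pipe(): kernel creates a channel
n = os.write(w, b"hello")      # write(): traps to the kernel, returns count
data = os.read(r, n)           # read(): traps again, returns the bytes
os.close(r)                    # close(): kernel releases the descriptors
os.close(w)
print(n, data)                 # 5 b'hello'
```

Tools such as strace (on Linux) can show these traps happening as the program runs.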
Explain the Process Control Block (PCB). List and describe the specific information maintained in a PCB.
Process Control Block (PCB):
A PCB (also called a Task Control Block) is a data structure in the Operating System kernel representing a specific process. It serves as the repository for any information that varies from process to process.
Contents of PCB:
- Process State: Current state (New, Ready, Running, Waiting, Terminated).
- Program Counter (PC): Indicates the address of the next instruction to be executed for this process.
- CPU Registers: Contents of accumulators, index registers, stack pointers, etc., which must be saved when an interrupt occurs.
- CPU-Scheduling Information: Priority, pointers to scheduling queues, and other scheduling parameters.
- Memory-Management Information: Base and limit registers, or page tables/segment tables.
- Accounting Information: CPU time used, time limits, account numbers, process ID (PID).
- I/O Status Information: List of I/O devices allocated to the process, list of open files, etc.
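A hypothetical sketch of a PCB as a record type. Real kernels implement it as a C struct (Linux's `task_struct`, for example); the field names here are illustrative only:

```python
from dataclasses import dataclass, field

# Illustrative PCB layout mirroring the fields listed above.
@dataclass
class PCB:
    pid: int                                   # accounting: process ID
    state: str = "New"                         # New/Ready/Running/Waiting/Terminated
    program_counter: int = 0                   # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                          # CPU-scheduling information
    memory_limits: tuple = (0, 0)              # base/limit (memory management)
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                 # accounting information

p = PCB(pid=42)
print(p.pid, p.state)   # 42 New
```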
Draw and explain the Process State Transition Diagram (Process Life Cycle).
Process States:
A process changes state as it executes. The typical states are:
- New: The process is being created.
- Ready: The process is waiting to be assigned to a processor.
- Running: Instructions are being executed.
- Waiting (Blocked): The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
- Terminated: The process has finished execution.
Transitions:
- New → Ready: The OS admits the process to the ready queue.
- Ready → Running: The Scheduler dispatches the process (Context Switch).
- Running → Ready: An interrupt occurs (e.g., time quantum expires).
- Running → Waiting: The process invokes an I/O operation or waits for an event.
- Waiting → Ready: The I/O operation completes or the event occurs.
- Running → Terminated: The process finishes execution (`exit`).
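The transitions above can be written as a small table that a scheduler sketch could consult (illustrative only):

```python
# The legal state transitions of the process life cycle.
VALID_TRANSITIONS = {
    ("New", "Ready"),          # admitted
    ("Ready", "Running"),      # dispatched by the scheduler
    ("Running", "Ready"),      # interrupt / time quantum expiry
    ("Running", "Waiting"),    # I/O request or event wait
    ("Waiting", "Ready"),      # I/O completion / event occurs
    ("Running", "Terminated"), # exit
}

def can_transition(src: str, dst: str) -> bool:
    return (src, dst) in VALID_TRANSITIONS

print(can_transition("Running", "Waiting"))  # True
print(can_transition("Waiting", "Running"))  # False: must pass through Ready
```

Note that a Waiting process never goes directly to Running; it must be made Ready and then dispatched.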
Define Context Switching. Why is it considered an overhead?
Context Switching:
Context switching is the process of saving the state (context) of the currently running process and restoring the state of the next process to be run. The context is represented in the Process Control Block (PCB).
Mechanism:
When an interrupt or system call occurs:
- Save the registers, Program Counter, and stack pointer of the old process into its PCB.
- Load the registers, Program Counter, and stack pointer from the new process's PCB.
Overhead:
Context switching is considered pure overhead because the system does no useful work while switching. The CPU is busy executing OS administration instructions rather than user programs. The speed varies depending on memory speed, number of registers, and hardware support.
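A toy simulation of the save/restore mechanism, with PCBs modeled as plain dicts (illustrative; a real switch happens in kernel assembly code):

```python
# The "CPU" holds the context of whichever process is running.
cpu = {"pc": 0, "regs": {}}

def context_switch(old_pcb, new_pcb):
    # Save the old process's context into its PCB...
    old_pcb["pc"] = cpu["pc"]
    old_pcb["regs"] = dict(cpu["regs"])
    # ...then restore the new process's context from its PCB.
    cpu["pc"] = new_pcb["pc"]
    cpu["regs"] = dict(new_pcb["regs"])
    # Note: no user work happened here -- pure overhead.

p1 = {"pc": 0, "regs": {}}
p2 = {"pc": 100, "regs": {"ax": 7}}

cpu["pc"], cpu["regs"] = 5, {"ax": 1}   # P1 has been running for a while
context_switch(p1, p2)                  # switch P1 -> P2
print(cpu["pc"], p1["pc"])              # 100 5
```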
Distinguish between a Program and a Process.
Program vs. Process:
| Feature | Program | Process |
|---|---|---|
| Definition | A set of instructions stored on a secondary storage device (e.g., hard disk). | A program in execution, loaded into main memory. |
| Nature | Passive Entity (it does nothing until executed). | Active Entity (it performs actions). |
| Resources | Requires only storage space. | Requires CPU, memory, I/O resources, and registers. |
| Lifespan | Exists permanently until deleted. | Exists only while it is executing (bounded life cycle). |
| Cardinality | One program can correspond to multiple processes (e.g., opening two Calculator windows). | Each process is a unique instance. |
Compare Monolithic Kernel and Microkernel OS structures.
Monolithic Kernel:
- Structure: The entire OS runs as a single large program in kernel mode. All services (file system, device drivers, memory management) are part of the kernel space.
- Pros: High performance because components communicate directly via function calls (low overhead).
- Cons: Complex structure; difficult to maintain. A bug in a device driver can crash the entire system (e.g., Early Unix, Linux).
Microkernel:
- Structure: Only the most essential functions (scheduling, basic IPC, low-level hardware handling) are in the kernel. Other services (file servers, device drivers) run as user-mode processes.
- Pros: Easier to extend; more secure and reliable (if a driver crashes, the kernel survives).
- Cons: Lower performance due to increased system overhead from message passing between user modules and the kernel (e.g., Mach, QNX, Minix).
Differentiate between Independent and Co-operating processes. Why do processes need to co-operate?
Distinction:
- Independent Process: A process that cannot affect or be affected by other processes executing in the system. It does not share data with others.
- Co-operating Process: A process that can affect or be affected by other processes. It shares data or resources with other processes.
Reasons for Co-operation:
- Information Sharing: Several users may be interested in the same piece of information (e.g., a shared file or database).
- Computation Speedup: If a task can be broken into sub-tasks that run in parallel, execution is faster (requires multi-core hardware).
- Modularity: Dividing system functions into separate processes or threads for better organization.
- Convenience: A user may want to perform multiple tasks at one time (e.g., editing, printing, and compiling).
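One common co-operation mechanism, a pipe between parent and child, can be sketched as follows (a Unix-only sketch, since it relies on `os.fork`):

```python
import os

# Two co-operating processes sharing information through a pipe.
r, w = os.pipe()              # kernel-managed channel: read end, write end

pid = os.fork()
if pid == 0:                  # child: the producer
    os.close(r)
    os.write(w, b"data from child")
    os.close(w)
    os._exit(0)               # terminate without running parent code
else:                         # parent: the consumer
    os.close(w)
    message = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)        # reap the child
    print(message.decode())   # data from child
```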
Explain the concept of Multiprocessing. Differentiate between Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (ASMP).
Multiprocessing:
A multiprocessing system (also known as a parallel system or tightly coupled system) has two or more processors in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.
Symmetric Multiprocessing (SMP):
- Concept: Each processor performs all tasks, including operating system functions and user processes.
- Relationship: All processors are peers; there is no master-slave relationship.
- Performance: Can be more complex to synchronize but allows efficient load balancing. Most modern OSs (Windows, Linux, macOS) support SMP.
Asymmetric Multiprocessing (ASMP):
- Concept: Each processor is assigned a specific task. A Master processor controls the system, schedules, and handles I/O, while Slave processors execute user tasks.
- Relationship: Master-Slave relationship.
- Simplicity: Easier to design the OS, but the master processor can become a bottleneck.
Describe Simple Batch Systems. What were their primary drawbacks?
Simple Batch Systems:
In early computers, the user did not interact directly with the machine. Users prepared jobs (program + data + control info) on punch cards and submitted them to a computer operator.
Working:
- The operator grouped jobs with similar needs into batches.
- A small program called the Resident Monitor automatically transferred control from one job to the next.
- When a job finished, the monitor loaded the next one.
Drawbacks:
- Lack of Interaction: Once a job started, the user could not intervene (debugging was tedious).
- CPU Idle Time: Mechanical I/O devices (card readers) were much slower than the electronic CPU. The CPU remained idle while waiting for I/O operations.
- Turnaround Time: It took a long time between submitting a job and getting the result.
Discuss the operations on processes, specifically explaining Process Creation and Process Termination.
Process Creation:
- A process may create several new processes via a system call (e.g., `fork()` in Unix).
- The creating process is the Parent, and the new process is the Child.
- Resource Sharing: The child may share all, some, or none of the parent's resources.
- Execution: Parent and child execute concurrently, or the parent waits for the child.
- Address Space: The child acts as a duplicate of the parent or loads a new program.
Process Termination:
- A process finishes execution by executing the last statement and asking the OS to delete it (the `exit()` system call).
- The OS deallocates resources (memory, files, I/O buffers).
- The status data is returned to the parent process.
- Cascading Termination: In some systems, if a parent terminates, all its children must also be terminated.
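The creation/termination steps above can be sketched on Unix (a minimal example; the status code 7 is arbitrary):

```python
import os

# fork() creates the child; _exit() terminates it with a status
# code; waitpid() lets the parent collect that status.
pid = os.fork()
if pid == 0:
    # Child: a duplicate of the parent's address space.
    os._exit(7)                      # terminate, returning status 7
else:
    # Parent: wait for the child, then decode its exit status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print("child exited with", exit_code)   # child exited with 7
```

If the parent skipped the `waitpid()`, the terminated child would linger as a zombie until reaped.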
Distinguish between Parallel Systems and Distributed Systems.
Parallel vs. Distributed Systems:
| Feature | Parallel Systems (Tightly Coupled) | Distributed Systems (Loosely Coupled) |
|---|---|---|
| Memory | Shared global memory accessible by all processors. | Each processor has its own local memory (private). |
| Clock | Shared global clock. | No global clock; synchronization is complex. |
| Communication | Via shared memory. | Via message passing over a network. |
| Proximity | Processors are physically close (same rack/board). | Processors can be geographically separated. |
| Goal | Increase speed/computation power. | Resource sharing, reliability, and communication. |
| Coupling | Tightly Coupled. | Loosely Coupled. |
List and briefly explain the major categories of System Calls.
System calls can be grouped into five major categories:
- Process Control: `load`, `execute`, `end`, `abort`, `create process`, `terminate process`, `wait`, `signal`. Used to control the execution flow of processes.
- File Management: `create file`, `delete file`, `open`, `close`, `read`, `write`. Used to manipulate files and directories.
- Device Management: `request device`, `release device`, `read`, `write`, `get attributes`. Used to control hardware peripherals.
- Information Maintenance: `get time`, `set time`, `get system data`, `set system data`. Used to transfer information between the user and the OS.
- Communications: `create connection`, `delete connection`, `send message`, `receive message`. Used for inter-process communication (IPC) locally or over a network.
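An informal tour of several categories through Python's `os` wrappers (a Unix-flavored sketch; each call maps onto one or more underlying system calls):

```python
import os
import tempfile
import time

pid = os.getpid()                # process control: get this process's ID
fd, path = tempfile.mkstemp()    # file management: create and open a file
os.write(fd, b"hello")           # file I/O: write
os.close(fd)                     # file management: close
now = time.time()                # information maintenance: get time
os.remove(path)                  # file management: delete file
print(pid > 0, now > 0)          # True True
```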
What is the Layered Approach in Operating System structure? What are its benefits?
Layered Approach:
In this structure, the Operating System is broken down into a number of layers (levels). The lowest layer (Layer 0) is the hardware, and the highest layer (Layer N) is the user interface.
- Modularity: Each layer uses the functions (operations) and services of only the lower-level layers.
- Information Hiding: Layer N does not need to know how operations in Layer N−1 are implemented, only what they do.
Benefits:
- Ease of Debugging: Layers can be debugged sequentially. If Layer 0 works, and a bug appears in Layer 1, the problem is definitely in Layer 1.
- Modularity: Easier to update or replace specific layers without affecting the whole system.
- Simplicity: Simplifies verification and system design.
Briefly trace the evolution of Operating Systems from serial processing to modern systems.
The evolution of Operating Systems proceeded through several distinct phases:
- Serial Processing (No OS): In the 1940s/50s, programmers interacted directly with hardware using machine language. No OS existed; users booked machine time.
- Simple Batch Systems: Jobs were grouped into batches to reduce setup time. Introduction of the Monitor program (early OS).
- Multiprogramming Batch Systems: Memory partitioned to hold multiple jobs. When one job waited for I/O, the CPU switched to another to improve utilization.
- Time-Sharing Systems: Extension of multiprogramming. Rapid switching allowed user interaction. Introduction of terminals.
- Personal Computer Systems: Focus shifted to user convenience and responsiveness (GUI) rather than just CPU efficiency (e.g., DOS, Windows, Mac OS).
- Distributed Systems: Networked computers sharing resources.
- Mobile/Handheld Systems: Optimized for power consumption and touch interfaces (Android, iOS).