Unit1 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Define an Operating System and explain its primary functions.
Definition
An Operating System (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs. It acts as an intermediary between the user of a computer and the computer hardware.
Primary Functions of an OS
- Process Management: The OS manages process creation, deletion, suspension, and resumption. It handles synchronization and communication between processes.
- Memory Management: It keeps track of which parts of memory are currently being used and by whom. It decides which processes (or parts thereof) and data to move into and out of memory.
- File Management: The OS manages files, directories, and storage space. It handles file creation, deletion, and access control.
- Device Management: It manages device communication via their respective drivers. It handles input/output operations.
- Security and Protection: The OS ensures that access to system resources is controlled and protects the system from unauthorized access and malware.
Differentiate between User Mode and Supervisor (Kernel) Mode in an Operating System. How does the system switch between them?
To ensure proper execution of the operating system, we must be able to distinguish between the execution of operating-system code and user-defined code.
Differences
| Feature | User Mode | Supervisor (Kernel) Mode |
|---|---|---|
| Access | Restricted access to hardware and instructions. | Full access to all hardware and privileged instructions. |
| Mode Bit | Mode bit is set to 1. | Mode bit is set to 0. |
| Purpose | Used for executing user applications. | Used for executing OS core tasks (interrupts, system calls). |
| Crash Impact | If a user program crashes, only that process fails. | If a kernel process crashes, the entire system may halt. |
Mode Switching
- User to Kernel: When a user application requests a service from the OS (via a System Call) or an interrupt occurs (hardware or software trap), the hardware switches the mode bit from 1 to 0.
- Kernel to User: Before passing control back to the user program, the OS switches the mode bit back to 1.
Explain the concept of Multiprogramming. How does it increase CPU utilization?
Concept of Multiprogramming
Multiprogramming is an operating system concept where multiple programs are kept in the main memory simultaneously, ready for execution. The objective is to maximize CPU utilization.
Mechanism for Increased CPU Utilization
In a non-multiprogrammed system (like a simple batch system), the CPU sits idle whenever a running program needs to perform an I/O operation (like reading from a disk).
In a Multiprogrammed System:
- The OS keeps several jobs in memory at once.
- The OS picks and begins executing one of the jobs in memory.
- Eventually, the job may have to wait for some task, such as an I/O operation.
- In a non-multiprogrammed system, the CPU would sit idle. However, in a multiprogramming system, the OS simply switches to, and executes, another job.
- When that job needs to wait, the CPU switches to another, and so on.
This ensures that the CPU always has something to execute, assuming there are enough jobs, thereby significantly increasing CPU utilization.
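The effect can be quantified with a common textbook approximation (not stated in these notes, so treat it as a supplementary model): if each resident job spends fraction p of its time waiting on I/O, the CPU is idle only when all n jobs wait at once, giving utilization ≈ 1 − pⁿ. A minimal sketch (the function name is ours):

```python
def cpu_utilization(io_wait_fraction: float, degree: int) -> float:
    """Approximate CPU utilization with `degree` jobs resident in memory,
    each spending `io_wait_fraction` of its time waiting on I/O.
    The CPU idles only when every resident job waits simultaneously."""
    return 1.0 - io_wait_fraction ** degree

# Jobs that wait on I/O 80% of the time keep a single-job system only
# 20% busy, but five resident jobs push utilization past 65%.
single = cpu_utilization(0.80, 1)
five = cpu_utilization(0.80, 5)
```

This illustrates why the long-term scheduler's degree of multiprogramming matters: utilization rises quickly as more jobs are kept in memory.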
Describe the Process Control Block (PCB) and list its key components.
Process Control Block (PCB)
A Process Control Block (PCB), also known as a Task Control Block, is a data structure in the operating system kernel containing the information needed to manage a specific process. It serves as the repository for any information that may vary from process to process.
Key Components of PCB
- Process State: The state may be new, ready, running, waiting, halted, etc.
- Program Counter: The counter indicates the address of the next instruction to be executed for this process.
- CPU Registers: Includes accumulators, index registers, stack pointers, and general-purpose registers used to save state during interrupts.
- CPU Scheduling Information: Includes process priority, pointers to scheduling queues, and other scheduling parameters.
- Memory-Management Information: Includes the value of the base and limit registers or page tables.
- Accounting Information: Includes the amount of CPU and real time used, time limits, account numbers, etc.
- I/O Status Information: The list of I/O devices allocated to the process, a list of open files, etc.
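The components above can be sketched as a single data structure. This is an illustrative Python dataclass, not the layout of any real kernel's PCB; all field names are ours:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"               # process state: new, ready, running, waiting, terminated
    program_counter: int = 0         # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                # CPU scheduling information
    base: int = 0                    # memory-management information (base/limit)
    limit: int = 0
    cpu_time_used: float = 0.0       # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# The OS would allocate one PCB per process and update it on every
# state change and context switch.
pcb = PCB(pid=42)
pcb.state = "ready"
```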
With a neat diagram, explain the Process State Transition model (Life Cycle of a Process).
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process.
The Five Process States
- New: The process is being created.
- Ready: The process is waiting to be assigned to a processor. It resides in main memory.
- Running: Instructions are being executed. The process has control of the CPU.
- Waiting (Blocked): The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
- Terminated: The process has finished execution.
Transitions
- New → Ready: The OS admits the process to the ready queue.
- Ready → Running: The scheduler (dispatcher) assigns the CPU to the process.
- Running → Ready: An interrupt occurs (e.g., timer expiry).
- Running → Waiting: The process requests I/O or waits for an event.
- Waiting → Ready: The I/O or event completes.
- Running → Terminated: The process exits.
(Note: In an exam, a diagram with bubbles representing states and arrows representing transitions should be drawn).
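The legal transitions can also be encoded as a small lookup table and checked programmatically; a toy Python sketch (the dictionary simply mirrors the arrows listed above):

```python
# Each state maps to the set of states it may legally move to.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),          # no transitions out of terminated
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if a process may move directly from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that a waiting process can never go straight back to running; it must re-enter the ready queue and be dispatched again.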
What are System Calls? Discuss the different methods used to pass parameters to the OS during a system call.
System Calls
System calls provide an interface to the services made available by the operating system. They are the only way a user program can request services (like file access, process creation) from the kernel.
Methods for Parameter Passing
Three general methods are used to pass parameters to the OS:
- Registers:
  - The simplest approach is to pass the parameters in CPU registers.
  - Limitation: there may be more parameters than registers.
- Block/Table in Memory:
  - Parameters are stored in a block, or table, in memory, and the address of the block is passed as a parameter in a register.
  - This is the approach taken by Linux and Solaris.
- Stack:
  - Parameters are pushed onto the stack by the program and popped off the stack by the operating system.
  - This method does not limit the number or length of parameters.
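The three conventions can be mimicked in a toy simulation. This is purely illustrative Python (the register file, memory, and stack here are plain lists and dicts, not real hardware):

```python
MEMORY = {}                      # fake RAM: address -> stored block

def syscall_registers(*args):
    """Convention 1: parameters travel in a fixed-size register file."""
    registers = [None] * 6       # assume six parameter registers
    if len(args) > len(registers):
        raise ValueError("more parameters than registers")
    for i, a in enumerate(args):
        registers[i] = a
    return [r for r in registers if r is not None]

def syscall_block(args, addr=0x1000):
    """Convention 2: parameters live in a memory block; only the
    block's address is placed in a register."""
    MEMORY[addr] = list(args)
    return MEMORY[addr]          # the kernel reads the block via addr

def syscall_stack(args):
    """Convention 3: the caller pushes, the kernel pops (LIFO order)."""
    stack = []
    for a in args:
        stack.append(a)          # user program pushes parameters
    return [stack.pop() for _ in range(len(stack))]  # OS pops them
```

The register method fails once arguments outnumber registers, which is exactly why the block and stack methods exist.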
Compare Multitasking (Time-Sharing) and Multiprocessing systems.
Comparison
| Feature | Multitasking (Time-Sharing) | Multiprocessing |
|---|---|---|
| Definition | Logical extension of multiprogramming where the CPU switches jobs so frequently that users can interact with each job while it is running. | Use of two or more CPUs (processors) within a single computer system. |
| Number of CPUs | Typically One (Single Processor). | Multiple (Two or more Processors). |
| Objective | To minimize Response Time for the user and allow interactivity. | To increase Throughput (work done per unit time) and reliability. |
| Mechanism | Uses CPU scheduling and time slices (quanta) to switch contexts rapidly. | Parallel execution of processes on different CPUs. |
| Example | A user typing in a word processor while music plays in the background on a single-core laptop. | A server with 4 physical cores handling thousands of database queries simultaneously. |
What is a Real-Time Operating System (RTOS)? Distinguish between Hard and Soft RTOS.
Real-Time Operating System (RTOS)
An RTOS is a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. Processing must be done within defined time constraints.
Hard vs. Soft RTOS
- Hard Real-Time Systems:
  - Constraint: Critical tasks must complete on time. Missing a deadline results in total system failure.
  - Examples: Flight control systems, airbag deployment systems, medical pacemakers.
  - Storage: Often limited secondary storage; data is stored in short-term memory (ROM/RAM).
- Soft Real-Time Systems:
  - Constraint: A critical real-time task gets priority over other tasks and retains that priority until it completes. Missing a deadline is undesirable but not catastrophic; performance degrades.
  - Examples: Multimedia streaming, virtual reality, banking transaction systems.
  - Flexibility: Can coexist with general-purpose OS features.
Explain the structure of a Layered Operating System. What are its advantages?
Layered Approach
The operating system is broken into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
- Modularity: A layer consists of data structures and routines that can be invoked by higher-level layers. A layer can generally invoke operations only on lower-level layers.
- Abstraction: Lower layers hide the details of the hardware implementation from the higher layers.
Advantages
- Simplicity of Construction and Debugging: The layers are selected so that each uses functions (operations) and services of only lower-level layers. This simplifies debugging; if an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged.
- Information Hiding: High-level layers do not need to know how the operations are implemented in the lower layers, only what those operations do.
Define Distributed Operating Systems. How do they differ from Centralized Systems?
Distributed Operating System
A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The processors communicate with one another through various communication lines (like high-speed buses or telephone lines). These are often referred to as loosely coupled systems.
Differences from Centralized Systems
| Feature | Distributed System | Centralized System |
|---|---|---|
| Resource Sharing | Resources (files, printers) are shared across the network. | Resources are located on a single machine. |
| Reliability | High. If one site fails, the remaining sites can continue operating (Fault Tolerance). | Low. If the main system fails, the entire operation stops. |
| Computation Speed | Load sharing allows computations to be distributed, potentially increasing speed. | Limited by the speed of the single central processor. |
| Communication | Requires message passing for IPC (Inter-Process Communication). | Uses shared memory for IPC. |
Describe the Process Memory Layout in detail.
A process in memory is typically divided into multiple sections to organize data and instructions effectively:
- Text Section (Code Segment):
  - Contains the executable code (program instructions).
  - It is usually read-only to prevent the program from accidentally modifying its own instructions.
- Data Section:
  - Contains global variables and static variables.
  - Initialized data is stored separately from uninitialized data (BSS).
- Heap:
  - Used for dynamic memory allocation during process runtime (e.g., using `malloc` in C or `new` in C++).
  - The heap grows upward in the memory address space.
- Stack:
  - Contains temporary data such as function parameters, return addresses, and local variables.
  - The stack grows downward in the memory address space.

Note: There is a gap between the heap and the stack to allow for the growth of both.
Explain Cooperating Processes and Independent Processes. Why do processes need to cooperate?
Independent Processes
A process is independent if it cannot affect or be affected by the other processes executing in the system. It does not share data with any other process.
Cooperating Processes
A process is cooperating if it can affect or be affected by other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.
Reasons for Process Cooperation
- Information Sharing: Several users may be interested in the same piece of information (e.g., a shared file).
- Computation Speedup: If we want a particular task to run faster, we must break it into subtasks, each executing in parallel (requires multicore/multiprocessor).
- Modularity: Constructing the system in a modular fashion, dividing the system functions into separate processes or threads.
- Convenience: An individual user may work on many tasks at the same time (e.g., editing, printing, and compiling simultaneously).
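Information sharing between cooperating processes can be demonstrated with a POSIX pipe. A minimal sketch using Python's `os` module (Unix-only, since it relies on `fork`; the function name is ours):

```python
import os

def share_via_pipe(message: bytes) -> bytes:
    """Child writes a message into a pipe; parent reads it back."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                  # child: the producer
        os.close(r)               # close the unused read end
        os.write(w, message)
        os.close(w)
        os._exit(0)
    # parent: the consumer
    os.close(w)                   # close the unused write end
    data = os.read(r, len(message))
    os.close(r)
    os.waitpid(pid, 0)            # reap the child
    return data
```

Here the two processes are cooperating: the parent's result depends on data produced by the child.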
What is Context Switching? Why is it considered an overhead?
Context Switching
Context switching is the process of storing the state of a currently running process (or thread) so that it can be paused and resumed later, and then restoring the state of a different process to resume its execution.
The context is represented in the Process Control Block (PCB). The switch involves:
- Saving the context (registers, PC, stack pointer) of the old process.
- Loading the saved context of the new process.
Overhead
Context switching is considered pure overhead because the system does no useful work while switching.
- Time Consumption: It takes CPU cycles to save and load registers and memory maps.
- Cache Performance: Switching processes often invalidates the cache, causing a performance hit as the new process populates the cache.
- Dependency: The speed depends on memory speed, the number of registers, and the existence of special instructions.
Discuss the Evolution of Operating Systems from Serial Processing to Batch Systems.
1. Serial Processing (No OS)
- Era: 1940s - early 1950s.
- Operation: Programmers interacted directly with hardware. No operating system existed.
- Method: Users booked machine time, loaded punch cards/tape manually.
- Drawback: Huge setup time, low CPU utilization.
2. Simple Batch Systems
- Era: Mid-1950s.
- Concept: To reduce setup time, jobs with similar needs were batched together.
- The Monitor: A small piece of software (resident monitor) automatically transferred control from one job to the next.
- JCL: Users used Job Control Language to tell the monitor about the job.
- Drawback: CPU was often idle during I/O operations (speed mismatch between electronic CPU and mechanical I/O).
3. Spooling (Simultaneous Peripheral Operation On-Line)
- Improvement: Used disk as a buffer. Input was read from cards to disk; CPU read from disk. Output went to disk, then to print.
- Result: Allowed overlapping of I/O of one job with computation of another, though still strictly sequential execution.
What are the operations performed on processes? Explain Process Creation and Process Termination.
1. Process Creation
A process may create several new processes, via a create-process system call, during the course of execution.
- Parent and Child: The creating process is called a parent process, and the new processes are called the children of that process. This forms a tree of processes.
- Resource Sharing Options:
- Parent and children share all resources.
- Children share subset of parent’s resources.
- Parent and child share no resources.
- Execution Options:
- Parent and children execute concurrently.
- Parent waits until children terminate.
- System Call: In UNIX, `fork()` creates a new process. The new process consists of a copy of the address space of the original process.
2. Process Termination
A process terminates when it finishes executing its final statement and asks the operating system to delete it using the exit() system call.
- Output: The process may return a status value (typically an integer) to its parent process via the `wait()` system call.
- Resource Deallocation: All the resources of the process (memory, open files, I/O buffers) are deallocated by the operating system.
- Cascading Termination: Some systems do not allow a child to exist if its parent has terminated. If a process terminates (or is killed), all its children must also be terminated.
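Creation and termination can be seen together in a few lines of POSIX-style Python (`os.fork`, `os._exit`, `os.waitpid`; Unix-only, and the helper name is ours):

```python
import os

def spawn_and_reap(exit_code: int) -> int:
    """Parent creates a child; the child terminates with `exit_code`;
    the parent collects that status via wait()."""
    pid = os.fork()                  # create-process system call
    if pid == 0:                     # child branch of the process tree
        os._exit(exit_code)          # terminate, returning a status value
    _, status = os.waitpid(pid, 0)   # parent waits until the child exits
    return os.WEXITSTATUS(status)    # unpack the integer the child returned
```

This shows both execution options from above: the parent here chooses to wait until the child terminates rather than run concurrently with it.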
Distinguish between Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (ASMP).
Symmetric Multiprocessing (SMP)
- Architecture: Each processor performs all tasks, including operating system functions and user processes.
- Peers: All processors are peers; there is no master-slave relationship.
- OS Design: More complex, as the OS must ensure that two processors do not choose the same process or update the same data structure simultaneously.
- Usage: Most common in modern desktop and server systems (e.g., Windows, Linux, macOS).
Asymmetric Multiprocessing (ASMP)
- Architecture: Each processor is assigned a specific task. A Master processor controls the system; the other processors look to the master for instruction or have predefined tasks.
- Master-Slave: A master-slave relationship exists. The master handles scheduling and I/O.
- Simplicity: Easier to design the OS logic because only the master accesses system data structures.
- Bottleneck: The master processor can become a bottleneck, limiting system scalability.
Explain the concept of Parallel Systems (Tightly Coupled Systems). What are their advantages?
Parallel Systems
Parallel systems, also known as Multiprocessor systems or Tightly Coupled systems, have more than one processor in close communication. They share the computer bus, the clock, and mostly memory and peripheral devices.
Advantages
- Increased Throughput: By increasing the number of processors, we hope to get more work done in less time. (Note: The speed-up ratio is not perfectly linear due to overhead).
- Economy of Scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems because they share peripherals, mass storage, and power supplies.
- Increased Reliability: If functions can be distributed properly among several processors, the failure of one processor will not halt the system, only slow it down. This ability to continue providing service proportional to the level of surviving hardware is called Graceful Degradation or Fault Tolerance.
What are the major categories of System Calls? Give examples for each.
System calls can be grouped roughly into five major categories:
- Process Control: `load`, `execute`, `create process`, `terminate process`, `wait`, `signal`
- File Management: `create file`, `delete file`, `open`, `close`, `read`, `write`, `reposition`
- Device Management: `request device`, `release device`, `read`, `write`, `logically attach/detach devices`
- Information Maintenance: `get time or date`, `set time or date`, `get system data`, `set system data`, `get process attributes`
- Communications: `create connection`, `delete connection`, `send message`, `receive message`, `attach remote devices`
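The file-management category maps directly onto Python's thin `os` wrappers over the underlying system calls; a small sketch exercising create, write, read, and delete (the helper name is ours):

```python
import os
import tempfile

def file_lifecycle(payload: bytes) -> bytes:
    """Create a file, write to it, read it back, then delete it."""
    fd, path = tempfile.mkstemp()        # create file (open with O_CREAT)
    os.write(fd, payload)                # write
    os.close(fd)                         # close
    fd = os.open(path, os.O_RDONLY)      # open
    data = os.read(fd, len(payload))     # read
    os.close(fd)                         # close
    os.unlink(path)                      # delete file
    return data
```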
Explain the role of the Scheduler in Process Management. Differentiate between Long-term and Short-term schedulers.
Role of Scheduler
The operating system must select processes from queues (Job Queue, Ready Queue) for execution. This selection process is carried out by the scheduler.
Comparison
| Feature | Long-Term Scheduler (Job Scheduler) | Short-Term Scheduler (CPU Scheduler) |
|---|---|---|
| Function | Selects processes from the storage pool (disk) and loads them into memory for execution. | Selects from among the processes that are ready to execute (in memory) and allocates the CPU to one of them. |
| Frequency | Executes infrequently (seconds, minutes). It controls the Degree of Multiprogramming. | Executes very frequently (milliseconds). Must be very fast. |
| Goal | Controls the mix of I/O-bound and CPU-bound processes. | Maximize CPU efficiency and fairness. |
| State Transition | New → Ready | Ready → Running |
Describe Clustered Systems and how they differ from standard Multiprocessor systems.
Clustered Systems
Clustered systems are a form of multiprocessor system, but they are composed of two or more individual systems (nodes) coupled together.
- They usually share storage via a Storage Area Network (SAN).
- They are linked via a high-speed Local Area Network (LAN).
- They provide High Availability: If one node fails, service continues on the surviving nodes.
Difference from Multiprocessor Systems
- Coupling: Multiprocessor systems are tightly coupled (share memory and clock). Clustered systems are loosely coupled (each node has its own memory and OS instance).
- Scalability: Clusters are generally easier to scale (just add another computer to the network) compared to adding processors to a single motherboard in SMP.
- Types: Clusters can be Asymmetric (one machine runs, one stands by) or Symmetric (all machines run applications and monitor each other).