Unit 5 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Differentiate between Logical Address Space and Physical Address Space. Explain the role of the Memory Management Unit (MMU).
Logical vs. Physical Address Space:
- Logical Address: An address generated by the CPU (also referred to as a virtual address). The set of all logical addresses generated by a program is a logical address space.
- Physical Address: An address seen by the memory unit (loaded into the memory-address register). The set of all physical addresses corresponding to these logical addresses is a physical address space.
Key Differences:
| Feature | Logical Address | Physical Address |
|---|---|---|
| Visibility | User/CPU views this address. | Memory hardware views this address. |
| Generation | Generated by the CPU during execution. | Computed by the MMU. |
| Space | Virtual address space. | Physical RAM locations. |
Role of the Memory Management Unit (MMU):
The MMU is a hardware device that maps virtual to physical addresses at run time. In a simple scheme (Base Register scheme):
- The base register (relocation register) holds the smallest physical address.
- The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
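The base-register scheme above can be sketched in a few lines of Python. The relocation and limit values here are illustrative assumptions, not fixed by the scheme:

```python
# Minimal sketch of the relocation-register (base/limit) scheme: the MMU
# checks each CPU-generated address against the limit register, then adds
# the relocation register to form the physical address.

RELOCATION = 14000   # smallest physical address of the process (assumed)
LIMIT = 3000         # size of the process's logical address space (assumed)

def mmu_translate(logical_addr):
    if not (0 <= logical_addr < LIMIT):
        raise MemoryError("trap: addressing error (beyond limit)")
    return RELOCATION + logical_addr

print(mmu_translate(346))   # 14346
```

Note that the user program only ever sees logical addresses (0 to 2999 here); the mapping to physical memory happens at run time, which is what allows the OS to relocate the process.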
Explain the concept of Swapping. How does it improve the degree of multiprogramming?
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store (usually a fast disk) and then brought back into memory for continued execution.
Mechanism:
- Swap Out: The Operating System moves a process from the main memory to the backing store when the memory is full or the process is idle/blocked.
- Swap In: The OS loads the process back from the backing store to the main memory when it is ready to execute.
Impact on Multiprogramming:
- Swapping allows the total physical address space of all processes to exceed the real physical memory of the system.
- It increases the degree of multiprogramming by allowing more processes to exist in the system (on disk) than can fit in physical RAM simultaneously, ensuring the CPU always has processes to execute.
Compare Internal Fragmentation and External Fragmentation. Which memory allocation schemes suffer from them?
Fragmentation occurs when memory is broken into small, non-contiguous blocks.
1. Internal Fragmentation:
- Definition: Occurs when allocated memory may be slightly larger than the requested memory. The difference between the allocated and requested size is unused memory strictly internal to a partition.
- Cause: Fixed-partition allocation or Paging (where the last page of a process does not fill a frame).
- Example: If a block is 500 bytes and a process requests 450 bytes, 50 bytes are wasted internally.
2. External Fragmentation:
- Definition: Occurs when there is enough total memory space to satisfy a request, but the available spaces are not contiguous; storage is fragmented into a large number of small holes.
- Cause: Dynamic contiguous memory allocation strategies (First-fit, Best-fit).
- Solution: Compaction or Paging (which eliminates external fragmentation by allowing non-contiguous allocation).
Summary:
- Paging suffers from Internal Fragmentation.
- Segmentation and Contiguous Allocation suffer from External Fragmentation.
Describe the Contiguous Memory Allocation strategies: First-fit, Best-fit, and Worst-fit. Which is generally considered the most efficient?
In contiguous memory allocation, the OS must find a hole (free block) large enough to accommodate a process.
First-fit:
- Allocate the first hole that is big enough.
- Searching can start either at the beginning of the set of holes or where the previous first-fit search ended.
- Pros: Fast.
Best-fit:
- Allocate the smallest hole that is big enough.
- Requires searching the entire list (unless ordered by size).
- Pros: Produces the smallest leftover hole.
Worst-fit:
- Allocate the largest hole.
- Requires searching the entire list.
- Pros: Produces the largest leftover hole, which may be more useful than the tiny hole left by Best-fit.
Efficiency:
Simulations have shown that First-fit and Best-fit are better than Worst-fit in terms of speed and storage utilization. First-fit is generally faster.
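The three strategies can be sketched as functions that pick a hole from a free list. The hole sizes and the 212 KB request below are assumed illustrative values:

```python
# Sketch of the three placement strategies over a list of free holes.
# Each function returns the index of the chosen hole, or None if no
# hole is large enough.

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i          # stop at the first hole that is big enough
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [100, 500, 200, 300, 600]   # assumed free-hole sizes (KB)
print(first_fit(holes, 212))   # 1 -> the 500 KB hole
print(best_fit(holes, 212))    # 3 -> the 300 KB hole
print(worst_fit(holes, 212))   # 4 -> the 600 KB hole
```

Note how Best-fit leaves an 88 KB hole while Worst-fit leaves a 388 KB hole, which illustrates the trade-off described above.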
Define Paging. Explain the hardware implementation of a Page Table and the translation from logical to physical addresses using LaTeX.
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. It avoids external fragmentation.
Concepts:
- Logical Memory is divided into blocks of the same size called Pages.
- Physical Memory is divided into fixed-sized blocks called Frames.
- Page Size = Frame Size.
Hardware Implementation:
- Page Table: A data structure kept in main memory that maps page numbers ($p$) to frame numbers ($f$).
- PTBR: Page-Table Base Register points to the page table.
Address Translation:
Every address generated by the CPU is divided into two parts: a page number ($p$) and a page offset ($d$).
- Extract $p$ and use it as an index into the Page Table.
- Read the corresponding frame number ($f$) from the page table entry.
- Combine $f$ with the page offset ($d$) to form the physical address: $\text{physical address} = f \times \text{page size} + d$.
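The translation steps above can be sketched directly. The page size, the page-table contents, and the sample address are assumed values for illustration:

```python
# Sketch of paged address translation: split the logical address into
# page number and offset, index the page table, and recombine.

PAGE_SIZE = 4096                       # 4 KB pages => 12 offset bits
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page number -> frame number (assumed)

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE      # page number (high-order bits)
    d = logical_addr % PAGE_SIZE       # page offset (low-order bits)
    f = page_table[p]                  # one memory access to the page table
    return f * PAGE_SIZE + d           # physical address

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 6 -> 0x6234
```

Because the page size is a power of two, the division and modulus are really just a bit-shift and a mask in hardware.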
What is a TLB (Translation Look-aside Buffer) and why is it necessary in a paging system?
Necessity of TLB:
In a standard paging scheme, every data/instruction access requires two memory accesses:
- One to access the Page Table (to get the frame number).
- One to access the actual Data/Instruction.
This effectively doubles the memory access time, slowing down the system significantly.
Translation Look-aside Buffer (TLB):
- The TLB is a small, fast-lookup hardware cache located inside the MMU.
- It stores recent Page-to-Frame translations.
Operation:
- When a logical address is generated, the TLB is checked.
- TLB Hit: If the page number is found, the frame number is retrieved immediately (no memory access for the page table needed).
- TLB Miss: If not found, the system accesses the page table in memory, calculates the physical address, and updates the TLB for future references.
Explain the concept of Segmentation. How does it differ from Paging in terms of the user's view of memory?
Segmentation is a memory management scheme that supports the user view of memory. A program is a collection of segments. A logical address is a two-dimensional tuple: $\langle \text{segment-number}, \text{offset} \rangle$.
User View:
- Users do not think of memory as a linear array of bytes (like Paging).
- Users view memory as a collection of variable-sized logic units: Main program, Procedure, Function, Method, Symbol Table, Stack, etc.
Hardware Support:
- Segment Table: Maps two-dimensional user-defined addresses into one-dimensional physical addresses. Each entry has:
- Base: Starting physical address of the segment.
- Limit: Length of the segment.
Difference from Paging:
- Paging: Invisible to the programmer; fixed-size blocks; may waste space through internal fragmentation.
- Segmentation: Visible to the programmer; variable-sized blocks based on logical division; eliminates internal fragmentation but suffers from external fragmentation.
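Segment-table translation can be sketched as a base/limit check per segment. The table contents below are assumed illustrative values:

```python
# Sketch of pure segmentation: each segment-table entry is (base, limit);
# the offset is checked against the limit before adding the base.

segment_table = [
    (1400, 1000),   # segment 0: base 1400, limit 1000 (assumed)
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: addressing error (offset >= limit)")
    return base + d           # one-dimensional physical address

print(translate(2, 53))    # byte 53 of segment 2 -> 4353
print(translate(0, 222))   # byte 222 of segment 0 -> 1622
```

The limit check is what makes per-segment protection natural: a reference past the end of a segment traps immediately.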
Describe Segmentation with Paging (Paged Segmentation). Why is this hybrid approach used?
Segmentation with Paging is a scheme that combines the benefits of both segmentation and paging.
Why it is used:
- Segmentation aligns with the user's view of memory and facilitates protection/sharing.
- Paging manages physical memory efficiently by eliminating external fragmentation.
- Pure segmentation causes external fragmentation. By paging the segments, we solve this issue.
Mechanism:
- The logical address is still $\langle \text{segment-number}, \text{offset} \rangle$.
- The Segment Table entry points to a Page Table for that specific segment (instead of the base address of the segment in physical memory).
- The Offset in the logical address is broken down into a Page Number and a Page Offset.
- The Page Table maps the page number to a physical frame.
Example: The Intel Pentium processor uses this hybrid approach (Selector identifies segment -> Linear Address -> Paging Hardware -> Physical Address).
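The mechanism above can be sketched in Python. The page size, segment lengths, and all table contents are assumed illustrative values:

```python
# Sketch of paged segmentation: the segment-table entry holds a per-segment
# page table; the segment offset splits into page number and page offset.

PAGE_SIZE = 256   # assumed, to keep the numbers small

# segment number -> (segment length, page table for that segment)  (assumed)
segment_table = {
    0: (1024, {0: 12, 1: 13, 2: 20, 3: 21}),
    1: (512,  {0: 7,  1: 9}),
}

def translate(s, offset):
    length, page_table = segment_table[s]
    if offset >= length:
        raise MemoryError("trap: offset beyond segment limit")
    p, d = divmod(offset, PAGE_SIZE)    # offset -> page number + page offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0, 300))   # page 1 of segment 0, offset 44 -> frame 13 -> 3372
```

Because each segment is stored as scattered frames rather than one contiguous block, a segment no longer needs a contiguous hole, which removes the external fragmentation of pure segmentation.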
Explain Hierarchical Paging (Multi-level Paging). Why is it required for systems with large logical address spaces?
The Problem:
In modern systems with large logical address spaces (e.g., 32-bit or 64-bit), the Page Table itself becomes too large to fit contiguously in memory. For a 32-bit system with 4KB pages, the page table could have 1 million entries, requiring 4MB of contiguous space.
Hierarchical (Multi-level) Paging:
To solve this, we "page the page table."
- Two-Level Paging Example:
- The logical address is divided into an outer page number ($p_1$), an inner page number ($p_2$), and a page offset ($d$).
- Outer Page Table: Contains pointers to Inner Page Tables. This table is small enough to fit in one frame.
- Inner Page Table: Contains the actual frame numbers.
Benefits:
- The page table does not need to be contiguous in memory.
- Only the parts of the page table actually used by the process need to be in memory (concept of demand paging applied to page tables).
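The two-level split can be shown concretely for the standard 32-bit example with 4 KB pages: 10 bits for $p_1$, 10 bits for $p_2$, and 12 bits for $d$:

```python
# Sketch of splitting a 32-bit logical address for two-level paging.
# Bit widths follow the common 32-bit / 4 KB-page example: 10 + 10 + 12.

def split_two_level(addr):
    d = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: inner page-table index
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: outer page-table index
    return p1, p2, d

print(split_two_level(0x00403005))  # (1, 3, 5)
```

Each inner table then has $2^{10}$ entries of 4 bytes, i.e. exactly 4 KB, so every piece of the page table fits in one frame.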
What are Overlays? Explain how they allow a process larger than available memory to execute.
Overlays are a technique used to enable a process to be larger than the amount of memory allocated to it.
Concept:
- The idea is to keep in memory only those instructions and data that are needed at any given time.
- When other instructions are needed, they are loaded into space that was previously occupied by instructions that are no longer needed.
Implementation:
- Implemented by the user/programmer, not the Operating System.
- The programmer must design the overlay structure (e.g., Pass 1 and Pass 2 of an assembler can occupy the same memory space sequentially).
- Special overlay drivers are used to swap code in and out.
Usage: Largely obsolete now due to Virtual Memory, but historically significant for embedded systems or early OSs without VM support.
Define Virtual Memory. What are the major benefits of using Virtual Memory?
Virtual Memory is a technique that allows the execution of processes that are not completely in memory. It separates user logical memory from physical memory.
Key Benefits:
- Logical Space > Physical Space: Programs can be larger than physical memory.
- Higher Degree of Multiprogramming: More programs can be in the system simultaneously because each process occupies less physical RAM.
- Less I/O: Less I/O is needed to load or swap processes, making the system faster, as only needed pages are loaded.
- Sharing: Address spaces can be shared easily by several processes (e.g., shared libraries).
- Process Creation: Speeds up process creation (using techniques like Copy-on-Write).
Explain the concept of Demand Paging. How does the "Lazy Swapper" work?
Demand Paging is the most common implementation of virtual memory. It is a paging system with swapping.
Concept:
Pages are only loaded into memory when they are actually demanded (accessed) during program execution. Pages that are never accessed are never loaded into physical memory.
Lazy Swapper (Pager):
- A traditional swapper manipulates entire processes.
- A Pager (used in demand paging) is concerned with individual pages of a process.
- When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again.
- Instead of swapping in the whole process, the pager brings only those necessary pages into memory.
Hardware Support:
Requires a Valid-Invalid Bit in the Page Table. If the bit is 'invalid', the page is either not in the logical address space or is currently on the disk, triggering a Page Fault.
What is a Page Fault? List the steps involved in handling a page fault.
Page Fault: An interrupt that occurs when a program attempts to access a page that is mapped in the logical address space but is not currently loaded in physical memory (marked as 'invalid' in the page table).
Steps in Handling a Page Fault:
- Trap: The hardware traps to the Operating System (a page-fault interrupt) because the desired page could not be located in memory.
- Check: The OS checks an internal table to decide: Is the reference invalid (abort) or just not in memory?
- Find Free Frame: The OS looks for a free frame in the physical memory list.
- Disk Operation: The OS schedules a disk operation to read the desired page into the newly allocated frame.
- Update Table: When the disk read is complete, the OS modifies the internal table and the page table to indicate the page is now in memory (set validation bit to 'valid').
- Restart: The instruction that was interrupted is restarted.
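The steps above can be sketched as a toy handler. The "disk", page table contents, and frame numbers are all assumed values; a real handler also deals with scheduling, replacement, and restarting the faulting instruction:

```python
# Toy sketch of page-fault handling: check validity, find a free frame,
# "read" the page from the backing store, update the table, and retry.

disk = {0: "page0-data", 1: "page1-data", 2: "page2-data"}  # backing store
page_table = {0: (None, False), 1: (None, False)}  # page -> (frame, valid bit)
free_frames = [3, 4]                               # free-frame list
memory = {}                                        # frame -> contents

def access(page):
    if page not in page_table:
        raise MemoryError("abort: reference outside logical address space")
    frame, valid = page_table[page]
    if not valid:                         # trap: page fault
        frame = free_frames.pop(0)        # find a free frame
        memory[frame] = disk[page]        # disk read into the new frame
        page_table[page] = (frame, True)  # update table, set valid bit
    return memory[frame]                  # restart the access

print(access(1))   # faults, loads the page, then returns 'page1-data'
print(access(1))   # second access hits in memory, no fault
```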
Derive the formula for Effective Access Time (EAT) in a demand paging system. Calculate EAT if memory access is 200 ns and page fault overhead is 8 ms, with a page fault rate of $p = 1/1000$.
Formula:
Let $p$ be the probability of a page fault ($0 \le p \le 1$).
Let $ma$ be the memory access time.
Let $pft$ be the page fault time (service time).
$$EAT = (1 - p) \times ma + p \times pft$$
Where Page Fault Overhead includes:
- Trap to OS
- Save user registers and process state
- Wait for page to be read from disk (dominant factor)
- Update tables and restart
Calculation:
Given: $ma = 200$ ns, $pft = 8$ ms $= 8{,}000{,}000$ ns, and $p = 1/1000$:
$$EAT = (1 - 0.001) \times 200 + 0.001 \times 8{,}000{,}000 = 199.8 + 8000 = 8199.8 \text{ ns} \approx 8.2\ \mu s$$
Note: Even a small $p$ drastically increases EAT: here memory appears roughly 40 times slower than its raw 200 ns access time.
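The arithmetic for the 200 ns / 8 ms example can be checked directly:

```python
# Arithmetic check of the EAT example: 200 ns memory access, 8 ms page
# fault service time, and a fault probability of 1 in 1000.

ma = 200            # memory access time in ns
pft = 8_000_000     # page fault time: 8 ms expressed in ns
p = 1 / 1000        # page fault probability

eat = (1 - p) * ma + p * pft
print(eat)          # 8199.8 ns, i.e. about 8.2 microseconds
print(eat / ma)     # slowdown factor of roughly 41x
```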
Describe FIFO Page Replacement Algorithm. What is Belady's Anomaly?
FIFO (First-In-First-Out):
- This is the simplest page replacement algorithm.
- The OS maintains a queue of all pages in memory.
- When a page must be replaced, the page at the head of the queue (the oldest page) is chosen.
Belady's Anomaly:
- Intuitively, one might expect that allocating more frames to a process would result in fewer page faults.
- Belady's Anomaly is a phenomenon where increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns.
- This anomaly is specific to the FIFO algorithm (and Second-Chance) but does not occur in Stack-based algorithms like LRU or Optimal.
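Belady's Anomaly is easy to reproduce with a short FIFO simulation. The classic reference string below produces more faults with 4 frames than with 3:

```python
# FIFO page-replacement simulation demonstrating Belady's Anomaly:
# on this reference string, 3 frames give 9 faults but 4 frames give 10.

from collections import deque

def fifo_faults(refs, n_frames):
    frames = deque()                 # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()     # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, MORE faults
```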
Explain the Optimal Page Replacement Algorithm. Why is it difficult to implement?
Optimal Page Replacement (OPT or MIN):
- Rule: Replace the page that will not be used for the longest period of time.
- Purpose: It guarantees the lowest possible page-fault rate for a fixed number of frames.
Example:
If frames hold pages {7, 0, 1} and the future reference string is 2, 0, 3, 0, 4, 2, 3, ..., then to load page 2 we look ahead: pages 0 and 1 are both needed before 7 is referenced again, so 7 (the page used furthest in the future) is replaced.
Difficulty in Implementation:
- It requires future knowledge of the reference string (i.e., the OS needs to know exactly which pages the process will reference next).
- In general purpose systems, this is impossible to know in advance.
- It is primarily used as a benchmark to measure the efficiency of other algorithms.
Describe the Least Recently Used (LRU) page replacement algorithm. How can it be implemented using a Stack?
LRU (Least Recently Used):
- Rule: Replace the page that has not been used for the longest period of time.
- Logic: It uses the recent past as an approximation of the near future.
Stack Implementation:
- Maintain a stack of page numbers.
- Whenever a page is referenced (read or written), it is removed from the stack and put on the top.
- In this way, the most recently used page is always at the top of the stack, and the least recently used page is always at the bottom.
- When replacement is needed, remove the page from the bottom of the stack.
Advantages:
- Does not suffer from Belady's Anomaly.
- More efficient than FIFO.
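The stack description above maps directly onto a Python list. Using the same reference string as the FIFO discussion makes the comparison concrete:

```python
# Sketch of the stack implementation of LRU: a referenced page moves to the
# top of the stack; the victim is always the page at the bottom.

def lru_faults(refs, n_frames):
    stack = []                   # stack[-1] = most recent, stack[0] = LRU
    faults = 0
    for page in refs:
        if page in stack:
            stack.remove(page)   # pull the page out of the middle
        else:
            faults += 1
            if len(stack) == n_frames:
                stack.pop(0)     # replace the page at the bottom (LRU)
        stack.append(page)       # referenced page goes on top
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10
print(lru_faults(refs, 4))   # 8 -- no Belady's Anomaly: more frames, fewer faults
```

Note the cost of this scheme: entries must be removable from the middle of the stack, which is why real implementations use a doubly linked list with head and tail pointers (or hardware counters) rather than a plain array.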
What is Thrashing? What causes it, and how can the Working Set Model prevent it?
Thrashing:
Thrashing occurs when a process spends more time paging (swapping pages in and out) than executing. The CPU utilization drops drastically because the OS is busy handling page faults.
Cause:
- If the degree of multiprogramming is too high, the total memory required by the processes exceeds physical memory.
- Processes don't have enough frames to hold their active pages.
- A page fault occurs → a page is replaced → the replaced page is needed again immediately → another fault occurs.
Working Set Model (Prevention):
- Based on the Locality of Reference principle.
- The Working Set ($WS_i$) is the set of pages referenced in the most recent $\Delta$ time units (the working-set window).
- Strategy: The OS monitors the size of the working set for each process.
- If $\sum WSS_i$ (the sum of the working-set sizes) > total frames available, thrashing is imminent.
- The OS suspends one or more processes to free up frames and prevent thrashing.
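The working set at time $t$ is just the set of distinct pages in the last $\Delta$ references. The window size and reference trace below are assumed values for illustration:

```python
# Sketch of the working-set computation: distinct pages referenced in the
# window of the last DELTA references ending at time t.

DELTA = 10   # working-set window size (assumed)

def working_set(refs, t, delta=DELTA):
    """Pages referenced in the window ending at position t (inclusive)."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

trace = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4]
print(working_set(trace, 9))    # {1, 2, 5, 7}
print(working_set(trace, 17))   # {1, 2, 3, 4, 5, 6}
```

Summing `len(working_set(...))` over all processes and comparing against the number of physical frames gives the thrashing test described above.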
Compare Paging and Segmentation based on specific criteria.
| Feature | Paging | Segmentation |
|---|---|---|
| Basic Unit | Fixed-size Pages. | Variable-size Segments. |
| Programmer View | Transparent (System view). | Visible (User view). |
| Fragmentation | Suffers from Internal Fragmentation. | Suffers from External Fragmentation. |
| Address Space | One-dimensional (linear). | Two-dimensional (Segment #, Offset). |
| Table Used | Page Table (maps Page to Frame). | Segment Table (maps Segment to Base Address). |
| Protection | Difficult to separate code/data accurately. | Easy (e.g., Read-only code segment). |
Given the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 and 3 physical frames. Calculate the number of page faults using the FIFO algorithm.
Algorithm: FIFO (First-In-First-Out)
Frames: 3
String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
| Ref | Frame 1 | Frame 2 | Frame 3 | Status |
|---|---|---|---|---|
| 1 | 1 | - | - | Miss |
| 2 | 1 | 2 | - | Miss |
| 3 | 1 | 2 | 3 | Miss |
| 4 | 4 | 2 | 3 | Miss (1 out) |
| 1 | 4 | 1 | 3 | Miss (2 out) |
| 2 | 4 | 1 | 2 | Miss (3 out) |
| 5 | 5 | 1 | 2 | Miss (4 out) |
| 1 | 5 | 1 | 2 | Hit |
| 2 | 5 | 1 | 2 | Hit |
| 3 | 5 | 3 | 2 | Miss (1 out) |
| 4 | 5 | 3 | 4 | Miss (2 out) |
| 5 | 5 | 3 | 4 | Hit |
Total Page Faults = 9