Unit 5 - Subjective Questions
CSE316 • Practice Questions with Detailed Answers
Differentiate between Logical Address Space and Physical Address Space. Explain the role of the Memory Management Unit (MMU) in this context.
Differences between Logical and Physical Address Space
| Feature | Logical Address (Virtual Address) | Physical Address |
|---|---|---|
| Definition | An address generated by the CPU during program execution. | The actual address in the main memory (RAM) where data is stored. |
| Visibility | User program sees this address. | User program does not directly see this; handled by the OS. |
| Space | The set of all logical addresses is the Logical Address Space. | The set of all physical addresses is the Physical Address Space. |
| Generation | Generated by CPU. | Computed by MMU. |
Role of Memory Management Unit (MMU)
The MMU is a hardware device that maps virtual to physical addresses at run time.
- Relocation Register: The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
- Mapping: If the logical address is L and the value in the relocation register is R, the physical address is calculated as: Physical Address = R + L.
- This ensures protection and allows dynamic relocation of code.
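The relocation-register mapping above can be sketched as follows; the base and limit values here are illustrative assumptions, not fixed by the text:

```python
# Sketch of MMU dynamic relocation: every CPU-generated (logical)
# address is checked against a limit register for protection, then
# offset by the relocation (base) register.

def mmu_translate(logical_addr: int, base: int, limit: int) -> int:
    """Map a logical address to a physical address, trapping on overflow."""
    if not (0 <= logical_addr < limit):
        # Real hardware would trap to the OS (addressing error).
        raise MemoryError("addressing error: trap to OS")
    return base + logical_addr  # physical = relocation base + logical

# Example: relocation register = 14000, limit register = 3000
print(mmu_translate(346, base=14000, limit=3000))  # 14346
```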
Explain the concept of Swapping in memory management with a neat diagram.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store (secondary storage) and then brought back into memory for continued execution. This allows the system to run more processes than can fit into the physical memory.
Key Components
- Backing Store: Usually a fast disk large enough to accommodate copies of all memory images for all users.
- Roll out, Roll in:
- Roll out: Lower-priority process is swapped out.
- Roll in: Higher-priority process is loaded into memory.
Diagram Description
(Imagine a diagram with Main Memory on one side and a Backing Store/Disk on the other):
- Main Memory: Contains the OS and User Space. Process is leaving (Swap Out).
- Backing Store: Contains stored process images. Process is entering (Swap In).
- Dispatcher: Checks if the next process in the queue is in memory. If not, it invokes the swapper.
Context Switch Time: The major part of swap time is transfer time, which is directly proportional to the amount of memory swapped.
Discuss the problem of Fragmentation. Distinguish between Internal and External fragmentation.
Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. Fragmentation occurs when memory is wasted and cannot be used by processes.
Internal vs. External Fragmentation
| Feature | Internal Fragmentation | External Fragmentation |
|---|---|---|
| Definition | Occurs when the allocated memory block is larger than the requested memory. The difference is unused memory internal to a partition. | Occurs when there is enough total memory space to satisfy a request, but the available spaces are not contiguous. |
| Cause | Fixed-partition allocation (e.g., Paging). | Dynamic variable-partition allocation (e.g., Segmentation). |
| Solution | Best-fit algorithm or using variable partitions (though harder to manage). | Compaction (shuffling memory to place free memory together) or Paging (allows non-contiguous allocation). |
Compaction: A technique to solve external fragmentation by moving all free memory into one large block. It is only possible if relocation is dynamic and done at execution time.
Describe the Dynamic Storage Allocation strategies: First-fit, Best-fit, and Worst-fit.
These are algorithms used to select a free hole from the set of available holes in contiguous memory allocation.
- First-Fit:
- Allocate the first hole that is big enough.
- Pros: Generally faster as it searches from the beginning.
- Cons: Does not minimize fragmentation.
- Best-Fit:
- Allocate the smallest hole that is big enough.
- Must search the entire list (unless ordered by size).
- Pros: Produces the smallest leftover hole.
- Cons: Slower search; leaves behind many tiny leftover holes that are too small to be useful (external fragmentation).
- Worst-Fit:
- Allocate the largest hole.
- Must search the entire list.
- Pros: Produces the largest leftover hole, which might be useful for another process.
- Cons: Wastes the most space quickly; generally yields poor performance compared to First-fit and Best-fit.
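A minimal sketch of the three hole-selection strategies; the hole sizes and request size are made-up example values:

```python
# Each strategy returns an index into a list of free-hole sizes,
# or None if no hole is large enough.

def first_fit(holes, request):
    # Take the first hole big enough, scanning from the start.
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Take the smallest hole that still fits (smallest leftover hole).
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    # Take the largest hole (largest leftover hole).
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# -> 1 3 4 (first fits in the 500 hole, best in 300, worst in 600)
```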
Explain the basic method of Paging hardware. Define Frames, Pages, and the Page Table.
Paging Concept
Paging is a memory management scheme that permits the physical address space of a process to be non-contiguous. It avoids external fragmentation and the need for compaction.
Key Terminology
- Frames (Physical): Physical memory is broken into fixed-sized blocks called frames.
- Pages (Logical): Logical memory is broken into blocks of the same size called pages.
- Page Table: A data structure used by the hardware to map logical page numbers to physical frame numbers.
Hardware Implementation
- The CPU generates an address divided into two parts:
- Page Number (p): Used as an index into a page table.
- Page Offset (d): Combined with the base address defined by the page table to define the physical memory address.
- If the page size is 2^n bytes and the logical address space is 2^m bytes, then the high-order m - n bits of the logical address designate the page number, and the low-order n bits designate the page offset.
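The page-number/offset split is just a shift and a mask; a 4 KB page size (n = 12) is assumed here for illustration:

```python
# Split a logical address into (page number p, offset d) for a page
# size of 2**PAGE_BITS bytes (4 KB assumed).
PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS  # 4096

def split_address(logical_addr: int) -> tuple[int, int]:
    page = logical_addr >> PAGE_BITS         # high-order bits: p
    offset = logical_addr & (PAGE_SIZE - 1)  # low-order bits: d
    return page, offset

print(split_address(0x3A7F))  # (3, 2687) -> page 3, offset 0xA7F
```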
Derive the Physical Address calculation in paging and explain the role of the Translation Look-aside Buffer (TLB) in improving performance.
Physical Address Calculation
Given a logical address composed of a Page Number (p) and an Offset (d):
- The OS looks up p in the Page Table.
- It retrieves the corresponding Frame Number (f).
- The Physical Address is constructed by appending the offset to the frame number: Physical Address = f × (page size) + d.
Translation Look-aside Buffer (TLB)
The Page Table is stored in main memory. Therefore, every data/instruction access requires two memory accesses (one for the page table entry, one for the data). This slows down processing by a factor of 2.
- Solution: Use a special, small, fast-lookup hardware cache called TLB.
- Operation:
- The TLB contains a few of the page-table entries.
- When a logical address is generated, the CPU checks the TLB.
- TLB Hit: If the page number is found, the frame number is immediately available (fast).
- TLB Miss: If not found, the memory is accessed to find the frame, and the TLB is updated.
Effective Access Time (EAT)
Let ε be the associative (TLB) lookup time, m be the memory cycle time, and α be the hit ratio. Then:
EAT = α × (ε + m) + (1 - α) × (ε + 2m)
Explain Hierarchical Paging (Multi-level Paging). Why is it necessary for large logical address spaces?
Necessity of Multi-level Paging
Modern computers have very large logical address spaces (2^32 to 2^64 bytes). In a standard paging scheme, the page table itself becomes far too large to store contiguously in memory.
- Example: For a 32-bit system with 4 KB page size, the page table might have 1 million entries. If each entry is 4 bytes, the table is 4 MB. Allocating 4 MB contiguously is difficult.
Hierarchical Paging Mechanism
The solution is to page the page table.
- Two-Level Paging:
- The logical address is divided into three parts: p1 (an index into the outer page table), p2 (the displacement within the page of the inner page table), and d (the page offset).
- Outer Page Table: Contains pointers to inner page tables.
- Inner Page Table: Contains the actual frame numbers.
- Process:
- The CPU uses p1 to find the entry in the outer table.
- That entry points to an inner page table.
- The CPU uses p2 to find the frame number in the inner table.
- Frame number + d forms the physical address.
This avoids allocating one large contiguous block for the page table; the outer table is small, and inner tables are allocated only as needed.
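The two-level walk can be sketched with nested dictionaries standing in for the outer and inner tables; the bit split and frame numbers below are made-up example values:

```python
# Two-level page-table walk: p1 indexes the outer table, p2 indexes the
# selected inner table, d is the page offset. A 10/10/12 split of a
# 32-bit address is assumed.
P1_BITS, P2_BITS, OFFSET_BITS = 10, 10, 12

outer_table = {0: {1: 7}}  # outer[p1] -> inner table; inner[p2] -> frame

def translate(vaddr: int) -> int:
    p1 = vaddr >> (P2_BITS + OFFSET_BITS)
    p2 = (vaddr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    d = vaddr & ((1 << OFFSET_BITS) - 1)
    frame = outer_table[p1][p2]  # two table references in real hardware
    return (frame << OFFSET_BITS) | d

vaddr = (0 << 22) | (1 << 12) | 5  # p1=0, p2=1, d=5
print(translate(vaddr))  # frame 7 -> 7*4096 + 5 = 28677
```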
What is Segmentation? How does it differ from Paging in terms of the user's view of memory?
Segmentation
Segmentation is a memory management scheme that supports the user view of memory. A logical address space is a collection of segments.
- Structure: Each segment has a name (or number) and a length.
- Logical Address: A two-part tuple (segment-number, offset), written (s, d).
- Segment Table: Maps two-dimensional user-defined addresses into one-dimensional physical addresses. Each entry has:
- Base: Starting physical address.
- Limit: Length of the segment.
Difference from Paging
| Feature | Segmentation | Paging |
|---|---|---|
| User View | Reflects the user's view. Memory is divided into logical units like main(), stack, symbol table, arrays, etc. | User view is separated from physical memory. Memory is divided into fixed-size blocks (system view). |
| Size | Segments are of variable size. | Pages are of fixed size. |
| Fragmentation | Suffers from External Fragmentation. | Suffers from Internal Fragmentation. |
| Address | Two-dimensional: (s, d), where d must be less than the segment limit. | One-dimensional: (p, d); the split into p and d is hardware-determined by the page size. |
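A segment-table lookup with a limit check can be sketched as follows; the base/limit values are illustrative assumptions in the style of textbook examples:

```python
# Segment table: segment number s -> (base, limit). The hardware adds
# the offset d to the base only after checking d against the limit.
segment_table = {0: (1400, 1000), 1: (6300, 400)}

def seg_translate(s: int, d: int) -> int:
    base, limit = segment_table[s]
    if d >= limit:
        # Hardware traps to the OS on an out-of-bounds offset.
        raise MemoryError("trap: offset beyond segment limit")
    return base + d

print(seg_translate(0, 53))  # 1400 + 53 = 1453
```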
Describe the architecture of Segmentation with Paging (e.g., Intel Pentium or MULTICS). What are the advantages of this hybrid approach?
Segmentation with Paging
Pure segmentation suffers from external fragmentation, while pure paging yields internal fragmentation and page table overhead. A hybrid approach combines both.
Architecture
- Logical Address: Still looks like (Segment number, offset).
- Linear Address Generation (Segmentation Unit):
- The CPU uses the segment number to look up the Segment Descriptor Table.
- Checks protection and limits.
- Adds the segment base address to the offset to generate a Linear Address.
- Physical Address Generation (Paging Unit):
- The Linear Address is then treated as a logical address for the paging unit.
- It is divided into Directory, Page, and Offset.
- Passed through a Two-Level Page Table structure to find the final Physical Address.
Advantages
- Reduced Fragmentation: Paging the segments eliminates external fragmentation.
- Manageability: Segments grow and shrink easily (logical view), while paging manages the physical memory efficiently.
- Protection: Security can be handled at the segment level (Read/Write attributes), while allocation is handled at the page level.
Define Virtual Memory. What are the major benefits of using virtual memory in an operating system?
Virtual Memory
Virtual memory is a technique that allows the execution of processes that are not completely in memory. It separates the user's logical memory from physical memory.
Benefits
- Large Address Space: Users can write programs for a much larger virtual address space than the available physical memory.
- Higher Degree of Multiprogramming: Since each program takes less physical memory (only active pages are loaded), more programs can be run concurrently.
- Simplified Loading: Programs do not need to be loaded entirely to start execution.
- Efficient Process Creation: Techniques like Copy-on-Write allow fast process creation (forking).
- Swapping Reduced: Less I/O overhead needed to load or swap programs compared to full swapping.
Explain the concept of Demand Paging. Outline the steps involved in handling a page fault.
Demand Paging
Demand paging is a system where pages are loaded into memory only when they are needed (demanded) during program execution, rather than loading the entire program at once. It uses a "Lazy Swapper".
Steps in Handling a Page Fault
- Reference: The CPU tries to access a page. The hardware checks the page table.
- Trap: If the valid-invalid bit is 'i' (invalid), a Page Fault Trap is triggered to the OS.
- Check: The OS checks an internal table to determine if the reference was a valid logical address. If invalid, terminate. If valid but not in memory, proceed.
- Find Frame: The OS looks for a free frame in the physical memory.
- Disk Operation: The OS schedules a disk operation to read the desired page into the newly allocated frame.
- Update Table: When the disk read is complete, the OS modifies the internal table and the page table to indicate the page is now in memory (set bit to 'v').
- Restart: The OS restarts the instruction that caused the trap. The CPU re-fetches the address, and this time it is a hit.
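The steps above can be sketched as a toy lookup driven by the valid-invalid bit; the table and backing-store contents are invented for the example:

```python
# Toy demand paging: a valid ('v') entry returns its frame directly; an
# invalid ('i') entry takes the page-fault path (read from "disk",
# update the page table, restart the reference).
page_table = {0: (5, 'v'), 1: (None, 'i')}  # page -> (frame, bit)
disk = {1: 9}  # assumed backing store: page -> frame it will occupy
fault_count = 0

def access(page: int) -> int:
    global fault_count
    frame, bit = page_table[page]
    if bit == 'i':                            # page-fault trap to the OS
        fault_count += 1
        page_table[page] = (disk[page], 'v')  # disk read + table update
        frame, bit = page_table[page]         # restart the reference
    return frame

print(access(1), fault_count)  # 9 1: loaded into frame 9 after one fault
```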
What is Belady’s Anomaly? Explain it in the context of the FIFO page replacement algorithm.
Belady’s Anomaly
Belady's Anomaly is a phenomenon in which increasing the number of page frames results in an increase in the number of page faults. Intuitively, more memory should lead to fewer faults, but this is not always true for certain algorithms.
Context: FIFO Algorithm
- FIFO (First-In, First-Out): Replaces the oldest page in memory.
- The Anomaly: FIFO does not follow the "stack property". Removing a page that was brought in early (but might be heavily used) causes an immediate fault again.
Example:
Consider reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- Case A (3 Frames): Results in 9 page faults.
- Case B (4 Frames): Results in 10 page faults.
This demonstrates that giving the process more memory actually degraded performance under FIFO.
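Both cases can be verified with a short FIFO simulation of the reference string above:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement."""
    in_memory, order, faults = set(), deque(), 0
    for page in refs:
        if page not in in_memory:
            faults += 1
            if len(in_memory) == num_frames:
                in_memory.discard(order.popleft())  # evict oldest page
            in_memory.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 -> the anomaly
```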
Compare and contrast FIFO, Optimal, and LRU page replacement algorithms.
Comparison of Page Replacement Algorithms
- FIFO (First-In-First-Out):
- Logic: Replace the page that has been in memory the longest.
- Pros: Simple to implement (queue).
- Cons: Suffers from Belady’s Anomaly; may replace heavily used pages.
- Optimal (OPT):
- Logic: Replace the page that will not be used for the longest period of time.
- Pros: Guarantees the lowest possible page-fault rate for a fixed number of frames.
- Cons: Impossible to implement in practice because it requires future knowledge of the reference string. Used as a benchmark.
- LRU (Least Recently Used):
- Logic: Replace the page that has not been used for the longest period of time (approximation of Optimal).
- Pros: Does not suffer from Belady’s Anomaly; good performance.
- Cons: Requires substantial hardware support (counters or stacks) to track usage history; high overhead.
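For contrast, an LRU simulation (an ordered dict tracking recency, a common software approximation of the hardware stack) run on the Belady reference string from the previous answer shows faults falling, not rising, as frames increase:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()  # keys kept in recency order, LRU first
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # now most recently used
        else:
            faults += 1
            if len(memory) == num_frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3), lru_faults(refs, 4))  # 10 8: no anomaly
```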
What are Overlays? How do they relate to swapping, and why are they less common in modern systems?
Overlays
Overlays are a technique used to enable a process to be larger than the amount of memory allocated to it. The idea is to keep in memory only those instructions and data that are needed at any given time.
- Mechanism: The programmer manually divides the program into sections (overlays). A specific overlay driver swaps these sections in and out of memory into the same memory region.
- Relation to Swapping: Overlays are a form of "internal swapping" managed by the application/user within a single process space, whereas standard swapping is managed by the OS moving entire processes between disk and RAM.
Why Less Common?
- Complexity: Overlays require the programmer to design the overlay structure carefully (identifying independent modules), which is complex and error-prone.
- Virtual Memory: Modern OSs implement Virtual Memory (Paging), which automates this process. The OS manages loading only the needed pages, making manual overlays obsolete.
Explain the concept of Thrashing. What causes it, and how can the Working Set Model help prevent it?
Thrashing
Thrashing occurs when a process spends more time paging (swapping pages in and out) than executing. This happens when the process does not have enough frames to hold the pages it is currently using.
- Symptom: CPU utilization drops significantly. The OS attempts to increase multiprogramming by adding more processes, which further decreases available frames, worsening the problem.
Working Set Model
To prevent thrashing, the OS must ensure a process is allocated enough frames for its locality.
- Locality Model: As a process executes, it moves from locality to locality (a set of pages actively used together).
- Working Set: The set of pages referenced in the most recent Δ page references, where Δ is the working-set window.
- Prevention:
- The OS monitors D = Σ WSS_i, the sum of the working-set sizes of all processes.
- If D > m (the total number of available frames), thrashing will occur.
- The OS suspends one of the processes to free frames and reduce D.
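The working set can be computed directly from its definition; the window size Δ = 4 and the reference string below are illustrative choices:

```python
# Working set at time t = the distinct pages referenced in the last
# DELTA references (the working-set window). DELTA = 4 is an assumed value.
DELTA = 4

def working_set(refs, t, delta=DELTA):
    """Pages referenced in the window ending at time t (inclusive)."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 3]
print(working_set(refs, 3))  # {1, 2, 3}
print(working_set(refs, 7))  # {3, 4}
```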
What are Shared Pages in a paging system? How are they implemented?
Shared Pages
Shared pages allow multiple processes to access the same copy of code or data in physical memory, reducing total memory usage.
- Reentrant Code (Pure Code):
- Shared code must be read-only and reentrant (it does not change during execution).
- Example: Text editors, compilers, window systems.
- One copy of the code is mapped into the logical address space of multiple processes.
- Implementation:
- Page Table Map: The page table entries for the shared pages in different processes point to the same physical frames.
- Private Data: Each process holds its own private data (stack, variables) in separate pages that map to unique frames.
- Protection: The shared pages are marked as Read-Only in the page tables to prevent accidental modification.
Describe the Page Interrupt Fault (Page Fault). What hardware support is required to handle it?
Page Interrupt Fault
A Page Fault is a specific type of interrupt (trap) that occurs when a running program accesses a memory page that is mapped in the virtual address space but is not currently loaded in the physical RAM.
Hardware Support Requirements
To handle page faults and implement demand paging, the hardware must support:
- Page Table with Valid-Invalid Bit: To distinguish between pages in memory (v) and pages on disk (i).
- Secondary Memory (Swap Space): High-speed disk to hold pages not in RAM.
- Instruction Restart Capability: The most critical requirement. The hardware must be able to restart the instruction that caused the fault exactly as if it had never happened, after the OS loads the missing page.
Explain the concept of Protection in a Paging system. How is it enforced?
Protection in Paging
Memory protection in a paged environment is accomplished by protection bits associated with each frame, kept in the page table.
- Protection Bits:
- Read-Write / Read-Only: Can define if a page can be written to. If a process tries to write to a Read-Only page, a hardware trap occurs.
- Valid-Invalid Bit:
- Valid: The page is in the process's logical address space.
- Invalid: The page is not in the logical address space. Access causes an illegal address trap.
- Enforcement:
- During address translation, the hardware checks these bits in parallel with the translation.
- Violations halt the operation and trigger a trap to the OS.
Differentiate between Paging and Segmentation regarding the problem of Fragmentation.
Fragmentation Comparison
- Paging -> Internal Fragmentation:
- Memory is allocated in fixed-size units (frames).
- If a process needs n bytes and frames are of size s, the process is allocated ⌈n / s⌉ frames.
- The last frame allocated may not be completely full. This wasted space inside the allocated frame is Internal Fragmentation.
- Result: No external fragmentation, but unavoidable internal fragmentation.
- Segmentation -> External Fragmentation:
- Memory is allocated in variable-sized blocks based on the exact size of the segment.
- As segments are allocated and deallocated, free memory is broken into small, non-contiguous holes.
- If a new segment requires space S, and the total free space exceeds S but no single hole is at least S, the allocation fails. This is External Fragmentation.
- Result: No internal fragmentation (mostly), but severe external fragmentation requiring compaction.
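The internal-fragmentation arithmetic above can be made concrete; the process size and frame size here are arbitrary example values:

```python
import math

def frames_needed(process_bytes: int, frame_size: int) -> int:
    # Paging always rounds up to whole frames.
    return math.ceil(process_bytes / frame_size)

def internal_fragmentation(process_bytes: int, frame_size: int) -> int:
    # Wasted space inside the last, partially filled frame.
    return frames_needed(process_bytes, frame_size) * frame_size - process_bytes

# Example: a 72,766-byte process with 2,048-byte frames
print(frames_needed(72766, 2048))           # 36 frames
print(internal_fragmentation(72766, 2048))  # 962 bytes wasted
```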
Calculate the Effective Access Time (EAT) for a system with the following parameters: Memory access time = 100ns, TLB access time = 20ns, and TLB hit ratio = 80%.
Calculation
Parameters:
- m (Memory access time) = 100 ns
- ε (TLB access time) = 20 ns
- α (TLB hit ratio) = 0.80
Formula:
EAT = α × (ε + m) + (1 - α) × (ε + 2m)
Case 1: TLB Hit
- Time = TLB Search + Data Access = 20 + 100 = 120 ns
Case 2: TLB Miss
- Time = TLB Search + Page Table Access + Data Access = 20 + 100 + 100 = 220 ns
Total EAT:
EAT = 0.80 × 120 + 0.20 × 220 = 96 + 44 = 140 ns
The effective access time is 140 nanoseconds.
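The same computation in code, confirming the arithmetic above:

```python
def effective_access_time(mem_ns: float, tlb_ns: float, hit_ratio: float) -> float:
    hit_time = tlb_ns + mem_ns       # TLB hit: TLB search + one memory access
    miss_time = tlb_ns + 2 * mem_ns  # TLB miss: page-table access + data access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(100, 20, 0.80))  # 140.0
```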