1. Which principle allows the Memory Hierarchy to function effectively by assuming that data accessed recently will likely be accessed again soon?
A.Principle of Relativity
B.Locality of Reference
C.Direct Memory Access
D.Cycle Stealing
Correct Answer: Locality of Reference
Explanation:The Locality of Reference (both temporal and spatial) states that programs tend to access a relatively small portion of their address space at any instant of time, justifying the use of a hierarchy.
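As a rough illustration (not part of the original question), the short loop below touches consecutive array elements (spatial locality) and then reuses one element repeatedly (temporal locality), which is exactly the access pattern a cache exploits:

```python
# Minimal sketch of locality of reference (illustrative values only).
data = list(range(1024))

total = 0
for i in range(len(data)):      # spatial locality: consecutive addresses
    total += data[i]            # are touched one after another
for _ in range(1000):
    total += data[0]            # temporal locality: the same address is
                                # reused many times in a short interval
```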
2. In the context of RAM, which of the following statements distinguishes SRAM from DRAM?
A.SRAM requires periodic refreshing, while DRAM does not.
B.SRAM is slower than DRAM.
C.SRAM uses flip-flops for storage, while DRAM uses capacitors.
D.DRAM is more expensive per bit than SRAM.
Correct Answer: SRAM uses flip-flops for storage, while DRAM uses capacitors.
Explanation:Static RAM (SRAM) uses flip-flops and retains data as long as power is supplied. Dynamic RAM (DRAM) uses capacitors that leak charge and require periodic refreshing.
3. Arrange the following memory types in descending order of speed (Fastest to Slowest):
A.Cache, Registers, Main Memory, Magnetic Disk
B.Registers, Cache, Main Memory, Magnetic Disk
C.Main Memory, Cache, Registers, Magnetic Disk
D.Registers, Main Memory, Cache, Magnetic Disk
Correct Answer: Registers, Cache, Main Memory, Magnetic Disk
Explanation:Registers are inside the CPU and are the fastest, followed by Cache, then Main Memory (RAM), and finally Secondary Storage (Magnetic Disk).
4. If the CPU finds the word it is looking for in the cache memory, it is called a:
A.Page Fault
B.Cache Miss
C.Cache Hit
D.Segment Fault
Correct Answer: Cache Hit
Explanation:A Cache Hit occurs when the data requested by the processor is found in the cache memory.
5. Calculate the average memory access time if the Cache access time is $T_c$, the Main memory access time is $T_m$, and the Hit ratio is $H = 0.9$.
A.$0.9 \times T_c + 0.1 \times T_m$
B.$T_c + T_m$
C.$0.1 \times T_c + 0.9 \times T_m$
D.$0.9 \times T_m + 0.1 \times T_c$
Correct Answer: $0.9 \times T_c + 0.1 \times T_m$
Explanation:Formula: $T_{avg} = H \times T_c + (1 - H) \times T_m$ (assuming simultaneous access) or generally $T_{avg} = T_c + (1 - H) \times T_m$ for hierarchical access. Using the simple weighted average: $T_{avg} = 0.9 \times T_c + 0.1 \times T_m$.
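A worked sketch of both formulas, using assumed values ($T_c = 10$ ns, $T_m = 100$ ns) that are not from the original question:

```python
# Worked AMAT example with assumed (not original) values.
t_cache = 10     # ns, assumed cache access time
t_main = 100     # ns, assumed main memory access time
hit_ratio = 0.9

# Simultaneous access: cache and main memory are probed in parallel.
t_avg_simultaneous = hit_ratio * t_cache + (1 - hit_ratio) * t_main
# Hierarchical access: main memory is consulted only after a miss.
t_avg_hierarchical = t_cache + (1 - hit_ratio) * t_main

print(t_avg_simultaneous)   # 19.0 (ns)
print(t_avg_hierarchical)   # 20.0 (ns)
```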
6. In which mapping technique is a block of main memory mapped to a specific fixed line in the cache?
A.Associative Mapping
B.Set-Associative Mapping
C.Direct Mapping
D.Virtual Mapping
Correct Answer: Direct Mapping
Explanation:In Direct Mapping, a specific block of main memory can only be loaded into one specific line of the cache, determined by $i = j \bmod m$ (where $j$ is the main memory block number and $m$ is the number of cache lines).
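A minimal sketch of this mapping rule, assuming an illustrative cache of 128 lines:

```python
# Direct-mapped placement sketch (assumed 128 cache lines, illustrative).
NUM_LINES = 128

def cache_line(block_number):
    """Main memory block j maps to cache line i = j mod m."""
    return block_number % NUM_LINES

print(cache_line(5))     # 5
print(cache_line(133))   # 5 -> blocks 5 and 133 compete for the same line
```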
7. Which hardware component is primarily required for Associative Mapping to search all cache lines simultaneously?
A.Multiplexer
B.Content Addressable Memory (CAM)
C.Counter
D.Decoder
Correct Answer: Content Addressable Memory (CAM)
Explanation:Associative Mapping allows a block to be placed anywhere. To find it, the hardware must search all tags in parallel, which requires Content Addressable Memory (CAM).
8. In a $k$-way set-associative cache, each set contains how many cache lines?
A.$1$
B.$k$
C.Total Cache Lines / $k$
D.Dependent on block size
Correct Answer: $k$
Explanation:In $k$-way set-associative mapping, the cache is divided into sets, and each set contains exactly $k$ lines (blocks).
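A small sketch of how blocks land in sets, with assumed sizes (128 lines, 4-way) chosen only for illustration:

```python
# Set-associative organization sketch (assumed sizes, not from the quiz).
total_lines = 128   # lines in the cache
k = 4               # associativity (k-way)

num_sets = total_lines // k          # 32 sets, each holding k = 4 lines

def set_index(block_number):
    return block_number % num_sets   # a block maps to one set, any line in it

print(num_sets, set_index(70))       # 32 6
```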
9. What is the primary disadvantage of the Write-through policy in cache memory?
A.It causes data inconsistency.
B.It generates high memory traffic because every write goes to main memory.
C.It is complex to implement.
D.It requires a Dirty Bit.
Correct Answer: It generates high memory traffic because every write goes to main memory.
Explanation:Write-through updates both the cache and the main memory simultaneously for every write operation, leading to high bus traffic/contention.
10. In the Write-back policy, when is the data updated in the main memory?
A.Immediately upon every write request.
B.Only when the cache block is evicted/replaced.
C.At fixed time intervals.
D.When the CPU is idle.
Correct Answer: Only when the cache block is evicted/replaced.
Explanation:In Write-back, updates are made only to the cache. The main memory is updated only when the modified block (marked with a dirty bit) is removed from the cache.
11. Which bit is used in the Write-back method to indicate that a cache block has been modified?
A.Valid bit
B.Dirty bit
C.Present bit
D.Modify bit
Correct Answer: Dirty bit
Explanation:The Dirty bit indicates whether the cache block has been modified since it was loaded from main memory. If set, the block must be written back to main memory upon eviction.
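A minimal write-back sketch (not a full cache model) showing the dirty bit being set on a write and checked on eviction:

```python
# Write-back / dirty-bit sketch (illustrative only).
class CacheLine:
    def __init__(self, tag, data):
        self.tag, self.data, self.dirty = tag, data, False

    def write(self, data):
        self.data = data
        self.dirty = True            # modified since it was loaded

    def evict(self, main_memory):
        if self.dirty:               # write back only if modified
            main_memory[self.tag] = self.data
        self.dirty = False

main_memory = {7: "old"}
line = CacheLine(7, "old")
line.write("new")
line.evict(main_memory)
print(main_memory[7])                # new
```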
12. Which cache replacement algorithm replaces the block that has not been used for the longest period of time?
A.FIFO (First In First Out)
B.LFU (Least Frequently Used)
C.LRU (Least Recently Used)
D.Random
Correct Answer: LRU (Least Recently Used)
Explanation:LRU replaces the item that has not been accessed for the longest time, based on the assumption of temporal locality.
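A tiny LRU sketch for an assumed 3-line cache, using Python's OrderedDict to track recency:

```python
# LRU replacement sketch (assumed capacity of 3 lines, illustrative).
from collections import OrderedDict

CAPACITY = 3
cache = OrderedDict()

def access(block):
    if block in cache:
        cache.move_to_end(block)                   # now most recently used
    else:
        if len(cache) == CAPACITY:
            victim, _ = cache.popitem(last=False)  # least recently used
            print("evict", victim)
        cache[block] = True

for b in [1, 2, 3, 1, 4]:
    access(b)          # block 2 is evicted when 4 arrives, since 1 was reused
```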
13. Virtual Memory allows the execution of programs that are:
A.Smaller than the main memory.
B.Larger than the physical main memory.
C.Stored only in ROM.
D.Written in Assembly language.
Correct Answer: Larger than the physical main memory.
Explanation:Virtual Memory gives the illusion of a very large main memory, allowing programs larger than physical RAM to execute by swapping pages between RAM and secondary storage.
14. In Virtual Memory, the addresses used by the programmer are called ____ addresses, and the addresses in the physical memory are called ____ addresses.
A.Physical, Logical
B.Logical, Physical
C.Binary, Decimal
D.Relative, Absolute
Correct Answer: Logical, Physical
Explanation:The CPU generates Logical (Virtual) addresses, which are translated into Physical addresses by the memory management unit.
15. The fixed-size blocks of virtual memory are called ____, and the fixed-size blocks of physical memory are called ____.
A.Frames, Pages
B.Pages, Frames
C.Segments, Blocks
D.Sectors, Tracks
Correct Answer: Pages, Frames
Explanation:Virtual address space is divided into Pages, and physical memory is divided into Frames of the same size.
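A small sketch of splitting a virtual address into page number and offset, assuming 4 KB pages and a hypothetical page-table entry:

```python
# Page/frame translation sketch (assumed 4 KB pages, hypothetical mapping).
PAGE_SIZE = 4096                             # 2**12 bytes

virtual_address = 0x5A3C
page_number = virtual_address // PAGE_SIZE   # which page: 5
offset = virtual_address % PAGE_SIZE         # byte within the page: 0xA3C

# Translation keeps the offset and swaps the page number for a frame number.
page_table = {5: 9}                          # hypothetical: page 5 -> frame 9
physical_address = page_table[page_number] * PAGE_SIZE + offset
print(hex(physical_address))                 # 0x9a3c
```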
16. What is a Page Fault?
A.An error in the page table code.
B.Accessing a page that is not currently in main memory.
C.Writing to a read-only page.
D.A hardware failure in RAM.
Correct Answer: Accessing a page that is not currently in main memory.
Explanation:A Page Fault occurs when a program tries to access a page that is mapped in the address space but is not currently loaded in the physical RAM.
17. What is the function of the TLB (Translation Lookaside Buffer)?
Correct Answer: To cache recent Virtual-to-Physical address translations.
Explanation:The TLB is a specialized, fast cache used to reduce the time taken to access the Page Table in main memory by storing recent translations.
18. Which problem occurs when memory is divided into variable-length partitions (Segmentation), leading to unused gaps between allocated memory blocks?
A.Internal Fragmentation
B.External Fragmentation
C.Page Fault
D.Thrashing
Correct Answer: External Fragmentation
Explanation:External Fragmentation happens in segmentation when free memory is separated into small blocks and is interspersed by allocated memory, making it impossible to allocate a large contiguous block.
19. Which access method is used by Magnetic Tapes?
A.Random Access
B.Direct Access
C.Sequential Access
D.Associative Access
Correct Answer: Sequential Access
Explanation:Magnetic tapes are Sequential Access devices; to reach a specific point, the tape must be wound past all preceding data.
20. What is Seek Time in a magnetic disk?
A.Time to transfer data to the bus.
B.Time for the sector to rotate under the head.
C.Time to move the read/write head to the specified track.
D.Total time to read a file.
Correct Answer: Time to move the read/write head to the specified track.
Explanation:Seek Time is the mechanical delay involved in moving the read/write arm to the correct track (cylinder).
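For context, total disk access time is usually broken into seek time plus rotational latency plus transfer time; a rough calculation with assumed values:

```python
# Disk access-time breakdown (assumed values, standard formula).
seek_time = 4.0                          # ms, move the head to the target track
rotational_latency = 60_000 / 7200 / 2   # ms, half a revolution at 7200 RPM
transfer_time = 0.1                      # ms, read the sector under the head

print(round(seek_time + rotational_latency + transfer_time, 2))  # 8.27 ms
```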
21. According to Flynn's Taxonomy, a traditional Uniprocessor (von Neumann architecture) is classified as:
A.SISD (Single Instruction, Single Data)
B.SIMD (Single Instruction, Multiple Data)
C.MISD (Multiple Instruction, Single Data)
D.MIMD (Multiple Instruction, Multiple Data)
Correct Answer: SISD (Single Instruction, Single Data)
Explanation:A standard uniprocessor executes one instruction stream on one data stream, making it SISD.
22. Which classification of parallel computers is best suited for Vector Processing / Array Processors?
A.SISD
B.SIMD
C.MISD
D.MIMD
Correct Answer: SIMD
Explanation:SIMD (Single Instruction, Multiple Data) is used for vector processing where a single control unit broadcasts the same instruction to multiple execution units operating on different data elements.
23. In Pipelining, what is the theoretical speedup achievable with a $k$-stage pipeline assuming no stalls?
A.$k/2$
B.$k$
C.$2k$
D.$k^2$
Correct Answer: $k$
Explanation:Ideally, a $k$-stage pipeline can complete one instruction per clock cycle, providing a speedup of up to $k$ compared to a non-pipelined system that takes $k$ cycles per instruction.
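A worked timing sketch with assumed values (5 stages, 100 instructions), showing the speedup approaching $k$:

```python
# Pipeline speedup sketch (assumed 5 stages and 100 instructions).
k = 5          # pipeline stages
n = 100        # instructions

non_pipelined_cycles = n * k            # 500 cycles: k cycles per instruction
pipelined_cycles = k + (n - 1)          # 104 cycles: fill once, then 1 per cycle

speedup = non_pipelined_cycles / pipelined_cycles
print(round(speedup, 2))                # 4.81, approaching k = 5 as n grows
```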
24. What is a Structural Hazard in pipelining?
A.Dependency between data of two instructions.
B.Branching instructions changing the flow.
C.Hardware resource conflict (e.g., two stages needing memory at the same time).
D.A voltage drop in the CPU.
Correct Answer: Hardware resource conflict (e.g., two stages needing memory at the same time).
Explanation:Structural Hazards arise when the hardware cannot support all possible combinations of instructions simultaneously (resource conflict).
25. A situation where an instruction depends on the result of a previous instruction that has not yet completed is called:
A.Structural Hazard
B.Data Hazard
C.Control Hazard
D.Branch Hazard
Correct Answer: Data Hazard
Explanation:A Data Hazard (Read-After-Write) occurs when operands are not yet available because the producing instruction hasn't finished writing them.
26. What technique is commonly used to minimize the performance penalty of Control Hazards (Branching)?
A.Operand Forwarding
B.Branch Prediction
C.Memory Interleaving
D.Cache Coherence
Correct Answer: Branch Prediction
Explanation:Branch Prediction logic tries to guess the outcome of a branch instruction to keep the pipeline full before the branch condition is actually evaluated.
27. Which of the following defines Throughput in a pipelined processor?
A.The time to process a single instruction.
B.The number of instructions completed per unit time.
C.The number of pipeline stages.
D.The clock frequency.
Correct Answer: The number of instructions completed per unit time.
Explanation:Throughput is the rate at which instructions exit the pipeline, typically measured in instructions per second.
28. In a Tightly Coupled Multiprocessor system:
A.Processors do not share memory.
B.Processors share a global main memory.
C.Communication is done via message passing over LAN.
D.Each processor has its own OS copy.
Correct Answer: Processors share a global main memory.
Explanation:Tightly Coupled systems share a common global memory and are often controlled by a single OS. They communicate via this shared memory.
29. Which interconnection structure uses a set of crosspoints where a switch determines the path between a processor and a memory module?
A.Time-Shared Common Bus
B.Crossbar Switch
C.Hypercube
D.Ring Network
Correct Answer: Crossbar Switch
Explanation:A Crossbar Switch uses a grid of switching elements (crosspoints) allowing simultaneous connections between different processor-memory pairs.
30. In a Time-Shared Common Bus system, how is conflict resolved when multiple processors want to access the bus?
A.Data is merged.
B.Arbitration logic (Arbiter).
C.The bus shuts down.
D.Random selection.
Correct Answer: Arbitration logic (Arbiter).
Explanation:An Arbiter is required to decide which processor gets control of the bus at any given time to prevent conflicts.
31. The Omega Network is an example of which type of interconnection structure?
A.Static Topology
B.Multistage Switching Network
C.Crossbar Switch
D.Shared Bus
Correct Answer: Multistage Switching Network
Explanation:The Omega Network is a dynamic Multistage Switching Network typically constructed from stages of $2 \times 2$ switches.
32. What is the Cache Coherence problem in multiprocessors?
A.The cache is too small.
B.Multiple caches may hold different values for the same memory block.
C.The cache is slower than main memory.
D.The processor cannot read the cache.
Correct Answer: Multiple caches may hold different values for the same memory block.
Explanation:In multiprocessors with private caches, if one processor modifies a variable, other caches holding copies of that variable must be updated or invalidated to maintain Coherence.
33. Which protocol is commonly used to maintain Cache Coherence in bus-based multiprocessors?
A.Snoopy Protocol
B.Sliding Window Protocol
C.Handshaking Protocol
D.Interrupt Protocol
Correct Answer: Snoopy Protocol
Explanation:Snoopy Protocols (e.g., Write-Invalidate) rely on cache controllers 'snooping' (monitoring) the bus to detect transactions involving data blocks they hold.
34. In a Hypercube interconnection network, a system with $2^n$ nodes has a node degree (connections per node) of:
A.$n$
B.$2^n$
C.$n^2$
D.$2n$
Correct Answer: $n$
Explanation:In a hypercube of dimension $n$, there are $2^n$ nodes, and each node is connected to exactly $n$ neighbors (one per dimension).
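A quick sketch: in a hypercube, two nodes are neighbors when their binary addresses differ in exactly one bit, so flipping each of the $n$ address bits enumerates the neighbors:

```python
# Hypercube neighbor sketch (assumed dimension n = 3, so 2**3 = 8 nodes).
n = 3

def neighbors(node):
    return [node ^ (1 << bit) for bit in range(n)]

print(neighbors(0b101))   # [4, 7, 1] -> node 5 has exactly n = 3 neighbors
```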
35. What distinguishes NUMA (Non-Uniform Memory Access) from UMA (Uniform Memory Access)?
A.NUMA has no shared memory.
B.In NUMA, memory access time depends on the memory location relative to the processor.
C.NUMA is strictly for single processors.
D.UMA allows faster access to remote memory.
Correct Answer: In NUMA, memory access time depends on the memory location relative to the processor.
Explanation:In NUMA, a processor can access its local memory faster than remote memory (memory attached to other processors). In UMA, access time is uniform for all memory locations.
36. What is Memory Interleaving?
A.Mixing ROM and RAM chips.
B.Dividing memory into modules that can be accessed in parallel.
C.Storing data in non-volatile memory.
D.Using virtual memory for cache.
Correct Answer: Dividing memory into modules that can be accessed in parallel.
Explanation:Memory Interleaving spreads memory addresses across multiple modules, allowing simultaneous access to sequential addresses, thereby increasing effective bandwidth.
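A minimal sketch of low-order interleaving with an assumed four-module memory:

```python
# Low-order interleaving sketch: consecutive addresses hit different modules.
NUM_MODULES = 4                          # assumed number of memory banks

def module_of(address):
    return address % NUM_MODULES         # bank selected by the low-order bits

print([module_of(a) for a in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```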
37. If a computer has a 32-bit address bus, what is the maximum addressable memory space?
A.$1$ GB
B.$2$ GB
C.$4$ GB
D.$8$ GB
Correct Answer: $4$ GB
Explanation:$2^{32}$ bytes $= 4{,}294{,}967{,}296$ bytes $= 4$ GB.
38. What is the role of a Multiport Memory?
A.To allow only one processor to access memory at a time.
B.To allow multiple processors to access separate internal memory modules simultaneously.
C.To replace cache memory.
D.To store only instructions.
Correct Answer: To allow multiple processors to access separate internal memory modules simultaneously.
Explanation:Multiport Memory modules have multiple access ports, allowing multiple processors to access the memory structure at the same time (provided they access different addresses).
39. What does Pipeline Interlocking (Stalling) do in a pipeline?
A.It is used to clear the cache.
B.It halts the pipeline for one or more cycles to resolve a Data Hazard.
C.It increases the clock speed.
D.It is used for branch prediction.
Correct Answer: It halts the pipeline for one or more cycles to resolve a Data Hazard.
Explanation:Interlocking (or bubbling/stalling) pauses the dependent instructions in the pipeline until the required data is available.
40. In the context of Cache Mapping, what is the Tag?
A.The data stored in the cache.
B.A unique identifier stored with the block to determine which main memory block is currently in the line.
C.The index of the cache set.
D.The offset within the block.
Correct Answer: A unique identifier stored with the block to determine which main memory block is currently in the line.
Explanation:The Tag bits are the high-order bits of the address stored alongside the data in the cache to identify the memory block.
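A small sketch of carving an address into tag, index, and offset for an assumed direct-mapped geometry (16-byte blocks, 4096 lines):

```python
# Tag / index / offset breakdown (assumed cache geometry, illustrative).
BLOCK_SIZE = 16        # bytes  -> 4 offset bits
NUM_LINES = 4096       # lines  -> 12 index bits

address = 0x12345678
offset = address % BLOCK_SIZE
index = (address // BLOCK_SIZE) % NUM_LINES
tag = address // (BLOCK_SIZE * NUM_LINES)   # high-order bits stored with the line

print(hex(tag), hex(index), hex(offset))    # 0x1234 0x567 0x8
```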
41. Loosely Coupled Multiprocessors typically use which scheme for communication?
A.Shared Memory variables
B.Message Passing
C.Common Register File
D.Direct wire connection
Correct Answer: Message Passing
Explanation:Loosely coupled systems (like clusters) do not share global memory; they communicate by passing messages over a network.
42. Which of the following is an advantage of Associative Mapping over Direct Mapping?
A.Simpler hardware.
B.Lower cost.
C.Higher hit ratio due to flexibility in block placement.
D.No need for replacement algorithms.
Correct Answer: Higher hit ratio due to flexibility in block placement.
Explanation:Associative mapping allows any block to go into any line, reducing conflict misses and potentially increasing the hit ratio.
43. What occurs when the system spends more time swapping pages in and out than executing instructions?
A.Deadlock
B.Thrashing
C.Paging
D.Interleaving
Correct Answer: Thrashing
Explanation:Thrashing is a state of severe performance degradation where the OS spends most of its time paging (swapping) data rather than executing processes.
44. In a Vector Processor, instructions operate on:
A.Single scalar values.
B.One-dimensional arrays of data (vectors).
C.Only boolean values.
D.Only integer values.
Correct Answer: One-dimensional arrays of data (vectors).
Explanation:Vector processors have specialized instructions that operate on entire vectors (arrays) of data simultaneously.
45. Which equation represents the speedup factor of a pipeline, where $T_n$ is the execution time without the pipeline and $T_p$ is the execution time with the pipeline?
A.$S = T_p / T_n$
B.$S = T_n / T_p$
C.$S = T_n \times T_p$
D.$S = T_n - T_p$
Correct Answer: $S = T_n / T_p$
Explanation:Speedup is the ratio of the execution time of the non-pipelined system to the execution time of the pipelined system.
46. What is the primary function of the Bootstrap Loader?
A.To load the operating system from disk to main memory upon startup.
B.To clean the cache.
C.To manage virtual memory pages.
D.To synchronize processors.
Correct Answer: To load the operating system from disk to main memory upon startup.
Explanation:The bootstrap loader is a small program stored in ROM that runs on power-up to load the main OS from secondary storage into RAM.
47. In a Directory-based cache coherence protocol, where is the information about the status of memory blocks stored?
A.In each processor's cache controller only.
B.In a centralized or distributed directory.
C.In the hard disk.
D.In the instruction register.
Correct Answer: In a centralized or distributed directory.
Explanation:Directory-based protocols keep track of which caches hold which blocks in a directory, rather than snooping on a shared bus.
48. Which write policy is easiest to implement if the cache uses a parity bit for error detection?
A.Write-back
B.Write-through
C.Write-once
D.Write-allocation
Correct Answer: Write-through
Explanation:In Write-through, the main memory always has the valid data, so if a parity error occurs in cache, data can simply be re-fetched from memory. Write-back makes recovery harder.
49. Assuming a cache size of 64KB and a block size of 16 bytes, how many lines (blocks) are in the cache?
A.$1024$
B.$2048$
C.$4096$
D.$8192$
Correct Answer: $4096$
Explanation:Lines = Cache Size / Block Size = $65536 / 16 = 4096$.
50. What is the specific benefit of Pipelining in processors?
A.It reduces the latency of a single instruction.
B.It increases the overall throughput of instruction execution.
C.It eliminates branch hazards.
D.It increases the clock cycle time.
Correct Answer: It increases the overall throughput of instruction execution.
Explanation:Pipelining does not reduce the time for an individual instruction (latency) but allows multiple instructions to overlap, increasing the rate (throughput) at which they complete.