Unit 3 - Subjective Questions
CSE211 • Practice Questions with Detailed Answers
Define Peripheral Devices. Classify them into three main categories with examples.
Peripheral Devices are input-output devices connected to a computer's CPU and memory to transfer information into and out of the computer system. They act as the interface between the machine and the user or the physical world.
Classification of Peripheral Devices:
- Input Devices: Used to provide data and control signals to the information processing system.
- Examples: Keyboard, Mouse, Microphone, Scanner.
- Output Devices: Used to communicate the results of data processing carried out by the system to the outside world.
- Examples: Monitor, Printer, Plotter, Speaker.
- Storage Devices (Input/Output): Used to store data when it is not in use by the CPU. These can function as both source and destination of data.
- Examples: Hard Disk Drive (HDD), Optical Disks (CD/DVD), Flash Drives.
Explain the need for an Input-Output (I/O) Interface. Why can peripherals not be connected directly to the system bus?
Peripherals cannot be connected directly to the system bus due to several differences between the CPU/Memory and external devices. An I/O Interface resolves these discrepancies:
- Data Transfer Speed: Peripherals (e.g., keyboards) are often much slower than the CPU and memory. The interface buffers data to synchronize speeds.
- Data Codes and Formats: Peripherals may use different data formats (e.g., ASCII for characters) compared to the binary word format of the CPU. The interface performs the necessary conversion.
- Operating Modes: Peripherals act independently and asynchronously, whereas the CPU operates synchronously with a clock. The interface manages handshake signals.
- Signal Levels: Peripherals are electromechanical or electromagnetic and may require signal conversion (e.g., A/D or D/A conversion) to match the digital voltage levels of the computer bus.
Distinguish between Isolated I/O and Memory-Mapped I/O.
Isolated I/O:
- Address Space: Uses separate address spaces for memory and I/O devices.
- Control Signals: Distinct read and write control lines for I/O (e.g., I/O read and I/O write) are used for I/O operations.
- Instructions: Requires special instructions such as IN and OUT to transfer data.
- Efficiency: Does not use up memory addresses for I/O, allowing full memory utilization.
Memory-Mapped I/O:
- Address Space: Uses the same address space for both memory and I/O. A portion of the memory address map is assigned to I/O devices.
- Control Signals: Uses the same read and write control signals (e.g., RD and WR) for both memory and I/O.
- Instructions: Any instruction that references memory (e.g., MOV, LOAD, STORE) can be used for I/O.
- Efficiency: Simplifies programming but reduces the range of addresses available for physical memory.
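A minimal Python sketch (not from the source) contrasting the two addressing schemes with a toy address decoder; all addresses, port numbers, and helper names are illustrative assumptions.

```python
# Toy address decoder contrasting isolated I/O and memory-mapped I/O.
MEMORY = [0] * 256          # main memory (addresses 0x00-0xFF)
IO_PORTS = {0x01: 0}        # isolated I/O: a separate port address space

# Isolated I/O: dedicated helpers stand in for the IN and OUT instructions.
def io_out(port, value):
    IO_PORTS[port] = value

def io_in(port):
    return IO_PORTS[port]

# Memory-mapped I/O: addresses 0xF0-0xFF are decoded to device registers,
# so ordinary load/store "instructions" reach the device.
DEVICE_REGS = {0xF0: 0}

def store(addr, value):
    if addr in DEVICE_REGS:
        DEVICE_REGS[addr] = value   # write lands in the device register
    else:
        MEMORY[addr] = value

def load(addr):
    return DEVICE_REGS[addr] if addr in DEVICE_REGS else MEMORY[addr]

io_out(0x01, ord('A'))      # isolated I/O transfer
store(0xF0, ord('B'))       # memory-mapped I/O transfer
print(io_in(0x01), load(0xF0))
```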
Explain the Asynchronous Data Transfer using the Strobe Control method. What is its main disadvantage?
Strobe Control is a method of asynchronous data transfer where a control signal (the strobe) is employed to indicate the time at which data is being transmitted.
Mechanism:
- Source-Initiated: The source unit places data on the data bus and activates the strobe signal to inform the destination that data is valid.
- Destination-Initiated: The destination unit activates the strobe signal to inform the source that it is ready to accept data. The source then places data on the bus.
Disadvantage:
The primary disadvantage of the strobe method is the lack of feedback (acknowledgment). The source unit has no way of knowing whether the destination unit has actually received the data or if the destination was ready to receive it before the data was removed from the bus.
Describe the Handshaking protocol for data transfer. Illustrate the sequence of events for a Source-Initiated transfer.
Handshaking is a data transfer mechanism that uses two control signals to ensure synchronization: a signal to initiate the transfer and a second signal to acknowledge it.
Source-Initiated Transfer Sequence:
- Source: Places data on the data bus.
- Source: Activates the Data Valid signal.
- Destination: Accepts the data from the bus.
- Destination: Activates the Data Accepted (or Ready) signal.
- Source: Deactivates the Data Valid signal after recognizing the acknowledgment.
- Source: Invalidates the data on the bus.
- Destination: Deactivates the Data Accepted signal, preparing for the next transfer.
This ensures that data is only sent when the destination is ready and is not lost if the destination is slow.
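The source-initiated handshake above can be traced with a small, purely illustrative Python model in which the bus and the two control lines are plain variables; the function names are hypothetical, not part of any real protocol API.

```python
# Step-by-step trace of the source-initiated handshake.
bus = None
data_valid = False      # driven by the source
data_accepted = False   # driven by the destination

def source_send(word):
    global bus, data_valid
    bus = word              # 1. place data on the bus
    data_valid = True       # 2. assert Data Valid

def destination_receive():
    global data_accepted
    word = bus              # 3. accept the data while Data Valid is high
    data_accepted = True    # 4. assert Data Accepted
    return word

def source_finish():
    global bus, data_valid
    data_valid = False      # 5. drop Data Valid after seeing the acknowledgment
    bus = None              # 6. invalidate the data on the bus

def destination_finish():
    global data_accepted
    data_accepted = False   # 7. drop Data Accepted, ready for the next transfer

source_send(0x5A)
print("received:", hex(destination_receive()))
source_finish()
destination_finish()
```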
List the three main Modes of Data Transfer. Explain Programmed I/O.
Modes of Data Transfer:
- Programmed I/O
- Interrupt-Initiated I/O
- Direct Memory Access (DMA)
Programmed I/O:
In this mode, the input/output operations are the result of I/O instructions written in the computer program. The CPU stays in a program loop (busy-wait) checking the status flag of the I/O interface to determine if the device is ready.
- Process: CPU requests I/O -> Interface sets Status Flag -> CPU loops checking Flag -> When Flag = Ready, Data transfer occurs.
- Drawback: It is inefficient because the CPU wastes time polling the peripheral, preventing it from doing other useful computational work.
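A toy Python model of the busy-wait loop described above; the Interface class and its ready flag are simulated stand-ins, not a real device driver.

```python
# Programmed I/O: the CPU spins on the status flag until the device is ready.
import random

class Interface:
    def __init__(self):
        self.ready = False
        self.data_register = None

    def poll_device(self):
        # The peripheral becomes ready at some unpredictable point.
        if random.random() < 0.1:
            self.data_register = 0x41
            self.ready = True

iface = Interface()
polls = 0
while not iface.ready:      # CPU wastes cycles in this loop
    iface.poll_device()
    polls += 1
print(f"byte {iface.data_register:#x} read after {polls} wasted polls")
```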
Explain Interrupt-Initiated I/O. How does it improve upon Programmed I/O?
Interrupt-Initiated I/O is a technique where the external device informs the CPU when it is ready for data transfer, rather than the CPU checking the device repeatedly.
Mechanism:
- The CPU executes its current program.
- When an I/O interface is ready, it sends an Interrupt Request signal to the CPU.
- The CPU temporarily stops the current program execution, saves the return address (PC) and status (PSW).
- The CPU branches to a Service Routine to process the data transfer.
- After completion, the CPU restores the saved state and returns to the original program.
Improvement: It eliminates the "busy-waiting" time found in Programmed I/O. The CPU remains busy with useful tasks and processes I/O only when notified, significantly improving system throughput.
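The same idea can be sketched, very loosely, in Python: the CPU executes useful instructions and only runs the service routine when the simulated interrupt-request line is raised. The five-instruction program and all names are illustrative assumptions.

```python
# Interrupt-initiated I/O: no busy-waiting; the ISR runs only on request.
interrupt_request = False
device_data = None

def device_becomes_ready():          # happens asynchronously in real hardware
    global interrupt_request, device_data
    device_data = 0x7F
    interrupt_request = True

def service_routine():
    global interrupt_request
    print("ISR: read", hex(device_data), "from the interface")
    interrupt_request = False        # clear the request

pc = 0
program = ["work"] * 5
while pc < len(program):
    print("executing instruction", pc)   # useful computation continues
    if pc == 2:
        device_becomes_ready()
    if interrupt_request:                # checked at the end of each instruction
        saved_pc = pc                    # save return address (and status, in real HW)
        service_routine()
        pc = saved_pc                    # restore state and resume the program
    pc += 1
```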
What is Direct Memory Access (DMA)? Why is it required for high-speed devices?
Direct Memory Access (DMA) is a feature that allows certain hardware subsystems to access the main system memory independently of the Central Processing Unit (CPU).
Why it is required:
- Throughput: For high-speed peripherals (like Disk Drives), the data transfer rate is close to the memory speed. Managing this via the CPU (Programmed or Interrupt I/O) would overwhelm the CPU, leaving no time for processing.
- Efficiency: DMA allows large blocks of data to be transferred directly between the peripheral and memory. The CPU only initiates the transfer and is notified only when the entire block is done, freeing it to perform other tasks during the transfer.
Describe the function of the following registers in a DMA Controller: Address Register, Word Count Register, and Control Register.
- Address Register:
- Stores the starting address in the memory where the data is to be read from or written to. This register is automatically incremented after each word transfer.
- Word Count Register:
- Stores the number of words to be transferred. The CPU loads the initial count. This register is decremented after each transfer. When the count reaches zero, the DMA transfer ceases and an interrupt is generated.
- Control Register:
- Specifies the mode of transfer (Read or Write), enables/disables the DMA, and may specify the type of transfer (Burst or Cycle Stealing). It coordinates the operation of the DMA controller.
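A small Python sketch of how the three registers behave during a block transfer; the register behaviour follows the description above, while the toy memory array and class name are illustrative assumptions.

```python
# DMA register behaviour during a device-to-memory block transfer.
memory = [0] * 32

class DMAController:
    def __init__(self, start_address, word_count, write_to_memory=True):
        self.address = start_address        # Address Register (loaded by CPU)
        self.count = word_count             # Word Count Register (loaded by CPU)
        self.write = write_to_memory        # Control Register: transfer direction

    def transfer_word(self, word):
        if self.write:
            memory[self.address] = word     # device -> memory
        self.address += 1                   # address register auto-increments
        self.count -= 1                     # word count decrements
        if self.count == 0:
            print("word count reached zero: stop DMA and interrupt the CPU")

dma = DMAController(start_address=8, word_count=3)
for word in (0x11, 0x22, 0x33):
    dma.transfer_word(word)
print(memory[8:11])
```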
Compare Burst Transfer and Cycle Stealing in the context of DMA.
Burst Transfer:
- Operation: The DMA controller takes control of the system bus and transfers the entire block of data continuously.
- CPU Impact: The CPU is completely disabled from using the bus for the duration of the block transfer.
- Use Case: Used for very fast magnetic storage devices where data integrity depends on continuous flow.
Cycle Stealing:
- Operation: The DMA controller takes control of the bus to transfer one word (or byte) at a time, then releases the bus back to the CPU.
- CPU Impact: The CPU is delayed but not halted for long periods. It "steals" a memory cycle from the CPU.
- Use Case: Useful when the CPU processing must run concurrently with I/O, preventing the CPU from being idle for long durations.
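The difference in bus occupancy can be illustrated with a toy schedule in Python; the 4-word block is an arbitrary assumption.

```python
# Bus ownership per cycle for the two DMA modes (4-word block).
block = 4

def burst_transfer():
    # DMA holds the bus until the whole block is done; CPU gets no cycles in between.
    return ["DMA"] * block + ["CPU"]

def cycle_stealing():
    # DMA steals one cycle per word and returns the bus to the CPU each time.
    schedule = []
    for _ in range(block):
        schedule += ["DMA", "CPU"]
    return schedule

print("burst:         ", burst_transfer())
print("cycle stealing:", cycle_stealing())
```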
Draw and explain the block diagram of a DMA Controller interacting with the CPU and System Bus.
A DMA Controller interacts with the system using Bus Request (BR) and Bus Grant (BG) signals.
Block Diagram Components:
- Data Bus / Address Bus buffers: Connect to system buses.
- DMA Internal Registers: Address Register, Word Count Register, Control Register.
- Control Logic: Handles handshaking with CPU.
Operation:
- Initialization: CPU loads the Address and Word Count registers via the data bus.
- Request: When the peripheral is ready, the DMA sends a Bus Request (BR) to the CPU.
- Grant: The CPU completes the current instruction cycle, releases the bus (High Impedance), and sends Bus Grant (BG).
- Transfer: The DMA puts the memory address on the Address Bus and activates the Read/Write control lines for memory and the I/O device. Data flows directly between Memory and I/O.
- Completion: After the Word Count reaches zero, the DMA releases the BR signal, and the CPU regains bus control.
Explain the Daisy Chaining Priority interrupt mechanism. How is the device causing the interrupt identified?
Daisy Chaining consists of a serial connection of all devices that request an interrupt. The device with the highest priority is placed in the first position, followed by lower priority devices.
Mechanism:
- Interrupt Request: All devices share a common interrupt request line. If any device needs service, this line goes high.
- Interrupt Acknowledge: The CPU sends an interrupt acknowledge signal (INTACK) to the first device in the chain (highest priority).
- Propagation:
- If device 1 has generated the interrupt, it accepts the acknowledge signal, blocks it from propagating further, and places its Vector Address (VAD) on the bus.
- If device 1 did not request the interrupt, it passes the acknowledge signal on to the next device in the chain.
- Identification: The device that blocks the signal and places its VAD on the bus is identified by the CPU using that VAD to find the specific service routine.
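A compact Python sketch of the chain, assuming three hypothetical devices and made-up vector addresses; the list order stands in for physical position in the chain.

```python
# Daisy-chained interrupt acknowledge: list order encodes priority.
devices = [
    {"name": "disk",     "requesting": False, "vad": 0x10},  # highest priority
    {"name": "printer",  "requesting": True,  "vad": 0x20},
    {"name": "keyboard", "requesting": True,  "vad": 0x30},  # lowest priority
]

def acknowledge_chain():
    # The CPU's acknowledge enters the chain at the highest-priority device.
    for dev in devices:
        if dev["requesting"]:
            # Requesting device blocks the signal and drives its vector address.
            return dev["name"], dev["vad"]
        # Otherwise the acknowledge is passed to the next device in the chain.
    return None, None

name, vad = acknowledge_chain()
print(f"{name} is serviced via vector address {vad:#x}")   # printer wins over keyboard
```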
Describe the Parallel Priority Interrupt hardware method. What is the role of the Priority Encoder?
Parallel Priority Interrupt uses a register where bits are set separately by the interrupt signal from each device. Priority is established according to the position of the bits in the register.
Components:
- Interrupt Register: Holds the individual interrupt request bit set by each device.
- Mask Register: Can be programmed by the CPU to disable lower priority interrupts while a higher priority interrupt is being serviced.
- Priority Encoder: A logic circuit that accepts the interrupt requests after they have been gated by the mask register bits.
Role of Priority Encoder:
If two or more inputs arrive simultaneously, the Priority Encoder determines the highest priority input and generates a distinct vector address output corresponding to that input. For example, in a 4-to-2 encoder, if input 3 (highest) and input 1 are active, the output will represent binary 3, ignoring input 1. This output is used to point to the correct Interrupt Service Routine.
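The encoder logic can be sketched in Python for the 4-input example above (input 3 highest); the vector-address table and bit patterns are illustrative assumptions.

```python
# Parallel-priority logic: masked requests feed a priority encoder that
# outputs a distinct vector address for the highest-priority active input.
def priority_encoder(interrupt_reg, mask_reg, vad_table):
    active = interrupt_reg & mask_reg          # requests gated by the mask
    for line in (3, 2, 1, 0):                  # highest priority checked first
        if active & (1 << line):
            return vad_table[line]             # distinct vector address per input
    return None                                # no pending, enabled request

vad_table = {0: 0x40, 1: 0x44, 2: 0x48, 3: 0x4C}
# Inputs 3 and 1 both active, all lines enabled: input 1 is ignored.
print(hex(priority_encoder(0b1010, 0b1111, vad_table)))    # -> 0x4c
```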
What is an Input-Output Processor (IOP)? How does it differ from a standard CPU?
An Input-Output Processor (IOP), also known as a channel, is a processor with direct memory access capability that communicates with I/O devices. It is essentially a computer dedicated to I/O tasks.
Differences from CPU:
- Instruction Set: The IOP has a limited instruction set oriented towards I/O transfers (e.g., Read, Write, Control) and does not perform complex arithmetic or logical processing.
- Purpose: The CPU handles data processing and logic, while the IOP handles the details of data transfer and device communication.
- Operation: The CPU initiates the IOP by instructing it where the I/O program resides in memory. The IOP then operates independently, interrupting the CPU only when the entire I/O task is finished.
Distinguish between Selector Channel and Multiplexer Channel in the context of an IOP.
These are types of I/O channels used to manage data flow:
1. Selector Channel:
- Usage: Designed for high-speed devices (e.g., magnetic disks, tapes).
- Operation: It can select only one device at a time. Once a device is selected, the channel is dedicated to that device until the entire block of data is transferred.
- Mode: Operates in Burst Mode.
2. Multiplexer Channel:
- Usage: Designed for slow to medium-speed devices (e.g., printers, terminals).
- Operation: It can handle multiple devices simultaneously by time-multiplexing the channel's resources.
- Mode: Operates in Byte-Interleaved Mode (similar to cycle stealing), transferring bytes from different devices in rotation.
What is UART? Describe the frame format of asynchronous serial data transfer.
UART stands for Universal Asynchronous Receiver-Transmitter. It is a hardware component that translates data between parallel (used by the CPU) and serial forms (used for communication).
Frame Format:
In asynchronous serial transfer, the line is usually High (1) when idle. A character transmission consists of:
- Start Bit: One bit (logic 0) indicates the beginning of a character.
- Data Bits: Usually 5 to 8 bits representing the character code (e.g., ASCII).
- Parity Bit: (Optional) One bit used for error detection.
- Stop Bit(s): One or more bits (logic 1) to return the line to the idle state and indicate the end of the character.
The receiver uses the Start Bit to synchronize its internal clock to sample the following data bits.
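A short Python sketch that frames one character under common, but assumed, settings (8 data bits, even parity, one stop bit, LSB transmitted first).

```python
# Build the bit sequence for one asynchronously transmitted character.
def uart_frame(byte, data_bits=8, even_parity=True, stop_bits=1):
    bits = [0]                                           # start bit (line drops from idle 1)
    data = [(byte >> i) & 1 for i in range(data_bits)]   # data bits, LSB first
    bits += data
    if even_parity:
        bits.append(sum(data) % 2)                       # parity bit for error detection
    bits += [1] * stop_bits                              # stop bit(s) return line to idle
    return bits

print(uart_frame(ord('A')))   # 'A' = 0x41 -> [0, 1,0,0,0,0,0,1,0, 0, 1]
```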
Explain the difference between Synchronous and Asynchronous serial transmission.
Asynchronous Transmission:
- Synchronization: Does not require a shared clock. Synchronization is achieved via Start and Stop bits for every character.
- Efficiency: Lower efficiency due to overhead (start/stop bits per byte).
- Usage: Keyboard to computer, lower speed modems.
Synchronous Transmission:
- Synchronization: Sender and Receiver share a common clock signal, or the clock is embedded in the data stream.
- Data Flow: Data is sent in large blocks or frames without start/stop bits between characters. The block is preceded by unique synchronization bytes (SYNC).
- Efficiency: Higher efficiency; suitable for high-speed data transfer.
- Usage: Network protocols, high-speed bulk transfer.
Define Baud Rate and Bit Rate. When are they equal?
Bit Rate:
- The speed at which data is transferred, measured in bits per second (bps).
Baud Rate:
- The rate at which the signal changes state on the transmission line. It represents the number of signal symbols transmitted per second.
Relationship:
- They are equal only when each signal change (symbol) represents exactly one bit of data (binary signaling). If a modulation technique encodes multiple bits per signal change (e.g., QAM), the Bit Rate will be higher than the Baud Rate.
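The relationship can be checked with a two-line calculation; the 9600-baud and 16-QAM figures below are illustrative examples, not values from the source.

```python
# Bit rate = baud rate * bits carried per signal symbol.
def bit_rate(baud_rate, bits_per_symbol):
    return baud_rate * bits_per_symbol

print(bit_rate(9600, 1))   # binary signaling: 9600 baud = 9600 bps (rates equal)
print(bit_rate(2400, 4))   # 16-QAM carries 4 bits/symbol: 2400 baud = 9600 bps
```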
What are the four types of I/O Commands that an interface may receive from the CPU?
When the CPU addresses an I/O interface, it issues a command:
- Control Command: Used to activate the peripheral and inform it what to do. The particular control command depends on the specific peripheral (e.g., "rewind tape", "start motor").
- Status Command: Used to test various status conditions in the interface and the peripheral (e.g., checking "Ready" or "Error" flags).
- Data Output Command: Causes the interface to respond by transferring data from the bus into one of its registers (CPU writing to I/O).
- Data Input Command: Causes the interface to receive an item of data from the peripheral and place it on the bus for the CPU to read (CPU reading from I/O).
Explain Software Polling for handling priority interrupts. What is its trade-off?
Software Polling is a technique to identify the highest priority source among multiple interrupts using software instead of hardware.
Mechanism:
- There is one common branch address for all interrupts.
- When an interrupt occurs, the CPU jumps to a service routine that contains a series of test instructions (polling).
- The program checks the status flag of the highest priority device first. If set, it services it.
- If not, it checks the next highest, and so on.
Trade-off:
- Advantage: Flexible (priority can be changed by code) and requires minimal hardware.
- Disadvantage: Slow. The time required to identify the source depends on the number of devices and their position in the polling order, causing latency for lower-priority devices.
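A minimal Python sketch of the common service routine described above, assuming three hypothetical devices polled in fixed priority order.

```python
# Software polling: one common interrupt entry point tests flags in priority order.
status_flags = {"disk": False, "printer": True, "keyboard": True}
priority_order = ["disk", "printer", "keyboard"]   # highest priority first

def common_interrupt_routine():
    for device in priority_order:          # test instructions, highest priority first
        if status_flags[device]:
            print("servicing", device)     # branch to that device's service code
            status_flags[device] = False
            return device
    return None

common_interrupt_routine()   # the printer is serviced before the keyboard
```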