Group “A”
Brief Answer Questions:
1. What is the demerit of the 1's complement technique?
Ans: The main demerit of the 1's complement technique is that zero has two representations: positive zero (all 0s) and negative zero (all 1s), which complicates zero testing and comparison; addition also requires an end-around carry.
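For illustration (using a 4-bit word chosen arbitrarily), the two encodings of zero can be shown with a short sketch:
```python
# 4-bit 1's complement: +0 is 0000, while -0 is obtained by complementing every bit.
BITS = 4

def ones_complement_negate(value_bits: int) -> int:
    """Flip all bits within the chosen width (1's complement negation)."""
    return value_bits ^ ((1 << BITS) - 1)

plus_zero = 0b0000
minus_zero = ones_complement_negate(plus_zero)
print(format(plus_zero, "04b"), format(minus_zero, "04b"))  # 0000 1111 -> two zeros
```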
2. Write the microoperations for the fetch and decode phases of an instruction.
Ans: Using register transfer notation (AR = address register, PC = program counter, IR = instruction register), the fetch and decode microoperations are:
- T0: AR ← PC
- T1: IR ← M[AR], PC ← PC + 1
- T2: D0, ..., D7 ← Decode IR(12-14), AR ← IR(0-11), I ← IR(15)
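As an illustration only (the register names follow the microoperations above; the memory contents are made up), a small Python sketch of the same fetch and decode steps:
```python
# Toy model of the fetch/decode microoperations listed above.
# Memory words are 16 bits: bit 15 = I, bits 14-12 = opcode, bits 11-0 = address.
memory = {0x010: 0x2123}          # example word at address 0x010 (made-up contents)
PC, AR, IR = 0x010, 0, 0

# T0: AR <- PC
AR = PC
# T1: IR <- M[AR], PC <- PC + 1
IR = memory[AR]
PC = PC + 1
# T2: decode IR(12-14), AR <- IR(0-11), I <- IR(15)
opcode = (IR >> 12) & 0x7
AR = IR & 0xFFF
I = (IR >> 15) & 0x1
print(hex(opcode), hex(AR), I)    # which operation, its address field, addressing bit
```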
3. What are the advantages of zero-address instructions?
Ans: The advantages of zero-address instructions (as used in stack-organized computers) are listed below; a short stack-machine sketch follows the list.
- Simplicity: operands are taken implicitly from the top of the stack, so the instruction needs no address fields or addressing-mode logic.
- Efficiency: expressions converted to reverse Polish (postfix) notation can be evaluated directly with push and pop operations.
- Flexibility: the same short arithmetic instructions work on whatever operands the program has pushed onto the stack.
- Compactness: instructions are very short, so programs occupy less memory.
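A minimal stack-machine sketch of the compactness point (the expression and operand values are made up): evaluating X = (A + B) * C needs addresses only for the pushes and the final pop; the arithmetic instructions carry no address field.
```python
# Zero-address (stack) evaluation of X = (A + B) * C:
#   PUSH A, PUSH B, ADD, PUSH C, MUL, POP X
stack = []

def push(v): stack.append(v)
def add(): stack.append(stack.pop() + stack.pop())
def mul(): stack.append(stack.pop() * stack.pop())

A, B, C = 2, 3, 4                 # made-up operand values
push(A); push(B); add()           # ADD takes both operands implicitly from the stack
push(C); mul()                    # MUL likewise needs no address field
X = stack.pop()
print(X)                          # 20
```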
4. What is the limitation of Flynn's classification?
Ans: The limitations of Flynn's classification are:
- Limited Applicability to Modern Architectures.
- Lacks Scalability Considerations.
- Ignores Communication Overhead.
- Neglects Memory Access Patterns.
5. Differentiate between direct and set associative mapping.
Ans: The differences between direct and set-associative mapping are summarized below; a short index/tag calculation sketch follows the table.
Aspects | Direct mapping | Set-associative mapping |
---|---|---|
Hardware Complexity. | Simple hardware implementation, requiring fewer resources. | More complex hardware implementation compared to direct mapping, as it involves managing multiple sets and performing associative searches within those sets. |
Flexibility | Limited flexibility due to one-to-one mapping between main memory blocks and cache lines. | More flexible than direct mapping as it allows multiple cache lines to compete for storing the same main memory block, providing better cache utilization. |
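A small sketch of how the two mappings split a block address (cache sizes and the block number are illustrative): in direct mapping the block has exactly one candidate line, while in set-associative mapping it may go into any way of its set.
```python
# Illustrative parameters: 64-line cache, 4-way set associative -> 16 sets.
NUM_LINES, WAYS = 64, 4
NUM_SETS = NUM_LINES // WAYS

block_address = 0x2A7             # made-up main-memory block number

# Direct mapping: line = block mod number_of_lines, tag = remaining high bits.
direct_line = block_address % NUM_LINES
direct_tag = block_address // NUM_LINES

# Set-associative mapping: set = block mod number_of_sets; the block may be
# placed in any of the WAYS lines of that set, chosen by the replacement policy.
set_index = block_address % NUM_SETS
set_tag = block_address // NUM_SETS

print(direct_line, direct_tag)    # the single line this block can occupy
print(set_index, set_tag, WAYS)   # the set, its tag, and the number of candidate ways
```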
6. Define strobe control method.
Ans: The strobe control method is a technique used in digital systems to synchronize the transfer of data between two independent units. A single control line, called the strobe, is activated by either the source or the destination unit to indicate the instant at which the data on the bus lines are valid and should be transferred.
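A minimal software model of a source-initiated strobe transfer (the bus fields and values are illustrative): the source places the data and activates the strobe, and the destination latches the data while the strobe is active.
```python
# Toy model of a source-initiated strobe transfer over a shared "bus".
bus = {"data": None, "strobe": 0}

def source_transfer(value):
    bus["data"] = value        # place data on the data lines
    bus["strobe"] = 1          # activate the strobe to mark the data as valid

def destination_latch():
    if bus["strobe"]:          # data is latched only while the strobe is active
        latched = bus["data"]
        bus["strobe"] = 0      # strobe returns to its inactive state
        return latched
    return None

source_transfer(0x5A)
print(hex(destination_latch()))   # 0x5a
```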
7. Make distinction between loosely coupled and tightly coupled multiprocessor system.
Ans: The differences between loosely coupled and tightly coupled multiprocessor systems are:
Aspects | Loosely coupled multiprocessor | Tightly coupled multiprocessor |
---|---|---|
Memory and communication | Each processor has its own local memory, and processors communicate by passing messages over an interconnection network. | Processors share a common main memory and communicate through shared variables. |
Performance | Performance may vary based on the efficiency of communication over the network | Generally higher performance due to efficient communication and resource sharing among processors. |
Complexity | Lower system complexity compared to tightly coupled systems. | Higher system complexity due to shared resources and inter processor communication. |
8. Differentiate between memory-reference and register-reference instruction.
Ans: The differences between memory-reference and register-reference instructions are:
Aspects | Memory-reference instruction | Register-reference instruction |
---|---|---|
Operands | Specifies an operand in memory through an address field, so the effective address must be computed and memory accessed. | Specifies an operation on processor registers (for example, the accumulator), so no memory operand is accessed. |
Flexibility | Provides flexibility in accessing a wide range of memory locations. | Offers limited flexibility as it operates only on data stored in registers. |
Overhead and Complexity. | May incur additional overhead and complexity due to memory access operations | Generally incurs lower overhead and complexity as it operates directly on register operands. |
9. How do hardware interlock techniques resolve data dependency?
Ans: Hardware interlock circuitry detects when an instruction in the pipeline needs a result that an earlier instruction has not yet produced. It resolves the dependency by stalling the pipeline (inserting bubbles) until the result is ready, by forwarding (bypassing) the result directly to the dependent instruction, by using scoreboarding to track outstanding register writes, or, in more aggressive designs, by out-of-order and speculative execution.
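A simplified sketch of the interlock check (the instruction descriptions are invented for illustration): if the instruction being decoded reads a register that an earlier, still-executing instruction writes, the hardware forwards the value if it is ready and stalls otherwise.
```python
# Each in-flight instruction is described by the register it writes and the registers it reads.
# The interlock logic compares the decode-stage reads against earlier writes.
def interlock_action(decode_instr, in_flight):
    for older in in_flight:                      # instructions still in the pipeline
        if older["writes"] in decode_instr["reads"]:
            # Real hardware would forward if the value is already computed,
            # otherwise insert stall (bubble) cycles until it is.
            return "forward" if older["result_ready"] else "stall"
    return "proceed"

add_instr = {"writes": "R1", "reads": ["R2", "R3"], "result_ready": False}
sub_instr = {"writes": "R4", "reads": ["R1", "R5"]}   # depends on R1 from the ADD
print(interlock_action(sub_instr, [add_instr]))        # 'stall'
```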
10. Define vector instruction.
Ans: A vector instruction is a type of instruction in computer architecture designed to perform a single operation on multiple data elements simultaneously, often in parallel (for example, adding two whole arrays element by element with one instruction rather than one scalar instruction per element).
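For illustration (the data values are made up), the effect of a single vector add C = A + B compared with the scalar loop it replaces:
```python
# One vector instruction conceptually performs all element-wise additions at once,
# whereas a scalar processor needs a loop of individual add instructions.
A = [1, 2, 3, 4]
B = [10, 20, 30, 40]

# Scalar view: one add per element.
C_scalar = []
for i in range(len(A)):
    C_scalar.append(A[i] + B[i])

# Vector view: a single operation applied to whole operand vectors.
C_vector = [a + b for a, b in zip(A, B)]

print(C_scalar == C_vector, C_vector)   # True [11, 22, 33, 44]
```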
Group “B”
Exercise Problems:
11. Explain DMA transfer with a block diagram. The time delays of the four segments in a pipeline system are t1 = 10 ns, t2 = 25 ns, t3 = 15 ns, and t4 = 35 ns, and the interface register delay is d = 6 ns. Find the speed-up ratio of the pipeline system over the equivalent conventional system for 200 tasks.
Ans: Direct Memory Access (DMA) is a technique used in computer systems to allow certain hardware subsystems to access system memory for data transfer without involving the CPU.
DMA Transfer Process:
- Initialization:
- The CPU initializes the DMA controller by configuring transfer parameters such as source and destination addresses, transfer size, and transfer direction.
- Transfer Initiation:
- The peripheral device requests a data transfer by raising a DMA request signal.
- The DMA controller receives the request and arbitrates for control of the memory bus.
- Data Transfer:
- Once granted control of the memory bus, the DMA controller transfers data between the peripheral device and main memory directly without CPU intervention.
- Data is transferred in blocks or bursts, depending on the transfer parameters configured by the CPU.
- Completion:
- Upon completion of the data transfer, the DMA controller signals the peripheral device and releases control of the memory bus.
- The CPU may be notified of the transfer completion through an interrupt or status register.
DMA transfer block diagram: (the diagram shows the CPU, DMA controller, peripheral device, and main memory sharing the address bus, data bus, and read/write control lines; the DMA controller exchanges bus request (BR) and bus grant (BG) signals with the CPU, and DMA request/acknowledge signals with the peripheral device.)
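As a complement to the steps above, a minimal sketch of the CPU-side programming sequence; the register names (address, word count, mode) are generic DMA-controller registers, not those of any particular device.
```python
# Generic model of programming a DMA controller for a block transfer.
class DMAController:
    def __init__(self):
        self.address = 0       # starting memory address for the transfer
        self.word_count = 0    # number of words to transfer
        self.mode = None       # 'read' (memory -> device) or 'write' (device -> memory)
        self.busy = False

    def start(self):
        self.busy = True       # controller now arbitrates for the bus and moves data

    def transfer_word(self):
        # Called once per word while the controller holds the bus (cycle stealing or burst).
        self.address += 1
        self.word_count -= 1
        if self.word_count == 0:
            self.busy = False  # completion: raise interrupt / set status for the CPU

dma = DMAController()
dma.address, dma.word_count, dma.mode = 0x4000, 3, "write"   # made-up parameters
dma.start()
while dma.busy:
    dma.transfer_word()
print(hex(dma.address), dma.busy)   # 0x4003 False
```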
Given:
The segment delays are t1 = 10 ns, t2 = 25 ns, t3 = 15 ns, t4 = 35 ns; the interface register delay is d = 6 ns; the number of tasks is n = 200 and the number of segments is k = 4.
For the pipeline system:
The clock period is fixed by the slowest segment plus the interface register delay:
tp = max(t1, t2, t3, t4) + d = 35 + 6 = 41 ns
Total time to process n tasks: Tp = (k + n - 1) × tp = (4 + 200 - 1) × 41 = 203 × 41 = 8323 ns
For the equivalent conventional (non-pipeline) system:
Each task takes the sum of the segment delays (no interface registers are needed): tn = t1 + t2 + t3 + t4 = 10 + 25 + 15 + 35 = 85 ns
Total time: Tc = n × tn = 200 × 85 = 17000 ns
Speedup ratio = Tc / Tp = 17000 / 8323 ≈ 2.04
So, the speedup ratio of the pipeline system over the equivalent conventional system for 200 tasks is approximately 2.04.
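The arithmetic can be double-checked with a short calculation using the same formula, S = n × tn / ((k + n - 1) × tp):
```python
# Verify the speed-up computed above.
t = [10, 25, 15, 35]          # segment delays in ns
d = 6                         # interface register delay in ns
n, k = 200, len(t)

tp = max(t) + d               # pipeline clock period = 41 ns
Tp = (k + n - 1) * tp         # 203 * 41 = 8323 ns
tn = sum(t)                   # 85 ns per task without pipelining
Tc = n * tn                   # 17000 ns
print(Tp, Tc, round(Tc / Tp, 2))   # 8323 17000 2.04
```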
12. Differentiate between microprogrammed and hardwired control organization. Explain how the next address for control memory is selected in a microprogrammed organization.
Ans: The differences between microprogrammed and hardwired control organization are:
Aspects | Microprogrammed control organization | Hardwired control organization |
---|---|---|
Control signal generation | Control signals are encoded in microinstructions (control words) stored in a control memory and read out one per clock cycle. | Control signals are produced directly by fixed combinational logic built from gates, decoders, and flip-flops. |
Flexibility | Easy to modify or extend the instruction set by rewriting the microprogram. | Difficult to modify; any change requires redesigning and rewiring the control circuitry. |
Speed | Slower, since every control step involves a control-memory access. | Faster, because the signals come straight from hardware logic. |
Design effort | Systematic and simpler to design for large or complex instruction sets. | Design becomes complex and error-prone as the instruction set grows. |
Selection of the next address for control memory: in a microprogrammed organization the next-address generator (microprogram sequencer) loads the control address register (CAR) through a multiplexer from one of four sources:
- The incremented value of the present CAR, for sequential execution of microinstructions.
- The address field of the current microinstruction, for an unconditional branch or for a conditional branch taken when the selected status bit is satisfied.
- A mapping of the operation-code field of the instruction register into a control-memory address, used to jump to the microroutine of the fetched machine instruction.
- The subroutine register (SBR), which supplies the return address when a microsubroutine finishes (SBR is loaded with the incremented CAR when the subroutine is called).
A small sketch of this selection logic follows.
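A minimal sketch (field names such as 'br' and 'ad' are illustrative, not from the source) of how the sequencer picks the next control-memory address from the four sources listed above:
```python
def next_address(car, sbr, microinstruction, opcode, condition_true):
    """Select the next control address register (CAR) value.

    The microinstruction is assumed to carry an address field 'ad' and a branch
    field 'br' choosing among: 'next', 'branch', 'map', 'return' (names illustrative).
    """
    br = microinstruction["br"]
    if br == "next":                     # sequential: CAR + 1
        return car + 1
    if br == "branch":                   # branch to address field if the condition holds
        return microinstruction["ad"] if condition_true else car + 1
    if br == "map":                      # map the opcode to the start of its microroutine
        return opcode << 2               # e.g. 4 microinstructions per routine
    if br == "return":                   # return from a microsubroutine
        return sbr
    raise ValueError("unknown branch field")

# Example: an unconditional branch microinstruction jumps to address 0x40.
print(next_address(car=5, sbr=0, microinstruction={"br": "branch", "ad": 0x40},
                   opcode=0b0010, condition_true=True))   # -> 64
```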
13. Write a symbolic program that evaluates the logical exclusive-NOR of two logic operands and explain each statement.
Ans: Using Mano's basic computer instructions (which include AND and the complement instruction CMA but no OR or XOR), the exclusive-NOR can be computed from the identity XNOR(A, B) = (A AND B) OR (A' AND B'), where the OR is realized by De Morgan's law as ((A AND B)' AND (A' AND B')')'. A step-by-step sketch, with the corresponding symbolic statements noted against each step, is given below.
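A minimal Python sketch of that sequence (operand values are made up); the comments name the symbolic statements each step corresponds to:
```python
# Each step mirrors one group of symbolic statements: LDA loads the accumulator,
# AND masks it with a memory operand, CMA complements it, STA stores it back.
MASK = 0xFFFF                      # 16-bit word

A, B = 0b1010101010101010, 0b1100110011001100   # made-up operands

T1 = (A & B) ^ MASK                # LDA A; AND B; CMA; STA T1   -> (A AND B)'
T2 = A ^ MASK                      # LDA A; CMA; STA T2          -> A'
Q  = (B ^ MASK) & T2               # LDA B; CMA; AND T2          -> A' AND B'
RES = ((Q ^ MASK) & T1) ^ MASK     # CMA; AND T1; CMA; STA RES   -> ((A AND B)' AND (A' AND B')')'

print(format(RES, "016b"))         # 1001100110011001 = bitwise XNOR of A and B
```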
14. A two -word instruction is stored in memory at an address designated by symbol X. Address field of instruction (stored at X+1) is designated by symbol Y. Operand used during the execution of the instruction is stored at an address symbolized by K. An index register contains the value L. State how K is calculated from the other address if the addressing mode of the instruction is
i) Direct ii) Indirect iii) Relative iv) Indexed
Ans: The instruction occupies addresses X and X + 1, so after the instruction is fetched the program counter holds X + 2. With address field Y, the effective address K of the operand is:
i) Direct: K = Y.
ii) Indirect: K = M[Y], i.e. the word stored at memory location Y.
iii) Relative: K = Y + (X + 2), the address field added to the updated program counter.
iv) Indexed: K = Y + L, the address field added to the contents of the index register.
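A tiny numeric check (all addresses and register contents below are hypothetical, chosen only to illustrate the four formulas):
```python
# Hypothetical values: instruction at X and X+1, address field Y, index register L.
X, Y, L = 0x200, 0x350, 0x05
memory = {0x350: 0x4A0}            # made-up word stored at address Y

K_direct = Y                        # i)   K = Y
K_indirect = memory[Y]              # ii)  K = M[Y]
K_relative = Y + (X + 2)            # iii) K = Y + PC, with PC = X + 2 after the fetch
K_indexed = Y + L                   # iv)  K = Y + L

print(hex(K_direct), hex(K_indirect), hex(K_relative), hex(K_indexed))
```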
15. Multiply -37 and +21 using “Multiplication of signed magnitude data” algorithm.
Ans: In signed-magnitude multiplication the sign and the magnitude are handled separately: the sign of the product is the XOR of the operand signs, which is negative here (one operand is negative, the other positive). The magnitudes |-37| = 100101 and |+21| = 010101 are multiplied by the shift-and-add procedure: the multiplier bits are examined from the least significant end, the multiplicand is added to the partial product whenever the bit is 1, and the partial product/multiplier pair is shifted right after each step. This gives 37 × 21 = 777, so -37 × (+21) = -777. A step-by-step sketch follows.
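A compact sketch of the shift-and-add procedure (6-bit magnitudes; register names follow the usual A, B, Q convention for partial product, multiplicand, and multiplier):
```python
# Signed-magnitude multiplication of -37 and +21: multiply the magnitudes with
# shift-and-add, then attach the XOR of the two sign bits.
BITS = 6
B = 37                      # multiplicand magnitude (100101)
Q = 21                      # multiplier magnitude  (010101)
A = 0                       # partial product register
for _ in range(BITS):
    if Q & 1:               # multiplier bit Q0 = 1: add the multiplicand into A
        A = A + B
    # shift the combined A,Q pair right by one bit
    # (Python's unbounded ints stand in for the E carry bit)
    Q = ((A & 1) << (BITS - 1)) | (Q >> 1)
    A = A >> 1
magnitude = (A << BITS) | Q          # product held in the combined A,Q registers
sign_negative = True                 # sign(-37) XOR sign(+21) -> negative
print(-magnitude if sign_negative else magnitude)   # -777
```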
Group “C”
Comprehensive Answer Questions:
16. Differentiate between fixed-point representation and floating-point representation. Explain at least four applications of logic micro-operations with suitable examples.
Ans: The differences between fixed-point representation and floating-point representation are:
Aspects | Fixed-point Representation | Floating point representation |
---|---|---|
Definition | Fixed-point representation stores a number with a fixed number of digits before and after the radix (decimal) point, i.e. the position of the point is fixed. | Floating-point representation stores a number as a mantissa (significand) and an exponent, so the position of the radix point can vary with the magnitude of the number. |
Precision | Precision is limited by the number of bits available for representing the fractional part. | Precision can be adjusted dynamically based on the magnitude of the number being represented. |
Range | The range is limited by the number of bits used to represent the number. | It can represent very large and very small numbers with a wide dynamic range. |
Example | With four fractional bits, 12.345 is stored approximately as 1100.0101 (= 12.3125); the precision is fixed at 1/16. | In normalized binary floating-point form, 12.345 ≈ 1.1000101100001... × 2^3, where the mantissa holds the significant bits and the exponent (3) gives the position of the radix point. |
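A small numeric illustration of the example row above (four fractional bits chosen arbitrarily):
```python
# Fixed point with 4 fractional bits stores the number as an integer multiple of 1/16,
# so 12.345 is truncated to the nearest lower multiple; a binary floating-point value
# keeps far more significant bits by scaling with an exponent.
value = 12.345
FRAC_BITS = 4

fixed_raw = int(value * (1 << FRAC_BITS))     # 197 = 0b11000101, i.e. 1100.0101
fixed_value = fixed_raw / (1 << FRAC_BITS)    # 12.3125 (precision limited to 1/16)

mantissa, exponent = value / 8, 3             # 12.345 = 1.543125 x 2^3 in floating form
print(format(fixed_raw, "b"), fixed_value, mantissa * (2 ** exponent))
```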
Four applications of logic micro-operations, with examples (4-bit registers A = 1010 and B = 1100), are:
- Selective-set: sets to 1 the bits of A in the positions where B has 1s, using the OR micro-operation A ← A ∨ B; here A becomes 1110.
- Selective-complement: complements the bits of A where B has 1s, using the XOR micro-operation A ← A ⊕ B; here A becomes 0110.
- Selective-clear: clears the bits of A where B has 1s, using A ← A ∧ B'; here A becomes 0010.
- Mask: clears the bits of A where B has 0s, using the AND micro-operation A ← A ∧ B; here A becomes 1000.
(Other applications include insert, which uses a mask followed by an OR, and clear, A ← A ⊕ A.)
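The same four operations expressed with bitwise operators, using the example registers above:
```python
# Logic micro-operation applications demonstrated on 4-bit registers.
A, B = 0b1010, 0b1100
ALL_ONES = 0b1111                     # 4-bit mask used to form complements

selective_set = A | B                 # set bits of A where B has 1s   -> 1110
selective_complement = A ^ B          # flip bits of A where B has 1s  -> 0110
selective_clear = A & (B ^ ALL_ONES)  # clear bits of A where B has 1s -> 0010
mask = A & B                          # clear bits of A where B has 0s -> 1000

print(format(selective_set, "04b"), format(selective_complement, "04b"),
      format(selective_clear, "04b"), format(mask, "04b"))   # 1110 0110 0010 1000
```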
17. Explain crossbar interconnection structure with its merits and demerits. Explain associative memory with block diagram.
Ans: Crossbar interconnection structure is a non-blocking interconnection network that provides a direct connection between any input and output pair, allowing for full connectivity without contention or blocking. It is a type of network topology used in computer systems, particularly in high-performance computing environments.
Here’s an explanation of the crossbar interconnection structure along with its merits and demerits:
Structure:
- A crossbar interconnection structure consists of a grid of horizontal and vertical lines intersecting each other at specific points, forming a matrix-like structure.
- Each horizontal line represents an input port, while each vertical line represents an output port.
- The intersection points (cross points) between the horizontal and vertical lines serve as switches that can be configured to connect an input port to an output port.
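A minimal sketch of the switch matrix just described (the 4×4 size is arbitrary): each cross point is a simple on/off switch, and any input can be routed to any free output independently of the others.
```python
# A 4x4 crossbar modelled as a matrix of cross-point switches.
INPUTS, OUTPUTS = 4, 4
crosspoint = [[False] * OUTPUTS for _ in range(INPUTS)]

def connect(inp, out):
    # Non-blocking: refuse a request only if this particular output is already driven.
    if any(crosspoint[i][out] for i in range(INPUTS)):
        return False
    crosspoint[inp][out] = True
    return True

print(connect(0, 2), connect(3, 1), connect(1, 2))   # True True False (output 2 busy)
```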
Merits:
- Full Connectivity: A crossbar interconnection structure provides full connectivity between input and output ports, allowing any input to be connected to any output without contention or blocking.
- Non-blocking: The crossbar architecture is non-blocking, meaning that concurrent communications between different input-output pairs can occur without interference or contention for resources.
- Low Latency: With direct connections between input and output ports, the latency of communication in a crossbar interconnection is minimal, leading to fast data transfer and reduced processing times.
- Scalability: Crossbar interconnection networks can be easily scaled to accommodate a large number of input and output ports, making them suitable for high-performance computing systems.
Demerits:
- Complexity: Implementing a crossbar interconnection network can be complex and expensive, especially for large-scale systems with a high number of input and output ports.
- Hardware Cost: The hardware cost of a crossbar switch increases quadratically with the number of input and output ports, making it less cost-effective for very large systems.
- Limited Scalability: While crossbar interconnections are scalable in theory, practical limitations in hardware complexity and cost may restrict their scalability for extremely large systems.
- Power Consumption: The high connectivity and low-latency nature of crossbar interconnections can result in increased power consumption, especially for large-scale systems with numerous active switches.
An associative memory (content-addressable memory) is a memory unit whose stored data can be identified for access by the content of the data itself rather than by an address or memory location. Its block diagram contains an argument register (A) that holds the word to be searched for, a key register (K) that masks which bit positions of the argument take part in the comparison, the memory array of m words of n bits each together with its match logic, and a match register (M) with one bit per word that is set for every word agreeing with the masked argument. Data are stored along with the content used as the search key, and retrieval is performed by specifying the content (or a portion of it) rather than a specific memory address.
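A small sketch of a content-addressable lookup using the argument and key registers described above (the stored words are made up):
```python
# Content-addressable (associative) lookup: every stored word is compared in parallel
# against the argument register, but only in the bit positions where the key register
# has 1s; each matching word sets its bit in the match register.
words = [0b10110101, 0b10011100, 0b11010101, 0b10110001]   # made-up memory contents
argument = 0b10110000      # pattern to search for
key      = 0b11110000      # compare only the upper four bits

match_register = [int((w & key) == (argument & key)) for w in words]
print(match_register)       # [1, 0, 0, 1] -> words 0 and 3 match in the keyed bits
```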