Model Questions and Solutions BIM 3rd Semester

Model Question And Solution (2021)

Ans: Signed Magnitude Representation:

In signed magnitude representation, the most significant bit is the sign bit (1 for a negative number) and the remaining bits hold the magnitude. The magnitude of 9 is (1001)2, so with the sign bit set:

(-9)10 = (11001)2 in 5-bit signed magnitude representation.

Two’s Complement Representation:

Write +9 in 5 bits (01001), invert every bit (10110), and add 1 (10111).

So, (-9)10 in 5-bit two’s complement representation is 10111. (Four bits are not enough, since the 4-bit two’s complement range is only −8 to +7.)
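The two conversions can be checked with a short Python sketch (the function names here are invented for illustration):

```python
def to_signed_magnitude(value, bits):
    """Sign bit followed by the magnitude of `value` in bits-1 bits."""
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), f"0{bits - 1}b")

def to_twos_complement(value, bits):
    """Two's-complement bit pattern of `value` in `bits` bits."""
    assert -(1 << (bits - 1)) <= value < (1 << (bits - 1)), "out of range"
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(to_signed_magnitude(-9, 5))  # 11001
print(to_twos_complement(-9, 5))   # 10111
```

Note that `to_twos_complement(-9, 4)` would fail the range check, confirming that 5 bits are the minimum width for −9.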

Ans: An arithmetic left shift moves every bit of a binary number one position to the left and fills the vacated least significant bit with 0. For signed numbers it is equivalent to multiplying by 2, provided the sign bit does not change; if the shift disturbs the sign bit, an overflow has occurred.
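As a sketch (assuming an 8-bit word; the helper name is invented), the shift and its overflow condition look like:

```python
def arith_shift_left(value, bits=8):
    """Shift left by one within a fixed width; overflow means the result
    no longer equals value * 2 (the sign bit was disturbed)."""
    mask = (1 << bits) - 1
    result = (value << 1) & mask
    if result >= 1 << (bits - 1):      # reinterpret as a signed value
        result -= 1 << bits
    return result, result != value * 2

print(arith_shift_left(-9))    # (-18, False): in range, acts as multiply by 2
print(arith_shift_left(100))   # (-56, True): 200 overflows the 8-bit signed range
```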

Ans: Address sequencing is the process of generating the address of the next microinstruction to be fetched from control memory. It covers incrementing the control address register, conditional and unconditional branching, mapping the instruction opcode to a microinstruction address, and returning from a microprogram subroutine.

Ans: The limitations of programmed I/O are:

  • CPU Overhead.
  • Slow Data Transfer.
  • Inefficient Use of CPU Resources.
  • Limited Scalability.

Ans: The main characteristics of RISC are:

  • Simplified Instruction Set.
  • Single Clock Cycle Execution.

Ans: The limitations of the strobe method of data transfer are:

  • Limited Timing Flexibility.
  • Noise Sensitivity.
  • Limited Error Detection.
  • Synchronization Issues.

Ans: The uses of an input/output processor (IOP) in a computer system are:

  • Offloading the CPU.
  • Data Transfer.
  • Error Handling.
  • Device Management.

Ans: The hazards of pipeline are:

  • Data Hazards.
  • Control Hazards.
  • Structural Hazards.

Ans: A critical section is a section of code within a program or software application that must be executed atomically, meaning that it cannot be concurrently executed by multiple threads or processes.
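A minimal Python illustration of a critical section, using a lock so that only one thread updates the shared counter at a time (the variable names are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: executed by one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: without the lock, updates could be lost
```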

Ans: The advantages of associative memory are:

  • Search by content rather than by address.
  • Parallel lookup, making searches very fast.
  • Flexibility in data placement.
  • Efficient cache and TLB management.
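The content-based lookup that associative memory performs in hardware can be mimicked in software (an illustrative sketch only; a real CAM compares the argument against every stored word in parallel):

```python
def cam_search(words, pattern, mask):
    """Return the stored words whose masked bits equal the masked pattern."""
    return [w for w in words if (w & mask) == (pattern & mask)]

words = [0b1011, 0b1001, 0b0011, 0b1111]
# Find every word whose upper two bits are '10'.
print(cam_search(words, 0b1000, 0b1100))   # [11, 9] -> 0b1011 and 0b1001
```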






Ans: Given:

  • Opcode and mode field stored at location 500.
  • Address field stored at location 501.
  • Address field value: 700.
  • Processor register R1 contains the value 300.

i. Direct Addressing Mode:

  • Effective address = Address field value
  • Effective address = 700

ii. Immediate Addressing Mode:

  • In immediate addressing mode, the operand itself is stored in the address field of the instruction, so the operand is 700.
  • By convention, the effective address is the address of this operand field, i.e., location 501.

iii. Indirect Addressing Mode:

  • In indirect addressing mode, the address field contains the address of the operand.
  • Effective address = Value at the address specified in the address field
  • Effective address = Value at address 700

iv. Register Indirect Addressing Mode:

  • In register indirect addressing mode, the operand is specified indirectly through a register.
  • Effective address = Value in the specified register
  • Effective address = Value in register R1
  • Effective address = 300

So, the effective address for each addressing mode is:

i. Direct Addressing Mode: 700

ii. Immediate Addressing Mode: 501 (the operand, 700, is part of the instruction itself)

iii. Indirect Addressing Mode: Value at address 700

iv. Register Indirect Addressing Mode: 300
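The four results can be summarized in a small Python sketch. The memory contents below (other than the address field at 501) are not given in the question, so the value at address 700 is a made-up placeholder:

```python
# Assumed state from the problem; 1234 at address 700 is a placeholder.
memory = {501: 700, 700: 1234}
R1 = 300

address_field = memory[501]          # 700, read from the instruction
ea_direct = address_field            # direct: EA = 700
ea_immediate = 501                   # immediate: the operand (700) sits in the instruction
ea_indirect = memory[address_field]  # indirect: EA = value stored at address 700
ea_register_indirect = R1            # register indirect: EA = contents of R1 = 300
```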


Speedup Ratio = Total time taken by non-pipeline system / Total time taken by pipeline system

Let’s calculate:

For the non-pipeline system:

  • Time taken per task = 30 ns
  • Total time for 75 tasks = 30 ns/task × 75 tasks = 2250 ns

For the pipeline system:

  • Clock cycle = 10 ns
  • Number of stages = 5
  • The first task completes after 5 clock cycles; after that, one task completes every cycle.
  • Number of cycles for n tasks = k + (n − 1) = 5 + (75 − 1) = 79 cycles


Total time = 79 cycles × 10 ns/cycle = 790 ns

Now, we can calculate the speedup ratio:

Speedup Ratio = 2250 ns / 790 ns ≈ 2.85

So, the speedup ratio of the pipeline system compared to the non-pipeline system for 75 tasks is approximately 2.85.

To find the maximum speedup, consider an ideal scenario with a very large number of tasks, so the one-time cost of filling the pipeline becomes negligible. The speedup then approaches the ratio of the non-pipeline task time to the pipeline clock cycle:

Maximum Speedup = tn / tp = 30 ns / 10 ns = 3

So the maximum speedup that can be achieved by this pipeline is 3. (It would equal the number of stages, 5, only if a non-pipelined task took k × tp = 50 ns.)
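The calculation can be reproduced with a small helper (the function name is invented; the formula is n·tn / ((k + n − 1)·tp), which accounts for the k − 1 cycles needed to fill the pipeline):

```python
def pipeline_speedup(n_tasks, t_nonpipe, k_stages, t_cycle):
    """Speedup of a k-stage pipeline over a non-pipelined unit for n tasks."""
    return (n_tasks * t_nonpipe) / ((k_stages + n_tasks - 1) * t_cycle)

print(round(pipeline_speedup(75, 30, 5, 10), 2))   # 2.85 for 75 tasks
print(30 / 10)                                     # 3.0, the limit for very large n
```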


Ans: Cache coherence is the consistency and synchronization of data stored in the different caches of a multiprocessor or multicore system. In such systems, each processor or core typically has its own cache memory to improve performance, so copies of the same memory block can exist in several caches at once and must be kept consistent.

There are various cache mapping techniques and some of them are as:

  1. Direct Mapping:
    • Each memory block is mapped to exactly one cache line.
    • The mapping is determined by a modulo function, which maps memory addresses to specific cache lines.
    • Simple and easy to implement but may lead to cache conflicts.

2. Fully Associative Mapping:

  • Each memory block can be placed in any cache line.
  • No restrictions on the placement of memory blocks.
  • Offers maximum flexibility and minimizes cache conflicts but requires complex hardware for searching cache contents.

3. K-way Set-Associative Mapping:

  • Combines aspects of both direct mapping and fully associative mapping.
  • Memory is divided into a number of sets, and each set contains multiple cache lines.
  • Each memory block can be mapped to any cache line within its corresponding set.
  • Allows for more flexibility and reduces the chance of cache conflicts compared to direct mapping.

These mapping techniques help optimize cache performance by balancing factors such as cache access time, cache size, and complexity of cache hardware. The choice of mapping technique depends on the specific requirements and constraints of the system design.
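The modulo placement rules behind direct and set-associative mapping can be sketched as follows (illustrative helpers, assuming a 128-line cache organized as 32 four-way sets):

```python
def direct_map_line(block_addr, num_lines):
    """Direct mapping: each block goes to exactly one line."""
    return block_addr % num_lines

def set_index(block_addr, num_sets):
    """Set-associative mapping: the block may occupy any way of this set."""
    return block_addr % num_sets

# Blocks 5, 133, and 261 all collide on line 5 in a 128-line direct-mapped cache...
print([direct_map_line(b, 128) for b in (5, 133, 261)])  # [5, 5, 5]
# ...but in a 4-way set-associative cache (32 sets) they share set 5
# and can be resident at the same time, one per way.
print([set_index(b, 32) for b in (5, 133, 261)])         # [5, 5, 5]
```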

Ans: A microprogrammed control unit is a type of control unit used in computer architecture to implement the control logic for executing instructions. Instead of using hardwired logic circuits, a microprogrammed control unit employs microinstructions stored in a control memory to define the control signals for various operations.

Any two interconnection structures for a multiprocessor are:

  1. Shared Bus Interconnection: In a shared bus interconnection structure, all processors in the multiprocessor system are connected to a common bus. This bus serves as the communication medium through which processors exchange data and coordinate their activities. Each processor has direct access to the bus, allowing it to send and receive data to and from memory and other processors. However, only one processor can access the bus at a time, leading to potential contention and delays if multiple processors attempt to access the bus simultaneously.
    Advantages:
    • Simplicity: Shared bus interconnection is relatively simple to implement, requiring only a single communication medium (the bus) for interprocessor communication.
    • Cost-Effectiveness: Since only one bus is needed, the hardware cost of implementing a shared bus interconnection is lower than that of other interconnection structures.
    Disadvantages:
    • Bus Contention: Contention for the shared bus can occur when multiple processors attempt to access it simultaneously, leading to potential performance bottlenecks.
    • Limited Scalability: As the number of processors increases, the contention for the shared bus also increases, limiting the scalability of the system.
  2. Crossbar Interconnection: In a crossbar interconnection structure, each processor is connected to a set of input and output lines forming a matrix-like structure. These input and output lines intersect at crossbar switches, allowing any processor to establish a direct connection with any other processor or memory module. Unlike a shared bus interconnection, a crossbar interconnection provides full connectivity between processors, enabling simultaneous communication between multiple processors without contention.
    Advantages:
    • High Scalability: A crossbar interconnection provides full connectivity between processors, allowing for efficient communication even as the number of processors increases.
    • Low Contention: Since each processor has its own dedicated communication paths, there is no contention for resources, leading to improved performance and reduced latency.
    • Flexibility: The crossbar switch allows any processor to communicate directly with any other processor or memory module, providing flexibility in system configuration and communication patterns.
    Disadvantages:
    • Complexity: Implementing a crossbar interconnection structure requires a more complex hardware design than a shared bus interconnection, increasing design effort.
    • Higher Cost: The increased hardware complexity of a crossbar interconnection may result in higher costs compared to simpler interconnection structures.
