Model Questions and Solutions BIM 3rd Semester

DSA Model Question Solutions 2023

  • A doubly linked list supports traversal in both forward and backward directions, whereas a singly linked list supports only forward traversal.
  • Deleting a node in a doubly linked list is more efficient, especially when a reference to the node to be deleted is already available.
  • Doubly linked lists support efficient insertion and deletion at both the beginning and the end of the list.
  • Function Call Management (Call Stack)
  • Expression Evaluation (Expression Stack)
  • Backtracking Algorithms
  • Undo and Redo Features in Text Editors
  • File Systems
  • Database Systems
  • Organization Charts
  • Decision Trees
  • Network Routing Algorithms

Let G = (V, E) be a graph with n vertices v1, v2, v3, ..., vn. The adjacency matrix of G with respect to this ordered list of vertices is the n × n matrix A(G) = (aij), where aij = 1 if there is an edge from vi to vj and aij = 0 otherwise.
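
As a rough illustration (not part of the original answer), a minimal Java sketch that builds such an adjacency matrix for an undirected graph from an edge list might look like this; the class and method names are illustrative assumptions:

public class AdjacencyMatrixExample {
    // Build the n x n adjacency matrix of an undirected graph,
    // where a[i][j] = 1 if there is an edge between vertex i and vertex j.
    public static int[][] buildAdjacencyMatrix(int n, int[][] edges) {
        int[][] a = new int[n][n];
        for (int[] e : edges) {
            a[e[0]][e[1]] = 1;
            a[e[1]][e[0]] = 1; // symmetric, since the graph is undirected
        }
        return a;
    }

    public static void main(String[] args) {
        // Graph with 4 vertices v0..v3 and edges (v0,v1), (v1,v2), (v2,v3), (v3,v0)
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};
        int[][] a = buildAdjacencyMatrix(4, edges);
        for (int[] row : a) {
            StringBuilder sb = new StringBuilder();
            for (int x : row) sb.append(x).append(' ');
            System.out.println(sb.toString().trim());
        }
    }
}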

Ans: Here’s a Java implementation of a pop operation in a stack:

import java.util.Stack;

public class StackPopExample {
    public static void main(String[] args) {
        // Create a stack
        Stack<Integer> stack = new Stack<>();

        // Push elements into the stack
        stack.push(10);
        stack.push(20);
        stack.push(30);

        // Pop the top element from the stack
        int poppedElement = pop(stack);

        // Output the popped element
        System.out.println("Popped Element: " + poppedElement);
    }

    // Function to pop an element from the stack
    public static int pop(Stack<Integer> stack) {
        if (stack.isEmpty()) {
            System.out.println("Stack is empty!");
            return -1; // Return a default value indicating failure
        }
        return stack.pop(); // Pop and return the top element
    }
}

Output:

Popped Element: 30

Ans: Here is a function for the enqueue operation in a circular queue:

public class CircularQueue {
    private int[] queue;
    private int front;
    private int rear;
    private int size;
    private int capacity;

    public CircularQueue(int capacity) {
        this.capacity = capacity;
        queue = new int[capacity];
        front = -1;
        rear = -1;
        size = 0;
    }

    public void enqueue(int item) {
        if (isFull()) {
            System.out.println("Queue is full. Cannot enqueue " + item);
            return;
        }
        if (isEmpty()) {
            front = 0;
            rear = 0;
        } else {
            rear = (rear + 1) % capacity;
        }
        queue[rear] = item;
        size++;
        System.out.println("Enqueued: " + item);
    }

    public boolean isFull() {
        return size == capacity;
    }

    public boolean isEmpty() {
        return size == 0;
    }

    public static void main(String[] args) {
        CircularQueue cq = new CircularQueue(5);

        // Enqueue elements
        cq.enqueue(10);
        cq.enqueue(20);
        cq.enqueue(30);
        cq.enqueue(40);
        cq.enqueue(50);
        cq.enqueue(60); // This will fail as the queue is full
    }
}

Output:

Enqueued: 10
Enqueued: 20
Enqueued: 30
Enqueued: 40
Enqueued: 50
Queue is full. Cannot enqueue 60

Here’s how a linked list fits the criteria of an abstract data type:

Encapsulation:

Linked lists encapsulate the idea of a sequence of elements where each element is linked to the next one. Users of the linked list do not need to know the internal details of how elements are stored or how pointers are managed.

Operations:

Linked lists typically support operations such as insertion, deletion, traversal, and searching. These operations define the behavior of the linked list ADT and are the primary means through which users interact with the data structure.

Interface:

An abstract data type defines an interface, which specifies the operations that can be performed on the data structure and their semantics. For a linked list, the interface might include methods like insert, delete, get, isEmpty, etc.

Data Hiding:

The internal implementation details of a linked list, such as the structure of nodes and the pointers linking them, are hidden from users of the ADT. Users interact with the linked list through its defined interface without needing to know how the data structure is implemented internally.
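
To make the interface and data-hiding points concrete, here is a minimal sketch of a singly linked list hidden behind an ADT-style interface; the IntList interface and its method names are illustrative assumptions, not taken from the question:

// Illustrative ADT interface: callers see only these operations.
interface IntList {
    void insertFirst(int value);
    boolean delete(int value);
    int get(int index);
    boolean isEmpty();
}

// One possible implementation; the node structure and links stay hidden.
class SinglyLinkedIntList implements IntList {
    private static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    private Node head;

    public void insertFirst(int value) {
        Node n = new Node(value);
        n.next = head;
        head = n;
    }

    public boolean delete(int value) {
        Node prev = null, cur = head;
        while (cur != null && cur.data != value) {
            prev = cur;
            cur = cur.next;
        }
        if (cur == null) return false;     // value not found
        if (prev == null) head = cur.next; // deleting the first node
        else prev.next = cur.next;
        return true;
    }

    public int get(int index) {
        Node cur = head;
        for (int i = 0; i < index && cur != null; i++) cur = cur.next;
        if (cur == null) throw new IndexOutOfBoundsException("index " + index);
        return cur.data;
    }

    public boolean isEmpty() { return head == null; }

    public static void main(String[] args) {
        IntList list = new SinglyLinkedIntList();
        list.insertFirst(20);
        list.insertFirst(10);
        System.out.println(list.get(1));     // 20
        System.out.println(list.delete(10)); // true
    }
}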

Ans: Here is a function for selection sort:

public class SelectionSort {
    public static void selectionSort(int[] arr) {
        int n = arr.length;

        // One by one move boundary of unsorted subarray
        for (int i = 0; i < n - 1; i++) {
            // Find the minimum element in unsorted array
            int minIndex = i;
            for (int j = i + 1; j < n; j++) {
                if (arr[j] < arr[minIndex]) {
                    minIndex = j;
                }
            }

            // Swap the found minimum element with the first element
            int temp = arr[minIndex];
            arr[minIndex] = arr[i];
            arr[i] = temp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {64, 25, 12, 22, 11};
        System.out.println("Array before sorting:");
        printArray(arr);

        selectionSort(arr);

        System.out.println("\nArray after sorting:");
        printArray(arr);
    }

    public static void printArray(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            System.out.print(arr[i] + " ");
        }
        System.out.println();
    }
}

Output:

Array before sorting:
64 25 12 22 11 

Array after sorting:
11 12 22 25 64 

Ans: Here is a recursive function to compute the GCD of two positive integers:

public class GCDRecursive {
    public static void main(String[] args) {
        int num1 = 48;
        int num2 = 18;
        
        int gcd = computeGCD(num1, num2);
        
        System.out.println("GCD of " + num1 + " and " + num2 + " is: " + gcd);
    }
    
    public static int computeGCD(int a, int b) {
        if (b == 0) {
            return a;
        } else {
            return computeGCD(b, a % b);
        }
    }
}

Output:

GCD of 48 and 18 is: 6

Ans: Here’s how to convert an infix expression to its postfix equivalent, step by step:

  • Initialize an empty stack to hold operators.
  • Start scanning the infix expression from left to right.
  • If the current token is an operand (i.e., a letter), add it to the output.
  • If the current token is an opening parenthesis, push it onto the stack.
  • If the current token is a closing parenthesis, pop operators from the stack and add them to the output until an opening parenthesis is encountered. Discard both the opening and closing parentheses.
  • If the current token is an operator, pop operators from the stack and add them to the output while the operator on top of the stack has precedence greater than or equal to that of the current operator. Then push the current operator onto the stack.
  • After scanning the entire expression, pop any remaining operators from the stack and add them to the output.

Let’s apply this algorithm to the given infix expression (AB) + (CD):

  1. Start with an empty stack and an empty output string.
  2. Scan the infix expression:
    • (AB) – Add AB to the output.
    • + – There are no operators on the stack, so push + onto the stack.
    • (CD) – Add CD to the output.
  3. Finish scanning the expression. Pop any remaining operators from the stack and add them to the output. In this case, there is only one operator +, so add it to the output.
  4. The final postfix expression is ABCD+.

So, (AB) + (CD) converted to an equivalent postfix expression is ABCD+.
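
For reference, a minimal Java sketch of this conversion algorithm (supporting single-character operands, parentheses, and the operators +, -, *, /) might look as follows; the class and method names are assumptions for illustration:

import java.util.ArrayDeque;
import java.util.Deque;

public class InfixToPostfix {
    // Precedence of supported operators; a higher number means higher precedence.
    static int precedence(char op) {
        if (op == '+' || op == '-') return 1;
        if (op == '*' || op == '/') return 2;
        return 0;
    }

    static String toPostfix(String infix) {
        StringBuilder output = new StringBuilder();
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : infix.toCharArray()) {
            if (Character.isLetterOrDigit(c)) {
                output.append(c);                        // operands go straight to the output
            } else if (c == '(') {
                stack.push(c);
            } else if (c == ')') {
                while (!stack.isEmpty() && stack.peek() != '(') {
                    output.append(stack.pop());          // pop until the matching '('
                }
                stack.pop();                             // discard the '('
            } else {                                     // an operator
                while (!stack.isEmpty() && stack.peek() != '('
                        && precedence(stack.peek()) >= precedence(c)) {
                    output.append(stack.pop());
                }
                stack.push(c);
            }
        }
        while (!stack.isEmpty()) {
            output.append(stack.pop());                  // flush remaining operators
        }
        return output.toString();
    }

    public static void main(String[] args) {
        System.out.println(toPostfix("(AB)+(CD)"));      // prints ABCD+, as in the worked example
    }
}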

Compare algorithms:

By analyzing the computational complexity of different algorithms solving the same problem, we can determine which algorithm is more efficient in terms of time or space.

Predict performance:

Computational complexity provides insights into how an algorithm will perform on large inputs. It helps us understand whether an algorithm will scale well as the input size grows.

Optimize algorithms:

Understanding the computational complexity of an algorithm allows us to identify potential bottlenecks and areas for optimization. We can focus our efforts on improving the parts of the algorithm that contribute most significantly to its overall complexity.

Choose the right algorithm:

For a given problem, there might be multiple algorithms available. Computational complexity analysis helps us choose the most appropriate algorithm based on the characteristics of the problem and the available resources.

Provide guarantees:

Complexity analysis lets us state worst-case bounds on an algorithm’s running time or memory usage, providing guarantees about how it will behave even on the most unfavorable inputs.

Big O notation, often denoted as O(), is a mathematical notation used in computer science to describe the upper bound or worst-case behavior of an algorithm’s time or space complexity. It provides a simplified way to express how the runtime or memory usage of an algorithm grows with respect to the size of the input.

In Big O notation:

  • O(f(n)) represents the upper bound or worst-case growth rate of an algorithm.
  • ‘n’ represents the size of the input to the algorithm.
  • ‘f(n)’ is a function of ‘n’ that describes the behavior of the algorithm.

For example:

  • O(1) indicates constant time complexity: the runtime or space usage does not depend on the input size.
  • O(log n) indicates logarithmic time complexity: the runtime or space usage grows logarithmically with the input size.
  • O(n) indicates linear time complexity: the runtime or space usage grows linearly with the input size.
  • O(n^2) indicates quadratic time complexity: the runtime or space usage grows quadratically with the input size.
  • O(2^n) indicates exponential time complexity: the runtime or space usage grows exponentially with the input size.
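
As a rough illustration of two of these classes, the sketch below contrasts an O(n) linear search with an O(log n) binary search on a sorted array; it is an assumed example, not tied to a specific question:

public class ComplexityExamples {
    // O(n): in the worst case every element is examined once.
    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i;
        }
        return -1;
    }

    // O(log n): the search range is halved on every iteration (the array must be sorted).
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {11, 12, 22, 25, 64};
        System.out.println(linearSearch(sorted, 25)); // 3
        System.out.println(binarySearch(sorted, 25)); // 3
    }
}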

Here is a function to delete the last node of a doubly linked list:

public class DoublyLinkedList {
    // Node class representing a node in the doubly linked list
    static class Node {
        int data;
        Node prev;
        Node next;

        Node(int data) {
            this.data = data;
            prev = null;
            next = null;
        }
    }

    // Head and tail of the doubly linked list
    Node head, tail;

    // Constructor to initialize an empty doubly linked list
    DoublyLinkedList() {
        head = null;
        tail = null;
    }

    // Method to delete the last node of the doubly linked list
    void deleteLastNode() {
        if (tail == null) {
            System.out.println("List is empty. Nothing to delete.");
            return;
        }

        // If there is only one node in the list
        if (head == tail) {
            head = tail = null;
            return;
        }

        // Otherwise, move tail to the previous node and unlink the old tail
        tail = tail.prev;
        tail.next = null;
    }

    // Method to add a new node to the end of the doubly linked list
    void append(int data) {
        Node newNode = new Node(data);
        if (head == null) {
            head = tail = newNode;
            return;
        }
        tail.next = newNode;
        newNode.prev = tail;
        tail = newNode;
    }

    // Method to display the elements of the doubly linked list
    void display() {
        Node current = head;
        while (current != null) {
            System.out.print(current.data + " ");
            current = current.next;
        }
        System.out.println();
    }

    public static void main(String[] args) {
        DoublyLinkedList dll = new DoublyLinkedList();
        dll.append(10);
        dll.append(20);
        dll.append(30);

        System.out.println("Doubly linked list before deletion:");
        dll.display();

        dll.deleteLastNode();

        System.out.println("\nDoubly linked list after deletion of last node:");
        dll.display();
    }
}

Output:

Doubly linked list before deletion:
10 20 30 

Doubly linked list after deletion of last node:
10 20 

Ans:

Here is a list of some common collision resolution techniques:

Chaining:

Chaining is a simple collision resolution technique where each bucket of the hash table holds a linked list of key-value pairs that hash to the same index. When a collision occurs, the new key-value pair is appended to the linked list at the corresponding bucket.
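
A minimal Java sketch of chaining, using a linked list per bucket, might look like this (the class and method names are illustrative assumptions):

import java.util.LinkedList;

public class ChainingHashTable {
    // Each bucket holds a linked list of keys that hash to the same index.
    private final LinkedList<Integer>[] buckets;

    @SuppressWarnings("unchecked")
    ChainingHashTable(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = new LinkedList<>();
    }

    private int hash(int key) {
        return Math.floorMod(key, buckets.length);
    }

    void insert(int key) {
        buckets[hash(key)].add(key);   // colliding keys are appended to the same chain
    }

    boolean contains(int key) {
        return buckets[hash(key)].contains(key);
    }

    public static void main(String[] args) {
        ChainingHashTable table = new ChainingHashTable(5);
        table.insert(7);
        table.insert(12);   // 12 mod 5 == 2 == 7 mod 5, so it chains with 7
        System.out.println(table.contains(12)); // true
    }
}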

Open Addressing:

  • Linear Probing: In linear probing, when a collision occurs, the hash table is searched sequentially (linearly) for the next available slot. If the slot is occupied, the search continues to the next slot until an empty slot is found. The key-value pair is then inserted into the first empty slot encountered.
  • Quadratic Probing: Quadratic probing uses a quadratic function to search for the next available slot. The search sequence is based on the formula h(k, i) = (h′(k) + c1·i + c2·i²) mod m, where h′(k) is the initial hash value, i is the probe number, c1 and c2 are constants, and m is the size of the hash table.
  • Double Hashing: Double hashing uses a secondary hash function to determine the interval between probes. It calculates the next slot by adding the result of the secondary hash function to the current slot index, resolving collisions by probing in a more deterministic manner.

Robin Hood Hashing:

Robin Hood hashing is an open addressing technique where elements are placed in the hash table, and if a collision occurs, the new element checks if it has traveled less distance from its initial position than the element occupying the current slot. If so, it swaps positions with the occupying element, effectively redistributing elements more evenly.

Linear probing is a simple and straightforward collision resolution technique used in open addressing.

Here’s how it works:

  1. When a collision occurs (i.e., the hash function maps two keys to the same index), linear probing searches the hash table linearly starting from the collision index.
  2. It checks each successive slot until an empty slot is found.
  3. Once an empty slot is found, the new key-value pair is inserted into that slot.

If the hash table becomes full or nearly full, linear probing can suffer from clustering issues, where consecutive slots become occupied, leading to inefficient search performance. To mitigate this, careful selection of the hash function and periodic resizing of the hash table are often necessary. Despite its simplicity, linear probing can be efficient for small or moderately sized hash tables and is widely used in practice.
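
A minimal Java sketch of insertion with linear probing might look like this (the class and method names are illustrative assumptions):

public class LinearProbingHashTable {
    private final Integer[] slots;   // null means the slot is empty
    private int count;

    LinearProbingHashTable(int capacity) {
        slots = new Integer[capacity];
    }

    private int hash(int key) {
        return Math.floorMod(key, slots.length);
    }

    // On a collision, probe the following slots one by one until an empty slot is found.
    boolean insert(int key) {
        if (count == slots.length) return false;   // table is full
        int i = hash(key);
        while (slots[i] != null) {
            i = (i + 1) % slots.length;
        }
        slots[i] = key;
        count++;
        return true;
    }

    public static void main(String[] args) {
        LinearProbingHashTable table = new LinearProbingHashTable(5);
        table.insert(7);    // index 2
        table.insert(12);   // collides with 7 at index 2, lands in index 3
        table.insert(17);   // collides again, lands in index 4
        System.out.println(java.util.Arrays.toString(table.slots));
    }
}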

Here are some reasons why balancing a BST is necessary:

Optimal Search Time:

A balanced BST ensures that the tree is as shallow as possible, which results in optimal search times. In a balanced tree, the height is O(log n), where n is the number of nodes in the tree. This means that the time complexity of search operations is O(log n), which is efficient.

Efficient Insertion and Deletion:

Balancing a BST ensures that insertion and deletion operations remain efficient. When a tree is balanced, insertion and deletion operations maintain the tree’s balanced structure, ensuring that the tree does not become significantly skewed or unbalanced.

Prevents Degeneration:

Without balancing, a BST can degenerate into a skewed tree, where one subtree is significantly larger than the other. This can happen if elements are inserted in sorted order. In a skewed tree, the height becomes O(n), where n is the number of nodes, leading to worst-case time complexity of O(n) for search, insertion, and deletion operations.

Maintains Stability:

A balanced BST maintains stability in the sense that the relative ordering of elements remains unchanged after insertion and deletion operations. Unbalanced trees may disrupt the relative ordering, leading to incorrect search results.

Supports Efficient Range Queries:

A balanced BST supports efficient range queries, where you need to find all elements within a certain range. With a balanced tree, you can efficiently traverse the tree to find elements within the specified range, optimizing the time complexity of range queries.
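
As a small illustration of the degeneration point above, the following sketch (an assumed example) inserts keys in sorted order into a plain, unbalanced BST and reports the resulting height:

public class BstHeightDemo {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Plain (unbalanced) BST insert.
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else root.right = insert(root.right, key);
        return root;
    }

    static int height(Node root) {
        if (root == null) return 0;
        return 1 + Math.max(height(root.left), height(root.right));
    }

    public static void main(String[] args) {
        Node root = null;
        for (int key = 1; key <= 15; key++) {   // sorted insertions
            root = insert(root, key);
        }
        // Without balancing, the tree degenerates into a chain of height 15;
        // a balanced tree over the same 15 keys would have height 4 (about log2 n).
        System.out.println("Height after sorted inserts: " + height(root));
    }
}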

The differences between binary search trees (BSTs) and multiway search trees are as follows:

Feature | Binary Search Trees (BSTs) | Multiway Search Trees
Child Nodes per Parent | At most two child nodes (left and right) | Multiple child nodes per parent
Node Structure | Each node contains a key and two pointers | Each node contains multiple keys
Ordering Property | For any given node, all keys in the left subtree are less than the node’s key, and all keys in the right subtree are greater | Keys within a node are ordered, but there are no strict rules about the arrangement of keys across different nodes
Search Complexity | Average case: O(log n); Worst case: O(n) | Average case: O(log n); Worst case: O(log n)
Balancing | Self-balancing variants such as AVL and Red-Black trees are often used to ensure optimal height and performance | Balancing is typically not enforced, but some variants may implement balancing mechanisms
Space Efficiency | Typically more memory-efficient, due to the binary structure | Can be less memory-efficient, especially with large branching factors

Ans: Minimum spanning trees (MSTs) have various applications across different domains due to their ability to efficiently connect all vertices of a weighted graph while minimizing the total edge weight.

Here are some common applications of minimum spanning trees:

  1. Network Design:
    • Telecommunication Networks: MSTs can be used to design efficient communication networks, such as telephone or internet networks, by connecting all locations with the minimum total cable length or cost.
    • Transportation Networks: MSTs help design transportation networks, such as highways or railways, to minimize construction costs while ensuring connectivity between all locations.
  2. Cluster Analysis:
    • Data Clustering: In data analysis, MSTs can be used for clustering similar data points together. By treating each data point as a vertex and the distance between points as edge weights, an MST can identify clusters of closely related data points.
  3. Approximation Algorithms:
    • Traveling Salesman Problem (TSP): MSTs are used as a component in approximation algorithms for the TSP. By repeatedly adding the shortest edge that does not create a cycle, the algorithm constructs a minimum spanning tree, which is then modified to form a tour covering all vertices with minimal total distance.
  4. Resource Management:
    • Water Supply Networks: MSTs can be employed to design water distribution networks, ensuring efficient water flow to all areas while minimizing pipeline construction costs.
    • Electrical Power Grids: MSTs aid in designing power transmission networks, optimizing the placement of power lines to minimize energy losses and construction expenses.
  5. Spanning Tree Protocols:
    • Computer Networks: In computer networks, MSTs are used by spanning tree protocols (e.g., IEEE 802.1D, Rapid Spanning Tree Protocol) to prevent loops in network topologies. These protocols use MSTs to create a loop-free topology, ensuring reliable and efficient data transmission.

Here’s an explanation of how Kruskal’s algorithm differs from Prim’s algorithm (a code sketch of Kruskal’s algorithm follows the comparison):

  1. Algorithm Approach:
    • Kruskal’s algorithm is an edge-based algorithm. It selects edges one by one based on their weights.
    • Prim’s algorithm is a vertex-based algorithm. It grows the minimum spanning tree from an arbitrary starting vertex by adding the nearest vertex at each step.
  2. Initialization:
    • Kruskal’s algorithm initializes by sorting all edges in non-decreasing order of weight.
    • Prim’s algorithm initializes by starting with an arbitrary vertex.
  3. Data Structures:
    • Kruskal’s algorithm typically uses Disjoint Set Union (DSU) data structure to efficiently detect cycles.
    • Prim’s algorithm typically uses a Priority Queue or Min Heap data structure to select the minimum weight edge incident to the growing tree.
  4. Edge Selection:
    • Kruskal’s algorithm selects edges in non-decreasing order of weight and adds them to the tree if they don’t create a cycle.
    • Prim’s algorithm selects edges incident to the growing tree based on their weights.
  5. Cycle Detection:
    • Kruskal’s algorithm uses DSU to detect and avoid cycles during edge selection.
    • Prim’s algorithm does not need explicit cycle detection as it grows the tree vertex by vertex.
  6. Time Complexity:
    • Kruskal’s algorithm has a time complexity of O(E log E), where E is the number of edges.
    • Prim’s algorithm has a time complexity of O(E log V), where V is the number of vertices.
  7. Space Complexity:
    • Kruskal’s algorithm typically has a space complexity of O(V+E).
    • Prim’s algorithm typically has a space complexity of O(V).
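
For illustration, here is a compact Java sketch of Kruskal’s algorithm using a simple union-find (disjoint-set) structure; the edge representation and names are assumptions, not part of the original answer:

import java.util.Arrays;

public class KruskalSketch {
    // Simple union-find with path compression.
    static int[] parent;

    static int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    static boolean union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return false;   // adding this edge would create a cycle
        parent[ra] = rb;
        return true;
    }

    // edges[i] = {u, v, weight}; returns the total weight of the MST.
    static int kruskal(int vertices, int[][] edges) {
        parent = new int[vertices];
        for (int i = 0; i < vertices; i++) parent[i] = i;

        // Sort edges in non-decreasing order of weight.
        Arrays.sort(edges, (e1, e2) -> Integer.compare(e1[2], e2[2]));

        int total = 0, used = 0;
        for (int[] e : edges) {
            if (union(e[0], e[1])) {   // keep the edge only if it joins two components
                total += e[2];
                if (++used == vertices - 1) break;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = {
            {0, 1, 4}, {0, 2, 3}, {1, 2, 1}, {1, 3, 2}, {2, 3, 4}
        };
        System.out.println("MST weight: " + kruskal(4, edges));  // 6
    }
}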