BIM 5th Sem-Exam Question Solution 2024 – Artificial Intelligence
Brief Answer Questions:
1.) How do you define right behavior?
Right behavior in AI refers to actions that are ethically sound, socially acceptable, and aligned with predefined goals and values.
2.) Do machines perform biased behavior?
Yes, machines can exhibit biased behavior if they are trained on biased data or designed with flawed algorithms, leading to unfair or discriminatory outcomes.
3.) What is the purpose of the Turing Test?
The Turing Test is designed to evaluate whether a machine can exhibit intelligent behavior indistinguishable from that of a human, thereby testing its ability to “act humanly.”
4.) How does economics help in AI?
Economics contributes to AI by providing models for decision-making, optimization, utility maximization, and game theory, which are essential in intelligent agent behavior.
5.) Write the limitations of a goal-based agent.
Goal-based agents do not consider the quality or efficiency of the path to the goal; they focus only on achieving the goal, possibly ignoring better alternatives.
6.) Which agent will consider the degree of happiness?
The utility-based agent considers the degree of happiness (utility) by evaluating how desirable different outcomes are, rather than just achieving goals.
7.) Differentiate between training and testing in machine learning.
- Training is the process of feeding the model data to learn patterns.
- Testing evaluates the model’s performance on unseen data to check generalization.
8.) Define recurrent neural networks.
Recurrent Neural Networks (RNNs) are a type of neural network where connections between nodes form cycles, allowing them to retain memory and process sequential data, such as time series or text.
9.) What is the task of an activation function?
An activation function determines whether a neuron should be activated or not, introducing non-linearity into the neural network, enabling it to learn complex patterns.
10.) What is machine translation?
Machine translation is the application of AI to automatically translate text or speech from one language to another, often using natural language processing (NLP) techniques.
Short Answer Questions:
11.) Discuss the components of an Expert System.
An Expert System is a computer-based AI system designed to simulate human expertise in a particular field.
1.) Knowledge Base:
The Knowledge Base is a collection of facts, rules, and heuristics that represent domain-specific expertise.
- The Knowledge Base is continuously updated with new data and expert insights.
- Example: A medical expert system stores diseases, symptoms, and treatments.
2.) Inference Engine:
The Inference Engine is the reasoning mechanism that applies logical rules to the knowledge base to draw conclusions and make decisions based on user input.
- It evaluates user input, applies rules from the knowledge base, and generates responses.
- It uses two types of reasoning: Forward Chaining and Backward Chaining.
- Example: Diagnosing a patient based on symptoms provided.
3.) User Interface:
The User Interface allows users to interact with the expert system by inputting data and receiving explanations.
- The UI ensures that non-experts can easily ask questions, input data, and understand system recommendations.
- It can be text-based (command-line), graphical (GUIs), or voice-based (chatbots, virtual assistants).
4.) Working Memory:
The Working Memory is a temporary storage space that holds data and intermediate results while the system processes a query.
- It stores facts and data related to the current problem, ensuring smooth decision-making.
- It holds user input and intermediate steps used by the Inference Engine before reaching a conclusion.
- Once a case is solved, the Working Memory is cleared to process new queries.
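To make the interplay of these components concrete, here is a minimal sketch in Python of an Inference Engine forward-chaining over a Knowledge Base, with a set acting as the Working Memory. The rules and fact names are made-up illustrations, not a real medical knowledge base:

```python
# Minimal forward-chaining inference: a rule fires when all of its
# conditions are in working memory, adding its conclusion as a new fact.
rules = [
    ({"fever", "cough"}, "flu_suspected"),         # illustrative rules,
    ({"flu_suspected", "fatigue"}, "see_doctor"),  # not a real medical KB
]

def forward_chain(facts):
    working_memory = set(facts)   # temporary store for the current query
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= working_memory and conclusion not in working_memory:
                working_memory.add(conclusion)   # the rule fires
                changed = True
    return working_memory

print(sorted(forward_chain({"fever", "cough", "fatigue"})))
# ['cough', 'fatigue', 'fever', 'flu_suspected', 'see_doctor']
```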
12.) Explain the structure of a learning agent.
A learning agent is an intelligent system that can improve its performance over time by learning from experience.
- Its structure consists of several key components that work together to sense, act, and learn from the environment.
a.) Learning Element
- Responsible for improving the agent’s performance by learning from experiences and feedback.
- It modifies the behavior of the performance element using data from the critic.
- Example: Learning new strategies based on past performance.
b.) Performance Element
- Chooses actions to be taken based on the current percept and learned knowledge.
- This is the part of the agent that interacts with the environment.
- It uses decision-making rules or policies created by the learning element.
c.) Critic
- Evaluates the agent’s behavior and provides feedback on the quality of its actions.
- It compares the performance against some standard or objective.
- Helps the learning element to know whether the action was good or needs improvement.
d.) Problem Generator
- Suggests new experiences to help the agent learn.
- Encourages exploration of unknown situations to improve learning.
- Helps avoid getting stuck in local optima by exploring alternatives.
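A schematic sketch of how these four components could fit together in code. All class and method names here are hypothetical, and the environment is a toy two-action problem chosen only for illustration:

```python
# Schematic learning-agent loop: the critic scores each action, and the
# learning element nudges the policy that the performance element uses.
import random

class LearningAgent:
    def __init__(self, actions):
        self.q = {a: 0.0 for a in actions}   # learned value per action

    def performance_element(self):
        # choose the currently best-valued action (exploit learned knowledge)
        return max(self.q, key=self.q.get)

    def critic(self, reward):
        # feedback on action quality; here simply the environment's reward
        return reward

    def learning_element(self, action, feedback, lr=0.1):
        # move the action's learned value toward the observed feedback
        self.q[action] += lr * (feedback - self.q[action])

    def problem_generator(self, explore=0.2):
        # occasionally suggest a random action to explore alternatives
        return random.choice(list(self.q)) if random.random() < explore else None

agent = LearningAgent(["left", "right"])
for step in range(100):
    action = agent.problem_generator() or agent.performance_element()
    reward = 1.0 if action == "right" else 0.0   # toy environment
    agent.learning_element(action, agent.critic(reward))
print(agent.q)   # "right" ends up valued higher
```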
13.) Discuss the limitations of Depth Limited Search.
Depth-Limited Search (DLS) is a variant of Depth-First Search (DFS) that imposes a fixed limit on the depth of the search tree.
While it helps prevent infinite descent in search spaces of great or infinite depth, it has several limitations:
- Incomplete if the solution is beyond the depth limit: If the solution exists at a level deeper than the specified limit, the algorithm will fail to find it.
- Difficulty in selecting an appropriate depth limit: Choosing a depth limit that is too small leads to incompleteness, while too large a limit reintroduces DFS drawbacks.
- Non-optimal solutions: If multiple solutions exist, DLS does not guarantee the optimal one will be found first.
- Redundant exploration: when DLS is rerun with progressively larger limits (as in Iterative Deepening Search), nodes near the root are re-expanded on every iteration.
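A minimal recursive sketch of DLS in Python that distinguishes a depth cutoff (a solution may lie deeper) from genuine failure. The toy graph and return convention are illustrative assumptions:

```python
# Depth-Limited Search sketch: returns the path to the goal, "cutoff" if
# the limit was hit (a deeper solution may exist), or None on real failure.
def dls(node, goal, limit, neighbors):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"            # limit reached; deeper solutions missed
    cutoff_seen = False
    for child in neighbors(node):
        result = dls(child, goal, limit - 1, neighbors)
        if result == "cutoff":
            cutoff_seen = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_seen else None

# Toy graph (a tree here, so repeated-state handling is not shown):
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dls("A", "D", 1, lambda n: graph[n]))  # 'cutoff' — D lies at depth 2
print(dls("A", "D", 2, lambda n: graph[n]))  # ['A', 'B', 'D']
```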
14.) Conceptual dependency representation:
“Ram gave a book to Shyam”:
GIVE (ACTOR: Ram, OBJECT: book, RECEIVER: Shyam)
“Hari opened the door”:
OPEN (ACTOR: Hari, OBJECT: door)
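These slot-filler structures translate directly into a small frame-like record. A minimal sketch, where the class and field names simply mirror the ACTOR/OBJECT/RECEIVER slots above:

```python
# Frame-like records for the two conceptual dependency structures above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Act:
    primitive: str                   # the action primitive, e.g. GIVE or OPEN
    actor: str
    obj: str
    receiver: Optional[str] = None   # only transfer acts fill this slot

give = Act("GIVE", actor="Ram", obj="book", receiver="Shyam")
opened = Act("OPEN", actor="Hari", obj="door")
print(give, opened, sep="\n")
```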
15.) What is GAN? When do you prefer deep learning over classical approaches?
A Generative Adversarial Network (GAN) is a deep learning framework consisting of two competing neural networks:
- Generator – Creates synthetic data (e.g., fake images, videos, or text).
- Discriminator – Evaluates whether the generated data is real or fake.
Deep learning is preferred over classical approaches when:
- The data is large-scale, high-dimensional, and unstructured (like images, audio, and video).
- Feature extraction is complex or requires automatic feature learning.
- The problem involves complex pattern recognition, like natural language processing, speech recognition, or image classification.
- The system must improve end-to-end performance without hand-crafted rules.
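As an illustration of the adversarial setup, here is a minimal GAN training loop sketched in PyTorch. The framework choice, network sizes, and the 1-D Gaussian “real” data are all assumptions made only for the example:

```python
# Minimal GAN sketch on 1-D Gaussian "real" data (illustrative, not tuned).
import torch
import torch.nn as nn

# Generator maps 8-D noise to a 1-D sample; Discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" distribution: N(2, 0.5)
    fake = G(torch.randn(64, 8))            # generator's synthetic samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool D into scoring fakes as real (target 1).
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(fake.mean().item())  # drifts toward 2.0 as G learns the distribution
```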
16.) Differentiate between single layer and multilayer perceptrons.

| Basis | Single-Layer Perceptron | Multilayer Perceptron (MLP) |
| --- | --- | --- |
| Structure | Inputs connect directly to a single output layer; no hidden layers. | One or more hidden layers between input and output. |
| Problems handled | Only linearly separable problems (e.g., AND, OR). | Non-linearly separable problems as well (e.g., XOR). |
| Activation function | Typically a step (threshold) function. | Non-linear functions such as sigmoid, tanh, or ReLU. |
| Training algorithm | Perceptron learning rule. | Backpropagation (gradient descent). |
| Complexity | Simple and fast to train. | More parameters; computationally heavier. |
Long Answer Questions:
18.) Use resolution to show that the hypotheses “It is not raining or Biyana has her umbrella”, “Biyana does not have her umbrella or she does not get wet”, and “It is raining or Biyana does not get wet” imply “Biyana does not get wet.”
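Answer: Write R for “It is raining”, U for “Biyana has her umbrella”, and W for “Biyana gets wet”. The premises in clause form are:
- (1) ¬R ∨ U
- (2) ¬U ∨ ¬W
- (3) R ∨ ¬W
To prove ¬W by resolution refutation, add the negated conclusion as clause (4) W, then resolve:
- Resolve (1) and (3) on R: (5) U ∨ ¬W
- Resolve (2) and (5) on U: (6) ¬W
- Resolve (4) and (6) on W: the empty clause □
Deriving the empty clause shows that the premises together with W are unsatisfiable, so the premises imply ¬W, i.e., “Biyana does not get wet.”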
19.) Define NLU and NLG. Describe the significance of pragmatic analysis in NLP with example.
Natural Language Understanding (NLU) is a subfield of Natural Language Processing (NLP) that focuses on enabling machines to comprehend and interpret human language in a meaningful way.
Natural Language Generation (NLG) is another subfield of NLP concerned with the automatic production of human-like text from structured data or machine output.
Pragmatic Analysis is the process of interpreting language based on context and real-world knowledge. It focuses on understanding what the speaker actually means, rather than just the literal meaning of words.
Why it is Important:
- Resolves ambiguities that cannot be solved through syntax or semantics alone.
- Helps in interpreting indirect speech acts, sarcasm, implications, or cultural references.
- Essential for building context-aware dialogue systems and chatbots.
Example of Pragmatic Analysis:
- “Can you pass the salt?”
- Literal meaning: A question about the ability to pass the salt.
- Pragmatic meaning: A polite request to pass the salt.
A machine that lacks pragmatic understanding might misinterpret this as a simple yes/no question instead of an actual request.
20.) Discuss the mathematical model of an Artificial Neural Network (ANN). Explain about Perceptron learning algorithm.
An Artificial Neural Network (ANN) is a computational model inspired by the structure and functioning of the human brain.
- It consists of interconnected nodes (neurons) organized in layers that process information to make predictions or classifications.
Mathematical Model of Artificial Neural Networks:
Each neuron in an ANN performs a weighted sum of the input values and applies an activation function to produce an output.
Mathematical Formula:
y = f(∑ᵢ (wᵢ · xᵢ) + b)
Where:
- wᵢ = Weights (importance of each input).
- xᵢ = Inputs (data fed into the neuron).
- b = Bias (adjustment factor to optimize learning).
- f = Activation function (determines the neuron’s output).
- y = Output (prediction or classification result).
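A single neuron implementing this formula, sketched in Python with NumPy. The particular weights, inputs, and sigmoid activation are illustrative choices:

```python
# A single neuron as in the formula above: y = f(sum(w_i * x_i) + b)
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # activation function f

x = np.array([0.5, 0.3, 0.2])         # inputs
w = np.array([0.4, 0.7, -0.2])        # weights (illustrative values)
b = 0.1                               # bias

y = sigmoid(np.dot(w, x) + b)         # weighted sum, then activation
print(y)                              # ≈ 0.62
```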
Perceptron Learning is a supervised learning algorithm used in binary classification tasks. It updates the weights of a single-layer perceptron based on classification errors.
How it Works:
- The perceptron takes inputs, applies weights, and produces an output using an activation function (e.g., step function).
- If the output is incorrect, the algorithm adjusts the weights using the update rule:
wᵢ(new) = wᵢ(old) + η(d − y)xᵢ
where:
- d = desired output
- y = actual output
- x = input feature
- η = learning rate
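A short sketch of the perceptron learning rule in Python, trained on the AND function, a linearly separable toy problem chosen only for illustration:

```python
# Perceptron learning on AND, using the update rule above:
# w_i <- w_i + eta * (d - y) * x_i, with a step activation.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
D = np.array([0, 0, 0, 1])                        # desired outputs (AND)
w = np.zeros(2)
b = 0.0
eta = 0.1                                         # learning rate

for epoch in range(20):
    for x, d in zip(X, D):
        y = 1 if np.dot(w, x) + b > 0 else 0      # step activation
        w += eta * (d - y) * x                    # weight update
        b += eta * (d - y)                        # bias update

print(w, b)   # converges because AND is linearly separable
```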
Comprehensive Answer Questions:
21.) Difference between DFS and BFS. How does A* search use heuristic information to expand the node? Explain.

| Basis | Depth-First Search (DFS) | Breadth-First Search (BFS) |
| --- | --- | --- |
| Strategy | Explores as deep as possible along one branch before backtracking. | Explores all nodes at one depth level before moving to the next. |
| Data structure | Stack (LIFO). | Queue (FIFO). |
| Completeness | Not complete in infinite-depth spaces. | Complete if the branching factor is finite. |
| Optimality | Not optimal. | Optimal when all step costs are equal. |
| Time complexity | O(b^m), where m = maximum depth. | O(b^d), where d = depth of shallowest solution. |
| Space complexity | O(bm). | O(b^d). |
A* Search Algorithm is an informed search algorithm that uses both the actual cost from the start node and an estimated cost to the goal node to decide which node to explore next.
Formula Used: f(n)=g(n)+h(n)
- g(n) = Actual cost from the start node to the current node n.
- h(n) = Heuristic estimate of the cost from node n to the goal.
- f(n) = Estimated total cost of the cheapest solution through node n.
How it Expands Nodes:
- A* maintains a priority queue of nodes to be explored.
- At each step, it selects the node with the lowest f(n) value.
- The algorithm uses heuristic information h(n) to prioritize nodes that are likely closer to the goal.
- This combination of actual cost and estimated future cost helps A* find the optimal path efficiently.
- If the heuristic h(n) is admissible (never overestimates) and consistent, A* guarantees an optimal solution.
➤ Example:
If you’re using A* to find the shortest route on a map:
- g(n) = total distance traveled so far.
- h(n) = straight-line distance (as the crow flies) to the destination.
- A* chooses the next city to visit based on the smallest total of actual + estimated distance.
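A compact A* sketch in Python using a priority queue keyed on f(n). The toy graph and heuristic values are illustrative assumptions (chosen so the heuristic never overestimates, keeping it admissible):

```python
# A* sketch: a priority queue ordered by f(n) = g(n) + h(n).
import heapq

def a_star(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest known g per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # lowest f(n) first
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy map with straight-line-style heuristic values (illustrative numbers):
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h_vals = {"S": 4, "A": 4, "B": 1, "G": 0}
print(a_star("S", "G", lambda n: graph[n], lambda n: h_vals[n]))
# (['S', 'B', 'G'], 5)
```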
22.) Define fuzzy set and how do you define membership in a fuzzy set? After your yearly checkup, the doctor has bad news and good news. The bad news is that you tested positive for COVID, and the test is 99% accurate. The good news is that this is a rare disease, striking only one in 20,000 people. Why is it good news that the disease is rare? What are the chances that you actually have the disease?
A fuzzy set is a set where elements have degrees of membership rather than simply belonging or not belonging to the set (as in classical sets).
- In a fuzzy set, each element has a membership value between 0 and 1, indicating how strongly that element belongs to the set.
In fuzzy logic, membership is defined by a membership function (μ), which maps elements from a universe of discourse to a real number between 0 and 1.
- This value represents the degree of truth or degree of belonging of the element to the fuzzy set.
For example:
- μ(tall(180 cm)) = 0.8 → means 180 cm has 80% membership in the “tall” set.
- μ(tall(150 cm)) = 0.1 → means 150 cm has 10% membership in the “tall” set.
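One possible membership function for “tall” is a simple linear ramp. The breakpoints below are illustrative, chosen only so that the two example values above come out right:

```python
# A piecewise-linear membership function for the fuzzy set "tall": a ramp
# whose (illustrative) breakpoints reproduce the examples above.
def mu_tall(height_cm: float) -> float:
    return min(1.0, max(0.0, (height_cm - 145.71) / 42.86))

print(round(mu_tall(150), 2))      # ≈ 0.1  (10% membership)
print(round(mu_tall(180), 2))      # ≈ 0.8  (80% membership)
print(mu_tall(140), mu_tall(200))  # 0.0 (not tall), 1.0 (fully tall)
```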
It is good news that the disease is rare because, even with a positive test, the extremely low prevalence of the disease means a positive result is far more likely to be a false positive than a true one. Applying Bayes’ theorem (worked out below), the chance that you actually have the disease given a positive test is only about 0.5%, roughly 1 in 200.
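The arithmetic, assuming “99% accurate” means both the true-positive rate and the true-negative rate are 0.99 (a common reading of the question):

```python
# Bayes' theorem: P(disease | positive) = P(pos|disease) * P(disease) / P(pos)
p_disease = 1 / 20_000            # prior: the disease strikes 1 in 20,000
p_pos_given_disease = 0.99        # sensitivity (true positive rate)
p_pos_given_healthy = 0.01        # false positive rate (1 - specificity)

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.4f}")  # ≈ 0.0049, ~0.5%
```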
