Unit 2 - Practice Quiz

INT428 60 Questions

1 What does the "state space" in AI problem solving represent?

Problem formulation and state space search Easy
A. The cost of the solution path
B. The set of all possible states reachable from the initial state
C. The algorithm used to solve the problem
D. Only the final goal state

2 Which of the following is a key characteristic of uninformed search algorithms?

Introduction to uninformed search Easy
A. They have no information about the cost or distance to the goal
B. They use problem-specific knowledge to find a solution
C. They are also known as heuristic search algorithms
D. They always find the optimal solution

3 What is a heuristic function, denoted as h(n), used for in a search algorithm?

Heuristic search Easy
A. To estimate the cost of the cheapest path from node n to a goal state
B. To guarantee that the search will find a solution
C. To store the visited nodes in the search tree
D. To calculate the exact cost of the path from the start node to node n

4 The A* search algorithm evaluates nodes by combining which two functions?

Core search algorithms: A* search Easy
A. g(n) and h(n)
B. The depth of the node and the branching factor
C. g(n) only
D. h(n) only

5 In a Constraint Satisfaction Problem (CSP), what is the primary goal?

Constraint satisfaction Easy
A. To find an assignment of values to variables that satisfies all given constraints
B. To find a state that maximizes a certain objective function
C. To find the shortest path between two nodes
D. To explore the entire state space

6 What is the fundamental idea behind Gradient Descent?

Basics of optimization: gradient-based methods and metaheuristics Easy
A. To take iterative steps in the opposite direction of the function's gradient
B. To explore all possible solutions to find the absolute best one
C. To randomly jump to different points in the search space
D. To divide the problem into smaller, simpler subproblems
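
The iterative step rule in option A can be sketched in a few lines of Python (the quadratic objective, learning rate, and step count below are illustrative choices, not from the quiz):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Iteratively step in the direction opposite the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)   # move against the slope
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With this convex objective the iterate approaches the minimizer x = 3; on non-convex functions the same rule can stall in a local minimum, which motivates the metaheuristics in later questions.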

7 In reinforcement learning, what does an "agent" receive from the "environment" as feedback for its actions?

Introduction to reinforcement learning for sequential decision problems Easy
A. A new set of rules
B. A reward or a penalty
C. The source code of the environment
D. A heuristic function

8 Which of the following is a standard metric for evaluating the performance of a search algorithm?

AI problem design: Solution metrics Easy
A. Completeness (does it always find a solution if one exists?)
B. The developer's experience level
C. The number of lines of code
D. The programming language used

9 Which component of a problem formulation defines the starting point for the search agent?

Problem formulation and state space search Easy
A. The action function
B. The goal test
C. The path cost
D. The initial state

10 Which uninformed search algorithm explores the deepest node in the current frontier of the search tree first?

Introduction to uninformed search Easy
A. Breadth-First Search (BFS)
B. Depth-First Search (DFS)
C. Uniform-Cost Search
D. A* Search
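
The contrast in question 10 comes down to frontier discipline: a LIFO stack yields DFS (deepest node first), a FIFO queue yields BFS. A minimal sketch, with a small example graph of my own:

```python
from collections import deque

def search(start, goal, adj, use_stack):
    """Generic uninformed search. The ONLY difference between DFS and
    BFS here is the frontier discipline: LIFO (stack) vs FIFO (queue)."""
    frontier, visited, order = deque([start]), {start}, []
    while frontier:
        node = frontier.pop() if use_stack else frontier.popleft()
        order.append(node)
        if node == goal:
            return order
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    return order

adj = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E']}
```

On this graph, DFS from 'A' dives down one branch ('A', 'C', 'E'), while BFS expands level by level ('A', 'B', 'C', ...).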

11 Greedy Best-First Search is a type of Best-First Search that expands the node that is...?

Core search algorithms: Best-first search Easy
A. Chosen randomly from the frontier
B. The deepest in the search tree
C. Closest to the initial state
D. Estimated to be closest to the goal state

12 What does it mean for a heuristic to be "admissible"?

Heuristic search Easy
A. It always calculates the exact cost to the goal
B. It is very complex to compute
C. It never overestimates the true cost to reach the goal
D. It guarantees the search will be fast

13 Which of the following is a classic example of a Constraint Satisfaction Problem (CSP)?

Constraint satisfaction Easy
A. Sorting a list of numbers
B. The 8-puzzle problem
C. Finding the shortest path in a graph
D. Map coloring

14 What is a "metaheuristic"?

Basics of optimization: gradient-based methods and metaheuristics Easy
A. A specific, low-level algorithm for a single problem
B. A method that guarantees finding the globally optimal solution
C. A high-level strategy that guides other heuristics to find a good solution
D. A mathematical proof of an algorithm's correctness

15 In the context of search algorithms, what does "space complexity" generally measure?

AI problem design: Complexity Easy
A. The time it takes a programmer to write the code
B. The number of nodes generated during the search
C. The length of the solution path found
D. The amount of memory the algorithm requires

16 For a search problem like finding the best route on a map, what is the most fundamental data required to define the problem?

AI problem design: Data requirements Easy
A. User reviews of different locations
B. The history of all previous trips taken
C. The current weather conditions
D. A graph of locations (nodes) and roads (edges) with costs

17 If the heuristic function h(n) in an A* search is always zero, which algorithm does A* search effectively become?

Core search algorithms: A* search Easy
A. Greedy Best-First Search
B. Hill Climbing
C. Depth-First Search
D. Uniform-Cost Search

18 The main goal of a reinforcement learning agent is typically to learn a...?

Introduction to reinforcement learning for sequential decision problems Easy
A. Policy that maximizes cumulative reward
B. Heuristic function for A* search
C. Map of the environment
D. Set of constraints for a CSP

19 What does "optimality" mean as a metric for a search algorithm?

AI problem design: Solution metrics Easy
A. The algorithm uses the least amount of memory
B. The algorithm is guaranteed to find the best possible solution
C. The algorithm finds a solution very quickly
D. The algorithm can solve any type of problem

20 What is the "goal test" in a search problem?

Problem formulation and state space search Easy
A. A function that determines if a given state is a goal state
B. A heuristic that estimates the distance to the goal
C. The final step in the search algorithm
D. The cost of reaching the goal

21 If you are formulating the 8-puzzle problem and change the state representation from a 3x3 grid of integers to a single string of 9 characters, how does this primarily affect the problem formulation?

Problem formulation and state space search Medium
A. It alters the set of possible actions (e.g., 'move blank left').
B. It changes the goal test, as the target configuration is now a string.
C. It fundamentally changes the size of the abstract state space.
D. It is an implementation detail that does not change the abstract problem formulation (states, actions, goal test).

22 For a state space graph with a very large (or infinite) depth but a solution at a shallow depth d, why is Breadth-First Search (BFS) often preferred over Depth-First Search (DFS)?

Introduction to uninformed search Medium
A. DFS is not complete and might get stuck in an infinite branch, never finding the shallow solution.
B. DFS is not optimal, whereas BFS always finds the best solution.
C. BFS is faster because it explores fewer nodes.
D. BFS has better space complexity in this scenario.

23 In A* search, the evaluation function is f(n) = g(n) + h(n). If we use an admissible heuristic where h(n) = 0 for all nodes n, which search algorithm does A* effectively become?

Core search algorithms: Best-first search, A* search, Heuristic search Medium
A. Greedy Best-First Search
B. Uniform Cost Search (UCS)
C. Breadth-First Search (BFS)
D. Depth-First Search (DFS)

24 An A* search is being performed. The path to node N has a cost g(N). The heuristic estimate to the goal is h(N). A child of N, node M, is generated with a step cost of 5. The heuristic for M is h(M). What is the f-value for node M?

Core search algorithms: Best-first search, A* search, Heuristic search Medium
A. 14
B. 21
C. 16
D. 19
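
The quiz's concrete values for g(N) and the two heuristics did not survive formatting, so the numbers below are assumed placeholders; the bookkeeping f(M) = g(M) + h(M) = g(N) + step + h(M) is the point:

```python
# Assumed placeholder values (the quiz's originals were lost in formatting):
g_N = 10    # g(N): path cost from the start to N
h_N = 4     # h(N): heuristic at N (NOT used in M's f-value)
step = 5    # cost of the edge N -> M (given in the question)
h_M = 6     # h(M): heuristic at M

g_M = g_N + step    # the path to M runs through N
f_M = g_M + h_M     # A* evaluation function: f(M) = g(M) + h(M)
```

Under these assumed numbers f(M) = 21; typical wrong answers come from mixing terms, e.g. g(N) + h(M) or g(M) + h(N).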

25 In a Constraint Satisfaction Problem (CSP), what is the primary benefit of using the Minimum Remaining Values (MRV) heuristic for variable selection?

Constraint satisfaction Medium
A. It tends to choose a variable that is most likely to cause a failure early, thereby pruning large portions of the search tree.
B. It guarantees that a solution will be found without any backtracking.
C. It always selects the variable with the simplest constraints first.
D. It ensures the solution found is optimal according to some cost function.
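
Option A is the "fail-first" principle, which a few lines of Python make concrete (the domain dictionary is a made-up map-coloring fragment, not from the quiz):

```python
# MRV: among unassigned variables, pick the one with the fewest
# remaining legal values, so likely failures surface early.
def select_mrv(domains, assignment):
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

domains = {'WA': ['red'], 'NT': ['red', 'green'], 'SA': ['red', 'green', 'blue']}
```

Here 'WA' is chosen first: with only one legal value left, assigning it (or discovering it is impossible) prunes the search tree soonest.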

26 A key challenge in using gradient-based optimization methods like Gradient Descent is getting stuck in a local minimum. Which of the following techniques is specifically designed to help escape local minima?

Basics of optimization: gradient-based methods and metaheuristics Medium
A. Using a larger batch size
B. Simulated Annealing
C. Decreasing the learning rate
D. Normalizing the input features
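
Option B works because simulated annealing sometimes accepts uphill moves. A hedged one-dimensional sketch (the objective, temperature schedule, and step size are all illustrative assumptions):

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, alpha=0.99, steps=2000):
    """Minimize f. A worse candidate is accepted with probability
    exp(-delta / T), so the search can climb out of local minima while
    T is high; as T cools, behavior becomes increasingly greedy."""
    x, t = x0, temp
    for _ in range(steps):
        cand = x + random.uniform(-1.0, 1.0)      # random local move
        delta = f(cand) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t *= alpha                                # geometric cooling
    return x
```

Plain gradient descent lacks the probabilistic uphill step, which is exactly what lets this method escape a local minimum.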

27 You are building an email spam detector. The dataset is highly imbalanced, with 99% of emails being non-spam. Why is 'accuracy' a poor metric for evaluating your model's performance in this case?

AI problem design: Complexity, Data requirements, Solution metrics Medium
A. Accuracy is too computationally expensive to calculate for large datasets.
B. Accuracy does not account for the speed of the classification.
C. Accuracy cannot be used for binary classification problems.
D. A naive model that classifies all emails as non-spam would achieve 99% accuracy but be useless.

28 In a Markov Decision Process (MDP), what does the 'Markov Property' imply about the states?

Introduction to reinforcement learning for sequential decision problems Medium
A. Every state must be reachable from every other state.
B. The future is independent of the past given the present; the current state captures all relevant information.
C. The reward for reaching a state is independent of the action taken.
D. The number of possible actions is the same in every state.

29 What is the primary risk of using Greedy Best-First Search for a route-finding problem?

Core search algorithms: Best-first search, A* search, Heuristic search Medium
A. It often finds a path that is significantly longer (and thus suboptimal) than the best possible path.
B. It has a much higher space complexity than A* search.
C. It is not guaranteed to find a solution even if one exists.
D. It cannot be used if the graph contains cycles.

30 What is the main difference between applying Forward Checking and maintaining Arc Consistency (e.g., using the AC-3 algorithm) in a CSP search?

Constraint satisfaction Medium
A. Arc Consistency guarantees finding a solution, while Forward Checking does not.
B. Forward Checking can solve any CSP without backtracking, unlike Arc Consistency.
C. Forward Checking only checks constraints between the current variable and its neighbors, while Arc Consistency propagates constraints further.
D. Arc Consistency is applied before the search begins, while Forward Checking is used during the search.

31 Under what condition is Uniform Cost Search (UCS) guaranteed to be optimal?

Introduction to uninformed search Medium
A. When all path costs are non-negative, and step costs are greater than some small positive constant ε.
B. When all step costs are equal.
C. When the state space is a tree.
D. When the heuristic used is admissible.

32 In a Genetic Algorithm, what is the combined purpose of the 'selection' and 'mutation' operators?

Basics of optimization: gradient-based methods and metaheuristics Medium
A. Selection diversifies the population, while mutation ensures convergence.
B. Selection drives the population toward better solutions, while mutation introduces diversity to prevent premature convergence.
C. Selection creates new solutions from old ones, while mutation removes bad solutions.
D. Both operators work to reduce the size of the population in each generation.
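
A toy illustration of the division of labor in option B, using truncation selection and bit-flip mutation on a OneMax-style fitness (population size, mutation rate, and generation count are all assumptions):

```python
import random

def evolve(population, fitness, generations=100, mut_rate=0.1):
    """Toy genetic algorithm on bit-lists. Selection (keep the fitter
    half) drives the population toward better solutions; bit-flip
    mutation injects the diversity that prevents premature convergence."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]   # truncation selection
        children = []
        for p in parents:
            child = p[:]
            for i in range(len(child)):
                if random.random() < mut_rate:
                    child[i] ^= 1                      # mutation: flip a bit
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
best = evolve(pop, fitness=sum)    # OneMax: maximize the number of 1-bits
```

Dropping mutation would leave only copies of the initial bit patterns to select among; dropping selection would leave a random walk.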

33 An AI model for image recognition is very complex, using a deep neural network with 150 layers. What is a likely consequence of this high complexity when training on a small dataset of only 1,000 images?

AI problem design: Complexity, Data requirements, Solution metrics Medium
A. The model will be unable to learn any patterns from the data.
B. The model will likely achieve high training accuracy but perform poorly on new, unseen images (overfitting).
C. The model will generalize well to new data because of its high capacity to learn.
D. The model will train very quickly due to its advanced architecture.

34 An RL agent is learning to navigate a maze. For the first 100 trials, it always chooses a random available move. After that, it always chooses the move that it currently estimates has the highest value. This strategy is an example of what?

Introduction to reinforcement learning for sequential decision problems Medium
A. A purely exploitative strategy.
B. A separation of the exploration and exploitation phases.
C. An Epsilon-Greedy policy.
D. Q-learning.
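
For contrast with the phase-split strategy in the stem, a true epsilon-greedy policy (option C) mixes exploration into every step rather than front-loading it:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon pick a random action (explore);
    otherwise pick the action with the best current estimate (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Setting epsilon to 1.0 for the first 100 trials and 0.0 afterward would reproduce the agent described in the question, which is why that agent is a phase separation rather than a standing epsilon-greedy policy.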

35 When defining a search problem, what is the main consequence of having a more informed (i.e., more constrained and specific) goal test?

Problem formulation and state space search Medium
A. It guarantees that any solution found is optimal.
B. It always increases the branching factor of the state space.
C. It makes the problem unsolvable by uninformed search algorithms.
D. It may significantly reduce the size of the effective state space to be searched.

36 If an A* search uses an inadmissible heuristic that sometimes overestimates the true cost to the goal, what is the primary guarantee that is lost?

Core search algorithms: Best-first search, A* search, Heuristic search Medium
A. The guarantee of optimal efficiency (expanding the minimum number of nodes).
B. The guarantee of completeness (finding a solution if one exists).
C. The guarantee that the algorithm will terminate.
D. The guarantee of finding the optimal (least-cost) solution.

37 What is the primary motivation for using optimization methods like Stochastic Gradient Descent (SGD) instead of standard (batch) Gradient Descent in large-scale machine learning?

Basics of optimization: gradient-based methods and metaheuristics Medium
A. SGD does not require setting a learning rate.
B. SGD provides a much more accurate estimate of the gradient at each step.
C. SGD is computationally much cheaper per update, as it uses only one or a few examples instead of the entire dataset.
D. SGD is guaranteed to converge to the global minimum, whereas batch GD is not.
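
Option C can be seen directly in code: each SGD update touches a single example, so the per-step cost is independent of the dataset size (the linear model, data, and hyperparameters below are illustrative):

```python
import random

def sgd(data, w=0.0, lr=0.05, epochs=50):
    """Fit y ~ w * x by SGD: each update uses ONE (x, y) example,
    rather than summing the gradient over the whole dataset."""
    for _ in range(epochs):
        random.shuffle(data)              # visit examples in random order
        for x, y in data:
            grad = (w * x - y) * x        # gradient of (w*x - y)^2 / 2
            w -= lr * grad
    return w
```

Batch gradient descent would compute the averaged gradient over all examples before every single weight update; with millions of examples that makes each SGD step vastly cheaper, at the cost of noisier gradient estimates.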

38 For a map-coloring CSP, which variable ordering heuristic would be most effective for a backtracking search?

Constraint satisfaction Medium
A. Degree Heuristic: Choose the country that borders the most other uncolored countries.
B. Random Selection: Choose the next country to color at random.
C. Least Constraining Value: Choose the country that can be colored with the most possible colors.
D. Alphabetical Order: Choose the country that comes first alphabetically.

39 When designing an AI system, what does the term 'solution metric' refer to?

AI problem design: Complexity, Data requirements, Solution metrics Medium
A. A quantifiable measure used to evaluate how well the AI's output achieves the problem's goal.
B. The amount of data required to train the AI model.
C. The computational complexity (time and space) of the AI system.
D. The algorithm used to find the solution, such as A* or Gradient Descent.

40 What is the fundamental difference between model-based and model-free reinforcement learning?

Introduction to reinforcement learning for sequential decision problems Medium
A. Model-free RL is always more sample-efficient than model-based RL.
B. Model-free RL uses a policy, while model-based RL uses a value function.
C. Model-based RL attempts to learn the environment's dynamics (transition and reward functions), while model-free RL learns a value function or policy directly from experience.
D. Model-based RL works only in discrete state spaces, while model-free works in continuous ones.

41 In an A* search, you are using a heuristic that is admissible but not consistent. What is the most significant consequence of this property for the algorithm's behavior on a general graph (not a tree)?

Core search algorithms: A* search Hard
A. The algorithm's time complexity degrades to be equivalent to that of Dijkstra's algorithm.
B. The algorithm is no longer guaranteed to find the optimal solution.
C. The algorithm might re-expand a node after having already found an optimal path to it, if a new, cheaper path is discovered.
D. The algorithm can no longer use a closed set (visited list) to prune nodes, leading to exponential complexity on all graphs.

42 Consider a Constraint Satisfaction Problem being solved with backtracking search. If you apply Arc Consistency (AC-3) as a pre-processing step, and then during the search you use Forward Checking, what is the primary relationship between these two constraint propagation methods in this context?

Constraint satisfaction Hard
A. Forward Checking will not prune any additional values from the domains of unassigned variables immediately after the first variable assignment, because AC-3 has already made every arc consistent.
B. The combination is redundant; AC-3 makes Forward Checking's work during the search unnecessary for the entire search tree.
C. Forward Checking propagates constraints to all other unassigned variables, whereas AC-3 only considered pairs of variables, making Forward Checking strictly more powerful.
D. AC-3 guarantees that the first variable assignment will not lead to an immediate dead-end that Forward Checking would have detected.

43 An agent is using Q-Learning to navigate a grid world. It is in state s, takes action a, and observes reward r and next state s'. According to its ε-greedy policy, the next action it actually takes from s' is a random exploratory action, a'. How does the Q-value update for Q(s, a) incorporate this information?

Introduction to reinforcement learning for sequential decision problems Hard
A. The update uses a weighted average of max_a Q(s', a) and Q(s', a') based on the value of ε.
B. The update uses the value Q(s', a') as the bootstrap estimate for the next state's value.
C. The update uses max_a Q(s', a), ignoring the random action a' that was actually taken.
D. The update is skipped for this step because an exploratory action was taken, which does not reflect the optimal policy.
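
The off-policy point can be made concrete: the Q-learning target bootstraps from the max over next-state actions, regardless of which action the agent actually takes next (the learning rate, discount, and toy Q-table below are assumptions):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning backup. The target r + gamma * max_a' Q(s', a')
    ignores the action the agent will actually take next (off-policy)."""
    best_next = max(Q.get((s_next, nxt), 0.0) for nxt in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]
```

SARSA, by contrast, would bootstrap from Q(s', a') for the exploratory action a' that was actually chosen; that single line is the whole difference between the two algorithms.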

44 You are optimizing a high-dimensional, non-convex function with many saddle points using a gradient-based method. Which of the following algorithm variants is most likely to escape these saddle points most efficiently?

Basics of optimization: gradient-based methods and metaheuristics Hard
A. Standard (Batch) Gradient Descent
B. Gradient Descent with Momentum
C. Newton's method without any regularization
D. Gradient Descent with a very small, fixed learning rate

45 Consider the problem of finding the shortest path for a knight on an n × n chessboard. If we formulate this as a state space search problem, what is the most accurate description of the structure of the state space graph?

Problem formulation and state space search Hard
A. An undirected graph where some nodes have a degree as low as 2 and others as high as 8.
B. A strongly connected component, as every square is reachable from every other square.
C. A directed acyclic graph (DAG) because the path cost is always increasing.
D. A tree with a branching factor varying between 2 and 8.

46 You are designing an AI for an autonomous drone that must inspect 100 specific points of interest (POIs) on a large structure within a limited battery life. The drone gets a high reward for each POI successfully inspected, but a massive penalty if the battery dies before it returns to the charging station. The time taken to travel between POIs is non-uniform. Which problem formulation and corresponding solution metric is most appropriate?

AI problem design: Solution metrics Hard
A. Formulate as a search for the highest-reward POI and use a greedy algorithm to maximize immediate reward.
B. Formulate as an Orienteering Problem and use the total reward collected as the primary metric, subject to the battery constraint.
C. Formulate as a Traveling Salesperson Problem (TSP) and use the total path length as the metric to minimize.
D. Formulate as a Vehicle Routing Problem (VRP) and use the number of drones required as the metric.

47 You are using A* search with an inadmissible heuristic that consistently overestimates the true cost by a constant factor ε, i.e., h(n) ≤ (1 + ε) · h*(n). Which of the following statements about the solution found by this A* variant is guaranteed to be true?

Core search algorithms: A* search Hard
A. The cost of the solution found will be no more than (1 + ε) times the cost of the optimal solution.
B. The algorithm will find the optimal solution, but it will be much slower.
C. The algorithm will behave identically to a Greedy Best-First Search.
D. The algorithm is not guaranteed to find a solution, even if one exists.

48 Consider a state space graph that is a balanced binary tree of depth d. If the single goal node is located at the rightmost leaf of the tree, how does the number of nodes expanded by Iterative Deepening Search (IDS) compare to Depth-First Search (DFS)?

Introduction to uninformed search Hard
A. IDS expands approximately twice as many nodes as DFS.
B. DFS and IDS expand the exact same number of nodes.
C. DFS expands significantly fewer nodes as it finds the goal almost immediately.
D. IDS expands approximately d times more nodes than DFS.
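
The counts behind these options can be checked directly: with the goal at the rightmost leaf, a left-to-right DFS sweeps the whole tree once, while IDS re-sweeps it for every depth limit from 0 to d, an overhead factor that approaches 2:

```python
# Node-count arithmetic for a complete binary tree of depth d.
def tree_size(depth):
    return 2 ** (depth + 1) - 1                   # nodes at depths 0..depth

def dfs_expansions(d):
    return tree_size(d)                           # one full sweep

def ids_expansions(d):
    # IDS repeats a bounded sweep for every limit 0, 1, ..., d.
    return sum(tree_size(limit) for limit in range(d + 1))

ratio = ids_expansions(20) / dfs_expansions(20)
```

The geometric growth of tree levels means the final iteration dominates the total, which is why IDS's repetition costs only about a factor of 2 here rather than a factor of d.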

49 In a backtracking search for a CSP, you are deciding between the Minimum Remaining Values (MRV) and Degree Heuristic for variable ordering. In which scenario would the Degree Heuristic be a more impactful choice than MRV?

Constraint satisfaction Hard
A. On a problem where the constraint graph is a simple chain (A-B-C-D).
B. In the middle of the search, when one variable has only one value left in its domain.
C. At the very beginning of the search, when all variables have the same number of legal values in their domains.
D. When the goal is to find all possible solutions, not just the first one.

50 You are using Simulated Annealing to solve an optimization problem. The cooling schedule is defined by T(k+1) = α · T(k), where T(k) is the temperature at iteration k. What is the most likely consequence of choosing a value of α very close to 1 (e.g., 0.9999)?

Basics of optimization: gradient-based methods and metaheuristics Hard
A. The algorithm's performance will be highly sensitive to the initial state.
B. The algorithm will perform a very long and thorough search of the solution space, but may be computationally expensive.
C. The algorithm will fail to converge as the probability of accepting worse states will remain high indefinitely.
D. The algorithm will converge very quickly to the nearest local minimum, behaving like greedy hill-climbing.

51 In the context of a Markov Decision Process (MDP), what does the Bellman Optimality Equation for V*(s) fundamentally express?

Introduction to reinforcement learning for sequential decision problems Hard
A. The optimal policy is to always choose the action that leads to the state with the highest immediate reward.
B. The transition probability from state s to s' is independent of all previous states and actions.
C. The optimal value of a state is the expected immediate reward plus the discounted optimal value of the best successor state, maximized over all possible actions.
D. The value of a state is the sum of all possible future rewards the agent can receive from that state.

52 If the state in a search problem is represented by a permutation of n items, and the actions are swaps of any two adjacent items, what is the diameter of the state space graph (i.e., the longest shortest path between any two states)?

AI problem design: Complexity Hard
A.
B.
C.
D.
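
Whatever the now-elided options were, the underlying fact is that the adjacent-swap distance between two permutations equals their inversion count (the Kendall tau distance), which a fully reversed ordering maximizes at n(n − 1)/2:

```python
# Adjacent-swap distance between permutations = number of inversions.
# A reversed ordering maximizes it, giving a diameter of n*(n-1)/2.
def inversions(perm):
    return sum(1
               for i in range(len(perm))
               for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

n = 5
diameter_example = inversions(list(range(n))[::-1])   # fully reversed
```

Each adjacent swap changes the inversion count by exactly ±1, which is why the inversion count is both a lower bound and (via bubble sort) an achievable number of swaps.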

53 You are given a search problem where the path cost g(n) from the start node to any node n is exactly equal to the depth of n. The heuristic h(n) is admissible. In this specific scenario, which of the following algorithms will expand nodes in the exact same order as A* search?

Core search algorithms: Best-first search Hard
A. GBFS expanded at least one node where its path cost was greater than 50.
B. The initial heuristic value at the start node, , must have been greater than 50.
C. GBFS must have expanded a node that was not on the optimal path found by A*.
D. A* search expanded strictly more nodes than GBFS.

54 An AI system for diagnosing rare diseases from patient symptoms shows 99.9% accuracy during testing. However, the prevalence of any single rare disease is 1 in 10,000. Why is accuracy a deeply misleading metric in this context, and what would be a more appropriate set of metrics to evaluate its real-world performance?

AI problem design: Data requirements Hard
A. Because a model that always predicts 'no rare disease' would achieve 99.99% accuracy, yet be useless. Better metrics would be Precision, Recall, and the F1-Score.
B. Because the dataset is too small to achieve statistical significance, and the model is likely overfitting. The best metric is validation loss.
C. Because accuracy doesn't account for the computational cost of the model. A better metric is accuracy per FLOP.
D. Because the problem is sequential, and accuracy doesn't capture the temporal nature of diagnostics. A better metric is discounted cumulative reward.

55 You are formulating the Rubik's Cube problem for an AI solver. Which of the following state representations would be most problematic for use with an efficient search algorithm like A* and a pattern database heuristic?

Problem formulation and state space search Hard
A. A 3D array representing the cube, with each of the 26 outer cubies having an ID and an orientation vector.
B. An image of the cube from a single, fixed perspective.
C. A flat list of 54 color labels, one for each facelet, in a fixed order.
D. A string of 20 characters representing the core properties (position and orientation) of the 8 corner and 12 edge pieces.

56 In a graph with many cycles and non-uniform positive edge costs, what is a primary advantage of Uniform Cost Search (UCS) over a standard Breadth-First Search (BFS) implementation that stops upon finding the first path to the goal?

Introduction to uninformed search Hard
A. UCS has a better space complexity, as it does not need to store all nodes at the current depth.
B. UCS is guaranteed to find the optimal (least-cost) path, whereas BFS is only guaranteed to find the shallowest path (least number of edges).
C. UCS can handle negative edge costs, while BFS cannot.
D. UCS expands fewer nodes on average because it is a form of informed search.
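
A compact UCS sketch using a priority queue keyed on g(n); on the example graph below (my own, not from the quiz), the path BFS would return first (the single direct edge, cost 10) is not the cheapest (cost 3):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform Cost Search: expand the frontier node with the lowest
    path cost g(n). With positive edge costs this returns the
    least-cost path, not merely the one with the fewest edges."""
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best.get(node, float('inf')):
            continue                      # stale entry, superseded earlier
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best.get(nbr, float('inf')):
                best[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr))
    return None

# Direct edge A->G costs 10 (fewest edges); A->B->G costs only 1 + 2 = 3.
graph = {'A': [('G', 10), ('B', 1)], 'B': [('G', 2)]}
```

Delaying the goal test until the goal is popped (rather than when it is first generated) is what makes UCS optimal here; a BFS that stops on first generation of G would commit to the cost-10 edge.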

57 Consider the design of a heuristic for the 15-puzzle. Let h1 be the number of misplaced tiles. Let h2 be the sum of the Manhattan distances of each tile from its goal position. Let h3 = h1 + h2. Let h4 = max(h1, h2). Which of these heuristics is both admissible and dominates the others that are also admissible?

Heuristic search Hard
A. h1
B. h2
C. h3
D. h4

58 In Reinforcement Learning, what is the fundamental challenge addressed by using eligibility traces (as in TD(λ) or Q(λ))?

Introduction to reinforcement learning for sequential decision problems Hard
A. The curse of dimensionality: handling state spaces that are too large to store in a lookup table.
B. The exploration-exploitation trade-off: deciding whether to try a new action or stick with the current best one.
C. The non-stationarity of the environment: adapting when the transition probabilities or reward function change over time.
D. The temporal credit assignment problem: efficiently propagating a delayed reward back to the sequence of state-action pairs that were responsible for it.

59 When applying Stochastic Gradient Descent (SGD) to train a deep neural network (a non-convex optimization problem), why might a learning rate schedule that starts high and gradually anneals to a lower value be more effective than using a small, fixed learning rate?

Basics of optimization: gradient-based methods and metaheuristics Hard
A. A high learning rate ensures that the weights are properly initialized before training begins, preventing vanishing gradients.
B. The learning rate must be annealed to satisfy the mathematical convergence criteria for convex optimization problems.
C. The initial high learning rate and noise from mini-batches help the optimizer escape poor local minima and saddle points early in training, while the later low learning rate allows it to converge finely within a good basin of attraction.
D. A small, fixed learning rate causes the loss to oscillate wildly, preventing convergence, whereas an annealed rate dampens these oscillations.

60 You are performing A* search on a graph where all edge costs are integers greater than or equal to a constant ε > 0. The heuristic function h(n) is consistent. What can you conclude about the values of f(n) for the sequence of nodes expanded by the algorithm?

Core search algorithms: A* search Hard
A. The sequence of f-values will be random, depending on the graph topology.
B. The sequence of f-values is guaranteed to be equal to the true cost of the path to the goal.
C. The sequence of f-values for the expanded nodes is non-decreasing.
D. The sequence of f-values is strictly increasing.