Unit 2 - Practice Quiz

INT428

1 In the context of AI problem formulation, what constitutes the state space?

A. The physical location where the AI operates
B. The set of all possible states reachable from the initial state by any sequence of actions
C. The memory required to store the problem constraints
D. The sequence of actions taken to reach the goal

2 Which of the following is a component of a well-defined AI problem?

A. The programming language used
B. The transition model
C. The compiler version
D. The hardware specifications

3 In Breadth-First Search (BFS), which data structure is typically used to store the frontier?

A. Stack (LIFO)
B. Queue (FIFO)
C. Priority Queue
D. Hash Map
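The FIFO frontier behind question 3 can be sketched in a few lines of Python using `collections.deque`; the graph layout and the `bfs` name are illustrative, not from any particular textbook implementation:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes in FIFO order."""
    frontier = deque([start])          # FIFO queue of nodes awaiting expansion
    parents = {start: None}            # doubles as the explored set
    while frontier:
        node = frontier.popleft()      # oldest node first -> shallowest first
        if node == goal:
            # reconstruct the path by walking parent links back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for succ in graph.get(node, []):
            if succ not in parents:
                parents[succ] = node
                frontier.append(succ)
    return None                        # goal unreachable
```

For example, `bfs({'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}, 'A', 'D')` returns the shallowest path `['A', 'B', 'D']`. Swapping the `deque` for a stack (popping from the right) would turn this into DFS, which is exactly the LIFO/FIFO distinction the question tests.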

4 What is the time complexity of Breadth-First Search if the branching factor is b and the depth of the shallowest solution is d?

A. O(b^d)
B. O(bd)
C. O(d^b)
D. O(b + d)

5 Which uninformed search algorithm is generally preferred when the search space is infinite or extremely deep, but the solution might be shallow?

A. Depth-First Search
B. Breadth-First Search
C. Uniform Cost Search
D. Random Walk

6 What is the primary evaluation function used in A* search?

A. f(n) = g(n)
B. f(n) = h(n)
C. f(n) = g(n) + h(n)
D. f(n) = g(n) - h(n)
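A minimal sketch of A* ordering its frontier by f(n) = g(n) + h(n), using Python's `heapq` as a priority queue; the graph encoding (successor lists of `(node, cost)` pairs) and names are illustrative. With h(n) = 0 for every node this reduces to Uniform Cost Search, which is what question 9 tests:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start)]   # priority queue keyed on f(n)
    best_g = {start: 0}                 # cheapest known path cost to each node
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                    # cost of the cheapest path found
        if g > best_g.get(node, float('inf')):
            continue                    # stale queue entry; a cheaper path exists
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ))
    return None
```

On `{'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)]}` with h(n) = 0, the search finds the cheapest A-to-D cost of 3 via B and C rather than the direct but costlier edges.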

7 In Greedy Best-First Search, how is the next node selected for expansion?

A. The node with the lowest path cost
B. The node with the lowest estimated cost to the goal
C. The node with the highest heuristic value
D. The node added most recently to the frontier

8 For A* search to be optimal (find the shortest path), the heuristic must be admissible. What does this mean?

A. h(n) = 0 for all nodes
B. h(n) never overestimates the true cost to reach the goal
C. h(n) never underestimates the true cost to reach the goal
D. h(n) is equal to the path cost g(n)

9 Which search algorithm behaves exactly like A* search if the heuristic function h(n) = 0 for all nodes?

A. Depth-First Search
B. Uniform Cost Search
C. Greedy Best-First Search
D. Iterative Deepening Search

10 What is the consistency (or monotonicity) property of a heuristic?

A. h(n) = h*(n) for all nodes
B. h(n) ≥ c(n, a, n') for every action from n
C. h(n) ≤ c(n, a, n') + h(n') for all neighbors n'
D. h(n) is always negative

11 In a Constraint Satisfaction Problem (CSP), what constitutes a solution?

A. Finding a path from start to goal
B. Assigning values to variables such that all constraints are satisfied
C. Minimizing the gradient of the objective function
D. Sorting the variables in ascending order

12 Which of the following is an example of a unary constraint?

A. X ≠ Y
B. X > 3
C. X + Y = Z
D. AllDiff(X, Y, Z)

13 What is Forward Checking in the context of CSPs?

A. Checking all constraints after a complete assignment is made
B. Terminating the search when a variable has no legal values left after an assignment
C. Ordering variables by the number of constraints they participate in
D. Running BFS on the constraint graph

14 In CSP heuristics, what does the Minimum Remaining Values (MRV) heuristic suggest?

A. Choose the variable with the fewest legal values remaining
B. Choose the value that rules out the fewest choices for neighbors
C. Choose the variable involved in the most constraints
D. Choose the variable with the largest domain
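The MRV choice in question 14 can be sketched directly, assuming the current domains are kept in a dict mapping each variable to its remaining legal values (the function and variable names are illustrative):

```python
def mrv_variable(domains, assignment):
    """Minimum Remaining Values: pick the unassigned variable whose
    current domain is smallest ('most constrained variable first')."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))
```

With domains `{'X': [1, 2, 3], 'Y': [1], 'Z': [1, 2]}` and nothing assigned, MRV picks `Y` first: it has only one legal value left, so assigning it early fails fast if that value cannot work.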

15 Which algorithm is used to enforce Arc Consistency in CSPs?

A. Min-Max
B. AC-3
C. Prim's Algorithm
D. K-Means
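A compact sketch of AC-3 (question 15), under the simplifying assumption that every arc shares one symmetric binary constraint, such as the inequality constraint of map coloring; `revise` and `ac3` follow the usual textbook structure but the exact signatures are illustrative:

```python
from collections import deque

def revise(domains, xi, xj, constraint):
    """Delete values of xi that have no supporting value in xj's domain."""
    removed = False
    for x in domains[xi][:]:                       # iterate over a copy
        if not any(constraint(x, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, arcs, constraint):
    """AC-3: revise arcs until every arc is consistent or a domain empties."""
    queue = deque(arcs)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False                       # emptied domain: inconsistent
            # re-check arcs that point into xi, since xi's domain shrank
            queue.extend((xk, xm) for (xk, xm) in arcs
                         if xm == xi and xk != xj)
    return True
```

For example, with domains `{'X': [1], 'Y': [1, 2]}` and the constraint X ≠ Y, AC-3 prunes Y's domain down to `[2]` and reports consistency.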

16 In gradient-based optimization, if the gradient of the function at a point is zero, what does it indicate?

A. The function is undefined
B. The point is a critical point (maximum, minimum, or saddle point)
C. The algorithm has failed
D. The learning rate is too high

17 What is a major drawback of the standard Hill Climbing algorithm?

A. It requires too much memory
B. It always finds the global maximum
C. It can get stuck in local maxima
D. It is computationally slower than genetic algorithms

18 In Gradient Descent, the update rule is θ ← θ - α∇f(θ). What does α represent?

A. The discount factor
B. The learning rate (step size)
C. The momentum
D. The heuristic value
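The update rule θ ← θ - α∇f(θ), with α the learning rate, can be sketched in a few lines; the quadratic objective below is an illustrative choice, not part of the question:

```python
def gradient_descent(grad, theta, alpha=0.1, steps=100):
    """Repeat theta <- theta - alpha * grad(theta) for `steps` iterations."""
    for _ in range(steps):
        theta = theta - alpha * grad(theta)
    return theta

# Minimize f(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta_star = gradient_descent(lambda t: 2 * (t - 3), theta=0.0)
```

With α = 0.1 each step shrinks the error by a constant factor, so `theta_star` converges to the minimizer 3; a much smaller α would still converge, just far more slowly (question 47), while too large an α can overshoot and diverge.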

19 Which metaheuristic algorithm is inspired by the process of natural selection?

A. Simulated Annealing
B. Genetic Algorithms
C. Tabu Search
D. Particle Swarm Optimization

20 In Simulated Annealing, what is the purpose of allowing 'bad' moves (moves that decrease the objective function value)?

A. To speed up the calculation
B. To escape local optima
C. To increase memory usage
D. To satisfy constraints
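The acceptance rule that permits these 'bad' moves can be sketched as follows, for a maximization problem where `delta` is the change in objective value caused by the candidate move; the injectable `rng` parameter is an illustrative convenience for testing:

```python
import math, random

def accept(delta, temperature, rng=random.random):
    """Simulated-annealing acceptance rule: always take improving moves;
    take a worsening move (delta < 0) with probability exp(delta / T)."""
    if delta >= 0:
        return True
    return rng() < math.exp(delta / temperature)
```

At high temperature exp(delta / T) is close to 1, so most bad moves are accepted and the search can escape local optima; as T cools toward 0 the probability vanishes and the behavior approaches plain hill climbing.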

21 The branching factor in a search tree refers to:

A. The depth of the tree
B. The number of successors a node has
C. The cost of the cheapest path
D. The number of nodes in the frontier

22 Which search algorithm combines the space efficiency of DFS with the completeness of BFS?

A. Iterative Deepening Search (IDS)
B. Bidirectional Search
C. Greedy Search
D. A* Search

23 What is the space complexity of Depth-First Search (DFS) with branching factor b and maximum depth m?

A. O(b^m)
B. O(bm)
C. O(b + m)
D. O(m^b)

24 A search algorithm is complete if:

A. It always finds the lowest cost solution
B. It is guaranteed to find a solution when one exists
C. It uses minimal memory
D. It finishes in polynomial time

25 In Reinforcement Learning, what entity interacts with the environment?

A. The Critic
B. The Agent
C. The Generator
D. The Supervisor

26 What does a Markov Decision Process (MDP) usually consist of?

A. A set of neurons and weights
B. States, Actions, Transition Probabilities, and Rewards
C. A static database of labeled images
D. A set of logical rules and axioms

27 In Reinforcement Learning, the Exploration vs. Exploitation dilemma refers to:

A. Choosing between using CPU or GPU
B. Choosing between trying new actions to gather information or using known actions to maximize reward
C. Choosing between supervised and unsupervised learning
D. Choosing between A* and BFS

28 What is the role of the discount factor γ in Reinforcement Learning?

A. It determines the learning rate
B. It balances the importance of immediate rewards vs. future rewards
C. It reduces the dimensionality of the state space
D. It is used to calculate the gradient
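The discount factor γ enters through the cumulative return G = r_0 + γ·r_1 + γ²·r_2 + …, which can be sketched directly (the function name is illustrative):

```python
def discounted_return(rewards, gamma):
    """G = r_0 + gamma * r_1 + gamma**2 * r_2 + ...
    gamma near 0 -> myopic agent; gamma near 1 -> far-sighted agent."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))
```

For the reward sequence `[1, 1, 1]`, γ = 0.5 gives 1 + 0.5 + 0.25 = 1.75, while γ = 0 would count only the immediate reward, illustrating the immediate-vs-future trade-off the question asks about.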

29 In RL, a Policy is defined as:

A. The sequence of rewards received
B. A mapping from states to actions
C. The transition probability matrix
D. The memory buffer of past experiences

30 Which of the following is a local search algorithm?

A. Merge Sort
B. Hill Climbing
C. Breadth-First Search
D. Depth-First Search

31 If a problem has a solution at depth d, but the state space is infinite, which algorithm risks never terminating?

A. Breadth-First Search
B. Iterative Deepening Search
C. Depth-First Search
D. A* Search with admissible heuristic

32 In the 8-Queens problem modeled as a CSP, what is a binary constraint?

A. A queen must be on the board
B. No two queens can attack each other
C. There must be 8 queens total
D. Queens are represented by integers

33 What is the Least Constraining Value heuristic in CSP?

A. Pick the value that rules out the fewest choices for neighboring variables
B. Pick the value that rules out the most choices
C. Pick the smallest numerical value
D. Pick the value randomly

34 In Local Beam Search with k = 5, what happens at each step?

A. It expands the single best node 5 times
B. It keeps track of the 5 best states found so far
C. It runs 5 independent hill climbing searches
D. It backtracks 5 steps

35 For the Traveling Salesperson Problem (TSP), what is the nature of the state space?

A. Continuous and infinite
B. Discrete and finite (combinatorial)
C. Continuous and finite
D. Differentiable

36 In A* search, what is the Manhattan Distance heuristic often used for?

A. Route planning on a map
B. Grid-based navigation (like the 8-puzzle or robot pathfinding)
C. Chess
D. Solving Sudoku

37 What is a plateau in the context of state space landscapes?

A. A flat area where the objective function value is practically constant
B. The global maximum
C. A deep valley representing low cost
D. The initial state

38 Which component of an RL agent estimates the 'goodness' of being in a specific state?

A. The Policy
B. The Value Function
C. The Action Space
D. The Environment

39 What is the difference between Stochastic Gradient Descent (SGD) and Batch Gradient Descent?

A. SGD uses the entire dataset per step; Batch uses one sample
B. SGD uses a single sample (or mini-batch) per step; Batch uses the entire dataset
C. SGD is deterministic; Batch is random
D. SGD is for classification; Batch is for regression

40 Which of the following problems is best modeled as a Constraint Satisfaction Problem?

A. Predicting stock prices
B. Map Coloring
C. Classifying emails as spam
D. Driving a car

41 In search algorithms, path cost is usually assumed to be:

A. Negative
B. Additive
C. Multiplicative
D. Exponential

42 What is the main advantage of Bidirectional Search?

A. It guarantees finding the solution without memory
B. It can reduce time complexity from O(b^d) to O(b^(d/2))
C. It works on infinite graphs
D. It does not require a goal state

43 When defining an AI problem, abstraction refers to:

A. Writing abstract code classes
B. Removing irrelevant details from the real world to create a model
C. Making the problem harder
D. Visualizing the search tree

44 Which of the following is a metaheuristic?

A. Binary Search
B. Particle Swarm Optimization (PSO)
C. Linear Regression
D. Bubble Sort

45 In A* search, if h(n) overestimates the actual cost to the goal, the algorithm:

A. Remains optimal
B. Becomes incomplete
C. Is no longer guaranteed to be optimal
D. Becomes slower than BFS

46 What is the Reward Hypothesis in Reinforcement Learning?

A. Agents only learn if they are punished
B. All goals can be described by the maximization of expected cumulative reward
C. Rewards must always be positive
D. The reward is always equal to the state value

47 What is the primary disadvantage of using a very small learning rate in optimization?

A. The algorithm will diverge
B. The algorithm will overshoot the minimum
C. Convergence will be very slow
D. It increases the variance of the gradient

48 In the context of Genetic Algorithms, Mutation is used to:

A. Combine two parents
B. Select the best individuals
C. Maintain genetic diversity and prevent premature convergence
D. Calculate the fitness function
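Mutation on bit-string individuals can be sketched as an independent per-gene flip; the mutation rate and the injectable `rng` parameter are illustrative:

```python
import random

def mutate(bits, rate=0.01, rng=random.random):
    """Flip each bit independently with probability `rate`, injecting
    diversity that selection and crossover alone cannot restore."""
    return [1 - b if rng() < rate else b for b in bits]
```

Even a small rate matters: once every individual in the population carries the same value at some gene, crossover can never reintroduce the lost alternative, and only mutation can prevent that kind of premature convergence.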

49 Which search strategy is best suited for a game like Chess where the search space is too large to fully traverse?

A. Uniform Cost Search
B. Depth-Limited Search with Heuristic Evaluation (Minimax)
C. Random Walk
D. Genetic Algorithms

50 An optimization problem where the objective function is Convex is desirable because:

A. It has no minimum
B. Any local minimum is also a global minimum
C. It requires genetic algorithms to solve
D. It is always discrete