1. In the context of AI problem formulation, what constitutes the state space?
A. The physical location where the AI operates
B. The set of all possible states reachable from the initial state by any sequence of actions
C. The memory required to store the problem constraints
D. The sequence of actions taken to reach the goal
Correct Answer: The set of all possible states reachable from the initial state by any sequence of actions
Explanation: The state space is a graph where nodes are states and links are actions, representing all possible configurations the system can enter from the start.
2. Which of the following is a component of a well-defined AI problem?
A. The programming language used
B. The transition model
C. The compiler version
D. The hardware specifications
Correct Answer: The transition model
Explanation: A problem is defined by the initial state, actions, transition model (which specifies what an action does), goal test, and path cost.
3. In Breadth-First Search (BFS), which data structure is typically used to store the frontier?
A. Stack (LIFO)
B. Queue (FIFO)
C. Priority Queue
D. Hash Map
Correct Answer: Queue (FIFO)
Explanation: BFS uses a First-In-First-Out (FIFO) queue to ensure that nodes are expanded in the order they are added, guaranteeing the shallowest nodes are processed first.
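The FIFO discipline described above can be sketched in a few lines of Python (the graph here is a made-up example):

```python
from collections import deque

def bfs_order(graph, start):
    """Return nodes in the order BFS expands them (FIFO frontier)."""
    frontier = deque([start])          # FIFO queue
    visited = {start}
    order = []
    while frontier:
        node = frontier.popleft()      # oldest node first -> shallowest first
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs_order(graph, 'A'))  # ['A', 'B', 'C', 'D']
```

Swapping the `popleft()` for `pop()` would turn the queue into a stack and the traversal into DFS.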
4. What is the time complexity of Breadth-First Search if the branching factor is b and the depth of the solution is d?
A. O(bd)
B. O(b^d)
C. O(d^b)
D. O(b + d)
Correct Answer: O(b^d)
Explanation: In the worst case, BFS generates all nodes at each level up to depth d. The total number of nodes is b + b^2 + ... + b^d, which is O(b^d).
5. Which uninformed search algorithm is generally preferred when the search space is infinite or extremely deep, but the solution might be shallow?
A. Depth-First Search
B. Breadth-First Search
C. Uniform Cost Search
D. Random Walk
Correct Answer: Breadth-First Search
Explanation: DFS may get stuck in an infinite path. BFS is complete and will find the shallowest solution even in an infinite state space (provided the branching factor is finite).
6. What is the primary evaluation function used in A* search?
A. f(n) = g(n)
B. f(n) = h(n)
C. f(n) = g(n) + h(n)
D. f(n) = g(n) - h(n)
Correct Answer: f(n) = g(n) + h(n)
Explanation: A* evaluates nodes by combining g(n) (the cost to reach the node) and h(n) (the estimated cost to reach the goal from that node).
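A minimal A* sketch in Python, ordering the frontier by f(n) = g(n) + h(n); the problem is a hypothetical 1-D corridor of states 0..5 with unit step costs:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A*: always expand the frontier node minimizing f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]    # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nbr, float('inf')):   # found a cheaper route to nbr
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

# Hypothetical corridor 0..5; h(s) = 5 - s is the exact (hence admissible) distance
def nbrs(s):
    return [(n, 1) for n in (s - 1, s + 1) if 0 <= n <= 5]

cost, path = astar(0, 5, nbrs, lambda s: 5 - s)
print(cost, path)  # 5 [0, 1, 2, 3, 4, 5]
```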
7. In Greedy Best-First Search, how is the next node selected for expansion?
A. The node with the lowest path cost
B. The node with the lowest estimated cost to the goal
C. The node with the highest heuristic value
D. The node added most recently to the frontier
Correct Answer: The node with the lowest estimated cost to the goal
Explanation: Greedy Best-First search tries to move as close to the goal as it can, so it expands the node with the minimum heuristic value h(n).
8. For A* search to be optimal (find the shortest path), the heuristic must be admissible. What does this mean?
A. h(n) = 0 for all nodes
B. h(n) never overestimates the true cost to reach the goal
C. h(n) never underestimates the true cost to reach the goal
D. h(n) is equal to the path cost
Correct Answer: h(n) never overestimates the true cost to reach the goal
Explanation: An admissible heuristic implies h(n) ≤ h*(n), where h*(n) is the true cost to the goal. It is 'optimistic'.
9. Which search algorithm behaves exactly like A* search if the heuristic function h(n) = 0 for all nodes?
A. Depth-First Search
B. Uniform Cost Search
C. Greedy Best-First Search
D. Iterative Deepening Search
Correct Answer: Uniform Cost Search
Explanation: If h(n) = 0, then f(n) = g(n). Minimizing g(n) is the definition of Uniform Cost Search.
10. What is the consistency (or monotonicity) property of a heuristic?
A. h(n) = h*(n)
B. h(n) ≥ c(n, a, n') + h(n')
C. h(n) ≤ c(n, a, n') + h(n') for all neighbors n'
D. h(n) is always negative
Correct Answer: h(n) ≤ c(n, a, n') + h(n') for all neighbors n'
Explanation: Consistency requires that the estimated cost from n is no greater than the step cost of reaching a successor n' plus the estimated cost from n'. This ensures the f-values are non-decreasing along any path.
11. In a Constraint Satisfaction Problem (CSP), what defines the scope of a solution?
A. Finding a path from start to goal
B. Assigning values to variables such that all constraints are satisfied
C. Minimizing the gradient of the objective function
D. Sorting the variables in ascending order
Correct Answer: Assigning values to variables such that all constraints are satisfied
Explanation: A CSP solution is a complete assignment of values to variables from their respective domains satisfying all problem constraints.
12. Which of the following is an example of a unary constraint?
A. X ≠ Y
B. X + Y = Z
C. X ≤ 10
D. AllDiff(X, Y, Z)
Correct Answer: X ≤ 10
Explanation: A unary constraint restricts the value of a single variable. X ≠ Y is binary, and X + Y = Z is ternary (or higher).
13. What is Forward Checking in the context of CSPs?
A. Checking all constraints after a complete assignment is made
B. Terminating the search when a variable has no legal values left after an assignment
C. Ordering variables by the number of constraints they participate in
D. Running BFS on the constraint graph
Correct Answer: Terminating the search when a variable has no legal values left after an assignment
Explanation: When a variable X is assigned, Forward Checking looks at its unassigned neighbors Y and deletes values from Y's domain that are inconsistent with X. If a domain becomes empty, it backtracks immediately.
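A small Python sketch of this pruning step, on a made-up map-coloring fragment (variable and color names are illustrative):

```python
def forward_check(domains, neighbors, var, value):
    """After assigning var = value, prune that value from unassigned neighbors.
    Returns a pruned copy of domains, or None if some domain empties (backtrack)."""
    new = {v: set(d) for v, d in domains.items()}
    new[var] = {value}
    for nbr in neighbors[var]:
        new[nbr].discard(value)        # inconsistent with var = value
        if not new[nbr]:
            return None                # dead end detected early
    return new

# Hypothetical fragment: WA is adjacent to NT and SA (inequality constraints)
domains = {'WA': {'red'}, 'NT': {'red', 'green'}, 'SA': {'red', 'green', 'blue'}}
neighbors = {'WA': ['NT', 'SA']}
pruned = forward_check(domains, neighbors, 'WA', 'red')
print(sorted(pruned['NT']), sorted(pruned['SA']))  # ['green'] ['blue', 'green']
```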
14. In CSP heuristics, what does the Minimum Remaining Values (MRV) heuristic suggest?
A. Choose the variable with the fewest legal values remaining
B. Choose the value that rules out the fewest choices for neighbors
C. Choose the variable involved in the most constraints
D. Choose the variable with the largest domain
Correct Answer: Choose the variable with the fewest legal values remaining
Explanation: MRV acts as a 'fail-first' heuristic, picking the variable most likely to cause a failure soon, thereby pruning the search tree earlier.
15. Which algorithm is used to enforce Arc Consistency in CSPs?
A. Min-Max
B. AC-3
C. Prim's Algorithm
D. K-Means
Correct Answer: AC-3
Explanation: AC-3 (Arc Consistency Algorithm #3) is a standard algorithm used to reduce the domains of variables by ensuring that for every value in a domain, there exists a valid value in the domain of connected variables.
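A compact, simplified AC-3 sketch in Python on a two-variable CSP with the constraint X ≠ Y (variable names and domains are made up):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: remove values of x that have no supporting value in y's domain.
    constraints maps each directed arc (x, y) to a predicate ok(vx, vy)."""
    queue = deque(constraints)                     # start with every arc
    while queue:
        x, y = queue.popleft()
        ok = constraints[(x, y)]
        revised = False
        for vx in set(domains[x]):
            if not any(ok(vx, vy) for vy in domains[y]):
                domains[x].discard(vx)             # vx has no support in D(y)
                revised = True
        if revised:                                # re-check arcs pointing at x
            for (a, b) in constraints:
                if b == x and a != y:
                    queue.append((a, b))
    return all(domains.values())                   # False if some domain emptied

doms = {'X': {1}, 'Y': {1, 2}}
cons = {('X', 'Y'): lambda a, b: a != b, ('Y', 'X'): lambda a, b: a != b}
print(ac3(doms, cons), doms)  # True - Y's domain shrinks to {2}
```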
16. In gradient-based optimization, if the gradient of the function at a point is zero, what does it indicate?
A. The function is undefined
B. The point is a critical point (maximum, minimum, or saddle point)
C. The algorithm has failed
D. The learning rate is too high
Correct Answer: The point is a critical point (maximum, minimum, or saddle point)
Explanation: When ∇f(x) = 0, the slope is zero, indicating a local extremum (min/max) or a saddle point.
17. What is a major drawback of the standard Hill Climbing algorithm?
A. It requires too much memory
B. It always finds the global maximum
C. It can get stuck in local maxima
D. It is computationally slower than genetic algorithms
Correct Answer: It can get stuck in local maxima
Explanation: Hill Climbing is a greedy local search that moves only uphill. Once it reaches a peak where no neighbor is higher (a local maximum), it stops, even if a higher peak exists elsewhere.
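The stuck-at-a-local-maximum behavior can be demonstrated with a short Python sketch on a hypothetical objective defined over the integers 0..8:

```python
def hill_climb(f, x, neighbors):
    """Greedy ascent: move to the best neighbor until none improves f."""
    while True:
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x                   # local maximum: no uphill neighbor
        x = best

# Made-up landscape: local max at x=2 (value 4), global max at x=8 (value 9)
f = {0: 0, 1: 2, 2: 4, 3: 1, 4: 0, 5: 3, 6: 5, 7: 7, 8: 9}.get
nbrs = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 8]

print(hill_climb(f, 0, nbrs))  # 2 -> stuck at the local maximum, misses 8
print(hill_climb(f, 5, nbrs))  # 8 -> the global maximum, from a luckier start
```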
18. In Gradient Descent, the update rule is x ← x − α∇f(x). What does α represent?
A. The discount factor
B. The learning rate (step size)
C. The momentum
D. The heuristic value
Correct Answer: The learning rate (step size)
Explanation: α is the learning rate, which controls how large a step is taken in the direction of the negative gradient.
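A minimal Python sketch of the update rule, minimizing the made-up function f(x) = (x − 3)², whose gradient is 2(x − 3):

```python
def gradient_descent(grad, x, alpha=0.1, steps=100):
    """Repeat the update x <- x - alpha * grad(x)."""
    for _ in range(steps):
        x = x - alpha * grad(x)
    return x

# Minimum of f(x) = (x - 3)^2 is at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x=0.0)
print(round(x_min, 4))  # 3.0
```

With alpha too large (here, anything above 1.0 for this f) the iterates would overshoot and diverge; with alpha tiny, convergence slows dramatically, as question 47 below notes.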
19. Which metaheuristic algorithm is inspired by the process of natural selection?
A. Simulated Annealing
B. Genetic Algorithms
C. Tabu Search
D. Particle Swarm Optimization
Correct Answer: Genetic Algorithms
Explanation: Genetic Algorithms use operations like selection, crossover, and mutation, mimicking biological evolution.
20. In Simulated Annealing, what is the purpose of allowing 'bad' moves (moves that decrease the objective function value)?
A. To speed up the calculation
B. To escape local optima
C. To increase memory usage
D. To satisfy constraints
Correct Answer: To escape local optima
Explanation: By accepting worse moves with a certain probability (which decreases over time), the algorithm can jump out of local peaks to potentially find the global optimum.
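The cooling behavior can be illustrated with the standard acceptance rule exp(Δ/T) (maximization convention; the Δ and temperature values below are illustrative):

```python
import math

def accept_probability(delta, T):
    """Probability of accepting a move whose objective change is delta at temperature T.
    Uphill moves (delta > 0) are always accepted; downhill ones with prob exp(delta/T)."""
    return 1.0 if delta > 0 else math.exp(delta / T)

# The same bad move (delta = -2) becomes less and less acceptable as T cools
for T in (10.0, 1.0, 0.1):
    print(T, round(accept_probability(-2, T), 4))
```

Early on (high T) the walk is nearly random, so it can hop out of local peaks; as T → 0 it degenerates into plain hill climbing.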
21. The branching factor in a search tree refers to:
A. The depth of the tree
B. The number of successors a node has
C. The cost of the cheapest path
D. The number of nodes in the frontier
Correct Answer: The number of successors a node has
Explanation: Branching factor defines the width of the tree, specifically the number of children generated by a node.
22. Which search algorithm combines the space efficiency of DFS with the completeness of BFS?
A. Iterative Deepening Search (IDS)
B. Bidirectional Search
C. Greedy Search
D. A* Search
Correct Answer: Iterative Deepening Search (IDS)
Explanation: IDS runs DFS with increasing depth limits. It has the memory complexity of DFS (O(bd)) and the completeness/optimality of BFS.
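A short Python sketch of IDS on a hypothetical graph, where the goal G sits at depth 2:

```python
def depth_limited(graph, node, goal, limit):
    """DFS that refuses to descend below `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited(graph, child, goal, limit - 1)
        if path:
            return [node] + path
    return None

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        path = depth_limited(graph, start, goal, limit)
        if path:
            return path
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
print(ids(graph, 'A', 'G'))  # ['A', 'C', 'G'] - found at the shallowest depth, 2
```

Each pass only ever stores one DFS path, which is where the O(bd) memory bound comes from.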
23. What is the space complexity of Depth-First Search (DFS) with branching factor b and maximum depth m?
A. O(b^m)
B. O(bm)
C. O(b + m)
D. O(m^b)
Correct Answer: O(bm)
Explanation: DFS only needs to store the path from the root to the leaf and the unexpanded siblings at each level, resulting in linear space complexity relative to depth.
24. A search algorithm is complete if:
A. It always finds the lowest cost solution
B. It is guaranteed to find a solution when one exists
C. It uses minimal memory
D. It finishes in polynomial time
Correct Answer: It is guaranteed to find a solution when one exists
Explanation: Completeness is the property that if a solution exists in the state space, the algorithm will eventually report it.
25. In Reinforcement Learning, what entity interacts with the environment?
A. The Critic
B. The Agent
C. The Generator
D. The Supervisor
Correct Answer: The Agent
Explanation: In the RL framework, the Agent performs actions in the Environment and receives observations and rewards.
26. What does a Markov Decision Process (MDP) usually consist of?
A. A set of neurons and weights
B. States, Actions, Transition Probabilities, and Rewards
C. A static database of labeled images
D. A set of logical rules and axioms
Correct Answer: States, Actions, Transition Probabilities, and Rewards
Explanation: An MDP is a tuple (S, A, P, R) used to formally describe an environment for reinforcement learning.
27. In Reinforcement Learning, the Exploration vs. Exploitation dilemma refers to:
A. Choosing between using CPU or GPU
B. Choosing between trying new actions to gather information or using known actions to maximize reward
C. Choosing between supervised and unsupervised learning
D. Choosing between A* and BFS
Correct Answer: Choosing between trying new actions to gather information or using known actions to maximize reward
Explanation: The agent must balance exploring the environment to find better strategies (Exploration) and using its current knowledge to get high rewards (Exploitation).
28. What is the role of the discount factor γ in Reinforcement Learning?
A. It determines the learning rate
B. It balances the importance of immediate rewards vs. future rewards
C. It reduces the dimensionality of the state space
D. It is used to calculate the gradient
Correct Answer: It balances the importance of immediate rewards vs. future rewards
Explanation: Gamma (γ) determines how much the agent cares about future rewards. A value close to 0 makes the agent myopic (greedy); close to 1 makes it far-sighted.
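The effect of γ can be shown with a tiny Python sketch that computes the discounted return of a made-up reward sequence:

```python
def discounted_return(rewards, gamma):
    """G = r0 + gamma*r1 + gamma^2*r2 + ..."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

rewards = [1, 1, 1, 1]
print(discounted_return(rewards, 0.0))   # 1.0 -> myopic: only the immediate reward counts
print(discounted_return(rewards, 1.0))   # 4.0 -> far-sighted: all rewards count fully
print(discounted_return(rewards, 0.5))   # 1.875
```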
29. In RL, a Policy is defined as:
A. The sequence of rewards received
B. A mapping from states to actions
C. The transition probability matrix
D. The memory buffer of past experiences
Correct Answer: A mapping from states to actions
Explanation: A policy π defines the behavior of the agent, specifying which action to take in a given state.
30. Which of the following is a local search algorithm?
A. Merge Sort
B. Hill Climbing
C. Breadth-First Search
D. Depth-First Search
Correct Answer: Hill Climbing
Explanation: Hill Climbing operates on a single current state and moves to neighbors, without keeping track of the path (local search).
31. If a problem has a solution at depth 5, but the state space is infinite, which algorithm risks never terminating?
A. Breadth-First Search
B. Iterative Deepening Search
C. Depth-First Search
D. A* Search with admissible heuristic
Correct Answer: Depth-First Search
Explanation: DFS dives deep. If there is an infinite path on the 'left' side of the tree, DFS will follow it forever and miss the solution at depth 5 on the 'right'.
32. In the 8-Queens problem modeled as a CSP, what is a binary constraint?
A. A queen must be on the board
B. No two queens can attack each other
C. There must be 8 queens total
D. Queens are represented by integers
Correct Answer: No two queens can attack each other
Explanation: This constraint involves the relationship between the positions of two distinct queens (variables), making it binary.
33. What is the Least Constraining Value heuristic in CSP?
A. Pick the value that rules out the fewest choices for neighboring variables
B. Pick the value that rules out the most choices
C. Pick the smallest numerical value
D. Pick the value randomly
Correct Answer: Pick the value that rules out the fewest choices for neighboring variables
Explanation: This heuristic attempts to leave the maximum flexibility for subsequent variable assignments.
34. In Local Beam Search with k = 5, what happens at each step?
A. It expands the single best node 5 times
B. It keeps track of the 5 best states found so far
C. It runs 5 independent hill climbing searches
D. It backtracks 5 steps
Correct Answer: It keeps track of the 5 best states found so far
Explanation: Local Beam Search keeps k states in memory. At each step, it generates all successors of all k states and selects the best k from the complete list.
35. For the Traveling Salesperson Problem (TSP), what is the nature of the state space?
A. Continuous and infinite
B. Discrete and finite (combinatorial)
C. Continuous and finite
D. Differentiable
Correct Answer: Discrete and finite (combinatorial)
Explanation: TSP involves visiting a finite set of cities. The search space consists of permutations of these cities, which is discrete and factorial in size.
36. In A* search, what is the Manhattan Distance heuristic often used for?
A. Route planning on a map
B. Grid-based navigation (like the 8-puzzle or robot pathfinding)
C. Chess
D. Solving Sudoku
Correct Answer: Grid-based navigation (like the 8-puzzle or robot pathfinding)
Explanation: Manhattan distance (|x1 − x2| + |y1 − y2|) is admissible for grid movements where diagonal moves are not allowed.
37. What is a plateau in the context of state space landscapes?
A. A flat area where the objective function value is practically constant
B. The global maximum
C. A deep valley representing low cost
D. The initial state
Correct Answer: A flat area where the objective function value is practically constant
Explanation: A plateau makes local search difficult because the algorithm cannot determine which direction leads to improvement (zero gradient).
38. Which component of an RL agent estimates the 'goodness' of being in a specific state?
A. The Policy
B. The Value Function
C. The Action Space
D. The Environment
Correct Answer: The Value Function
Explanation: The Value Function V(s) predicts the expected cumulative future reward starting from state s.
39. What is the difference between Stochastic Gradient Descent (SGD) and Batch Gradient Descent?
A. SGD uses the entire dataset per step; Batch uses one sample
B. SGD uses a single sample (or mini-batch) per step; Batch uses the entire dataset
C. SGD is deterministic; Batch is random
D. SGD is for classification; Batch is for regression
Correct Answer: SGD uses a single sample (or mini-batch) per step; Batch uses the entire dataset
Explanation: Batch gradient descent calculates the gradient using the whole dataset (stable but slow). SGD approximates the gradient using one sample (noisy but faster).
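The contrast can be sketched in Python on a tiny made-up dataset; both variants fit y = w·x by minimizing squared error, differing only in how much data each gradient step sees:

```python
# Hypothetical dataset generated by y = 2x, so the true weight is w = 2
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def batch_grad(w):
    """Gradient of the mean squared error over the ENTIRE dataset."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def sgd_grad(w, sample):
    """Gradient estimated from a SINGLE sample (noisy in general)."""
    x, y = sample
    return 2 * (w * x - y) * x

w_batch = 0.0
for _ in range(200):                   # batch: one stable step per full pass
    w_batch -= 0.05 * batch_grad(w_batch)

w_sgd = 0.0
for _ in range(200):                   # SGD: one step per individual sample
    for sample in data:
        w_sgd -= 0.05 * sgd_grad(w_sgd, sample)

print(round(w_batch, 4), round(w_sgd, 4))  # 2.0 2.0
```

On this noise-free toy data both converge to the same weight; SGD's per-sample steps become genuinely noisy when the data is not exactly linear.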
40. Which of the following problems is best modeled as a Constraint Satisfaction Problem?
A. Predicting stock prices
B. Map Coloring
C. Classifying emails as spam
D. Driving a car
Correct Answer: Map Coloring
Explanation: Map coloring requires assigning colors (values) to regions (variables) such that no adjacent regions share the same color (constraints).
41. In search algorithms, path cost is usually assumed to be:
A. Negative
B. Additive
C. Multiplicative
D. Exponential
Correct Answer: Additive
Explanation: The cost of a path is typically the sum of the costs of the individual actions along that path.
42. What is the main advantage of Bidirectional Search?
A. It guarantees finding the solution without memory
B. It can reduce time complexity from O(b^d) to O(b^(d/2))
C. It works on infinite graphs
D. It does not require a goal state
Correct Answer: It can reduce time complexity from O(b^d) to O(b^(d/2))
Explanation: By searching forward from the start and backward from the goal simultaneously, the search depth is effectively halved.
43. When defining an AI problem, abstraction refers to:
A. Writing abstract code classes
B. Removing irrelevant details from the real world to create a model
C. Making the problem harder
D. Visualizing the search tree
Correct Answer: Removing irrelevant details from the real world to create a model
Explanation: Abstraction is the process of keeping only the necessary details to solve the problem (e.g., ignoring the color of the car when planning a route).
44. Which of the following is a metaheuristic?
A. Binary Search
B. Particle Swarm Optimization (PSO)
C. Linear Regression
D. Bubble Sort
Correct Answer: Particle Swarm Optimization (PSO)
Explanation: PSO is a higher-level procedure (metaheuristic) designed to find a sufficiently good solution to an optimization problem, inspired by bird flocking.
45. In A* search, if h(n) overestimates the actual cost to the goal, the algorithm:
A. Remains optimal
B. Becomes incomplete
C. Is no longer guaranteed to be optimal
D. Becomes slower than BFS
Correct Answer: Is no longer guaranteed to be optimal
Explanation: If the heuristic is inadmissible (overestimates), A* might think a good path is too expensive and choose a suboptimal path instead.
46. What is the Reward Hypothesis in Reinforcement Learning?
A. Agents only learn if they are punished
B. All goals can be described by the maximization of expected cumulative reward
C. Rewards must always be positive
D. The reward is always equal to the state value
Correct Answer: All goals can be described by the maximization of expected cumulative reward
Explanation: This is the central axiom of RL: the goal of the agent is formalized as maximizing the return (cumulative reward).
47. What is the primary disadvantage of using a very small learning rate in optimization?
A. The algorithm will diverge
B. The algorithm will overshoot the minimum
C. Convergence will be very slow
D. It increases the variance of the gradient
Correct Answer: Convergence will be very slow
Explanation: A tiny step size means the algorithm makes very little progress in each iteration, requiring many more iterations to reach the optimum.
48. In the context of Genetic Algorithms, Mutation is used to:
A. Combine two parents
B. Select the best individuals
C. Maintain genetic diversity and prevent premature convergence
D. Calculate the fitness function
Correct Answer: Maintain genetic diversity and prevent premature convergence
Explanation: Mutation introduces random changes to genes, ensuring the algorithm can explore new areas of the search space that parent combinations couldn't reach.
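A minimal Python sketch of bit-flip mutation on a bit-string genome (the mutation rate and genome here are illustrative):

```python
import random

def mutate(bits, rate, rng):
    """Flip each gene independently with probability `rate`."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

rng = random.Random(42)                # fixed seed for a reproducible sketch
parent = [0, 0, 0, 0, 0, 0, 0, 0]
child = mutate(parent, rate=0.2, rng=rng)
print(child)                           # the parent with a few randomly flipped genes
```

Even if every individual in the population converged to the same genome, mutation keeps injecting variation that crossover alone could never produce.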
49. Which search strategy is best suited for a game like Chess where the search space is too large to fully traverse?
A. Uniform Cost Search
B. Depth-Limited Search with Heuristic Evaluation (Minimax)
C. Random Walk
D. Genetic Algorithms
Correct Answer: Depth-Limited Search with Heuristic Evaluation (Minimax)
Explanation: In large games, we search to a specific depth limit and use a heuristic evaluation function to estimate the value of the positions at that depth.
50. An optimization problem where the objective function is Convex is desirable because:
A. It has no minimum
B. Any local minimum is also a global minimum
C. It requires genetic algorithms to solve
D. It is always discrete
Correct Answer: Any local minimum is also a global minimum
Explanation: Convex functions are bowl-shaped; gradient descent is guaranteed to find the global optimum.