1. What is the primary goal of Artificial Intelligence (AI)?
What is AI
Easy
A.To design visually appealing user interfaces for software.
B.To create machines that can perform tasks that typically require human intelligence.
C.To increase the storage capacity of digital devices.
D.To build faster and more efficient computer hardware.
Correct Answer: To create machines that can perform tasks that typically require human intelligence.
Explanation:
Artificial Intelligence is a field of computer science focused on creating systems that can perform tasks like learning, reasoning, problem-solving, and understanding language, all of which are characteristic of human intelligence.
2. An AI system that is designed to perform one specific task, such as playing chess or identifying faces, is known as:
Evolution and types of AI (narrow, general)
Easy
A.Sentient AI
B.Artificial General Intelligence (AGI)
C.Narrow AI (or Weak AI)
D.Artificial Superintelligence (ASI)
Correct Answer: Narrow AI (or Weak AI)
Explanation:
Narrow AI, also called Weak AI, is focused on a single, specific task. Most of the AI we use today, like virtual assistants and image recognition software, falls into this category.
3. Which of the following is a common application of AI in the healthcare domain?
Applications across domains (business, healthcare, automation, vision, language)
Easy
A.Keeping track of employee work schedules.
B.Manufacturing surgical tools.
C.Designing hospital building layouts.
D.Analyzing medical images like X-rays to help detect diseases.
Correct Answer: Analyzing medical images like X-rays to help detect diseases.
Explanation:
AI, particularly computer vision, is widely used in healthcare to analyze medical scans and images, assisting doctors in diagnosing conditions like cancer or fractures more accurately and quickly.
4. What are TensorFlow and PyTorch?
Modern AI Toolkits (TensorFlow, PyTorch)
Easy
A.Popular open-source software libraries for machine learning.
B.Cloud storage services.
C.Types of computer processors.
D.Web browsers for AI research.
Correct Answer: Popular open-source software libraries for machine learning.
Explanation:
TensorFlow (developed by Google) and PyTorch (developed by Meta/Facebook) are two of the most popular software libraries that provide tools and frameworks for building and training machine learning and deep learning models.
5. Which concept in Responsible AI deals with ensuring that an AI system's decisions are understandable and explainable to humans?
Responsible AI
Easy
A.Scalability
B.Transparency (or Explainability)
C.Performance
D.Availability
Correct Answer: Transparency (or Explainability)
Explanation:
Transparency, or Explainability, is a key principle of Responsible AI. It focuses on making the 'black box' of AI decision-making more understandable, so that humans can trust, audit, and debug the system's outputs.
6. In the context of an AI problem like solving a puzzle, what does a 'state' in a 'state space' represent?
AI Problem Modeling & Search Concepts: Defining AI problems as State Space and Search Problems
Easy
A.The person who is solving the puzzle.
B.A specific configuration or arrangement of the puzzle at a given moment.
C.The time it takes to solve the puzzle.
D.The final solution to the puzzle.
Correct Answer: A specific configuration or arrangement of the puzzle at a given moment.
Explanation:
A state space consists of all possible configurations of a problem. A single 'state' is a snapshot of one of those configurations, such as the positions of all pieces on a chessboard at a particular turn.
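The idea of a state as a snapshot can be made concrete with a short sketch. This is a toy illustration, assuming an 8-puzzle on a 3x3 grid with 0 marking the blank; the names `successors` and `start` are my own, not from the quiz.

```python
# Toy sketch: a 'state' is one concrete configuration of the 8-puzzle,
# stored as a 9-tuple read row by row (0 marks the blank square).

def successors(state):
    """Return every state reachable by sliding one tile into the blank."""
    result = []
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            nxt = list(state)
            swap = r * 3 + c
            nxt[blank], nxt[swap] = nxt[swap], nxt[blank]
            result.append(tuple(nxt))
    return result

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)  # blank in the centre
print(len(successors(start)))         # 4 neighbouring states
```

Enumerating every tuple reachable this way would generate the full state space.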
7. The hypothetical concept of an AI that could understand, learn, and apply knowledge across a wide range of tasks at a human level is called:
Evolution and types of AI (narrow, general)
Easy
A.Artificial General Intelligence (AGI)
B.Limited Memory AI
C.Narrow AI
D.Reactive AI
Correct Answer: Artificial General Intelligence (AGI)
Explanation:
AGI refers to a form of AI with human-like cognitive abilities, capable of solving unfamiliar problems and learning new tasks without being explicitly programmed for them. It is still a theoretical concept.
8. Which of the following fields is considered a core foundation for the development of Artificial Intelligence?
Foundations of AI
Easy
A.Computer Science
B.Geology
C.World History
D.Marine Biology
Correct Answer: Computer Science
Explanation:
AI is a branch of computer science. Its foundations are also deeply rooted in mathematics, psychology, linguistics, and neuroscience, but computer science provides the primary framework for its implementation.
9. An AI system that can understand and respond to human language, like a chatbot or a voice assistant, is an application of which AI subfield?
Applications across domains (business, healthcare, automation, vision, language)
Easy
A.Robotics
B.Computer Vision
C.Natural Language Processing (NLP)
D.Expert Systems
Correct Answer: Natural Language Processing (NLP)
Explanation:
Natural Language Processing (NLP) is the area of AI focused on enabling computers to understand, interpret, and generate human language, both text and speech.
10. What is typically the first step in a standard AI/machine learning workflow?
Introduction to AI Workflows & Data-Centric Modeling
Easy
A.Gathering and preparing the data.
B.Training the algorithm.
C.Evaluating the model's accuracy.
D.Deploying the model to production.
Correct Answer: Gathering and preparing the data.
Explanation:
The foundation of any successful AI model is high-quality data. Therefore, the first step is always to collect, clean, and prepare the data that will be used for training and testing.
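A minimal sketch of what "collect, clean, and prepare" can look like in practice, using hypothetical toy records (the field names `age` and `income` are illustrative, not from the quiz):

```python
# Minimal data-preparation sketch: drop incomplete and duplicate records
# before any training happens.
raw = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},   # missing value -> dropped
    {"age": 34, "income": 52000},     # exact duplicate -> dropped
    {"age": 29, "income": 61000},
]

# Keep only records with no missing fields.
complete = [r for r in raw if all(v is not None for v in r.values())]

# Remove exact duplicates while preserving order.
seen, cleaned = set(), []
for r in complete:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        cleaned.append(r)

print(len(cleaned))  # 2 usable records remain
```

Real pipelines add many more steps (normalization, outlier handling, labeling checks), but they follow the same pattern of transforming raw data before modeling.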
11. What is a major challenge in developing AI systems, especially those based on deep learning?
Challenges in AI problem solving
Easy
A.They run too quickly.
B.They require very little computer memory.
C.They often require a very large amount of high-quality, labeled data.
D.They are too easy for anyone to build.
Correct Answer: They often require a very large amount of high-quality, labeled data.
Explanation:
One of the biggest hurdles in AI is data acquisition. Many state-of-the-art models are 'data-hungry' and need vast datasets to learn effectively, and creating this data can be expensive and time-consuming.
12. The task of automatically grouping similar items together from an unlabeled dataset is known as:
Key AI problems and techniques
Easy
A.Clustering
B.Classification
C.Regression
D.Reinforcement Learning
Correct Answer: Clustering
Explanation:
Clustering is an unsupervised learning technique where the algorithm groups data points based on their similarities, without being told what the groups are beforehand. An example is grouping customers based on purchasing behavior.
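The customer-spend example above can be sketched with a tiny one-dimensional k-means loop. This is a toy illustration under assumed data (the spend values and starting centers are hypothetical), not a production clustering routine:

```python
# Toy 1-D k-means: group unlabeled numbers by repeatedly assigning each
# point to its nearest center, then moving each center to its group's mean.
def kmeans_1d(points, centers, iters=10):
    groups = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[i].append(p)
        # Update step: move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

spend = [10, 12, 11, 95, 101, 98]            # two obvious spending groups
centers, groups = kmeans_1d(spend, [0.0, 50.0])
print(sorted(centers))                        # -> [11.0, 98.0]
```

No labels were provided; the algorithm discovered the low-spend and high-spend groups on its own, which is exactly what distinguishes clustering from classification.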
13. Which of the following abilities is a key component of intelligence, both human and artificial?
What is Intelligence
Easy
A.The ability to lift heavy objects.
B.The ability to hold one's breath.
C.The ability to change color.
D.The ability to learn from experience.
Correct Answer: The ability to learn from experience.
Explanation:
Learning from experience is a fundamental aspect of intelligence. It allows an entity to adapt its behavior and improve its performance on future tasks based on past outcomes.
14. What is the primary focus of a 'data-centric' approach to building AI systems?
Data-Centric Modeling
Easy
A.Using the most powerful hardware available.
B.Constantly changing the model's algorithm.
C.Systematically improving the quality of the dataset.
D.Writing the code in multiple programming languages.
Correct Answer: Systematically improving the quality of the dataset.
Explanation:
A data-centric approach prioritizes improving the data used to train a model over tweaking the model's code or architecture. The philosophy is that high-quality data is often more impactful for performance than a slightly better algorithm.
15. In a simple search problem like finding the best route between two cities on a map, the 'problem space' consists of:
Characteristics of AI Problem Spaces
Easy
A.Only the start city and the end city.
B.The type of car you are driving.
C.All the possible routes and cities that can be visited.
D.The weather conditions.
Correct Answer: All the possible routes and cities that can be visited.
Explanation:
The problem space, or state space, encompasses all possible states of the problem. For a route-finding problem, this includes all potential paths and intermediate locations between the start and goal.
16. In AI, what does the term 'bias' often refer to?
Challenges in AI problem solving
Easy
A.A systematic error where the model produces unfair or prejudiced outcomes.
B.The model's ability to make fair decisions.
C.The physical weight of the computer running the model.
D.A type of computer virus.
Correct Answer: A systematic error where the model produces unfair or prejudiced outcomes.
Explanation:
Bias in AI refers to skewed results that are prejudiced against certain groups or outcomes. This often originates from biased training data that doesn't accurately represent the real world.
17. A key characteristic of many modern AI systems is their ability to improve their performance on a task over time without being explicitly reprogrammed. This is known as:
Characteristics of artificial intelligence
Easy
A.Hard-coding
B.Learning
C.Static programming
D.Rebooting
Correct Answer: Learning
Explanation:
Learning is a hallmark of AI. Machine learning algorithms, for example, can analyze data and 'learn' patterns from it, allowing them to improve their accuracy and decision-making over time as they are exposed to more data.
18. What is an example of AI being used in business for automation?
Applications across domains (business, healthcare, automation, vision, language)
Easy
A.Using AI to sort and respond to customer support emails automatically.
B.Designing a new company logo by hand.
C.Hiring more employees for manual data entry.
D.Manually creating financial reports in a spreadsheet.
Correct Answer: Using AI to sort and respond to customer support emails automatically.
Explanation:
AI is excellent for automating repetitive tasks. Using Natural Language Processing (NLP), an AI can categorize incoming emails and send automated responses to common queries, freeing up human agents for more complex issues.
19. The primary function of a machine learning library like PyTorch is to:
Modern AI Toolkits (TensorFlow, PyTorch)
Easy
A.Design the physical circuits for an AI chip.
B.Store large video files.
C.Provide a text editor for writing code.
D.Offer pre-built tools and functions to simplify the process of creating AI models.
Correct Answer: Offer pre-built tools and functions to simplify the process of creating AI models.
Explanation:
AI toolkits and libraries abstract away much of the low-level complexity, providing developers with high-level building blocks for tasks like creating neural network layers, calculating gradients, and training models.
20. What is the principle of 'Fairness' in Responsible AI?
Responsible AI
Easy
A.Ensuring the AI model does not produce systematically biased or discriminatory outcomes against certain groups.
B.Ensuring the AI model is profitable for the company.
C.Ensuring the AI model is written in a popular programming language.
D.Ensuring the AI model runs as fast as possible.
Correct Answer: Ensuring the AI model does not produce systematically biased or discriminatory outcomes against certain groups.
Explanation:
Fairness is a critical pillar of Responsible AI. It aims to mitigate and correct for biases in data and algorithms to ensure that the AI system's decisions are equitable and just for all user groups.
21. A GPS navigation system needs to find the shortest route between two locations in a city. In a state space search model for this problem, what would be the most accurate representation of a state and an action?
AI Problem Modeling & Search Concepts: Defining AI problems as State Space and Search Problems
Medium
A.State: A specific vehicle. Action: The speed of the vehicle.
B.State: The destination city. Action: Calculating the total distance.
C.State: The list of all possible routes. Action: Choosing the route with the fewest turns.
D.State: The current geographical intersection or junction. Action: Driving along a road segment to the next intersection.
Correct Answer: State: The current geographical intersection or junction. Action: Driving along a road segment to the next intersection.
Explanation:
In a state space representation, a 'state' is a complete snapshot of the world relevant to the problem. For routing, the agent's current location (an intersection) is a state. An 'action' transitions the agent from one state to another; in this case, driving down a road to the next intersection.
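The intersection-as-state, road-segment-as-action model can be made concrete with breadth-first search over a tiny road network. The graph below is hypothetical, and BFS finds the route with the fewest segments (a real GPS would weight edges by distance and use something like Dijkstra or A*):

```python
# Sketch: breadth-first search over a toy road network.
# States are intersections; an action drives one road segment.
from collections import deque

roads = {                      # adjacency list of intersections
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_route(start, goal):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(shortest_route("A", "E"))  # -> ['A', 'B', 'D', 'E']
```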
22. An AI model used for loan approvals is trained on historical data. If the historical data contains biases where a certain demographic was unfairly denied loans, the resulting AI model is likely to perpetuate this bias. This issue is primarily a failure of:
Responsible AI
Medium
A.Data privacy
B.Model scalability
C.Algorithmic efficiency
D.Fairness and equity
Correct Answer: Fairness and equity
Explanation:
This is a classic example of algorithmic bias. The AI system learns and amplifies existing biases present in the training data, leading to unfair outcomes for certain groups. Responsible AI practices focus on identifying and mitigating such fairness issues.
23. A company develops an AI system that is exceptionally good at translating languages but cannot perform any other task, such as identifying objects in images or composing music. This system is a prime example of:
Evolution and types of AI (narrow, general)
Medium
A.Artificial General Intelligence (AGI)
B.Artificial Narrow Intelligence (ANI)
C.Artificial Superintelligence (ASI)
D.Sentient AI
Correct Answer: Artificial Narrow Intelligence (ANI)
Explanation:
Artificial Narrow Intelligence (ANI), also known as Weak AI, refers to AI systems that are designed and trained for a particular task. They operate within a limited, pre-defined range. AGI, in contrast, would possess human-like intelligence across a wide range of tasks.
24. An AI team has a reasonably good model for detecting manufacturing defects but wants to improve its performance. Instead of focusing on hyperparameter tuning or trying a more complex architecture, the team decides to spend its effort on acquiring more diverse and accurately labeled images of defects. This approach is best described as:
Introduction to AI Workflows & Data-Centric Modeling
Medium
A.Model-centric AI
B.Data-centric AI
C.Algorithm-centric AI
D.Hardware-centric AI
Correct Answer: Data-centric AI
Explanation:
Data-centric AI is an approach to AI development that focuses on systematically improving the quality and quantity of the dataset to enhance model performance, rather than iterating only on the model code or architecture (which is model-centric AI).
25. The Turing Test, proposed by Alan Turing, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. This test is most closely aligned with which definition of AI?
Foundations of AI
Medium
A.Acting humanly
B.Thinking rationally
C.Acting rationally
D.Thinking humanly
Correct Answer: Acting humanly
Explanation:
The Turing Test does not evaluate the internal thought processes (thinking humanly) or optimal decision-making (acting rationally). It focuses purely on external behavior – can the machine act in a way that a human interrogator cannot distinguish it from another human? This falls under the 'Acting Humanly' paradigm of AI.
26. A researcher is developing a novel neural network architecture and needs to frequently debug and inspect the gradients at each step of the training process. They prefer a framework that builds the computation graph as the code is executed. Which toolkit's core design philosophy is most suitable for this 'define-by-run' approach?
Modern AI Toolkits (TensorFlow, PyTorch)
Medium
A.Scikit-learn
B.PyTorch
C.Apache Spark MLlib
D.Early versions of TensorFlow (using static graphs)
Correct Answer: PyTorch
Explanation:
PyTorch is famous for its 'define-by-run' approach, which creates a dynamic computational graph. This means the graph is built on the fly as operations are performed, making it more intuitive and Pythonic, which is especially useful for debugging and building complex, dynamic models. In contrast, early TensorFlow used a 'define-and-run' static graph approach.
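The 'define-by-run' idea can be illustrated without PyTorch itself: below is a deliberately tiny, hypothetical autograd class (not the real PyTorch API) where the graph is recorded as ordinary Python executes, so a data-dependent `if` poses no problem:

```python
# Toy "define-by-run" autograd sketch: each operation records its inputs
# and gradient rules at the moment it runs, building the graph on the fly.
class Value:
    def __init__(self, data, parents=(), grad_fns=()):
        self.data, self.parents, self.grad_fns = data, parents, grad_fns
        self.grad = 0.0

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def backward(self, grad=1.0):
        self.grad += grad
        for p, fn in zip(self.parents, self.grad_fns):
            p.backward(fn(grad))

x = Value(3.0)
# The graph's shape depends on runtime data -- natural in define-by-run:
y = x * x if x.data > 0 else x + x
y.backward()
print(y.data, x.grad)  # -> 9.0 6.0
```

A static ('define-then-run') framework would instead require declaring both branches up front inside special graph-level conditionals.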
27. In the context of solving a Rubik's Cube, the problem space is characterized by a massive number of possible states (over 4.3 × 10^19) but a small, well-defined set of actions (turning a face). This immense growth in the number of states is a classic example of:
Characteristics of AI Problem Spaces
Medium
A.A non-deterministic problem
B.A partially observable environment
C.A continuous action space
D.Combinatorial explosion
Correct Answer: Combinatorial explosion
Explanation:
Combinatorial explosion refers to the extremely rapid growth of the complexity of a problem due to how the components can be combined. In problems like chess or the Rubik's Cube, the number of possible states becomes astronomically large, making exhaustive search methods impractical.
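A quick calculation shows how fast these numbers grow. The 8-puzzle's reachable-state count (9!/2) is a standard result; the branching factor of 18 for the cube (6 faces × 3 turn amounts, half-turn metric) is an assumption for illustration:

```python
# Sketch of combinatorial explosion: exact small-puzzle state count,
# then exponential b**d growth of a naive search tree.
import math

print(math.factorial(9) // 2)        # 181440 reachable 8-puzzle states

# With ~18 face turns available per state, a brute-force tree explodes:
for depth in (5, 10, 20):
    print(depth, 18 ** depth)
```

Even at depth 20 the naive tree dwarfs the cube's ~4.3 × 10^19 actual states, which is why uninformed exhaustive search is hopeless and heuristics or pruning are needed.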
28. An email service provider wants to build a system that automatically sorts incoming emails into categories like 'Primary,' 'Social,' 'Promotions,' and 'Spam.' This task is best framed as what type of machine learning problem?
Key AI problems and techniques
Medium
A.Clustering
B.Reinforcement learning
C.Regression
D.Multi-class classification
Correct Answer: Multi-class classification
Explanation:
Classification is a supervised learning task where the goal is to predict a discrete label. Since there are more than two possible categories ('Primary,' 'Social,' etc.), it is a multi-class classification problem. Clustering is unsupervised, and regression predicts a continuous value.
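The multi-class framing can be sketched with a toy rule-based scorer. The keyword lists below are hypothetical, and a real classifier would learn its weights from labeled emails; the point is only that the output is one discrete label out of several:

```python
# Toy multi-class text classifier: score each category, output one label.
KEYWORDS = {
    "Spam":       {"winner", "free", "prize"},
    "Promotions": {"sale", "discount", "deal"},
    "Social":     {"friend", "follow", "liked"},
}

def classify(email):
    words = set(email.lower().split())
    scores = {label: len(words & kws) for label, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # No keyword hits at all -> fall back to the default category.
    return best if scores[best] > 0 else "Primary"

print(classify("Huge sale this weekend extra discount inside"))  # Promotions
print(classify("Meeting notes attached"))                        # Primary
```

Contrast this discrete output with regression, which would instead return a continuous number.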
29. A hospital uses an AI system to analyze Magnetic Resonance Imaging (MRI) scans to detect and segment tumors. This application is a primary example of AI in the domain of:
Applications across domains (business, healthcare, automation, vision, language)
Medium
A.Natural Language Processing (NLP)
B.Predictive Analytics for patient readmission
C.Computer Vision
D.Robotic Process Automation (RPA)
Correct Answer: Computer Vision
Explanation:
Computer Vision is the field of AI that trains computers to interpret and understand the visual world. Analyzing medical images like MRIs to identify patterns, objects (like tumors), and anomalies is a core task in computer vision.
30. An AI agent is designed to play the card game Poker. A key challenge for the agent is that it does not know the cards held by its opponents. This lack of complete information about the game state makes the problem environment:
Challenges in AI problem solving
Medium
A.Partially observable
B.Deterministic
C.Fully observable
D.Static
Correct Answer: Partially observable
Explanation:
In a partially observable environment, the agent cannot access the complete state of the world at all times. In Poker, the opponents' hidden cards are a critical part of the state that the agent cannot see, forcing it to reason under uncertainty.
31. When modeling the 8-puzzle problem for an AI search algorithm, which of the following would be the most suitable heuristic function h(n) for an A* search?
AI Problem Modeling & Search Concepts: Defining AI problems as State Space and Search Problems
Medium
A.The number of steps taken so far.
B.The number of tiles in the correct row.
C.The Manhattan distance, which is the sum of the distances of each tile from its goal position.
D.A constant value of 1 for every state.
Correct Answer: The Manhattan distance, which is the sum of the distances of each tile from its goal position.
Explanation:
A good heuristic for A* search must be admissible (never overestimates the cost to reach the goal). The Manhattan distance is a classic admissible and effective heuristic for grid-based problems like the 8-puzzle, as it provides an informed estimate of the minimum number of moves required to solve the puzzle from the current state.
32. A company deploys a complex deep learning model for medical diagnosis. A doctor questions a specific diagnosis, but the developers cannot provide a clear reason for the output, only that it is what the model learned. This situation highlights a critical lack of:
Responsible AI
Medium
A.Scalability
B.Explainability (Transparency)
C.Performance
D.Availability
Correct Answer: Explainability (Transparency)
Explanation:
Explainability (XAI) is the ability to explain the reasoning behind an AI's decision in human-understandable terms. In high-stakes domains like healthcare, this is crucial for trust, accountability, and debugging. A 'black box' model lacks this characteristic.
33. The development of Bayesian networks, which allow AI systems to reason with uncertainty, is most directly built upon which foundational field?
Foundations of AI
Medium
A.Control Theory
B.Computer Engineering
C.Linguistics
D.Mathematics (Probability Theory)
Correct Answer: Mathematics (Probability Theory)
Explanation:
Bayesian networks are probabilistic graphical models. Their entire framework for representing knowledge and reasoning under uncertainty is built directly on the principles of probability theory, specifically Bayes' theorem.
34. If an AI system could not only write a compelling novel but also understand the emotional impact of its writing on a human and discuss its literary themes with a critic, it would be demonstrating capabilities associated with:
Evolution and types of AI (narrow, general)
Medium
A.Artificial General Intelligence (AGI)
B.A standard Reactive Machine
C.A supervised learning model
D.A purely symbolic AI system
Correct Answer: Artificial General Intelligence (AGI)
Explanation:
This scenario describes a system with broad, human-like cognitive abilities, including creativity, emotional intelligence, and abstract reasoning. These are hallmark characteristics of AGI, which aims to perform any intellectual task that a human being can.
35. In a standard supervised machine learning workflow, what is the specific purpose of holding out a test dataset?
Introduction to AI Workflows & Data-Centric Modeling
Medium
A.To tune the model's hyperparameters like learning rate or tree depth.
B.To be used for data augmentation and creating synthetic samples.
C.To train the model's primary parameters.
D.To provide a final, unbiased evaluation of the trained model's performance on unseen data.
Correct Answer: To provide a final, unbiased evaluation of the trained model's performance on unseen data.
Explanation:
The test set must be kept separate and used only once after all training and hyperparameter tuning (on the training and validation sets) is complete. This ensures an honest, unbiased measure of how the model will generalize to new, real-world data.
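The three-way split can be sketched in a few lines of stdlib Python. The 70/15/15 proportions and the function name `split` are illustrative choices, not a prescribed standard:

```python
# Sketch of a train/validation/test split with a fixed seed for
# reproducibility. The test slice is set aside and touched only once,
# after all training and tuning are finished.
import random

def split(data, train=0.7, val=0.15, seed=0):
    data = data[:]                        # don't mutate the caller's list
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * train)
    n_val = int(len(data) * val)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train_set, val_set, test_set = split(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # -> 70 15 15
```

Hyperparameters are tuned against `val_set`; `test_set` is reserved for the single final evaluation.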
36. An AI-powered thermostat learns your daily routines and automatically adjusts the temperature for comfort and energy efficiency, adapting over time as your schedule changes. Which key characteristic of AI is most prominently demonstrated?
Characteristics of artificial intelligence
Medium
A.Complete logical deduction
B.Static knowledge representation
C.Symbolic reasoning
D.Learning and adaptation
Correct Answer: Learning and adaptation
Explanation:
The core feature highlighted in this example is the system's ability to modify its behavior based on new data (your changing routines). This capacity to learn from experience and adapt its actions over time is a fundamental characteristic of modern AI.
37. An AI system analyzes customer service chat logs to automatically categorize complaints into topics like 'Billing Issue,' 'Technical Support,' or 'Product Inquiry.' This is a direct application of which specific AI technique?
Applications across domains (business, healthcare, automation, vision, language)
Medium
A.Computer Vision for image segmentation
B.Time-series forecasting
C.Natural Language Processing (NLP) for text classification
D.Reinforcement Learning for robotic control
Correct Answer: Natural Language Processing (NLP) for text classification
Explanation:
This task requires the AI to understand and categorize human language in text format. This is the central purpose of Natural Language Processing (NLP), and the specific task of assigning predefined categories to text is known as text classification.
38. Consider a self-driving car navigating a busy city street. The environment is constantly changing due to other cars and pedestrians, and the car's sensors may have noise or be occluded. This environment is best described as:
Characteristics of AI Problem Spaces
Medium
A.Dynamic, continuous, and partially observable.
B.Static, continuous, and deterministic.
C.Dynamic, discrete, and fully observable.
D.Static, discrete, and fully observable.
Correct Answer: Dynamic, continuous, and partially observable.
Explanation:
The environment is dynamic because it changes while the agent is deliberating. It is continuous in space and time. It is partially observable because sensors are imperfect and cannot see everything (e.g., a car hidden behind a truck). It is also multi-agent and stochastic.
39. According to the 'Acting Rationally' (Rational Agent) definition of AI, the primary measure of an agent's success is:
What is Intelligence, what is AI
Medium
A.Its ability to perform complex symbolic computations.
B.Whether its behavior is indistinguishable from a human's in a conversation.
C.Its ability to achieve the best expected outcome based on its knowledge and the situation.
D.How closely its internal thought processes mimic human cognition.
Correct Answer: Its ability to achieve the best expected outcome based on its knowledge and the situation.
Explanation:
The rational agent approach defines AI as the study and construction of agents that do the 'right thing.' The 'right thing' is the action that maximizes a performance measure to achieve the best expected outcome, which doesn't necessarily have to be human-like.
40. A financial firm wants to build an AI system to analyze historical stock prices and predict the price for the next day. This problem is best framed as a:
Key AI problems and techniques
Medium
A.Anomaly detection problem
B.Regression or Time-Series Forecasting problem
C.Clustering problem
D.Classification problem
Correct Answer: Regression or Time-Series Forecasting problem
Explanation:
The goal is to predict a continuous numerical value (the stock price). This is a regression task. More specifically, since the data is a sequence ordered by time, it is a time-series forecasting problem, which is a specialized form of regression.
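The regression framing can be shown with the simplest possible model: ordinary least squares on the time index, in closed form. The price series is hypothetical, and real forecasting models are far richer; the point is that the prediction is a continuous number, not a class label:

```python
# Sketch: next-day price prediction as regression on a time index,
# using the closed-form least-squares line (stdlib only).
def fit_line(prices):
    n = len(prices)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_p = sum(prices) / n
    slope = (sum((t - mean_t) * (p - mean_p) for t, p in zip(ts, prices))
             / sum((t - mean_t) ** 2 for t in ts))
    intercept = mean_p - slope * mean_t
    return slope, intercept

history = [100.0, 101.0, 102.0, 103.0]      # hypothetical closing prices
slope, intercept = fit_line(history)
forecast = slope * len(history) + intercept  # extrapolate one step ahead
print(forecast)                              # -> 104.0
```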
41. John Searle's Chinese Room argument primarily targets which specific claim about artificial intelligence?
Foundations of AI
Hard
A.The claim that a machine, by virtue of implementing a formal program, can have understanding or consciousness (Strong AI).
B.The claim that machines can process information faster than humans.
C.The claim that machines can pass the Turing Test by deceiving a human interrogator.
D.The claim that AI can be a useful tool for studying human cognition (Weak AI).
Correct Answer: The claim that a machine, by virtue of implementing a formal program, can have understanding or consciousness (Strong AI).
Explanation:
The Chinese Room argument is a thought experiment designed to challenge the central thesis of 'Strong AI'. Searle concedes that a machine could pass the Turing Test (a behavioral test) and that AI can be a useful tool (Weak AI). His core objection is that symbol manipulation (syntax), no matter how complex, is insufficient to produce genuine understanding or intentionality (semantics). The man in the room follows rules to manipulate Chinese symbols perfectly but understands nothing of Chinese, analogously arguing that a computer running a program does the same.
42. Consider a modified 8-puzzle problem on a 3x3 grid, but with an additional action: any tile adjacent to the blank space can be 'zapped' (removed from the board) at a high cost. A goal state is any configuration with tiles 1-8 in their correct positions, regardless of whether other tiles have been zapped. How does this modification fundamentally alter the state space search compared to the classic 8-puzzle?
AI Problem Modeling & Search Concepts: Defining AI problems as State Space and Search Problems
Hard
A.The state space graph becomes a tree instead of a general graph, as the 'zap' action is irreversible.
B.The state space becomes infinite, as zapping can be done repeatedly on newly adjacent tiles.
C.The state space remains finite, but the problem is no longer solvable with heuristic searches like A* because the heuristic becomes inconsistent.
D.The state space remains finite, but the graph is no longer undirected (it becomes a directed graph), and the branching factor becomes variable.
Correct Answer: The state space remains finite, but the graph is no longer undirected (it becomes a directed graph), and the branching factor becomes variable.
Explanation:
In the classic 8-puzzle, every move is reversible, making the state space graph undirected. The 'zap' action, however, is irreversible; you cannot 'un-zap' a tile. This makes the graph directed. The branching factor, which is constant in the classic puzzle (depending on the blank's position), now becomes variable. If the blank is in the center, a tile can be moved from 4 positions OR one of the 4 adjacent tiles can be zapped, changing the branching factor. The state space remains finite because there are a finite number of tiles that can be zapped. Heuristics like A* can still be used, but the heuristic function would need to be carefully designed to remain admissible.
43. An AI model for loan approvals shows 95% accuracy across all demographics. However, it is found that for a protected minority group, the False Rejection Rate (FRR) is 30%, while for the majority group, it is 5%. This situation best exemplifies a conflict between which two core principles of Responsible AI?
Responsible AI
Hard
A.Privacy and Security
B.Accountability and Reliability
C.Overall Accuracy and Fairness (Equality of Opportunity)
D.Transparency and Robustness
Correct Answer: Overall Accuracy and Fairness (Equality of Opportunity)
Explanation:
This scenario is a classic example of the accuracy-fairness trade-off. While the model has high overall accuracy, it is not fair. Specifically, it violates the principle of 'Equality of Opportunity,' which suggests that individuals from different groups with similar qualifications should have similar outcomes. A much higher False Rejection Rate for one group indicates that qualified individuals from that group are being unfairly denied loans at a disproportionate rate. Optimizing for overall accuracy alone has masked this significant fairness issue.
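How a per-group False Rejection Rate exposes what overall accuracy hides can be shown numerically. The records below are fabricated toy data chosen to reproduce the 5% vs 30% rates in the question:

```python
# Sketch: computing per-group false-rejection rates on toy loan data.
# Each record: (group, truly_qualified, model_approved).
records = (
    [("majority", True, True)] * 19 + [("majority", True, False)]
    + [("minority", True, True)] * 7 + [("minority", True, False)] * 3
)

def false_rejection_rate(records, group):
    """Fraction of truly qualified applicants in `group` the model rejects."""
    qualified = [r for r in records if r[0] == group and r[1]]
    rejected = [r for r in qualified if not r[2]]
    return len(rejected) / len(qualified)

print(false_rejection_rate(records, "majority"))  # -> 0.05
print(false_rejection_rate(records, "minority"))  # -> 0.3
```

Auditing metrics like this per group, rather than in aggregate, is how fairness violations of this kind are detected in practice.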
44. A researcher is building a novel recurrent neural network where the computational graph's structure changes at each time step based on the input data itself (e.g., activating different sub-networks). Why would PyTorch or TensorFlow 2.x (in eager mode) be fundamentally more suitable for this task than TensorFlow 1.x (with static graphs)?
Modern AI Toolkits (TensorFlow, PyTorch)
Hard
A.TensorFlow 1.x requires specialized hardware like TPUs which are not suitable for dynamic computations.
B.PyTorch and TF 2.x have a larger community and more pre-trained models for recurrent architectures.
C.TensorFlow 1.x's 'define-then-run' paradigm requires a fixed, pre-compiled graph, making it extremely cumbersome to handle data-dependent graph structures.
D.PyTorch and TF 2.x offer better visualization tools like TensorBoard for debugging dynamic graphs.
Correct Answer: TensorFlow 1.x's 'define-then-run' paradigm requires a fixed, pre-compiled graph, making it extremely cumbersome to handle data-dependent graph structures.
Explanation:
The core difference is the execution model. TensorFlow 1.x used a static computation graph ('define-then-run'). You would first define the entire graph of operations, compile it, and then run data through it. This model is efficient for static architectures but very difficult for models where the control flow (e.g., loops, conditionals) depends on the runtime values of tensors. PyTorch and TensorFlow 2.x use eager execution ('define-by-run'), where the graph is built dynamically as operations are executed. This makes implementing models with dynamic structures, like the one described, as natural as writing standard Python code with if statements and for loops.
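To see why define-by-run feels natural here, the data-dependent branching can be sketched without any framework at all — `sub_network_a` and `sub_network_b` below are hypothetical stand-ins for real sub-networks:

```python
def sub_network_a(x):
    return x * 2          # stand-in for one sub-network

def sub_network_b(x):
    return x + 100        # stand-in for another sub-network

def dynamic_forward(x):
    # Ordinary Python control flow picks which sub-network runs, so the
    # computation graph can differ for every input -- exactly the
    # data-dependent structure described in the question. In a static-graph
    # framework this branch would have to be baked into the compiled graph
    # via special conditional ops.
    if x > 0:
        return sub_network_a(x)
    return sub_network_b(x)

print(dynamic_forward(3))   # 6
print(dynamic_forward(-3))  # 97
```

In PyTorch or TF 2.x eager mode, the same `if` statement works directly on tensor values; in TF 1.x it would have required constructs like conditional graph ops defined ahead of time.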
Incorrect! Try again.
45A team has developed a highly complex image classification model that performs poorly on images taken in foggy weather, a rare condition in their initial dataset. The team's resources for new data acquisition are limited. According to data-centric AI principles, which of the following strategies is the most direct and efficient first step to address this specific problem?
Introduction to AI Workflows & Data-Centric Modeling
Hard
A.Implement a targeted data augmentation strategy by applying a synthetic fog effect to a significant portion of the existing training images.
B.Switch to a more complex model architecture like a Vision Transformer, as it may generalize better to out-of-distribution data.
C.Perform extensive hyperparameter tuning on the existing model, focusing on regularization techniques like dropout to improve robustness.
D.Increase the overall size of the training dataset by collecting 10% more general images, hoping some will include foggy conditions.
Correct Answer: Implement a targeted data augmentation strategy by applying a synthetic fog effect to a significant portion of the existing training images.
Explanation:
Data-centric AI prioritizes improving data quality over model architecture changes. The problem is a clear case of data drift or a gap in the training distribution (lack of foggy images). While collecting more data is a data-centric approach, it's not targeted or efficient given the resource constraints. Switching models or tuning hyperparameters are model-centric approaches. The most direct, efficient, and data-centric solution is to synthetically create the data that is missing. Targeted data augmentation (adding synthetic fog) directly addresses the model's specific weakness without requiring new data collection or expensive model re-architecting.
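A crude version of such targeted augmentation — blending grayscale pixel values toward white — can be sketched in a few lines. This is only an illustration; real pipelines would use proper image libraries and physically motivated fog models:

```python
def add_fog(pixel, fog_strength=0.5):
    """Blend a grayscale pixel value (0-255) toward white to simulate fog.

    fog_strength=0 leaves the pixel unchanged; fog_strength=1 is pure white.
    """
    return round((1 - fog_strength) * pixel + fog_strength * 255)

# A hypothetical 4-pixel 'image': fog compresses contrast toward white.
image = [0, 64, 128, 255]
foggy = [add_fog(p, fog_strength=0.5) for p in image]
print(foggy)  # [128, 160, 192, 255]
```

Applying this transform to a fraction of the training set synthesizes the missing foggy-weather distribution without any new data collection.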
Incorrect! Try again.
46In the context of AI problem spaces, which characteristic of a problem would most strongly suggest that a local search algorithm (like Hill Climbing) is highly likely to fail to find the optimal solution?
Characteristics of AI Problem Spaces
Hard
A.A state space where actions are irreversible (directed graph).
B.A problem that is fully observable and deterministic.
C.The presence of numerous local maxima and a 'plateau' where many states have the same heuristic value.
D.A very large or infinite branching factor.
Correct Answer: The presence of numerous local maxima and a 'plateau' where many states have the same heuristic value.
Explanation:
Local search algorithms, like Hill Climbing, operate by iteratively moving to a neighboring state that improves the current state's value. Their fundamental weakness is their 'greedy' nature. They will get stuck on local maxima because no neighboring state offers an improvement, even if the global maximum is elsewhere. Similarly, on a plateau (a flat area of the search space), the algorithm has no gradient to follow and may wander aimlessly or terminate prematurely. While other options present challenges, the presence of local maxima and plateaus is the most direct cause of failure for this specific class of algorithms.
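The failure mode can be demonstrated on a toy 1-D landscape (the heights below are invented for illustration):

```python
def hill_climb(landscape, start):
    """Greedy hill climbing on a 1-D landscape (list of heights)."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:   # no uphill neighbor: stop
            return i
        i = best

# Local maximum at index 2 (height 5); global maximum at index 6 (height 10).
heights = [1, 3, 5, 2, 4, 8, 10, 7]
print(hill_climb(heights, start=0))  # 2  (stuck on the local maximum)
print(hill_climb(heights, start=4))  # 6  (a luckier start finds the global)
```

The dependence on the starting point is why variants like random-restart hill climbing or simulated annealing exist.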
Incorrect! Try again.
47Considering the transition from Artificial Narrow Intelligence (ANI) to a hypothetical Artificial General Intelligence (AGI), which conceptual leap represents the most significant and currently unsolved challenge?
Evolution, and types of AI (narrow, general)
Hard
A.Achieving superhuman performance in a wider variety of specific, isolated tasks such as chess, Go, and protein folding.
B.Developing the ability for robust 'transfer learning' where knowledge gained in one domain (e.g., playing a video game) can be abstractly applied to a completely different domain (e.g., financial planning).
C.Improving the accuracy of natural language processing models to achieve near-perfect translation and summarization.
D.Scaling up computational power and memory to match the human brain's capacity.
Correct Answer: Developing the ability for robust 'transfer learning' where knowledge gained in one domain (e.g., playing a video game) can be abstractly applied to a completely different domain (e.g., financial planning).
Explanation:
AGI's defining characteristic is not just being good at many tasks, but the ability to generalize, reason, and transfer knowledge across disparate domains, much like humans do. Current ANI systems, even those using transfer learning, are mostly effective in closely related tasks (e.g., transferring knowledge from classifying cats to classifying dogs). The leap to AGI requires a form of abstraction and common-sense reasoning that allows knowledge to be applied to fundamentally different contexts. This challenge of cross-domain generalization is far more profound than simply mastering more narrow tasks (Option A), perfecting a single complex skill like NLP (Option C), or scaling hardware (Option D).
Incorrect! Try again.
48A self-driving car's perception system misidentifies a large, white truck against a bright sky as being part of the sky, leading to a collision. This is a real-world example of failure due to a combination of which two fundamental AI challenges?
Challenges in AI problem solving
Hard
A.The Qualification Problem and Moravec's Paradox.
B.The Frame Problem and the Symbol Grounding Problem.
C.Lack of Common Sense Knowledge and Brittleness to Out-of-Distribution Data.
D.Combinatorial Explosion and the Halting Problem.
Correct Answer: Lack of Common Sense Knowledge and Brittleness to Out-of-Distribution Data.
Explanation:
This scenario (reminiscent of a real Tesla accident) highlights two key issues. First, the model lacks 'common sense' — a human driver understands that large, solid objects can exist in the sky's visual space and must be avoided. The AI, trained on typical road scenes, lacked this implicit knowledge. Second, the specific visual configuration of a white truck against a bright sky was likely an 'out-of-distribution' or edge case not well-represented in the training data, demonstrating the 'brittleness' of the model. The other options are less applicable: The Frame Problem is about what doesn't change after an action, Symbol Grounding is about connecting symbols to the real world (related, but less specific), and the Qualification Problem is about listing all preconditions for an action.
Incorrect! Try again.
49A hospital deploys an AI to predict patient readmission risk. To ensure transparency, they use a simple, interpretable model (e.g., logistic regression). However, a more complex 'black box' model (e.g., a deep neural network) is shown to be 15% more accurate. This creates a direct tension between which two ethical AI principles?
Responsible AI
Hard
A.Accountability and Security
B.Fairness and Privacy
C.Interpretability and Beneficence (Utility)
D.Robustness and Reliability
Correct Answer: Interpretability and Beneficence (Utility)
Explanation:
This is a classic dilemma in Responsible AI. 'Interpretability' (or Explainability) is the principle that stakeholders should be able to understand why the AI made a particular decision. The simple model provides this. 'Beneficence' is the ethical principle of acting for the benefit of others; in this context, it translates to maximizing the model's utility and accuracy to provide the best possible patient care. The more accurate black box model better serves the principle of beneficence by potentially preventing more readmissions. The hospital must therefore trade off the ability to explain every decision against the potential for better patient outcomes.
Incorrect! Try again.
50The philosophical position of 'functionalism' is a key foundation for AI. It posits that mental states are constituted by their causal relations to other mental states, sensory inputs, and behavioral outputs. How does this view directly support the possibility of creating artificial general intelligence?
Foundations of AI
Hard
A.It prioritizes emotional intelligence over logical reasoning as the cornerstone of AGI.
B.It suggests that consciousness is an illusion and therefore irrelevant to creating AI.
C.It proves that intelligence can only arise from biological carbon-based structures.
D.It implies that if a machine can replicate the functional role of a mental state, it possesses that mental state, regardless of its physical substrate (e.g., silicon vs. neurons).
Correct Answer: It implies that if a machine can replicate the functional role of a mental state, it possesses that mental state, regardless of its physical substrate (e.g., silicon vs. neurons).
Explanation:
Functionalism decouples mental properties (like 'belief' or 'pain') from the specific physical material that implements them. The theory argues that what matters is the function or the 'causal role' the state plays in the system. This is a cornerstone belief for many in AI, as it provides a philosophical basis for the idea that intelligence is not unique to biological brains. If the complex web of causal relationships that constitutes human intelligence can be replicated in a different substrate, like a computer, then according to functionalism, that system would genuinely be intelligent and have mental states.
Incorrect! Try again.
51Reinforcement Learning (RL) is often modeled as a Markov Decision Process (MDP). What is the critical implication of the 'Markov Property' for an RL agent's decision-making process?
Key AI problems and techniques
Hard
A.The reward function must be deterministic and cannot have stochastic components.
B.The agent must have a complete and perfect model of the environment's dynamics to make any decision.
C.The future state depends only on the current state and the chosen action, not on the sequence of states that preceded it: P(s_{t+1} | s_t, a_t) = P(s_{t+1} | s_1, a_1, ..., s_t, a_t).
D.The effects of an action taken in a state depend on the entire prior history of states and actions.
Correct Answer: The future state depends only on the current state and the chosen action, not on the sequence of states that preceded it: P(s_{t+1} | s_t, a_t) = P(s_{t+1} | s_1, a_1, ..., s_t, a_t).
Explanation:
The Markov Property, or memorylessness, is the core assumption of an MDP. It states that the current state contains all the information necessary to decide the future. The history of how the agent arrived at the current state s_t is irrelevant for predicting the next state s_{t+1} and reward r_{t+1}. This is a powerful simplifying assumption, as it means the agent doesn't need to consider its entire history when choosing an action; it only needs to consider its present state. This makes learning value functions and policies computationally tractable.
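The memorylessness can be sketched with a toy deterministic MDP — the states and actions below are hypothetical:

```python
# A toy MDP: transitions depend ONLY on (current_state, action),
# never on how the agent got there -- that is the Markov property.
TRANSITIONS = {
    ("cool", "run"):  "warm",
    ("cool", "rest"): "cool",
    ("warm", "run"):  "overheated",
    ("warm", "rest"): "cool",
}

def step(state, action):
    # The agent's full history could be anything; only (state, action)
    # is needed to determine the successor state.
    return TRANSITIONS[(state, action)]

# Two different histories that both end in 'warm' have identical futures:
print(step("warm", "rest"))  # cool
```

Because `step` never looks at a history argument, a policy mapping states to actions is sufficient — this is what makes value iteration and Q-learning tractable.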
Incorrect! Try again.
52When modeling a problem like vehicle routing (a variant of the Traveling Salesperson Problem) as a state-space search, what is the most significant challenge that makes simple uninformed search algorithms like Breadth-First Search (BFS) computationally infeasible?
AI Problem Modeling & Search Concepts: Defining AI problems as State Space and Search Problems
Hard
A.The partial observability of the problem, where the agent doesn't know the location of all cities at once.
B.The difficulty in defining a goal state, as multiple routes can have the same minimal cost.
C.The combinatorial explosion of the state space, which grows factorially (O(n!)) with the number of cities.
D.The non-deterministic nature of the environment, where travel times can change unexpectedly.
Correct Answer: The combinatorial explosion of the state space, which grows factorially (O(n!)) with the number of cities.
Explanation:
The Traveling Salesperson Problem (TSP) and its variants are classic NP-hard problems. The state in a search formulation could be defined as '(current city, {set of visited cities})'. The number of possible paths (permutations) to visit n cities grows as n!. This factorial growth leads to a combinatorial explosion, where the size of the state space becomes astronomically large even for a moderate number of cities (e.g., 20! ≈ 2.4 × 10^18). Uninformed search algorithms like BFS, which explore the state space level by level, are completely impractical because they would need to explore an unmanageable number of states.
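The growth rate is easy to make concrete with the standard library (`tour_count` is a hypothetical helper name; it counts tours from a fixed starting city, which is (n-1)!):

```python
import math

def tour_count(n_cities):
    """Number of distinct tours through n cities from a fixed start: (n-1)!"""
    return math.factorial(n_cities - 1)

for n in (5, 10, 20):
    print(n, tour_count(n))
# 5 24
# 10 362880
# 20 121645100408832000
```

Going from 10 to 20 cities multiplies the tour count by over 300 billion — exactly the explosion that defeats level-by-level exploration.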
Incorrect! Try again.
53In medical imaging AI, a convolutional neural network (CNN) is trained to detect tumors. The model achieves high accuracy but is later found to be focusing on artifacts introduced by a specific type of X-ray machine used for most of the positive cancer cases in the training data. This is a subtle example of what specific AI pitfall?
Applications across domains (business, healthcare, automation, vision, language)
Hard
A.The vanishing gradient problem preventing deeper layers from learning meaningful features.
B.The model learning a 'shortcut' or a spurious correlation instead of the actual underlying pathology.
C.Overfitting to the training data's noise.
D.Catastrophic forgetting, where the model forgets previous knowledge when trained on new data.
Correct Answer: The model learning a 'shortcut' or a spurious correlation instead of the actual underlying pathology.
Explanation:
This is a sophisticated failure mode beyond simple overfitting. The model is not just memorizing noise; it's learning a statistically powerful but medically irrelevant feature (the scanner artifact) that is correlated with the label (tumor presence) in the biased dataset. This is known as 'shortcut learning.' The model found an easy way to get the right answer for the wrong reason. When deployed in a setting with different X-ray machines, it would fail completely, as the spurious correlation no longer holds. This highlights the danger of dataset bias and the importance of model interpretability (e.g., using saliency maps) to ensure the model is 'looking' at the right things.
Incorrect! Try again.
54What is the primary motivation for adopting a 'Data-Centric' modeling approach over a 'Model-Centric' approach, especially in mature AI projects where performance has plateaued?
Introduction to AI Workflows & Data-Centric Modeling
Hard
A.Newer deep learning models are too complex to tune effectively, so focusing on data is the only remaining option.
B.In many real-world systems, the quality and consistency of the data become the biggest lever for improvement after initial model architecture has been optimized.
C.Model-centric approaches require more computational power, which is often a bottleneck.
D.Data-centric approaches allow for the use of simpler, more interpretable models.
Correct Answer: In many real-world systems, the quality and consistency of the data become the biggest lever for improvement after initial model architecture has been optimized.
Explanation:
The 'Model-Centric' approach, common in academic research, holds the data fixed and iterates on the model architecture. The 'Data-Centric' approach holds the model architecture fixed and focuses on systematically improving the data (e.g., fixing labels, adding specific examples, augmenting data). The key insight, championed by figures like Andrew Ng, is that for many practical applications, once a reasonable model architecture is in place, the ceiling on performance is determined not by the model but by the quality of the data. Systematically engineering the data is often a more direct and effective path to better performance than endlessly tweaking a complex model.
Incorrect! Try again.
55The 'Frame Problem' in classical symbolic AI is notoriously difficult. In essence, what is the core challenge it describes?
Challenges in AI problem solving
Hard
A.The difficulty of representing all the qualifications needed for an action to be successful.
B.The challenge of representing what remains unchanged in the world after an agent performs an action, without having to explicitly state every single non-effect.
C.The problem of grounding abstract symbols (like 'chair') to real-world perceptual data.
D.The computational intractability of planning in a large state space.
Correct Answer: The challenge of representing what remains unchanged in the world after an agent performs an action, without having to explicitly state every single non-effect.
Explanation:
The Frame Problem highlights the issue of efficiently representing the consequences of actions. When an agent acts, most things in the world do not change. For example, if a robot picks up a cup, the color of the walls, the position of the sun, and the president's name all remain the same. A naive representation would require an enormous number of 'frame axioms' to state all these non-changes. The challenge is to find a representational framework where non-effects are implicitly handled, and only the direct effects of an action need to be specified. This is less of a problem for modern statistical AI but was a major conceptual hurdle for logic-based AI systems.
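One classical answer to this challenge, the STRIPS-style add/delete-list representation, can be sketched in a few lines — the fluent names below are invented for illustration:

```python
def apply_action(state, add_effects, delete_effects):
    """STRIPS-style update: an action lists only its direct effects.

    Every fluent not mentioned persists implicitly, so no explicit
    'frame axioms' ('the walls are still blue') are ever written down.
    """
    return (state - delete_effects) | add_effects

state = {"cup_on_table", "walls_blue", "robot_at_table"}
# 'pick up cup' mentions only the cup; the wall colour needs no axiom.
new_state = apply_action(state,
                         add_effects={"holding_cup"},
                         delete_effects={"cup_on_table"})
print(sorted(new_state))  # ['holding_cup', 'robot_at_table', 'walls_blue']
```

The set-difference-then-union update is the 'implicit non-effects' idea in miniature: persistence is the default, and only changes are specified.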
Incorrect! Try again.
56Which of the following scenarios best illustrates the characteristic of 'autonomy' in an AI system, as distinct from mere 'automation'?
What is Intelligence, what is AI, characteristics of artificial intelligence
Hard
A.An industrial robot on an assembly line that welds car parts in the exact same predefined locations every time.
B.A chatbot that provides scripted answers to frequently asked questions based on keyword matching.
C.A software script that automatically runs at midnight to back up a database.
D.A Mars rover that, after losing communication with Earth, independently navigates around a newly-formed crater to reach its next waypoint.
Correct Answer: A Mars rover that, after losing communication with Earth, independently navigates around a newly-formed crater to reach its next waypoint.
Explanation:
Automation involves performing a predefined task without human intervention. The industrial robot, backup script, and simple chatbot are all examples of automation; they follow a fixed set of rules or a script. Autonomy, however, implies the ability to make decisions and adapt behavior in a dynamic, unforeseen environment to achieve a goal. The Mars rover, faced with an unexpected obstacle (the crater) and without human guidance, must perceive its environment, reason about its options, and decide on a new course of action to continue its mission. This adaptive, goal-oriented decision-making in a novel situation is the hallmark of autonomy.
Incorrect! Try again.
57A company uses an AI to screen resumes and observes that it disproportionately rejects female candidates for a software engineering role. Upon investigation, they find the AI learned a spurious correlation between being named 'Jared' and being a successful engineer, as 'Jared' appeared frequently in the training data of successful hires. This is a direct example of which type of bias?
Responsible AI
Hard
A.Selection Bias
B.Historical Bias
C.Latent Bias
D.Interaction Bias
Correct Answer: Historical Bias
Explanation:
This is a classic case of Historical Bias. The bias exists in the data itself because it reflects past societal or organizational prejudices, even if those prejudices are no longer explicit. The AI is simply codifying a historical pattern where the tech industry has been male-dominated. The model correctly learns the statistical correlation present in the data ('Jared' -> good hire), but in doing so, it perpetuates and amplifies a harmful historical bias. Selection bias relates to how data is sampled, interaction bias comes from user feedback loops, and latent bias relates to inherent stereotypes in language or representations, but historical bias is the most direct description of a model learning from a world that was, and is, biased.
Incorrect! Try again.
58When performing distributed training of a large neural network, what is the fundamental difference between 'data parallelism' and 'model parallelism'?
Modern AI Toolkits (TensorFlow, PyTorch)
Hard
A.Data parallelism involves training different models on the same data, while model parallelism trains one model on different data.
B.Data parallelism is only supported in PyTorch, while model parallelism is exclusive to TensorFlow.
C.Data parallelism replicates the entire model on multiple devices, each processing a different batch of data, while model parallelism splits a single large model across multiple devices.
D.In data parallelism, gradients are synchronized after each epoch, whereas in model parallelism, they are synchronized after each batch.
Correct Answer: Data parallelism replicates the entire model on multiple devices, each processing a different batch of data, while model parallelism splits a single large model across multiple devices.
Explanation:
This question addresses the core concepts of scaling up AI training. In Data Parallelism (the most common method), you have multiple copies of the same model on different GPUs/machines. You split your large dataset into mini-batches and send one mini-batch to each device. Each model replica computes its gradients independently, and then the gradients are aggregated (e.g., averaged) to update the master model. In Model Parallelism, the model itself is too large to fit into the memory of a single device. Therefore, you split the model's layers or components across different devices. Data flows through these devices sequentially to complete a forward and backward pass. Data parallelism speeds up training time, while model parallelism makes it possible to train models that would otherwise be too large.
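The equivalence at the heart of data parallelism — averaged per-shard gradients equal the full-batch gradient — can be sketched without any framework, using a hypothetical one-parameter linear model y_hat = w * x with squared-error loss:

```python
def gradient(w, shard):
    # d/dw of mean((w*x - y)^2) over the shard = mean(2*x*(w*x - y))
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

w = 0.0
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
shards = [batch[:2], batch[2:]]            # one shard per 'device'

# Each replica computes its gradient independently; the average is then
# used to update every replica -- this is the data-parallel step.
avg_grad = sum(gradient(w, s) for s in shards) / len(shards)
full_grad = gradient(w, batch)
print(avg_grad == full_grad)  # True for equal-sized shards
```

Frameworks automate exactly this pattern (e.g., all-reduce of gradients across devices); model parallelism, by contrast, would split the parameters themselves across devices rather than replicating them.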
Incorrect! Try again.
59The concept of the 'AI Winter' refers to periods of reduced funding and interest in AI research. A primary cause of the first AI Winter in the mid-1970s was the failure of early AI systems to overcome what specific, fundamental problem?
Evolution, and types of AI (narrow, general)
Hard
A.The failure of perceptrons to solve non-linearly separable problems like XOR, as highlighted by Minsky and Papert.
B.The inability to process natural language, as demonstrated by the limitations of early machine translation.
C.The extreme computational cost of running neural networks on the hardware of the time.
D.The inability of early symbolic AI systems to handle the combinatorial explosion and common-sense reasoning required for real-world problems outside of limited 'microworlds'.
Correct Answer: The inability of early symbolic AI systems to handle the combinatorial explosion and common-sense reasoning required for real-world problems outside of limited 'microworlds'.
Explanation:
While the other options were contributing factors (especially the critique of perceptrons), the most significant cause of the first AI winter was the disillusionment that followed initial hype. Early AI, like Shakey the robot or SHRDLU, performed impressively in highly constrained, simulated 'microworlds'. However, researchers discovered that these techniques did not scale to the complexity, ambiguity, and vastness of the real world. The combinatorial explosion of possibilities and the immense amount of implicit, common-sense knowledge required proved to be insurmountable hurdles for the logic-based and search-based methods of the era, leading funding agencies to pull back.
Incorrect! Try again.
60The 'Turing Test' is often cited as a benchmark for AI. However, a significant philosophical criticism of the test, as a definitive measure of intelligence, is that it is primarily a test of:
What is Intelligence, what is AI, characteristics of artificial intelligence
Hard
A.emotional intelligence and empathy.
B.mathematical and logical reasoning ability.
C.the ability to successfully deceive a human through linguistic manipulation.
D.computational efficiency and speed.
Correct Answer: the ability to successfully deceive a human through linguistic manipulation.
Explanation:
The Turing Test is a behavioral test. It doesn't measure internal states like understanding, consciousness, or true intelligence. Its sole criterion is whether a machine can behave (specifically, converse) in a way that is indistinguishable from a human. Critics argue that this is a test of successful simulation or deception, not genuine intelligence. A system could potentially pass by using a vast database of conversational tricks and patterns (like early chatbots such as ELIZA attempted) without any real understanding of the conversation's meaning. This highlights the gap between behaviorally mimicking intelligence and actually possessing it.