Unit 2 - Subjective Questions
CSE121 • Practice Questions with Detailed Answers
Differentiate between Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) with the help of a relationship diagram description.
Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are related concepts that are often used interchangeably but have distinct definitions:
- Artificial Intelligence (AI):
- Definition: AI is the broadest concept. It is the simulation of human intelligence processes by machines, especially computer systems.
- Scope: It covers anything that enables computers to mimic human behavior (e.g., logic, problem-solving, perception).
- Machine Learning (ML):
- Definition: ML is a subset of AI. It involves the use of algorithms and statistical models to enable computer systems to improve their performance on a specific task through experience (data) without being explicitly programmed.
- Scope: It focuses on learning patterns from data.
- Deep Learning (DL):
- Definition: DL is a specialized subset of ML. It relies on Artificial Neural Networks (ANNs) with multiple layers (hence "deep") to model complex patterns in data.
- Scope: It is particularly effective for unstructured data like images, audio, and text.
Relationship:
Imagine three concentric circles:
- The outermost circle is AI.
- The middle circle inside AI is ML.
- The innermost circle inside ML is DL.
Explain the architecture and working components of an Expert System.
An Expert System is a computer system that emulates the decision-making ability of a human expert. Its architecture consists of the following key components:
- Knowledge Base:
- This is the core repository of the system. It contains specific, high-quality knowledge about the domain (facts and rules) provided by human experts.
- It typically uses If-Then rules.
- Inference Engine:
- This is the "brain" of the expert system. It applies logical rules to the knowledge base and the user's input to deduce new information or reach a conclusion.
- It uses strategies like Forward Chaining (data-driven) or Backward Chaining (goal-driven).
- User Interface:
- This allows the user (who may not be an expert) to interact with the system. It accepts queries and displays the advice or diagnosis.
- Explanation Facility:
- This component explains how the system reached a particular conclusion (e.g., "I concluded X because Rule Y applied").
- Knowledge Acquisition Facility:
- Tools used by engineers to insert and update knowledge in the knowledge base from human experts.
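The Inference Engine's forward chaining can be sketched in a few lines: rules fire repeatedly against a working set of facts until nothing new can be derived. This is a minimal illustration with hypothetical medical rules, not a production expert-system shell.

```python
# Minimal forward-chaining sketch: each rule is (set_of_conditions, conclusion).
# The engine fires any rule whose conditions are all known facts, adds the
# conclusion as a new fact, and repeats until no rule can fire.
rules = [
    ({"fever", "cough"}, "flu_suspected"),          # IF fever AND cough THEN flu_suspected
    ({"flu_suspected", "high_temp"}, "see_doctor"), # IF flu_suspected AND high_temp THEN see_doctor
]

def forward_chain(initial_facts, rules):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule: derive a new fact
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "high_temp"}, rules)
print(derived)  # includes 'flu_suspected' and 'see_doctor'
```

Backward chaining would run the same rules in reverse: start from a goal (e.g., "see_doctor") and work back to the facts needed to support it.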
Describe the three main types of Machine Learning algorithms with examples.
Machine Learning algorithms are categorized based on how they learn from data:
- Supervised Learning:
- Concept: The model is trained on a labeled dataset, meaning the input data is paired with the correct output.
- Goal: To map the input variable ($x$) to the output variable ($y$).
- Examples: Spam filtering (Label: Spam/Not Spam), House price prediction.
- Unsupervised Learning:
- Concept: The model is trained on unlabeled data. The system tries to learn the patterns and structure from the data without any explicit guidance.
- Goal: To find hidden structures or clusters in the data.
- Examples: Customer segmentation, Anomaly detection.
- Reinforcement Learning (RL):
- Concept: An agent learns to make decisions by performing actions in an environment and receiving feedback in the form of rewards or penalties.
- Goal: To maximize the cumulative reward.
- Examples: Self-driving cars, Game-playing AI (e.g., AlphaGo).
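The contrast between the first two paradigms can be shown with a toy example (illustrative data, pure Python): the supervised model learns from labeled pairs, while the unsupervised step groups raw inputs by proximity with no labels at all.

```python
# Supervised: inputs are paired with correct labels; we learn a decision
# threshold from the labeled data and use it to classify new inputs.
labeled = [(1.0, "cold"), (2.0, "cold"), (8.0, "hot"), (9.0, "hot")]
threshold = sum(x for x, _ in labeled) / len(labeled)   # midpoint learned from data

def predict(x):
    return "hot" if x > threshold else "cold"

# Unsupervised: raw inputs only; one k-means-style assignment step (k = 2)
# groups each point with its nearest centroid.
data = [1.0, 2.0, 8.0, 9.0]
c1, c2 = min(data), max(data)                 # crude initial centroids
clusters = [0 if abs(x - c1) < abs(x - c2) else 1 for x in data]

print(predict(7.5), clusters)  # hot [0, 0, 1, 1]
```

Reinforcement learning differs from both: there is no fixed dataset, only an agent interacting with an environment and adjusting its policy from reward signals.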
What is Fuzzy Logic? How does it differ from traditional Boolean Logic?
Fuzzy Logic is a form of many-valued logic in which the truth values of variables may be any real number between 0 and 1. It is used to handle the concept of partial truth, where the truth value may range between completely true and completely false.
Comparison:
| Feature | Boolean Logic (Crisp Logic) | Fuzzy Logic |
|---|---|---|
| Truth Values | Strictly 0 (False) or 1 (True). | Any value between 0.0 and 1.0 (inclusive). |
| Concept | Absolute certainty. | Degrees of certainty/membership. |
| Example | Is the water hot? Yes/No. | Is the water hot? 0.8 (Very hot), 0.3 (Warm). |
| Application | Digital circuits, binary computing. | Air conditioners, washing machines, braking systems. |
In mathematical terms, while Boolean logic uses sets where an element either belongs ($1$) or doesn't ($0$), Fuzzy logic uses membership functions $\mu_A(x)$ that assign each element a degree of membership in the interval $[0, 1]$.
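The "hot water" example from the table can be made concrete with a membership function. This is a sketch with assumed temperature boundaries (30 °C and 70 °C), using a simple linear ramp; real fuzzy controllers tune such curves per application.

```python
# Boolean vs fuzzy "hot water" (cutoff and ramp boundaries are assumptions).

def is_hot_boolean(temp_c):
    """Crisp logic: strictly 0 (False) or 1 (True) at a hard cutoff."""
    return 1 if temp_c >= 60 else 0

def is_hot_fuzzy(temp_c, low=30.0, high=70.0):
    """Linear membership ramp: 0 below `low`, 1 above `high`,
    partial truth (degree of membership) in between."""
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

print(is_hot_boolean(50), is_hot_fuzzy(50))  # 0 0.5
```

At 50 °C the Boolean function says "not hot" outright, while the fuzzy function returns 0.5, i.e., "moderately hot", which is what lets appliances like air conditioners respond proportionally.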
Explain the concept of Augmented Reality (AR) and distinguish it from Virtual Reality (VR).
Augmented Reality (AR):
AR is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. It enhances the real world rather than replacing it.
Key Differences:
- Environment:
- AR: Users are still in touch with the real world while interacting with virtual objects (e.g., Pokémon GO, Instagram filters).
- VR: Creates a completely immersive, simulated environment, shutting out the physical world (e.g., Oculus Rift gaming).
- Hardware:
- AR: Can be experienced via smartphones, tablets, or smart glasses.
- VR: Requires a headset that covers the eyes completely.
- Interactivity:
- AR: Augments reality (adds to it).
- VR: Replaces reality.
Define Natural Language Processing (NLP). What are its two main components?
Natural Language Processing (NLP) is a branch of AI that gives computers the ability to understand, interpret, and manipulate human language. It bridges the gap between human communication and computer understanding.
Main Components of NLP:
- Natural Language Understanding (NLU):
- Focuses on reading and comprehending human language.
- It deals with ambiguity in meaning. It involves tasks like sentiment analysis, finding the intent of a sentence, and entity recognition.
- Example: Understanding that "Apple" refers to the company, not the fruit, in a specific sentence.
- Natural Language Generation (NLG):
- Focuses on generating logical and meaningful text from data.
- It acts as a translator that converts computer data into natural language representation.
- Example: A weather bot analyzing temperature data and generating the sentence: "It will be sunny today with a high of 25 degrees."
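The two components can be sketched as a round trip: NLU turns raw text into structured intent and entities, and NLG turns structured data back into a sentence. The keyword rules and template below are purely illustrative; real systems use trained models.

```python
# Toy NLU: map a user utterance to an intent plus any extracted entities
# (rule-based stand-in for a trained intent classifier).
def understand(text):
    text = text.lower()
    if "weather" in text:
        return {"intent": "get_weather"}
    if "play" in text:
        # everything after "play" is treated as the entity (artist name)
        return {"intent": "play_music", "artist": text.split("play", 1)[1].strip()}
    return {"intent": "unknown"}

# Toy NLG: convert structured data into a natural-language sentence
# via a template, mirroring the weather-bot example above.
def generate(weather_data):
    return f"It will be {weather_data['sky']} today with a high of {weather_data['high']} degrees."

print(understand("Play Taylor Swift"))
print(generate({"sky": "sunny", "high": 25}))
```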
Discuss the significant applications of Artificial Intelligence in the Healthcare sector.
AI has revolutionized healthcare by improving accuracy and efficiency. Key applications include:
- Medical Imaging & Diagnostics:
- AI algorithms (CNNs) analyze X-rays, MRIs, and CT scans to detect diseases like cancer, pneumonia, or fractures, often matching or exceeding human specialists in accuracy.
- Drug Discovery:
- AI accelerates the process of discovering new drugs by simulating molecular structures and predicting how different drugs will react, reducing time and cost.
- Virtual Health Assistants:
- Chatbots and apps utilize AI to provide basic medical consultation, remind patients to take medication, and answer health queries.
- Robotic Surgery:
- AI-powered robots assist surgeons in performing complex surgeries with precision, smaller incisions, and faster recovery times.
- Predictive Analytics:
- Analyzing patient data to predict outbreaks or identify high-risk patients (e.g., risk of heart attack) for preventative care.
How is AI transforming the Agriculture sector? Explain with examples.
AI helps in modernizing agriculture through 'Precision Farming', optimizing resources and maximizing yield.
Key Applications:
- Crop and Soil Monitoring:
- Sensors and drone imagery use AI to monitor crop health, soil moisture, and nutrient levels, allowing farmers to intervene only where necessary.
- Predictive Analysis:
- AI models analyze weather patterns and historical crop data to advise farmers on the best time to sow seeds or harvest.
- Agricultural Robots:
- Autonomous robots can harvest crops at a higher volume and faster pace than human laborers. They can also perform weed detection and removal.
- Insect and Plant Disease Detection:
- Computer vision applications allow farmers to take pictures of plants, and the AI identifies diseases or pest infestations instantly.
- Smart Irrigation:
- Automated irrigation systems use AI to optimize water usage based on real-time soil moisture data, preventing water wastage.
Explain the role of AI in Social Media Monitoring and Sentiment Analysis.
Social media platforms generate massive amounts of unstructured data. AI is essential for processing this data.
Roles of AI:
- Sentiment Analysis:
- AI uses NLP to analyze text (tweets, comments) to determine the emotional tone behind it (Positive, Negative, or Neutral). Brands use this to gauge public opinion about products.
- Content Moderation:
- AI algorithms automatically detect and filter offensive content, hate speech, or fake news without human intervention.
- Personalized Feeds/Recommendations:
- Machine Learning algorithms (like collaborative filtering) analyze user behavior to curate personalized news feeds and suggest friends or products (e.g., "You might like this post").
- Facial Recognition:
- Used for auto-tagging features in photos.
- Targeted Advertising:
- AI analyzes user interests to display highly relevant advertisements, increasing conversion rates.
List and briefly explain the popular tools, languages, and libraries used for implementing AI and ML.
Programming Languages:
- Python: The most popular language due to its simplicity and vast ecosystem of libraries.
- R: Widely used for statistical analysis and data visualization.
- Java/C++: Used for high-performance applications and game AI.
Libraries and Frameworks:
- TensorFlow: Developed by Google, it is an open-source library for numerical computation and large-scale machine learning/deep learning.
- PyTorch: Developed by Facebook (Meta), known for its flexibility and ease of use in research and prototyping.
- Scikit-learn: A Python library for standard machine learning algorithms (regression, classification, clustering).
- Pandas & NumPy: Essential for data manipulation and numerical analysis.
- Keras: A high-level neural networks API that runs on top of TensorFlow.
Explain the technology and working mechanism behind Google Translator.
Google Translator relies on a technology called Google Neural Machine Translation (GNMT).
Working Mechanism:
- Deep Learning (Neural Networks):
- Unlike older statistical methods that translated phrase-by-phrase, GNMT translates whole sentences at a time. It uses Recurrent Neural Networks (RNNs) or Transformers.
- Encoder-Decoder Architecture:
- Encoder: Reads the input sentence (Source Language) and converts it into a numerical vector representation (context vector) that captures the meaning.
- Decoder: Takes this vector and generates the sentence in the Target Language.
- Zero-Shot Translation:
- The system can translate between language pairs it has never explicitly seen before (e.g., Korean to Portuguese) by using an intermediate "interlingua" representation learned from other pairs (like English).
- Continuous Learning:
- The system improves over time by learning from millions of documents and user feedback.
Describe the architecture and key technologies used in Driverless (Autonomous) Cars.
Driverless cars allow vehicles to navigate without human input. They rely on the integration of sensors and AI.
Key Technologies/Components:
- Sensors:
- LiDAR (Light Detection and Ranging): Creates a precise 3D map of the surroundings using laser pulses.
- Radar: Detects the speed and distance of objects (effective in poor weather).
- Cameras: Read traffic lights, signs, and detect lane markings.
- Perception Module:
- Uses Computer Vision and ML to interpret sensor data. It identifies objects (pedestrians, other cars) and classifies them.
- Localization and Mapping:
- Algorithms like SLAM (Simultaneous Localization and Mapping) determine the car's exact position on a map relative to the environment.
- Prediction & Planning:
- The AI predicts the behavior of other objects (e.g., "Will that pedestrian cross?") and plans the path (trajectory generation).
- Control System:
- Sends commands to the actuators to control steering, acceleration, and braking.
How do virtual assistants like ALEXA and Siri work? Explain the process flow.
Virtual assistants function using a pipeline of AI technologies. The process flow is generally as follows:
- Wake Word Detection:
- The device listens for a specific keyword (e.g., "Alexa", "Hey Siri") using a low-power local algorithm.
- Speech Recognition (ASR):
- Automatic Speech Recognition converts the recorded audio waveform into text. It breaks speech into phonemes and words.
- Natural Language Understanding (NLU):
- The text is analyzed to understand the Intent (what the user wants, e.g., "Play music") and Entities (specific details, e.g., "Taylor Swift").
- Processing/Fulfillment:
- The system queries a cloud server or database to fetch the requested information (weather, traffic, music).
- Natural Language Generation (NLG) & Text-to-Speech (TTS):
- The system formulates a text response and converts it back into synthesized human-like speech to play back to the user.
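The five stages above can be wired together as a stub pipeline. Every component here is a placeholder (string matching instead of trained ASR/NLU models, a hard-coded lookup instead of a cloud query), shown only to make the data flow concrete.

```python
# Stub virtual-assistant pipeline; each stage is a toy stand-in.

def wake_word_detected(audio):          # 1. local keyword spotting
    return audio.startswith("alexa")

def speech_to_text(audio):              # 2. ASR: audio -> text (audio faked as text)
    return audio[len("alexa"):].strip()

def understand(text):                   # 3. NLU: text -> intent (+ entities)
    return {"intent": "get_weather"} if "weather" in text else {"intent": "unknown"}

def fulfill(intent):                    # 4. query a backend for the answer
    return {"high": 25} if intent["intent"] == "get_weather" else {}

def respond(result):                    # 5. NLG (TTS step omitted)
    return f"Today's high is {result['high']} degrees." if result else "Sorry, I can't help."

audio = "alexa what's the weather"
if wake_word_detected(audio):
    print(respond(fulfill(understand(speech_to_text(audio)))))
```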
What is ChatGPT? Discuss its underlying technology and significance.
ChatGPT (Chat Generative Pre-trained Transformer) is an AI chatbot developed by OpenAI capable of generating human-like text.
Underlying Technology:
- LLM (Large Language Model): It is built on the GPT architecture.
- Transformer Architecture: It uses a deep learning model called the Transformer (specifically the Decoder part), which utilizes a mechanism called "Self-Attention" to understand the context of words in long sentences.
- Training: It is pre-trained on a massive dataset of text from the internet to learn grammar and facts. It is then fine-tuned using RLHF (Reinforcement Learning from Human Feedback) to make responses more helpful and safe.
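The Self-Attention mechanism mentioned above can be illustrated on toy 2-D word vectors. This is a bare sketch of scaled dot-product attention in pure Python; a real Transformer applies learned query/key/value projections to high-dimensional embeddings and runs many such heads in parallel.

```python
import math

def softmax(xs):
    """Normalize scores into attention weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """Scaled dot-product self-attention with queries = keys = values."""
    d = len(vectors[0])
    output = []
    for q in vectors:                                        # each word attends...
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]                          # ...to every word
        weights = softmax(scores)
        output.append([sum(w * v[j] for w, v in zip(weights, vectors))
                       for j in range(d)])                   # weighted mix of values
    return output

mixed = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(mixed)  # each output vector blends context from all three inputs
```

This is what lets the model weigh every word in a long sentence against every other word, instead of processing them strictly left to right as older RNN-based systems did.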
Significance:
- It can write code, essays, emails, and summarize text.
- It represents a shift towards Generative AI, where machines create new content rather than just analyzing existing data.
What are the current trends and future opportunities in the field of AI and ML?
Current Trends:
- Generative AI: Rise of models like GPT-4 and Midjourney that create text, images, and video.
- Explainable AI (XAI): Developing models that can explain their decisions to build trust (essential in law and medicine).
- Edge AI: Running AI algorithms locally on devices (IoT, phones) rather than the cloud for better privacy and speed.
- AutoML: Automated Machine Learning tools that allow non-experts to build models.
Opportunities:
- AI Ethics and Policy: Demand for professionals who ensure AI is used responsibly (bias reduction, data privacy).
- AI in Cybersecurity: Using AI to predict and prevent cyber-attacks in real-time.
- Hyper-automation: Combining AI with Robotic Process Automation (RPA) to automate end-to-end business processes.
List the various job roles available in the AI/ML domain and the necessary skillsets required.
Job Roles:
- Data Scientist: Analyzes complex data to help organizations make decisions.
- Machine Learning Engineer: Designs and deploys scalable ML models into production.
- AI Research Scientist: Works on inventing new algorithms and advancing the field.
- NLP Engineer: Specializes in language processing applications (chatbots, translation).
- Computer Vision Engineer: Works on image processing and visual recognition.
Skillsets Required:
- Technical:
- Programming: Python, R, C++.
- Math: Linear Algebra, Calculus, Probability, Statistics.
- Frameworks: TensorFlow, PyTorch, Keras.
- Data Handling: SQL, Big Data technologies (Spark, Hadoop).
- Soft Skills:
- Problem-solving aptitude.
- Domain knowledge (understanding the business context).
Define a Neural Network. Explain the mathematical model of a single neuron (Perceptron).
A Neural Network is a series of algorithms that endeavor to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
The Perceptron (Single Neuron Model):
A perceptron is the fundamental unit of a neural network. It takes inputs, weighs them, sums them up, adds a bias, and passes the result through an activation function.
Mathematical Equation:
$$z = \sum_{i=1}^{n} w_i x_i + b, \qquad y = f(z)$$
Where:
- $x_i$: Input values.
- $w_i$: Weights assigned to each input (signifying importance).
- $b$: Bias (allows shifting the activation function).
- $z$: Weighted sum of inputs ($\sum w_i x_i$) plus bias.
- $f$: Activation Function (e.g., Sigmoid, ReLU) which decides if the neuron should "fire" (output a value).
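The weighted-sum-plus-activation structure of a perceptron translates directly to code. Below is a minimal sketch using a step activation, with weights and bias hand-picked (an assumption for illustration) so the neuron implements a logical AND gate.

```python
# Single perceptron: weighted sum of inputs, plus bias, through a step
# activation. Weights [1.0, 1.0] and bias -1.5 are chosen so the neuron
# "fires" only when both inputs are 1 (logical AND).

def perceptron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # z = Σ w_i·x_i + b
    return 1 if z > 0 else 0                                # step activation f(z)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights=[1.0, 1.0], bias=-1.5))
```

In a trained network these weights and biases are not hand-picked; they are adjusted automatically from data (e.g., by the perceptron learning rule or backpropagation).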
Discuss the ethical concerns associated with the rapid advancement of Artificial Intelligence.
As AI becomes more integrated into society, several ethical concerns arise:
- Bias and Fairness:
- AI models trained on historical data may inherit human biases (racial, gender), leading to unfair hiring practices or loan approvals.
- Job Displacement:
- Automation and AI may replace jobs in manufacturing, driving, and even customer service, leading to economic shifts.
- Privacy and Surveillance:
- Facial recognition and data mining can erode individual privacy and lead to mass surveillance states.
- Deepfakes and Misinformation:
- AI can generate realistic fake videos and audio, which can be used to spread misinformation or damage reputations.
- Accountability:
- If a driverless car crashes or a medical AI makes a mistake, determining legal liability (the developer, the user, or the machine?) is difficult.
Compare Supervised and Unsupervised Learning based on Input Data, Goal, and Complexity.
Comparison:
| Feature | Supervised Learning | Unsupervised Learning |
|---|---|---|
| Input Data | Uses Labeled data (Input + Correct Output). | Uses Unlabeled data (Input only). |
| Goal | To predict outcomes or classify data based on prior examples. | To discover hidden patterns, structures, or groupings in data. |
| Feedback | Direct feedback is provided during training (Correct/Incorrect). | No external feedback; the algorithm finds structure on its own. |
| Complexity | Generally simpler to calculate and interpret. | Computationally complex; results can be harder to interpret. |
| Examples | Regression (Weather prediction), Classification (Spam detection). | Clustering (Customer segmentation), Association (Market basket analysis). |
What is the Turing Test? How is it relevant to the definition of AI?
The Turing Test:
Proposed by Alan Turing in 1950, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
The Setup:
- There is a human evaluator, a human respondent, and a machine respondent.
- The evaluator interacts with both via text-only conversation, unseen.
- If the evaluator cannot reliably tell which is the machine and which is the human, the machine is said to have passed the test.
Relevance:
- It provided one of the earliest operational definitions of AI.
- It shifted the question from "Can machines think?" (which is philosophical) to "Can machines act indistinguishably from humans?" (which is observable).