1. What is a primary benefit of integrating AI into mobile applications?
Introduction to AI in mobile apps
Easy
A. To make the app's user interface more complex
B. To enhance the user experience with personalized features
C. To decrease the app's battery consumption
D. To increase the app's final download size
Correct Answer: To enhance the user experience with personalized features
Explanation:
AI enables features like personalized recommendations, intelligent replies, and content discovery, which makes the app more engaging and useful for the user.
2. Which of the following is a common example of AI being used in a mobile app?
Introduction to AI in mobile apps
Easy
A. A button that navigates to a different screen
B. A photo gallery app that suggests cropping an image
C. An e-commerce app that suggests products based on your viewing history
D. A settings page with a toggle for dark mode
Correct Answer: An e-commerce app that suggests products based on your viewing history
Explanation:
Recommendation engines are a classic application of AI that analyze user behavior to suggest relevant items, thereby personalizing the shopping experience.
3. What is the primary AI assistant developed by Google for the Android platform?
Role of AI assistants in Android
Easy
A. Bixby
B. Google Assistant
C. Siri
D. Alexa
Correct Answer: Google Assistant
Explanation:
Google Assistant is Google's native AI-powered virtual assistant, deeply integrated into the Android operating system and other Google products.
4. Besides answering questions, what is a common function of an AI assistant on an Android device?
Role of AI assistants in Android
Easy
A. Designing new application icons
B. Controlling device settings like Wi-Fi or Bluetooth
C. Writing computer code from scratch
D. Upgrading the phone's hardware automatically
Correct Answer: Controlling device settings like Wi-Fi or Bluetooth
Explanation:
AI assistants can perform actions on the device itself, such as turning settings on/off, setting alarms, or opening apps, based on voice commands.
5. Which company is the creator of the ChatGPT language model?
Overview of ChatGPT (OpenAI) and Gemini (Google)
Easy
A. Apple
B. Google
C. OpenAI
D. Microsoft
Correct Answer: OpenAI
Explanation:
ChatGPT is a conversational AI service built on the GPT family of large language models, developed by the artificial intelligence research and deployment company OpenAI.
6. Gemini is a family of powerful, multimodal AI models developed by which company?
Overview of ChatGPT (OpenAI) and Gemini (Google)
Easy
A. OpenAI
B. Google
C. Amazon
D. Meta
Correct Answer: Google
Explanation:
Gemini is Google's flagship collection of generative AI models, designed to handle various data types including text, images, and audio.
7. What does it mean for an AI model like Gemini to be "multimodal"?
Overview of ChatGPT (OpenAI) and Gemini (Google)
Easy
A. It was created by a team from multiple countries
B. It can only generate responses in text format
C. It can understand and process multiple types of data (e.g., text, images, audio)
D. It can only run on mobile devices
Correct Answer: It can understand and process multiple types of data (e.g., text, images, audio)
Explanation:
A multimodal AI can process and reason about various data formats, or modalities. For instance, it can analyze an image and answer a text-based question about it.
8. What does the acronym API stand for?
Basics of AI APIs, REST APIs, and API keys
Easy
A. Application Process Integrity
B. Advanced Python Implementation
C. Automated Program Interaction
D. Application Programming Interface
Correct Answer: Application Programming Interface
Explanation:
An API is a set of rules and tools that allows one software application to communicate with another.
9. What is the primary purpose of an API key when using a service like the Google Gemini API?
Basics of AI APIs, REST APIs, and API keys
Easy
A. To choose the color scheme of the API response
B. To speed up your internet connection
C. To determine the programming language you must use
D. To authenticate your application's requests to the service
Correct Answer: To authenticate your application's requests to the service
Explanation:
An API key is a unique secret token used to identify and authorize your app, ensuring that only approved applications can access the API and allowing the service to track usage.
10. REST APIs most commonly use which web protocol for communication?
Basics of AI APIs, REST APIs, and API keys
Easy
A. HTTP/HTTPS
B. FTP
C. TCP
D. SMTP
Correct Answer: HTTP/HTTPS
Explanation:
REST (Representational State Transfer) is an architectural style that leverages the standard methods of the HTTP/HTTPS protocol, like GET and POST, for client-server communication.
11. What is the most critical prerequisite for an Android developer before they can make calls to the Google Gemini API?
Integrating Google Gemini API in Android
Easy
A. Having at least 100 test users
B. Obtaining an API key from Google
C. Completing the UI design of the entire app
D. Publishing the app on the Google Play Store
Correct Answer: Obtaining an API key from Google
Explanation:
Access to the Gemini API is protected. You must first generate an API key through a Google service (like Google AI Studio) to authenticate your app's requests.
12. To simplify the process of using Google's generative AI models in an Android app, what should a developer use?
Integrating Google Gemini API in Android
Easy
A. The Firebase Crashlytics library
B. The official Google AI for Android SDK
C. Direct manual HTTP requests only
D. The Google Maps SDK
Correct Answer: The official Google AI for Android SDK
Explanation:
Google provides a specific Software Development Kit (SDK) to make integration with the Gemini API much easier by handling complex tasks like request formatting and authentication.
13. When interacting with a generative AI model, what is the term for the input text or question you provide to it?
Generating AI text responses
Easy
A. A parameter
B. A token
C. A response
D. A prompt
Correct Answer: A prompt
Explanation:
A prompt is the initial instruction or query given to an AI model to guide it in generating the desired output.
14. After sending a prompt to the Gemini API, what does your application receive back upon a successful request?
Generating AI text responses
Easy
A. A generated text response from the AI model
B. A random error message
C. An API key
D. A compiled version of your app
Correct Answer: A generated text response from the AI model
Explanation:
The fundamental purpose of calling a generative AI API is to receive a newly created response (text, image, etc.) from the model based on your input prompt.
15. Which standard Android UI widget is most suitable for allowing a user to type in a multi-line prompt for an AI?
Handling user input and AI output
Easy
A. TextView
B. Button
C. ImageView
D. EditText
Correct Answer: EditText
Explanation:
An EditText is the primary component for user-editable text fields in Android, making it the correct choice for capturing user input like a prompt.
16. To display the text response received from an AI API to the user, which non-editable UI widget is the best choice?
Handling user input and AI output
Easy
A. ProgressBar
B. EditText
C. TextView
D. Switch
Correct Answer: TextView
Explanation:
A TextView is designed to display static or dynamic text that the user cannot edit, which is perfect for showing the AI's generated response.
17. In the context of APIs, what is a 'rate limit'?
Basic error handling and API usage limits
Easy
A. The speed rating of the AI model
B. A limit on the number of requests an app can make in a given time period
C. The maximum number of characters in a response
D. The cost per API call
Correct Answer: A limit on the number of requests an app can make in a given time period
Explanation:
Service providers set rate limits (e.g., 60 requests per minute) to ensure service stability and prevent abuse by any single user or application.
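The idea of a request quota per time window can be sketched in plain Kotlin (a toy client-side illustration only; real services such as the Gemini API enforce limits server-side and answer with HTTP 429):

```kotlin
// Toy fixed-window rate limiter: permits at most `limit` requests
// per window; further requests are rejected until the window resets.
class RateLimiter(private val limit: Int) {
    private var used = 0

    // Returns true if the request may proceed, false if the quota
    // for the current window is exhausted.
    fun tryAcquire(): Boolean =
        if (used < limit) { used++; true } else false

    // Called when a new time window begins (e.g., every minute).
    fun resetWindow() { used = 0 }
}
```

A production limiter would track timestamps rather than relying on an external reset, but the accept/reject behavior is the same.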
18. If your app makes an API request but has no internet connection, what is the most likely outcome?
Basic error handling and API usage limits
Easy
A. The API will return a successful but empty response
B. The app will generate a fake response
C. The API will wait indefinitely until the connection returns
D. A network error will occur and the request will fail
Correct Answer: A network error will occur and the request will fail
Explanation:
API calls require a network connection to reach the remote server. Without one, the request cannot be sent, leading to a network-related error or exception in the app.
19. What kind of response should your app expect if it uses an incorrect or expired API key?
Basic error handling and API usage limits
Easy
A. The app will be uninstalled automatically
B. A longer, more detailed answer from the AI
C. A successful response (HTTP 200 OK)
D. An authentication error (e.g., HTTP 401 Unauthorized or 403 Forbidden)
Correct Answer: An authentication error (e.g., HTTP 401 Unauthorized or 403 Forbidden)
Explanation:
An invalid API key will cause the server to reject the request because the app cannot be properly authenticated. This is communicated via an authentication error status code.
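How an app might branch on these status codes can be sketched as follows (the function name and category strings are invented for this example):

```kotlin
// Maps common HTTP status codes returned by an AI API to a coarse
// category the app can act on (show a message, retry, back off, etc.).
fun classifyApiError(statusCode: Int): String = when (statusCode) {
    in 200..299 -> "success"
    401, 403 -> "authentication error: check the API key"
    429 -> "rate limited: retry with backoff"
    in 500..599 -> "server error: retry later"
    else -> "unexpected status: $statusCode"
}
```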
20. In the client-server architecture used by AI APIs, what role does your Android app play?
Basics of AI APIs, REST APIs, and API keys
Easy
A. The Server
B. The Network Router
C. The Client
D. The Database
Correct Answer: The Client
Explanation:
The Android application is the 'client' that initiates requests for information or services from the remote 'server,' which hosts the AI model and processes those requests.
21. An e-commerce app wants to implement a feature to help users find products by describing them in natural language (e.g., "show me red running shoes for trails"). Which AI capability is most directly applicable to this task?
Introduction to AI in mobile apps
Medium
A. Natural Language Understanding (NLU)
B. Computer Vision
C. Predictive Analytics
D. Anomaly Detection
Correct Answer: Natural Language Understanding (NLU)
Explanation:
NLU is the branch of AI focused on interpreting and understanding human language, which is essential for processing a user's descriptive search query to extract intent and entities (like 'running shoes', 'red', and 'trails').
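What "extracting intent and entities" produces can be illustrated with a toy keyword matcher (a real NLU model learns this from data; the vocabularies and output shape here are invented for illustration):

```kotlin
// Toy entity extraction: match query words against tiny vocabularies
// to produce the kind of structured output an NLU system returns.
val knownColors = setOf("red", "blue", "black")
val knownCategories = setOf("shoes", "jacket", "shorts")

fun extractEntities(query: String): Map<String, String> {
    val words = query.lowercase().split(" ")
    val entities = mutableMapOf<String, String>()
    words.firstOrNull { it in knownColors }?.let { entities["color"] = it }
    words.firstOrNull { it in knownCategories }?.let { entities["category"] = it }
    return entities
}
```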
22. What is a key advantage of using a cloud-based AI model (like the Gemini API) over an on-device model (like Gemini Nano) for a complex mobile app feature?
Introduction to AI in mobile apps
Medium
A. It works offline without an internet connection.
B. It guarantees absolute user data privacy since no data leaves the device.
C. It can leverage much larger, more powerful models without impacting the device's local performance.
D. It has significantly lower latency for all responses.
Correct Answer: It can leverage much larger, more powerful models without impacting the device's local performance.
Explanation:
Cloud-based AI models run on powerful servers, allowing them to be much larger and more capable than models that can fit on a mobile device, all without consuming the user's local CPU, RAM, or battery.
23. A developer wants to allow users to say, "Hey Google, start a run in FitApp." Which Android framework is specifically designed to enable this kind of integration with the Google Assistant?
Role of AI assistants in Android
Medium
A. App Actions
B. Broadcast Receivers
C. Content Providers
D. Android Services
Correct Answer: App Actions
Explanation:
App Actions allow you to extend your app's functionality to the Google Assistant, enabling users to trigger specific features and intents within your app using voice commands.
24. What is the primary architectural and use-case difference between a model like Gemini Pro and a model like Gemini Nano?
Overview of ChatGPT (OpenAI) and Gemini (Google)
Medium
A. Gemini Pro is trained by Google, while Gemini Nano is trained by OpenAI.
B. Gemini Pro is exclusively for text, while Gemini Nano is for images.
C. Gemini Pro is open-source, while Gemini Nano is proprietary.
D. Gemini Pro is a large, server-side model for complex tasks, while Gemini Nano is a smaller model designed to run efficiently on-device.
Correct Answer: Gemini Pro is a large, server-side model for complex tasks, while Gemini Nano is a smaller model designed to run efficiently on-device.
Explanation:
Google's model family is designed for different scales. Gemini Pro is a powerful, versatile model that runs in the cloud, whereas Gemini Nano is a highly efficient model optimized for on-device tasks on Android where latency and offline capability are important.
25. When making a REST API call to a service like the Gemini API, why is it considered more secure to send the API key in an HTTP header (e.g., x-goog-api-key) rather than as a URL query parameter?
Basics of AI APIs, REST APIs, and API keys
Medium
A. URL parameters have a shorter character limit than headers.
B. It is faster for servers to process headers than query parameters.
C. URLs, including their query parameters, are often logged in server logs, browser history, and network proxies, exposing the key.
D. Headers are always encrypted by HTTPS, while URLs are not.
Correct Answer: URLs, including their query parameters, are often logged in server logs, browser history, and network proxies, exposing the key.
Explanation:
Placing sensitive information like an API key in the URL makes it vulnerable to exposure through server logs, browser history, and network intermediaries. Header values, by contrast, are far less likely to be logged, making them a more secure location for authentication tokens.
26. When using the Google AI SDK for Android with Kotlin, what is the main purpose of the GenerativeModel class?
Integrating Google Gemini API in Android
Medium
A. To store the API key securely in Android's Keystore.
B. It is the primary entry point for interacting with the Gemini model to generate content and start chats.
C. It is a UI component for displaying the AI's markdown response.
D. To manage the lifecycle of the Android Activity.
Correct Answer: It is the primary entry point for interacting with the Gemini model to generate content and start chats.
Explanation:
The GenerativeModel class is the core component of the SDK. You instantiate it with your model name and configuration, and then use its methods like generateContent or startChat to communicate with the Gemini API.
27. In an Android app using Kotlin Coroutines to call the Gemini API, which CoroutineScope and Dispatcher would be most appropriate for launching the network request to prevent blocking the main thread?
Integrating Google Gemini API in Android
Medium
A. GlobalScope with Dispatchers.Main
B. lifecycleScope with Dispatchers.IO
C. runBlocking on the main thread
D. GlobalScope with Dispatchers.Default
Correct Answer: lifecycleScope with Dispatchers.IO
Explanation:
Network operations are I/O-bound and should never run on the main thread. Dispatchers.IO is specifically optimized for this type of work. Using lifecycleScope ensures the coroutine is automatically cancelled if the Activity or Fragment is destroyed, preventing memory leaks.
28. Your app receives an HTTP 429 Too Many Requests error from the Gemini API. What is the most effective short-term strategy to handle this error and recover gracefully?
Basic error handling and API usage limits
Medium
A. Prompt the user to check their internet connection.
B. Immediately retry the request in a tight loop.
C. Invalidate the current API key and request a new one from the server.
D. Implement an exponential backoff strategy, waiting for a progressively longer time before retrying.
Correct Answer: Implement an exponential backoff strategy, waiting for a progressively longer time before retrying.
Explanation:
A 429 error indicates you've exceeded your rate limit. An exponential backoff strategy (e.g., waiting 1s, then 2s, then 4s) is the standard method to handle this, as it reduces the load on the server and increases the chance of a successful subsequent request.
29. You are displaying a response from the Gemini API that is being streamed. To improve the user experience, which approach is best for updating the TextView?
Handling user input and AI output
Medium
A. Append chunks of text to the TextView as they are received from the stream, creating a "typing" effect.
B. Show a loading spinner and replace it with the full text only when the stream is complete.
C. Display the raw JSON response in the TextView for debugging purposes.
D. Wait for the entire response to be received, then display it all at once.
Correct Answer: Append chunks of text to the TextView as they are received from the stream, creating a "typing" effect.
Explanation:
Streaming allows the model to send back the response in pieces. Updating the UI as these chunks arrive provides immediate feedback to the user and makes the app feel more responsive and dynamic, mimicking a real-time conversation.
30. When making a direct REST API call to a text-generation model like Gemini, the user's prompt and other parameters like temperature are typically sent in which part of the HTTP request?
Basics of AI APIs, REST APIs, and API keys
Medium
A. In the request body, formatted as a JSON object.
B. As a URL query parameter.
C. As part of the API endpoint's URL path.
D. In a custom HTTP header like X-Prompt-Data.
Correct Answer: In the request body, formatted as a JSON object.
Explanation:
Complex and potentially long data, such as a user's prompt, model parameters, and conversation history, is structured and sent in the request body. JSON is the most common format for this data payload in modern REST APIs.
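As a concrete sketch, a minimal request body for the Gemini REST generateContent endpoint looks roughly like this (field names follow the public REST API; the prompt text and temperature value are illustrative):

```json
{
  "contents": [
    { "parts": [ { "text": "Write a haiku about Android." } ] }
  ],
  "generationConfig": {
    "temperature": 0.7
  }
}
```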
31. What does it mean for a generative AI model like Gemini to be "multimodal"?
Overview of ChatGPT (OpenAI) and Gemini (Google)
Medium
A. It can process and understand information from multiple types of data, such as text, images, and audio, within a single prompt.
B. It can be deployed on multiple platforms (e.g., web, Android, iOS).
C. It has multiple versions available with different capabilities (e.g., Pro, Ultra, Nano).
D. It can generate responses in multiple human languages.
Correct Answer: It can process and understand information from multiple types of data, such as text, images, and audio, within a single prompt.
Explanation:
Multimodality is the ability of an AI model to handle and reason about different data formats (modalities) simultaneously. For example, you can give Gemini an image and ask a question about it using text in the same request.
32. Consider the following Kotlin code snippet for initializing the Gemini model in an Android app. What is the most significant security risk with this direct approach?
Integrating Google Gemini API in Android
Medium

    val generativeModel = GenerativeModel(
        modelName = "gemini-pro",
        apiKey = "AIza..." // API key pasted directly into source
    )

A. Hardcoding the API key directly in the source code makes it vulnerable to being extracted from the compiled APK.
B. The code does not specify a generationConfig.
C. The GenerativeModel constructor might throw a NetworkOnMainThreadException.
D. The model name "gemini-pro" might be deprecated.
Correct Answer: Hardcoding the API key directly in the source code makes it vulnerable to being extracted from the compiled APK.
Explanation:
Hardcoding secrets like API keys in your client-side code is a major security flaw. Anyone who decompiles your APK can find and misuse your key. Keys should be stored securely, for example, in local.properties and accessed via BuildConfig, and further protected with server-side proxies or API key restrictions.
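One common form of the local.properties/BuildConfig pattern mentioned in the explanation looks roughly like this (a sketch; the property name and BuildConfig field name are this example's own choices):

```kotlin
// In local.properties (never committed to version control):
//   GEMINI_API_KEY=your-key-here
//
// In the module-level build.gradle.kts, read the property and expose
// it to application code through BuildConfig:
import java.util.Properties

val localProps = Properties().apply {
    val f = rootProject.file("local.properties")
    if (f.exists()) f.inputStream().use { load(it) }
}

android {
    buildFeatures { buildConfig = true }
    defaultConfig {
        buildConfigField(
            "String", "GEMINI_API_KEY",
            "\"${localProps.getProperty("GEMINI_API_KEY") ?: ""}\""
        )
    }
}
```

App code then reads BuildConfig.GEMINI_API_KEY instead of a hardcoded string literal; note this still embeds the key in the APK, so key restrictions or a server-side proxy remain necessary.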
33. When building a chatbot feature using the Gemini Android SDK, what is the primary benefit of using generativeModel.startChat() over making repeated calls to generativeModel.generateContent()?
Generating AI text responses
Medium
A. It is a synchronous call that is easier to manage.
B. It creates a Chat object that automatically maintains the conversation history, providing context for follow-up messages.
C. It uses less network bandwidth for each message.
D. It provides a more detailed error response if the API call fails.
Correct Answer: It creates a Chat object that automatically maintains the conversation history, providing context for follow-up messages.
Explanation:
Unlike stateless generateContent calls, startChat() initializes a stateful chat session. The returned Chat object keeps track of the back-and-forth, so when you send a new message, the previous turns are sent along as context, enabling a coherent conversation.
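The difference between stateless generateContent calls and a stateful chat session can be demonstrated with a toy stand-in (this class only mimics the history-keeping behavior; it is not the real SDK's Chat class):

```kotlin
// Toy chat session: every sendMessage records the user turn and the
// (stubbed) model turn, so the accumulated history is available as
// context for the next request -- the behavior startChat() automates.
class ToyChat {
    val history = mutableListOf<Pair<String, String>>() // role to text

    fun sendMessage(userText: String): String {
        history.add("user" to userText)
        // A real session would send `history` to the API here.
        val reply = "reply to: $userText"
        history.add("model" to reply)
        return reply
    }
}
```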
34. To prevent unauthorized use of your Gemini API key in other applications, what is a recommended security practice available in the Google Cloud Console?
Basic error handling and API usage limits
Medium
A. Obfuscating the key string using ProGuard/R8.
B. Frequently rotating the API key every 24 hours.
C. Applying an API key restriction to only allow requests from your app's specific package name and SHA-1 certificate fingerprint.
D. Storing the API key in SharedPreferences.
Correct Answer: Applying an API key restriction to only allow requests from your app's specific package name and SHA-1 certificate fingerprint.
Explanation:
API providers like Google allow you to add application restrictions. By locking the key to your app's unique signature, you ensure that even if the key is extracted, it cannot be used from any other application, significantly mitigating misuse.
35. Your app has an EditText for user input and a Button to send a prompt to the Gemini API. What is a recommended UI/UX practice to implement immediately after the user taps the send button?
Handling user input and AI output
Medium
A. Immediately clear the EditText field before getting a response.
B. Do nothing to the UI and allow the user to send multiple requests concurrently.
C. Animate the button to indicate a successful tap but leave input fields active.
D. Disable the Button and EditText and show a loading indicator to prevent multiple submissions while waiting for a response.
Correct Answer: Disable the Button and EditText and show a loading indicator to prevent multiple submissions while waiting for a response.
Explanation:
Disabling input controls and showing a visual indicator of activity (like a progress bar) provides clear feedback, prevents the user from sending duplicate requests, and manages the application state properly while waiting for an asynchronous operation to complete.
36. Which of the following is a clear example of Generative AI being used in a mobile application?
Introduction to AI in mobile apps
Medium
A. A fitness app that classifies your activity as "running" or "cycling" using phone sensors.
B. An editing app that creates a novel image from the text prompt "a photo of a cat wearing a spacesuit on Mars".
C. A keyboard app that suggests the next word as you type.
D. A camera app that detects and highlights faces to apply autofocus.
Correct Answer: An editing app that creates a novel image from the text prompt "a photo of a cat wearing a spacesuit on Mars".
Explanation:
Generative AI is focused on creating new, original content (text, images, code, etc.) that did not exist before. Face detection, word prediction, and activity classification are all forms of predictive or classificatory AI, not generative AI.
37. How do AI assistants like Google Assistant fundamentally enhance user interaction on Android compared to a traditional app-only interface?
Role of AI assistants in Android
Medium
A. By allowing all apps to run entirely in the background without user interaction.
B. By allowing users to interact with app features hands-free and from a system-level interface, without opening the app manually.
C. By guaranteeing faster app performance through OS-level CPU scheduling.
D. By providing a complete replacement for every app's graphical user interface.
Correct Answer: By allowing users to interact with app features hands-free and from a system-level interface, without opening the app manually.
Explanation:
The core value of an AI assistant is providing a conversational, system-wide interface. It allows users to perform actions without needing to manually find and navigate through a specific app, often using voice, which is crucial for accessibility and convenience.
38. When adding the Google AI Client SDK for Android to your project using Gradle, in which file do you typically declare the implementation("com.google.ai.client.generativeai:generativeai:...") dependency?
Integrating Google Gemini API in Android
Medium
A. In the module-level build.gradle (or build.gradle.kts) file's dependencies block.
B. In the AndroidManifest.xml file inside a <uses-library> tag.
C. In the proguard-rules.pro file to prevent obfuscation.
D. In the project-level build.gradle file's dependencies block.
Correct Answer: In the module-level build.gradle (or build.gradle.kts) file's dependencies block.
Explanation:
External libraries and SDKs for a specific Android application module (like the 'app' module) are declared in that module's build.gradle or build.gradle.kts file, using the implementation configuration in the dependencies block.
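In the module-level file, the declaration sits in the dependencies block like this (the version number is illustrative; check the current release before using it):

```kotlin
// app/build.gradle.kts (module level)
dependencies {
    implementation("com.google.ai.client.generativeai:generativeai:0.9.0")
}
```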
39. In the context of the Gemini API's generationConfig, what does the "temperature" parameter primarily control?
Generating AI text responses
Medium
A. The factual accuracy of the generated text, with higher values being more accurate.
B. The speed of the response generation.
C. The maximum number of tokens allowed in the response.
D. The degree of randomness in the output; a lower value is more deterministic, while a higher value is more creative.
Correct Answer: The degree of randomness in the output; a lower value is more deterministic, while a higher value is more creative.
Explanation:
Temperature is a key parameter for controlling the output of LLMs. A low temperature (e.g., 0.2) makes the model choose more likely, predictable words, resulting in focused responses. A high temperature (e.g., 0.9) increases randomness, leading to more diverse or creative outputs.
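Conceptually, temperature rescales the model's token scores before sampling. The effect can be shown with a small softmax sketch (the API applies this server-side during generation; this is only a conceptual illustration with made-up scores):

```kotlin
import kotlin.math.exp

// Softmax with temperature: dividing scores by a small temperature
// sharpens the distribution (more deterministic choices); a large
// temperature flattens it (more random/creative sampling).
fun softmax(scores: List<Double>, temperature: Double): List<Double> {
    val scaled = scores.map { exp(it / temperature) }
    val total = scaled.sum()
    return scaled.map { it / total }
}
```

With scores [2.0, 1.0], a temperature of 0.2 puts almost all probability on the first token, while 2.0 spreads it much more evenly.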
40. When you make a request to the Gemini API and receive a response where response.text is null but no network or authentication exception was thrown, what is a likely cause related to the API's configuration?
Basic error handling and API usage limits
Medium
A. The model's response was blocked due to a safety policy violation (e.g., generating harmful content).
B. The user's device is offline, but the request was cached.
C. The request specified an invalid or deprecated model name.
D. The API key used has expired.
Correct Answer: The model's response was blocked due to a safety policy violation (e.g., generating harmful content).
Explanation:
The Gemini API has built-in safety filters. If a prompt or the model's generated response violates these filters, the API may block the response. This often manifests as an empty or null content field rather than a server error, indicating the content was successfully filtered.
41. An Android application for real-time video analysis needs to apply a complex, computationally intensive object detection model. The app must function with minimal latency even with intermittent network connectivity. What is the optimal AI model deployment strategy?
Introduction to AI in mobile apps
Hard
A. Exclusively use a server-side model and implement a complex caching mechanism on the device to simulate offline functionality.
B. Use a cloud-based AI API for all processing to leverage powerful servers.
C. Deploy a full-sized model directly within the app's APK.
D. Implement a hybrid approach using a quantized on-device model (e.g., TensorFlow Lite) for initial, low-latency detection and offloading complex analysis to a cloud API when a stable connection is available.
Correct Answer: Implement a hybrid approach using a quantized on-device model (e.g., TensorFlow Lite) for initial, low-latency detection and offloading complex analysis to a cloud API when a stable connection is available.
Explanation:
This hybrid approach balances latency, offline capability, and computational power. The quantized on-device model provides immediate, real-time feedback, while the cloud API can perform more nuanced, heavy analysis when connectivity permits, offering the best of both worlds for this use case.
42. Your Android app, using the Gemini API, suddenly starts receiving HTTP 429 Too Many Requests errors. You've implemented a simple retry mechanism, but it's not resolving the issue. What is the most robust strategy to handle this specific error and ensure service resilience?
Basic error handling and API usage limits
Hard
A. Switch to a different API endpoint, assuming the primary one is overloaded.
B. Implement an exponential backoff strategy with jitter, starting with a short delay and progressively increasing it for subsequent retries, while respecting the Retry-After header if present.
C. Retry the request with a fixed delay of 5 seconds.
D. Store the failed request locally and prompt the user to manually retry later.
Correct Answer: Implement an exponential backoff strategy with jitter, starting with a short delay and progressively increasing it for subsequent retries, while respecting the Retry-After header if present.
Explanation:
An exponential backoff strategy with jitter is the industry-standard approach for handling rate limiting. It prevents a "thundering herd" problem where all clients retry simultaneously, worsening the server load. Adding jitter (randomness) further staggers the retries. Respecting the Retry-After header is crucial for complying with the API's specific instructions.
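The delay schedule for exponential backoff with jitter can be computed in a few lines (a sketch; real code would also honor a Retry-After header when the server sends one, and cap the number of attempts):

```kotlin
import kotlin.random.Random

// Delay before retry `attempt` (0-based): the base delay doubles each
// attempt up to a cap, plus random jitter so many clients do not all
// retry at the same instant.
fun backoffDelayMs(attempt: Int, baseMs: Long = 1000, capMs: Long = 32_000): Long {
    val exponential = (baseMs shl attempt).coerceAtMost(capMs) // 1s, 2s, 4s, ...
    val jitter = Random.nextLong(0, baseMs)                    // up to 1s extra
    return exponential + jitter
}
```

Inside a coroutine, the caller would wait with delay(backoffDelayMs(attempt)) between retries.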
43. When building a conversational chat feature in an Android app using the Google Gemini SDK, you need to maintain the context of the entire conversation. Which Gemini SDK component is specifically designed for this purpose, and how does it manage the history?
Integrating Google Gemini API in Android
Hard
A. A prompt string where you must manually concatenate the entire user and model history, separated by special tokens, for each API request.
B. The GenerativeModel instance, which automatically stores all previous prompts and responses in a volatile memory cache.
C. A Content object, which must be manually appended with the full conversation history before each new generateContent call.
D. The Chat object, initiated via model.startChat(), which internally manages the conversation history and automatically includes it in subsequent sendMessage() calls.
Correct Answer: The Chat object, initiated via model.startChat(), which internally manages the conversation history and automatically includes it in subsequent sendMessage() calls.
Explanation:
The Chat object in the Gemini SDK is purpose-built for multi-turn conversations. It abstracts away the complexity of managing history by maintaining the state internally. When you call sendMessage() on a Chat instance, the SDK automatically appends the new message to the history and sends the relevant context to the API.
44. You are designing an application that requires generating both creative text and analyzing the content of user-uploaded images in a single API call. Which model and API feature would be most suitable for this "multimodal" task?
Overview of ChatGPT (OpenAI) and Gemini (Google)
Hard
A. OpenAI's legacy GPT-2 model, as it is highly customizable with fine-tuning for specific image-to-text tasks.
B. OpenAI's GPT-3.5-turbo model, by first converting the image to text using a separate OCR service and then passing both texts to the model.
C. Google's Gemini Pro Vision model, which natively accepts both text and image data (e.g., a Bitmap) within a single prompt.
D. Google's Gemini Pro model, which is primarily text-based and would require a separate Vision API call.
Correct Answer: Google's Gemini Pro Vision model, which natively accepts both text and image data (e.g., a Bitmap) within a single prompt.
Explanation:
Gemini Pro Vision is explicitly designed for multimodal input. It can process prompts that combine different data types like text and images simultaneously in one request, making it the most direct and efficient solution for this requirement. The other options involve multiple, less integrated steps.
Incorrect! Try again.
45An AI model is designed to return a JSON object representing a user's flight booking intent. However, due to model "hallucinations" or variations in prompting, the response is sometimes a natural language sentence describing the JSON instead of the JSON itself. What is the most resilient parsing strategy on the Android client?
Handling user input and AI output
Hard
A.Send the raw text response directly to the application's business logic layer and let it handle the unstructured data.
B.Use regular expressions to try and extract key-value pairs from the natural language sentence.
C.Use a strict JSON parser (like Gson or Moshi) and show a generic "Parsing Error" to the user if it fails.
D.Attempt to parse the response as JSON first. If it fails, use a secondary prompt to ask the AI model to "reformat the previous response as a valid JSON object only".
Correct Answer: Attempt to parse the response as JSON first. If it fails, use a secondary prompt to ask the AI model to "reformat the previous response as a valid JSON object only".
Explanation:
This two-step approach is robust. It first tries the ideal, efficient path (direct parsing). If that fails, it uses the AI's own capabilities as a fallback mechanism to self-correct its output format. This is more reliable than brittle regex and provides a better user experience than a generic error.
Incorrect! Try again.
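The fallback flow can be sketched independently of any particular JSON library; `parseJson` and `askModelToReformat` below are hypothetical stand-ins for a strict parser (e.g. Moshi) and a second Gemini call:

```kotlin
// Try the strict parse first; only if it fails, spend a second API call on self-correction.
fun extractBookingJson(
    raw: String,
    parseJson: (String) -> Map<String, String>?,    // returns null on malformed input
    askModelToReformat: (String) -> String          // "reformat ... as valid JSON only"
): Map<String, String>? {
    parseJson(raw)?.let { return it }               // happy path: already valid JSON
    val reformatted = askModelToReformat(raw)       // fallback: let the model self-correct
    return parseJson(reformatted)                   // may still be null; caller shows an error
}
```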
46You need to store a Google Gemini API key in your Android project. Which of the following methods provides the strongest security against the key being extracted from a reverse-engineered APK?
Basics of AI APIs REST APIs and API keys
Hard
A.Proxying all API requests through a secure backend server that you control, where the API key is stored as an environment variable and never exposed to the client app.
B.Storing the key in strings.xml and retrieving it using context.getString().
C.Storing the key as a hardcoded String constant in a Kotlin file.
D.Placing the key in local.properties and accessing it via BuildConfig.API_KEY.
Correct Answer: Proxying all API requests through a secure backend server that you control, where the API key is stored as an environment variable and never exposed to the client app.
Explanation:
No client-side storage method is completely secure. Even keys in BuildConfig or obfuscated code can be extracted. The only truly secure method is to never let the client app possess the key. A backend proxy service authenticates your app, makes the request to the AI provider using the secret key, and then forwards the response back to the app.
Incorrect! Try again.
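A sketch of the client side under this design; the endpoint URL and auth scheme are assumptions, and the Gemini key exists only on the server:

```kotlin
// The app authenticates as the *user* (e.g. a Firebase ID token); the Gemini API key
// never ships in the APK. The /generate route is a hypothetical endpoint on your backend.
suspend fun generateViaProxy(prompt: String, userIdToken: String): String =
    withContext(Dispatchers.IO) {
        val connection = URL("https://api.example.com/generate")
            .openConnection() as HttpsURLConnection
        connection.requestMethod = "POST"
        connection.setRequestProperty("Authorization", "Bearer $userIdToken")
        connection.setRequestProperty("Content-Type", "application/json")
        connection.doOutput = true
        connection.outputStream.use {
            it.write("""{"prompt": ${JSONObject.quote(prompt)}}""".toByteArray())
        }
        connection.inputStream.bufferedReader().use { it.readText() }
    }
```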
47You are using an LLM to generate code snippets. To get more deterministic and focused results, you want to reduce the randomness of the output. Which combination of API parameters should you adjust?
Generating AI text responses
Hard
A.Set temperature to 0 and max_output_tokens to a very high number.
B.Increase temperature to a high value (e.g., 1.0) and set top_p to 1.0.
C.Decrease temperature to a low value (e.g., 0.1) and decrease top_k to a small integer.
D.Increase the frequency_penalty and presence_penalty to their maximum values.
Correct Answer: Decrease temperature to a low value (e.g., 0.1) and decrease top_k to a small integer.
Explanation:
temperature controls randomness; a lower value makes the model choose higher-probability tokens, making the output more deterministic and focused. top_k sampling restricts the model's choices to the 'k' most likely next tokens. A small top_k further reduces randomness, complementing the low temperature for predictable tasks like code generation.
Incorrect! Try again.
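With the Google AI client SDK, these parameters are set through the `generationConfig` builder; the exact values below are illustrative:

```kotlin
// Low temperature + small topK biases the model toward its highest-probability tokens,
// which suits deterministic tasks like code generation.
val model = GenerativeModel(
    modelName = "gemini-pro",
    apiKey = BuildConfig.GEMINI_API_KEY,
    generationConfig = generationConfig {
        temperature = 0.1f
        topK = 5
    }
)
```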
48To enable Google Assistant to deep-link into a specific feature of your app and pass parameters (e.g., "Hey Google, order a pizza from MyPizzaApp"), your app needs to declare specific capabilities. What Android framework component is primarily used to achieve this integration?
Role of AI assistants in Android
Hard
A.Using Firebase App Indexing to allow Google's crawlers to index your app's content.
B.Implementing a custom ContentProvider to expose app data to the Assistant.
C.Creating a foreground Service that listens for broadcasts from Google Assistant.
D.Defining App Actions using a shortcuts.xml resource file and linking them to capability definitions in your AndroidManifest.xml.
Correct Answer: Defining App Actions using a shortcuts.xml resource file and linking them to capability definitions in your AndroidManifest.xml.
Explanation:
App Actions are the official framework for integrating with Google Assistant. You define built-in intents (like actions.intent.ORDER_MENU_ITEM) in a shortcuts.xml file, map them to your app's activities, and declare the capability to handle these intents. This allows the Assistant to understand user commands and fulfill them using your app's functionality.
Incorrect! Try again.
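A trimmed `shortcuts.xml` sketch for the pizza-ordering example; the package, class, and parameter key are illustrative:

```xml
<!-- res/xml/shortcuts.xml; referenced from AndroidManifest.xml via
     <meta-data android:name="android.app.shortcuts" android:resource="@xml/shortcuts" /> -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.ORDER_MENU_ITEM">
    <intent
        android:action="android.intent.action.VIEW"
        android:targetPackage="com.example.mypizzaapp"
        android:targetClass="com.example.mypizzaapp.OrderActivity">
      <!-- Maps the spoken menu item ("a pizza") to an intent extra. -->
      <parameter
          android:name="menuItem.name"
          android:key="itemName" />
    </intent>
  </capability>
</shortcuts>
```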
49Your application makes streaming requests to the Gemini API to display text as it's generated. During a stream, the connection is lost. The Gemini SDK for Android throws an exception. What is the most appropriate recovery strategy to provide a seamless user experience?
Basic error handling and API usage limits
Hard
A.Automatically resend the original prompt, which will restart the entire generation process from scratch.
B.Discard the partial response and show an error message asking the user to retry the entire prompt.
C.Cache the partial response and wait indefinitely for the network to return, at which point the SDK will automatically resume the stream.
D.Store the partially received text, display it to the user with a "Resume" button, and upon clicking, send a new prompt asking the model to "continue from where you left off: [last few words of partial text]".
Correct Answer: Store the partially received text, display it to the user with a "Resume" button, and upon clicking, send a new prompt asking the model to "continue from where you left off: [last few words of partial text]".
Explanation:
This approach preserves the work already done and provides the user with control. By re-prompting with the last known context, you can often get the model to seamlessly continue the generation, which is a much better experience than starting over or showing a hard error. The SDK does not automatically resume a broken stream.
Incorrect! Try again.
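A sketch of the recovery path; `showText`, `showResumeButton`, and the continuation prompt wording are assumptions:

```kotlin
val partial = StringBuilder()

fun startStreaming(prompt: String) {
    viewModelScope.launch {
        try {
            generativeModel.generateContentStream(prompt).collect { chunk ->
                partial.append(chunk.text)
                showText(partial.toString())
            }
        } catch (e: CancellationException) {
            throw e                 // never swallow coroutine cancellation
        } catch (e: Exception) {
            // Network failure mid-stream: keep what we have and offer a Resume
            // affordance instead of a hard error.
            showResumeButton()
        }
    }
}

fun onResumeRequested() {
    val tail = partial.toString().takeLast(80)
    startStreaming("Continue from where you left off: \"$tail\"")
}
```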
50You are implementing a feature using the Gemini API that requires processing both an image and a text prompt. Using the generative-ai SDK for Android, how would you structure the input Content for the generateContent call?
Integrating Google Gemini API in Android
Hard
A.Make two separate generateContent calls, one with the image and one with the text, and merge the results on the client side.
B.First, upload the image to a cloud storage service to get a URL, and then pass this URL along with the text prompt in a single text Part.
C.Create a Content object containing a single Part that holds both the Bitmap and the text String concatenated together.
D.Use the content() builder function to construct a Content object containing two separate Parts: one created with image(bitmap) and another with text(prompt).
Correct Answer: Use the content() builder function to construct a Content object containing two separate Parts: one created with image(bitmap) and another with text(prompt).
Explanation:
The Gemini SDK's content() builder is specifically designed for multimodal input. It allows you to create a single Content object composed of multiple Parts, where each part can hold a different data type (like an image or text). This is the correct and most efficient way to send multimodal prompts to models like Gemini Pro Vision.
Incorrect! Try again.
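A minimal sketch with the Google AI client SDK; the model name and prompt text are illustrative:

```kotlin
// One Content, two Parts: the image and the text travel in the same request.
val model = GenerativeModel(
    modelName = "gemini-pro-vision",
    apiKey = BuildConfig.GEMINI_API_KEY
)

viewModelScope.launch {
    val input = content {
        image(bitmap)        // android.graphics.Bitmap
        text("Describe what is happening in this photo.")
    }
    val response = model.generateContent(input)
    println(response.text)
}
```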
51When comparing the underlying architecture and training data cutoff of OpenAI's GPT-4 and Google's Gemini models (as of early 2024), what is a key differentiating factor that affects their ability to answer questions about very recent events?
Overview of ChatGPT OpenAI and Gemini Google
Hard
A.GPT-4 has a fixed knowledge cutoff date and cannot access real-time information, whereas some versions of Gemini are designed with integrations to Google Search for more up-to-date responses.
B.GPT-4 can access the live internet through a built-in browser, while Gemini cannot.
C.Both models have identical, static knowledge cutoff dates and rely solely on their training data.
D.Gemini models are exclusively trained on real-time data streams, giving them a constant advantage for recent events.
Correct Answer: GPT-4 has a fixed knowledge cutoff date and cannot access real-time information, whereas some versions of Gemini are designed with integrations to Google Search for more up-to-date responses.
Explanation:
A key difference is that Google's ecosystem allows its models (like Gemini) to integrate deeply with its real-time services (like Search). While all base LLMs have training-data cutoffs, certain product integrations, such as Google's Gemini app (formerly Bard), can ground the model's responses with fresh, real-time information, a capability the standard GPT-4 API does not natively offer in the same way.
Incorrect! Try again.
52In the context of REST APIs for AI services, what is the primary purpose of an Idempotency-Key header, and in which scenario would it be most critical?
Basics of AI APIs REST APIs and API keys
Hard
A.To specify the desired format of the response, such as JSON or XML.
B.To authenticate the client, serving as a secondary API key.
C.To ensure that if a network error causes a request to be sent multiple times, the server only processes it once and returns the same result. This is critical for non-GET requests that modify state or incur costs.
D.To cache the request on the client-side to reduce API calls.
Correct Answer: To ensure that if a network error causes a request to be sent multiple times, the server only processes it once and returns the same result. This is critical for non-GET requests that modify state or incur costs.
Explanation:
Idempotency is crucial for operations that shouldn't be performed multiple times, such as submitting a payment or, in an AI context, initiating a costly generation job. By sending a unique Idempotency-Key, you tell the server, "If you've seen this key before, don't re-process the request; just send me the original result." This prevents duplicate actions and charges in case of network retries.
Incorrect! Try again.
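A sketch of a client attaching such a header; the header name follows the common convention, but whether a given AI provider honors it is provider-specific, and the endpoint is hypothetical:

```kotlin
// A stable UUID generated once per logical operation; retries reuse the SAME key,
// so the server can deduplicate the request.
val idempotencyKey = UUID.randomUUID().toString()

fun buildGenerationRequest(body: String): HttpURLConnection {
    val connection = URL("https://api.example.com/v1/generate")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.setRequestProperty("Idempotency-Key", idempotencyKey)
    connection.setRequestProperty("Content-Type", "application/json")
    connection.doOutput = true
    connection.outputStream.use { it.write(body.toByteArray()) }
    return connection
}
```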
53Your app uses an LLM to summarize long articles. You need to ensure the AI's output is always under 200 tokens to fit in a UI element. However, the model sometimes ignores the instruction "summarize in under 200 tokens". What is the most reliable technical constraint to enforce this limit?
Handling user input and AI output
Hard
A.After receiving the response, check its token count on the client-side and truncate the string if it's too long.
B.Add the instruction "IMPORTANT: The output must be less than 200 tokens" to the end of the prompt.
C.Fine-tune a custom model specifically on summaries that are all under 200 tokens.
D.Set the max_output_tokens parameter in the API request to 200.
Correct Answer: Set the max_output_tokens parameter in the API request to 200.
Explanation:
While prompt engineering is helpful, it's not a guarantee. Client-side truncation can abruptly cut off sentences, resulting in poor quality. Fine-tuning is overkill and expensive for this simple constraint. The max_output_tokens parameter is a direct, technical instruction to the API server to stop the generation process once the specified token limit is reached, making it the most reliable method for enforcing a hard length limit.
Incorrect! Try again.
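With the Google AI client SDK this maps to `maxOutputTokens` in the generation config:

```kotlin
// A hard, server-enforced cap: generation stops at 200 tokens regardless of the prompt.
val config = generationConfig {
    maxOutputTokens = 200
}
```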
54A mobile health app uses an on-device TensorFlow Lite model for real-time heart rate analysis from a camera feed. The developers want to personalize the model for each user over time without sending sensitive health data to the cloud. Which AI technique is best suited for this scenario?
Introduction to AI in mobile apps
Hard
A.Transfer Learning, where the app downloads a new, pre-trained model from the cloud every day.
B.Centralized cloud-based training where all user data is aggregated to train a single, improved model that is then pushed to all devices.
C.Edge Computing, which involves offloading the training process to a nearby network server instead of a central cloud.
D.Federated Learning, where the model is trained locally on each device, and only the anonymous model updates (gradients or weights), not the raw data, are sent to a central server for aggregation.
Correct Answer: Federated Learning, where the model is trained locally on each device, and only the anonymous model updates (gradients or weights), not the raw data, are sent to a central server for aggregation.
Explanation:
Federated Learning is explicitly designed for privacy-preserving, decentralized machine learning. It allows the model to learn from user-specific data directly on their device. The sensitive raw data (the heart rate video) never leaves the phone. Only the mathematical updates to the model are shared, which are then used to improve the global model, benefiting all users while maintaining individual privacy.
Incorrect! Try again.
55You are designing a feature that allows users to make up to 100 AI generation requests per day. What is the most scalable and reliable way to enforce this user-specific limit in a multi-device scenario (i.e., user is logged in on a phone and a tablet)?
Basic error handling and API usage limits
Hard
A.Use a client-side WorkManager job that resets a counter every 24 hours.
B.Store the counter in a local SQLite database on each device, timestamping each request.
C.Implement a server-side counter associated with the user's account in your backend database. All client API calls must be proxied through this backend, which checks and updates the counter before forwarding the request to the AI provider.
D.Use Android's SharedPreferences on each device to store a daily counter.
Correct Answer: Implement a server-side counter associated with the user's account in your backend database. All client API calls must be proxied through this backend, which checks and updates the counter before forwarding the request to the AI provider.
Explanation:
Client-side solutions are unreliable for multi-device scenarios and can be easily tampered with. A user could simply clear app data or use two devices to circumvent the limit. The only robust solution is a centralized, server-side counter tied to the user's authenticated account. This ensures the limit is enforced consistently regardless of which device the user is on.
Incorrect! Try again.
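The server-side check can be sketched as an atomic per-user counter. In production this would live in your backend's database or a store like Redis; the in-memory map below only illustrates the check-and-increment logic:

```kotlin
class DailyQuota(private val limit: Int = 100) {
    private val counts = ConcurrentHashMap<String, Int>()

    /** Returns true and increments if the user still has quota; false otherwise. */
    fun tryConsume(userId: String): Boolean {
        var allowed = false
        counts.compute(userId) { _, old ->          // atomic per key
            val current = old ?: 0
            if (current < limit) { allowed = true; current + 1 } else current
        }
        return allowed
    }

    fun resetAll() = counts.clear()                 // invoked by a daily scheduled job
}
```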
56Your app uses the Gemini SDK's streaming feature (generateContentStream) to display text. The design requires showing a "Stop Generating" button. How would you correctly implement the cancellation of an in-flight streaming request in a Kotlin Coroutine?
kotlin
// Assume 'viewModelScope' is the CoroutineScope
var job: Job? = null

fun startGeneration() {
    job = viewModelScope.launch {
        generativeModel.generateContentStream(...)
            .collect { ... }
    }
}
Integrating Google Gemini API in Android
Hard
A.Call Thread.interrupt() on the coroutine's thread.
B.The Gemini SDK does not support cancellation of streaming requests once started; you can only ignore subsequent emissions.
C.Call job?.cancel(), which leverages structured concurrency to propagate cancellation down to the underlying network call in the SDK.
D.Set a volatile boolean flag like isCancelled = true and check it inside the .collect block.
Correct Answer: Call job?.cancel(), which leverages structured concurrency to propagate cancellation down to the underlying network call in the SDK.
Explanation:
The Google Gemini SDK for Android is built with Kotlin Coroutines in mind and supports cooperative cancellation. Calling job.cancel() on the coroutine that launched the stream will propagate the cancellation signal through the coroutine hierarchy. The SDK's generateContentStream function is designed to listen for this signal and will terminate the underlying network request, saving bandwidth and resources.
Incorrect! Try again.
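Completing the snippet from the question, the "Stop Generating" button needs only:

```kotlin
fun stopGeneration() {
    job?.cancel()   // propagates through structured concurrency to the SDK's network call
    job = null
}
```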
57You are building a chatbot that must avoid discussing certain sensitive topics. Which Gemini API feature is specifically designed to enforce content policies and prevent the model from generating harmful, unethical, or off-topic responses?
Generating AI text responses
Hard
A.Setting stop_sequences to a list of keywords related to the sensitive topics.
B.Using a detailed negative prompt like "Do not talk about [sensitive topic A] or [sensitive topic B]".
C.Configuring the safety_settings parameter in the API request, adjusting the block thresholds for categories like HARM_CATEGORY_HARASSMENT or HARM_CATEGORY_DANGEROUS_CONTENT.
D.Lowering the temperature to 0 to make the model less creative and less likely to stray into sensitive areas.
Correct Answer: Configuring the safety_settings parameter in the API request, adjusting the block thresholds for categories like HARM_CATEGORY_HARASSMENT or HARM_CATEGORY_DANGEROUS_CONTENT.
Explanation:
While prompt engineering and stop sequences can help, they are not foolproof. The safety_settings parameter is the dedicated, server-side mechanism provided by the API to enforce content safety. It allows you to define strict thresholds for various categories of harmful content, causing the API to block responses that violate these policies before they are even sent back to the client.
Incorrect! Try again.
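In the Android SDK these thresholds are supplied when constructing the model; the categories and thresholds below are examples:

```kotlin
val model = GenerativeModel(
    modelName = "gemini-pro",
    apiKey = BuildConfig.GEMINI_API_KEY,
    safetySettings = listOf(
        SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.LOW_AND_ABOVE),
        SafetySetting(HarmCategory.DANGEROUS_CONTENT, BlockThreshold.MEDIUM_AND_ABOVE)
    )
)
```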
58A productivity app wants to proactively suggest a "Create new project" action to the user via Google Assistant when the user receives an email about a new client. This requires the app's AI to operate based on contextual signals from other apps. What Android concept describes this type of proactive, on-device intelligence?
Role of AI assistants in Android
Hard
A.A background Service that uses Accessibility APIs to read the content of other apps.
B.On-device personalization using TensorFlow Lite.
C.App Actions, which are triggered by direct user voice commands.
D.Android's Private Compute Core, which enables features to process contextual data on-device in a privacy-preserving manner to power suggestions.
Correct Answer: Android's Private Compute Core, which enables features to process contextual data on-device in a privacy-preserving manner to power suggestions.
Explanation:
Android's Private Compute Core (PCC) is a secure, isolated environment on the device that allows apps and OS features (like Live Caption or Now Playing) to use on-device AI on ambient, contextual data without that data ever leaving the device. This is the underlying technology that would power proactive, privacy-safe suggestions from an assistant based on content from another app like Gmail. Using Accessibility APIs for this purpose would be a major privacy violation.
Incorrect! Try again.
59In the context of large language models, what is the fundamental difference between how top_p (nucleus) sampling and top_k sampling control token generation?
Overview of ChatGPT OpenAI and Gemini Google
Hard
A.top_k is used for text generation, while top_p is used exclusively for image generation models.
B.Both top_p and top_k are aliases for the temperature parameter and achieve the same result.
C.top_k selects a fixed number of the most probable tokens, while top_p selects a variable number of tokens whose cumulative probability exceeds a certain threshold p.
D.top_p selects a fixed number of tokens, while top_k selects tokens based on a probability threshold.
Correct Answer: top_k selects a fixed number of the most probable tokens, while top_p selects a variable number of tokens whose cumulative probability exceeds a certain threshold p.
Explanation:
top_k is a simple truncation: consider only the k most likely next words. top_p is more dynamic: it creates the smallest possible set of most likely words whose cumulative probability is at least p. This means if one word is overwhelmingly likely, the set might contain only one word; if many words are equally likely, the set will be larger. This makes top_p generally more adaptive than the fixed-size top_k.
Incorrect! Try again.
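The difference is easy to see on a toy distribution (a sketch of the selection step, not production sampling code):

```kotlin
// Keep the k most probable tokens.
fun topKPool(probs: List<Double>, k: Int): List<Double> =
    probs.sortedDescending().take(k)

// Keep the smallest prefix whose cumulative probability reaches p.
fun topPPool(probs: List<Double>, p: Double): List<Double> {
    val sorted = probs.sortedDescending()
    val pool = mutableListOf<Double>()
    var cumulative = 0.0
    for (prob in sorted) {
        pool.add(prob)
        cumulative += prob
        if (cumulative >= p) break
    }
    return pool
}

// On a peaked distribution, top_p adapts while top_k does not:
// topKPool(listOf(0.85, 0.10, 0.03, 0.02), 3) keeps 3 tokens, but
// topPPool(listOf(0.85, 0.10, 0.03, 0.02), 0.9) keeps only 2 (0.85 + 0.10 = 0.95 >= 0.9).
```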
60You're building an Android app where the Gemini API generates markdown-formatted text. To render this in a TextView, you need to convert the markdown to a Spanned object. What is a critical consideration for performance and user experience when handling a streamed markdown response?
Handling user input and AI output
Hard
A.Wait for the entire stream to complete, then parse the full markdown string once at the end.
B.Use a WebView to render the markdown, as it is inherently better at handling streaming content.
C.Append the incoming text chunks to a StringBuilder. Use a debouncing mechanism (e.g., a coroutine with debounce) to trigger markdown parsing and rendering on the TextView only after a short pause in the stream (e.g., 200ms), preventing UI freezes from excessive re-parsing on every character.
D.For each small chunk of text received from the stream, append it to the TextView and re-parse the entire TextView's content as markdown.
Correct Answer: Append the incoming text chunks to a StringBuilder. Use a debouncing mechanism (e.g., a coroutine with debounce) to trigger markdown parsing and rendering on the TextView only after a short pause in the stream (e.g., 200ms), preventing UI freezes from excessive re-parsing on every character.
Explanation:
Re-parsing the entire markdown on every single incoming character is extremely inefficient and will cause the UI to stutter or freeze. Waiting until the end defeats the purpose of streaming. A debouncing strategy provides the best balance: it gives the user a near-real-time view of the text as it arrives but intelligently batches the computationally expensive markdown-to-Spanned conversion, ensuring a smooth UI thread and a responsive user experience.
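A sketch of the debounced rendering loop; `parseMarkdown` stands in for a real markdown-to-Spanned converter (e.g. the Markwon library), and the 200 ms window is tunable:

```kotlin
private val buffer = StringBuilder()
private val renderTrigger = MutableSharedFlow<Unit>(extraBufferCapacity = 1)

fun startStreaming(prompt: String) {
    // Renderer: re-parse only after a 200 ms pause in incoming chunks.
    viewModelScope.launch {
        renderTrigger
            .debounce(200)          // requires @OptIn(FlowPreview::class)
            .collect { textView.text = parseMarkdown(buffer.toString()) }
    }
    // Collector: cheap append per chunk; no parsing on the hot path.
    viewModelScope.launch {
        generativeModel.generateContentStream(prompt).collect { chunk ->
            buffer.append(chunk.text)
            renderTrigger.tryEmit(Unit)
        }
    }
}
```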