Unit4 - Subjective Questions
CSE320 • Practice Questions with Detailed Answers
Explain the Seven Fundamental Principles of Software Testing.
Software testing is governed by seven fundamental principles:
- Testing shows the presence of defects: Testing can prove that defects exist, but cannot prove that the software is defect-free.
- Exhaustive testing is impossible: Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Risk analysis is used to focus testing efforts.
- Early Testing: Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives.
- Defect Clustering: A small number of modules usually contain most of the defects discovered during pre-release testing. This follows the Pareto Principle (80/20 rule).
- Pesticide Paradox: If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this, test cases need to be regularly reviewed and revised.
- Testing is context-dependent: Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
- Absence-of-errors fallacy: Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
Differentiate between Verification and Validation (V&V) in software engineering.
Verification and Validation are two complementary quality-assurance activities:
| Feature | Verification | Validation |
|---|---|---|
| Question | Are we building the product right? | Are we building the right product? |
| Definition | The process of evaluating work-products (documents, design, code) of a development phase to determine whether they satisfy the specified requirements. | The process of evaluating software during or at the end of the development process to determine whether it satisfies specified business requirements. |
| Activities | Reviews, Walkthroughs, Inspections, Static Analysis. | Unit Testing, Integration Testing, System Testing, UAT (Dynamic Testing). |
| Execution | Code is not executed. | Code is executed. |
| Focus | Focuses on documentation, design, and adherence to specs. | Focuses on the actual product meeting user needs. |
Compare Functional and Non-Functional Testing with examples.
Functional Testing checks what the system does against functional requirements.
- Focus: Verifies operations, data manipulation, business logic, and user interactions.
- Examples: Unit testing, Smoke testing, Sanity testing, Integration testing.
- Scenario: Checking if a 'Login' button redirects to the dashboard upon entering valid credentials.
Non-Functional Testing checks how the system performs.
- Focus: Verifies attributes like reliability, scalability, performance, and security.
- Examples: Performance testing, Stress testing, Usability testing, Security testing.
- Scenario: Checking if the 'Login' process completes within 2 seconds when 1000 users try to log in simultaneously.
Explain the concept of Cyclomatic Complexity in White Box Testing. Calculate the Cyclomatic Complexity for a flow graph with 10 edges and 7 nodes.
Cyclomatic Complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It is used in White Box testing to ensure that every path has been executed at least once.
Calculation Formula:
The formula to calculate Cyclomatic Complexity is:
V(G) = E - N + 2P
Where:
- E = Number of edges in the flow graph.
- N = Number of nodes in the flow graph.
- P = Number of connected components (usually 1 for a single program).
Problem Calculation:
Given:
- Edges (E) = 10
- Nodes (N) = 7
- Connected components (P) = 1
V(G) = 10 - 7 + 2(1) = 5
Therefore, the Cyclomatic Complexity is 5, meaning there are 5 linearly independent paths to be tested.
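The calculation above can be expressed as a small helper, assuming the standard V(G) = E - N + 2P formula (the function name is illustrative):

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """Return V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# Flow graph from the problem: 10 edges, 7 nodes, 1 connected component.
print(cyclomatic_complexity(10, 7))  # 5 independent paths
```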
Describe Equivalence Partitioning (EP) as a Black Box testing technique with an example.
Equivalence Partitioning (EP) is a black-box testing technique that divides the input data of a software unit into partitions (classes) of equivalent data from which test cases can be derived. The hypothesis is that if one data point in a partition works (or fails), all other data points in that same partition will behave the same way.
Types of Partitions:
- Valid Partition: Values that should be accepted by the component.
- Invalid Partition: Values that should be rejected.
Example:
Consider a text field that accepts a number between 1 and 100 (inclusive).
- Valid Partition: Any number between 1 and 100 (e.g., 50).
- Invalid Partition 1: Any number less than 1 (e.g., 0 or -5).
- Invalid Partition 2: Any number greater than 100 (e.g., 101 or 500).
Using EP reduces the number of test cases significantly compared to testing every number from 1 to 100.
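The 1-100 example can be sketched as one representative test per partition (the validator and chosen values are illustrative):

```python
def accepts(value: int) -> bool:
    """Component under test: accepts numbers between 1 and 100 inclusive."""
    return 1 <= value <= 100

# Under the EP hypothesis, one representative value per class is enough.
partitions = {
    "valid (1-100)": (50, True),
    "invalid (< 1)": (-5, False),
    "invalid (> 100)": (101, False),
}

for name, (value, expected) in partitions.items():
    assert accepts(value) == expected, name
print("all partitions behave as expected")
```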
What is Boundary Value Analysis (BVA)? Why is it important?
Boundary Value Analysis (BVA) is a black-box testing technique based on testing the boundary values of valid and invalid partitions. It is an extension of Equivalence Partitioning.
Key Concepts:
- Experience shows that errors are most likely to occur at the boundaries of input domains rather than in the center.
- BVA focuses on values like Minimum, Minimum - 1, Minimum + 1, Maximum, Maximum - 1, and Maximum + 1.
Importance:
- Defect Detection: It is highly effective at finding "off-by-one" errors (e.g., using `<` instead of `<=`).
- Efficiency: It finds a high number of defects with a relatively small set of test cases.
- Completeness: It ensures the software handles the extreme limits of the input variables correctly.
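For the same 1-100 field used in the EP example, the six classic BVA inputs can be generated mechanically (a sketch; the helper name is illustrative):

```python
def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Return the six classic BVA test inputs for an inclusive range."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```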
Discuss the different strategies of Integration Testing.
Integration Testing verifies the interfaces and interactions between modules. The main strategies include:
- Big Bang Integration: All components are integrated simultaneously, and the system is tested as a whole. It is simple but makes fault localization difficult.
- Top-Down Integration: Testing starts from the main module and proceeds downwards.
- Stubs are used to simulate lower-level modules that are not yet developed.
- Advantage: Major design flaws are found early.
- Bottom-Up Integration: Testing starts from the lowest level units and proceeds upwards.
- Drivers are used to simulate the calling of main modules.
- Advantage: Useful when low-level modules perform critical utility functions.
- Sandwich (Hybrid) Integration: A combination of Top-Down and Bottom-Up approaches. It utilizes both Stubs and Drivers to test the middle layers.
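The role of a stub can be illustrated with a minimal top-down sketch (module and class names are hypothetical): the stub stands in for a lower-level module that is not yet developed.

```python
class PaymentGatewayStub:
    """Stub: simulates a lower-level module that is not yet built."""
    def charge(self, amount: float) -> str:
        return "APPROVED"  # canned response, no real processing

class CheckoutModule:
    """Upper-level module under test; depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        status = self.gateway.charge(amount)
        return "ORDER_CONFIRMED" if status == "APPROVED" else "ORDER_FAILED"

# Top-down integration test: exercise CheckoutModule against the stub.
checkout = CheckoutModule(PaymentGatewayStub())
print(checkout.place_order(99.0))  # ORDER_CONFIRMED
```

In bottom-up integration the roles reverse: a driver would call the real lower-level module directly.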
Define User Acceptance Testing (UAT) and distinguish between Alpha and Beta testing.
User Acceptance Testing (UAT) is the final phase of testing performed by the end-user or client to verify if the system satisfies the business requirements and is ready for deployment.
Alpha vs. Beta Testing:
- Alpha Testing:
- Performed by internal employees or a specific test team at the developer's site.
- Controlled environment.
- Done before the software is released to external users.
- Beta Testing:
- Performed by a limited number of real users (customers) at the client's site (real-world environment).
- Uncontrolled environment.
- Done after Alpha testing but before the final release to the general market.
What is API Testing? What are the key protocols usually tested?
API (Application Programming Interface) Testing validates an application's programming interfaces directly. The purpose is to check the functionality, reliability, performance, and security of those interfaces.
Unlike GUI testing, API testing is performed at the message layer without a user interface. It involves sending calls to the API, getting output, and validating the system's response.
Key Protocols/Architectures:
- REST (Representational State Transfer): Uses standard HTTP methods (GET, POST, PUT, DELETE). Most common in modern web services.
- SOAP (Simple Object Access Protocol): XML-based protocol often used in enterprise environments for high security and transactional reliability.
- GraphQL: A query language for APIs that allows clients to request exactly the data they need.
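A REST API test asserts on the status code and message body rather than on any UI. A minimal sketch of such a contract check (the endpoint, fields, and response are hypothetical; a real test would use an HTTP client):

```python
import json

def validate_user_response(status_code: int, body: str) -> bool:
    """Message-layer check: 200 OK and required JSON fields present."""
    if status_code != 200:
        return False
    data = json.loads(body)
    return {"id", "name", "email"} <= data.keys()

# Simulated response from GET /users/42 (no network call in this sketch).
fake_body = '{"id": 42, "name": "Asha", "email": "asha@example.com"}'
print(validate_user_response(200, fake_body))  # True
print(validate_user_response(404, fake_body))  # False
```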
What are the specific challenges associated with Mobile Application Testing?
Mobile testing involves validating apps on mobile devices. Challenges include:
- Device Fragmentation: Testing must cover a vast array of devices with different screen sizes, resolutions, and hardware capabilities.
- OS Fragmentation: Different versions of Android (e.g., 10, 11, 12) and iOS behave differently. Testing must ensure backward compatibility.
- Network Conditions: Apps must function correctly under varying network speeds (4G, 5G, Wi-Fi) and handle transitions (e.g., losing signal).
- Battery and Performance: Apps must not drain the battery excessively or cause the device to overheat.
- Interrupt Testing: The app must handle interruptions like incoming calls, SMS, or low battery notifications gracefully.
Explain the basics of Web Testing. What are the key areas to verify in a web application?
Web Testing is the process of checking a web-based application for potential bugs before it is made live.
Key Areas to Verify:
- Functionality: Checking links (internal/external/broken), forms (validation), cookies (session management), and database connections.
- Usability: Navigation flow, content availability, and user-friendliness.
- Interface: Interaction between the Application, Web Server, and Database Server.
- Compatibility: Browser compatibility (Chrome, Firefox, Safari), OS compatibility (Windows, Mac, Linux), and Mobile browsing compatibility.
- Performance: Load times, stress handling under high traffic.
- Security: Checking for vulnerabilities like SQL Injection, XSS, and unauthorized access.
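The SQL Injection check above is commonly verified by confirming the application binds user input as parameters instead of concatenating it into SQL. A minimal sketch with sqlite3 (the schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login(name: str, password: str) -> bool:
    # Parameterized query: input is bound, never concatenated into SQL.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (name, password),
    ).fetchone()
    return row is not None

# A classic injection payload fails against the parameterized query.
print(login("alice", "secret"))        # True
print(login("alice", "' OR '1'='1"))   # False
```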
What is Automation Testing? Compare it with Manual Testing.
Automation Testing is the technique of using special software tools (like Selenium, Appium) to execute a suite of test cases. It compares actual results with expected results without human intervention.
Comparison:
| Aspect | Manual Testing | Automation Testing |
|---|---|---|
| Execution | Performed by a human sitting before a computer. | Performed by tools/scripts. |
| Accuracy | Prone to human error (fatigue). | Highly accurate. |
| Speed | Slower execution. | Significantly faster execution. |
| Exploratory | Best for exploratory and usability testing. | Not suitable for exploratory testing. |
| Cost | Lower initial cost, higher long-term cost for regression. | Higher initial cost (setup), lower long-term cost. |
| Regression | Tedious for repetitive regression tests. | Ideal for Regression testing. |
Provide an overview of Selenium IDE. How does the Record and Playback feature work?
Selenium IDE (Integrated Development Environment) is a browser extension (available for Chrome and Firefox) that provides a simple record-and-playback tool for test automation. It is the simplest tool in the Selenium suite.
Record and Playback Workflow:
- Installation: The user installs the Selenium IDE extension from the browser store.
- Recording: The user creates a new project and clicks the 'Record' button. They manually interact with the web application (clicking links, filling forms). Selenium IDE records these actions as commands (e.g., `open`, `click`, `type`).
- Script Generation: The actions are stored in a script format (Selenese). Assertions (checks) can be added manually to verify elements.
- Playback: The user clicks 'Run', and the IDE executes the recorded steps automatically in the browser, highlighting successful steps in green and failures in red.
Limitation: It is suitable for prototyping but not for complex test suites involving conditional logic or data-driven testing.
What is Selenium WebDriver? How does it differ from Selenium IDE?
Selenium WebDriver is a collection of open-source APIs used to automate the testing of a web application. It allows users to write test scripts in programming languages like Java, Python, C#, or Ruby to control the browser directly.
Differences from Selenium IDE:
- Architecture: WebDriver interacts directly with the browser's native support for automation, whereas IDE is just a browser plugin wrapping JavaScript calls.
- Complexity: WebDriver supports complex logic (loops, conditional statements, exception handling) because it uses full programming languages. IDE is limited to sequential commands.
- Data-Driven Testing: WebDriver easily supports reading data from Excel/CSV/Database for testing. IDE has limited support.
- Browser Support: WebDriver supports Cross-Browser testing (Chrome, Firefox, Edge, Safari) comprehensively. IDE is limited to where the extension is installed.
Define Performance Testing and explain the differences between Load Testing and Stress Testing.
Performance Testing is a non-functional testing technique used to determine how a system performs in terms of responsiveness and stability under a particular workload.
Load vs. Stress Testing:
- Load Testing:
- Goal: To verify that the system can handle the expected number of users and transactions.
- Method: Testing with a specific expected load (e.g., 1000 concurrent users).
- Outcome: Identifies bottlenecks (slow database queries, memory leaks) under normal conditions.
- Stress Testing:
- Goal: To verify the system's robustness and error handling under extreme conditions.
- Method: Testing with a load well above the expected capacity (breaking point).
- Outcome: Determines the system's crash point and its ability to recover after a crash.
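The idea of firing a given number of concurrent users can be sketched with a thread pool (the service function and user count are illustrative; real load tests use dedicated tools such as JMeter):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id: int) -> str:
    """Stand-in for the system under test (e.g., a login endpoint)."""
    return f"OK:{user_id}"

def run_load(users: int) -> int:
    """Fire `users` concurrent requests and count successful responses."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(handle_request, range(users)))
    return sum(r.startswith("OK") for r in results)

# Load test: expected volume. A stress test would keep raising this number
# past capacity until the system fails, then check how it recovers.
print(run_load(1000))
```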
What is Security Testing? List the basic security attributes (CIA Triad) that are verified.
Security Testing verifies that the software is protected from external attacks and that information is maintained securely. It identifies vulnerabilities, threats, and risks in the software application.
The CIA Triad (Attributes Verified):
- Confidentiality: Ensuring that data is accessible only to those authorized to have access. (e.g., Encryption, Access Controls).
- Integrity: Ensuring that data is accurate and trustworthy and has not been tampered with by unauthorized users. (e.g., Hashing, Checksums).
- Availability: Ensuring that the data and services are available to authorized users when needed. (e.g., Protection against DoS/DDoS attacks).
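The Integrity attribute is often verified with cryptographic hashes; a minimal sketch using hashlib (the message contents are illustrative):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used to detect tampering in transit or at rest."""
    return hashlib.sha256(data).hexdigest()

original = b"amount=100.00;to=alice"
stored_hash = fingerprint(original)

# The receiver recomputes the hash; any unauthorized change is detected.
tampered = b"amount=900.00;to=mallory"
print(fingerprint(original) == stored_hash)   # True  (intact)
print(fingerprint(tampered) == stored_hash)   # False (integrity violated)
```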
Discuss the emerging trend of AI-assisted testing tools. How do they improve the testing process?
AI-assisted testing tools utilize Artificial Intelligence and Machine Learning (ML) algorithms to enhance the software testing lifecycle. They move beyond simple rule-based automation.
Benefits / Improvements:
- Self-Healing Scripts: Traditional automation scripts break when UI elements change (e.g., ID or Xpath change). AI tools can automatically detect the change and update the script during execution to prevent failure.
- Visual Validation: AI can perform visual regression testing by comparing screenshots pixel-by-pixel or using cognitive vision to detect UI anomalies that humans might miss.
- Test Generation: AI can analyze application usage logs to automatically generate test cases that cover the most frequently used paths.
- Defect Prediction: AI can analyze historical data to predict which modules are most likely to contain bugs, allowing testers to focus their efforts.
- Tools: Examples include Testim, Applitools, and Functionize.
Explain the difference between Black Box and White Box testing techniques.
| Feature | Black Box Testing | White Box Testing |
|---|---|---|
| Knowledge | Tester has no knowledge of internal code or implementation. | Tester has full knowledge of internal code and logic. |
| Basis | Based on software requirements and specifications. | Based on code structure, control flow, and branches. |
| Performed By | Usually performed by QA testers or end-users. | Usually performed by Developers. |
| Techniques | Equivalence Partitioning, Boundary Value Analysis, Decision Tables. | Statement Coverage, Branch Coverage, Path Coverage, Cyclomatic Complexity. |
| Goal | To check functionality and external behavior. | To check internal logic, code quality, and security holes. |
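The White Box coverage techniques in the table can be illustrated with a tiny branch-coverage example (the function is illustrative): one decision point means two test inputs are needed to cover both branches.

```python
def classify(age: int) -> str:
    if age >= 18:          # decision: True branch
        return "adult"
    return "minor"         # decision: False branch

# Branch coverage requires exercising both outcomes of the decision.
assert classify(20) == "adult"   # covers the True branch
assert classify(10) == "minor"   # covers the False branch
print("100% branch coverage with 2 test cases")
```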
Describe the installation and setup requirements for Selenium WebDriver (Conceptual).
Setting up Selenium WebDriver involves binding a programming language with the browser's automation capabilities. The conceptual steps are:
- Install Programming Language Environment: Install the SDK for the chosen language (e.g., JDK for Java, or Python). Then install an IDE (Eclipse, IntelliJ, PyCharm).
- Add Selenium Client Library: Download the Selenium language bindings (JAR files for Java, or `pip install selenium` for Python) and add them to the project build path.
- Download Browser Drivers: WebDriver cannot talk to browsers directly. You must download the specific executable driver for the browser you want to test:
- ChromeDriver for Google Chrome.
- GeckoDriver for Firefox.
- Configuration: In the test script, set the system property to point to the location of the downloaded driver file (e.g., `System.setProperty("webdriver.chrome.driver", path)`).
- Instantiation: Create an instance of the WebDriver (e.g., `WebDriver driver = new ChromeDriver();`) to launch the browser.
What is System Testing? How does it differ from Integration Testing?
System Testing is a level of testing that validates the complete and fully integrated software product. It is performed to evaluate the system's compliance with the specified requirements.
Difference from Integration Testing:
- Scope: Integration testing focuses on the interface between two or more modules (e.g., Module A calling Module B). System testing focuses on the entire system as a whole.
- Environment: Integration testing might use stubs/drivers. System testing mirrors the production environment as closely as possible.
- Types: System testing includes both Functional (End-to-End flows) and Non-Functional testing (Performance, Reliability) covering the whole application architecture.
- Objective: Integration ensures modules work together. System testing ensures the product meets the business requirements.