1. What is the primary focus of the ISO 9001 standard?
Quality management & standards: ISO 9001
Easy
A. Software coding conventions
B. Network security protocols
C. Hardware manufacturing processes
D. Quality Management Systems
Correct Answer: Quality Management Systems
Explanation:
ISO 9001 is an international standard that specifies requirements for a quality management system (QMS). It is not specific to software but can be applied to any organization's processes for creating and controlling products or services.
2. What does the acronym CMMI stand for?
Quality management & standards: SEI CMMI
Easy
A. Capability Maturity Model Integration
B. Component Maturity Model Interface
C. Computer Model Management Integration
D. Capability Measurement Model Initiative
Correct Answer: Capability Maturity Model Integration
Explanation:
CMMI stands for Capability Maturity Model Integration. It is a process-level improvement training and appraisal program administered by the CMMI Institute (now part of ISACA).
3. Fixing a bug found in the software after its release is an example of which type of maintenance?
Software maintenance: types & challenges
Easy
A. Preventive Maintenance
B. Corrective Maintenance
C. Perfective Maintenance
D. Adaptive Maintenance
Correct Answer: Corrective Maintenance
Explanation:
Corrective maintenance involves diagnosing and fixing errors, bugs, and faults in the software that are discovered by users or reported by error logs.
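Corrective maintenance is safest when the fix is pinned by a regression test. A minimal sketch (the function and the bug are hypothetical, for illustration only):

```python
# Hypothetical corrective-maintenance fix: users reported that batch totals
# came out low. Root cause was an off-by-one that dropped the final order.

def total_revenue(orders):
    """Sum the value of every order in the batch (bug-fixed version)."""
    total = 0
    for i in range(len(orders)):  # was: range(len(orders) - 1), dropping the last order
        total += orders[i]
    return total

# A regression test pins the fix so the defect cannot silently return.
assert total_revenue([10, 20, 30]) == 60
assert total_revenue([]) == 0
```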
4. What is the main purpose of Computer-Aided Software Engineering (CASE) tools?
Computer-Aided Software Engineering (CASE tools)
Easy
B. To provide automated support for software development activities
C. To only manage project schedules and budgets
D. To write source code automatically from scratch
Correct Answer: To provide automated support for software development activities
Explanation:
CASE tools are software systems that are intended to provide automated assistance for software engineering tasks, such as design, code generation, testing, and management, throughout the software life cycle.
5. Low-code/No-code platforms are designed primarily to achieve which of the following?
Advanced and Future Techniques: Low-code / No-code platforms
Easy
A. Increase the complexity of software architecture
B. Accelerate application development with minimal hand-coding
C. Improve the performance of high-computation algorithms
D. Create new programming languages
Correct Answer: Accelerate application development with minimal hand-coding
Explanation:
The main goal of low-code/no-code platforms is to enable rapid delivery of applications by using visual development interfaces and pre-built components, thereby reducing the need for traditional programming.
6. What is the core idea behind Component-Based Software Development (CBSD)?
Software reuse & Component-Based Software Development (CBSD)
Easy
A. Writing every line of code from scratch for each new project
B. Assembling applications from pre-existing, reusable software components
C. Using a single, monolithic code base for all functionality
D. Focusing only on the user interface design
Correct Answer: Assembling applications from pre-existing, reusable software components
Explanation:
CBSD is a software development paradigm that focuses on building systems by composing them from existing, well-defined, and independent software components.
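The idea can be sketched in a few lines: the application is wired together from independently developed components that agree only on interfaces. All class and method names here are illustrative stand-ins:

```python
from typing import Protocol

# Interfaces the application depends on; components only need to satisfy these.
class PaymentProcessor(Protocol):
    def charge(self, amount: float) -> bool: ...

class Inventory(Protocol):
    def reserve(self, sku: str) -> bool: ...

# Stand-ins for pre-existing, independently developed components.
class MockPayments:
    def charge(self, amount: float) -> bool:
        return amount > 0

class MockInventory:
    def reserve(self, sku: str) -> bool:
        return bool(sku)

# The application is assembled from components rather than written from scratch.
class CheckoutService:
    def __init__(self, payments: PaymentProcessor, inventory: Inventory):
        self.payments = payments
        self.inventory = inventory

    def place_order(self, sku: str, amount: float) -> bool:
        return self.inventory.reserve(sku) and self.payments.charge(amount)

shop = CheckoutService(MockPayments(), MockInventory())
assert shop.place_order("SKU-1", 9.99)
```

Because `CheckoutService` depends only on the two interfaces, either component can later be swapped for a different vendor's implementation without touching the application code.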
7. Tools like GitHub Copilot and Amazon CodeWhisperer primarily assist developers by doing what?
Advanced and Future Techniques: AI in software development (GitHub Copilot, Amazon CodeWhisperer)
Easy
A. Managing project timelines and resources
B. Automatically deploying applications to the cloud
C. Designing user interface mockups
D. Suggesting code completions and entire functions
Correct Answer: Suggesting code completions and entire functions
Explanation:
These tools are AI-powered 'pair programmers' that analyze the context of the code being written and provide real-time suggestions, from single lines to complete functions, to speed up development.
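For illustration, this is the kind of completion such a tool might propose from a short comment prompt. The prompt and the suggested function below are hypothetical examples, not actual Copilot or CodeWhisperer output:

```python
# Prompt a developer might type as a comment:
#   "return the n-th Fibonacci number iteratively"
# A completion of the kind an AI assistant might then suggest:

def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (fib(0) == 0, fib(1) == 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert [fibonacci(i) for i in range(7)] == [0, 1, 1, 2, 3, 5, 8]
```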
8. What is the main objective of the Six Sigma methodology?
Quality management & standards: Six Sigma & PSP
Easy
A. To hire six new engineers for every project
B. To complete projects in six weeks or less
C. To reduce defects and minimize variability in processes
D. To increase the number of features in a product
Correct Answer: To reduce defects and minimize variability in processes
Explanation:
Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects in any process – from manufacturing to transactional and from product to service.
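The "sigma level" of a process can be estimated directly from its defect rate. A sketch using the conventional 1.5-sigma shift (the defect figures are invented for illustration):

```python
from statistics import NormalDist

def sigma_level(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Convert a defect count into a (short-term) sigma level.

    Uses the conventional 1.5-sigma shift; at 6 sigma this corresponds
    to roughly 3.4 defects per million opportunities (DPMO).
    """
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    yield_fraction = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# 25 defects found in 1,000 units with 10 defect opportunities each:
# DPMO = 2,500, i.e. roughly a 4.3 sigma process.
level = sigma_level(25, 1_000, 10)
assert 4.25 < level < 4.35
```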
9. When you modify a software product to run on a new operating system, what type of maintenance are you performing?
Software maintenance: types & challenges
Easy
A. Adaptive Maintenance
B. Perfective Maintenance
C. Corrective Maintenance
D. Preventive Maintenance
Correct Answer: Adaptive Maintenance
Explanation:
Adaptive maintenance is concerned with modifying the software to cope with changes in its external environment, such as a new OS, hardware platform, or third-party dependencies.
10. Which of these architectural styles is most commonly associated with cloud-native applications?
Advanced and Future Techniques: Cloud-native software development
Easy
A. Monolithic architecture
B. Layered architecture
C. Microservices architecture
D. Client-Server architecture
Correct Answer: Microservices architecture
Explanation:
Cloud-native applications are often built using microservices, which are small, independent services that work together. This architecture is well-suited for the scalability, resilience, and flexibility offered by the cloud.
11. In the CMMI model, which level represents an 'Initial' and often chaotic process?
Quality management & standards: SEI CMMI
Easy
A. Level 1
B. Level 2
C. Level 5
D. Level 3
Correct Answer: Level 1
Explanation:
Level 1 of the CMMI model is called 'Initial'. Processes at this level are usually ad hoc and chaotic. The organization does not provide a stable environment to support processes.
12. What is a major benefit of practicing software reuse?
Software reuse & Component-Based Software Development (CBSD)
Easy
A. Longer time to market
B. Guaranteed project success
C. Increased number of software bugs
D. Reduced development time and cost
Correct Answer: Reduced development time and cost
Explanation:
By reusing existing, tested components, organizations can significantly decrease development effort, which leads to lower costs and faster delivery of the final product.
13. Adding a new report feature to an existing software system is an example of which maintenance type?
Software maintenance: types & challenges
Easy
A. Perfective Maintenance
B. Corrective Maintenance
C. Proactive Maintenance
D. Adaptive Maintenance
Correct Answer: Perfective Maintenance
Explanation:
Perfective maintenance involves implementing new or changed user requirements. This type of maintenance enhances the software by adding new functionalities or features.
14. What does PSP stand for in the context of individual software developer quality?
Quality management & standards: Six Sigma & PSP
Easy
A. Project Software Plan
B. Personal Software Process
C. Process System Protocol
D. Primary Software Platform
Correct Answer: Personal Software Process
Explanation:
The Personal Software Process (PSP) is a structured software development process that is designed to help individual software engineers better understand and improve their performance by using a disciplined, data-driven approach.
15. No-code platforms are typically aimed at which group of users?
Advanced and Future Techniques: Low-code / No-code platforms
Easy
A. Expert system architects
B. Cybersecurity experts
C. Database administrators
D. Business users and individuals with no programming skills
Correct Answer: Business users and individuals with no programming skills
Explanation:
No-code platforms are designed to empower 'citizen developers'—users with deep business knowledge but little to no coding experience—to build applications using visual drag-and-drop interfaces.
16. Is the ISO 9001 certification specific to the software development industry?
Quality management & standards: ISO 9001
Easy
A. No, it is a general quality standard applicable to any industry.
B. No, it is only for manufacturing industries.
C. Yes, but it can be adapted for hardware.
D. Yes, it was designed only for software companies.
Correct Answer: No, it is a general quality standard applicable to any industry.
Explanation:
ISO 9001 provides a framework for a quality management system that is generic and adaptable. It can be used by any organization, regardless of its size or field of activity, including software development.
17. A tool used for drawing UML diagrams (like class diagrams or sequence diagrams) would be classified as what type of tool?
Computer-Aided Software Engineering (CASE tools)
Easy
Correct Answer: Upper CASE tools
Explanation:
CASE tools that assist with the analysis and design phases of the SDLC, such as those for creating UML diagrams, are known as Upper CASE tools or design/analysis tools.
18. What does 'containerization', a key technology in cloud-native development, primarily do?
Advanced and Future Techniques: Cloud-native software development
Easy
A. It designs the user interface.
B. It packages an application and its dependencies into an isolated unit.
C. It manages project budgets.
D. It writes code automatically.
Correct Answer: It packages an application and its dependencies into an isolated unit.
Explanation:
Containerization (using tools like Docker) bundles an application's code with all the files and libraries it needs to run. This makes the application portable and consistent across different environments, which is crucial for cloud-native deployment.
19. What is the underlying technology that powers tools like GitHub Copilot?
Advanced and Future Techniques: AI in software development (GitHub Copilot, Amazon CodeWhisperer)
Easy
A. Large Language Models (LLMs) trained on code
B. Manual human operators
C. Relational databases
D. Simple text matching algorithms
Correct Answer: Large Language Models (LLMs) trained on code
Explanation:
These advanced AI tools are built on top of Large Language Models (like OpenAI's Codex) which have been trained on vast amounts of public source code, enabling them to understand context and generate relevant code.
20. A significant challenge in software maintenance is often the...
Software maintenance: types & challenges
Easy
A. Poor or missing documentation from the original development
B. Availability of too many development tools
C. Hardware being too fast for the old software
D. Lack of available programming languages
Correct Answer: Poor or missing documentation from the original development
Explanation:
One of the most common and difficult challenges in software maintenance is trying to understand and modify code that is poorly documented or has no documentation at all, making the maintainer's job much harder.
21. A software development organization has successfully implemented project management practices, tracking requirements, costs, and schedules for individual projects. They are currently assessed at CMMI Level 2 (Managed). To advance to Level 3 (Defined), what is the most critical organizational change they need to implement?
Quality management & standards: SEI CMMI
Medium
A. Standardize and document a common software development process to be used across the entire organization.
B. Introduce continuous process improvement driven by quantitative feedback.
C. Ensure that basic project management processes are established and followed for each project.
D. Focus on quantitative process management and statistical quality control.
Correct Answer: Standardize and document a common software development process to be used across the entire organization.
Explanation:
CMMI Level 2 (Managed) focuses on managing processes at the project level. The key evolution to Level 3 (Defined) is the establishment of a standardized, documented, and organization-wide software process, which ensures consistency across all projects. Option D describes Level 4 (Quantitatively Managed), Option B describes Level 5 (Optimizing), and Option C describes what is already achieved at Level 2.
22. A financial software application must be updated to comply with a new government regulation that changes how customer data is encrypted and stored. This modification does not fix any existing bugs or add new user-facing features. This activity is best classified as which type of software maintenance?
Software maintenance: types & challenges
Medium
A. Preventive Maintenance
B. Perfective Maintenance
C. Adaptive Maintenance
D. Corrective Maintenance
Correct Answer: Adaptive Maintenance
Explanation:
Adaptive maintenance involves modifying the software to cope with changes in its external environment. New laws, changes in hardware, or operating system updates fall into this category. Corrective maintenance is for fixing bugs, perfective maintenance is for adding features or improving performance, and preventive maintenance involves refactoring to improve future maintainability.
23. A team is building a new e-commerce platform using Component-Based Software Development (CBSD). They decide to use a third-party component for payment processing and another for inventory management. What is the most significant integration challenge they are likely to face?
Software reuse & Component-Based Software Development (CBSD)
Medium
A. A lack of commercially available components for standard business functions.
B. Architectural mismatch and conflicting assumptions between the components.
C. The performance overhead of method calls between components.
D. The difficulty of writing unit tests for individual, isolated components.
Correct Answer: Architectural mismatch and conflicting assumptions between the components.
Explanation:
A primary challenge in CBSD is 'architectural mismatch,' where components are built with different underlying assumptions about the system's architecture, data formats, control flow, or communication protocols. Integrating these 'black boxes' can be very difficult if their interfaces and implicit assumptions are incompatible.
24. A startup is designing a new video streaming service and wants to ensure high scalability, resilience, and the ability to rapidly and independently deploy new features. Which architectural approach is most aligned with these cloud-native development goals?
Advanced and Future Techniques: Cloud-native software development
Medium
A. Microservices architecture with services packaged in containers and managed by an orchestrator.
B. Monolithic architecture deployed on a single large virtual machine.
C. A three-tier client-server architecture with tightly coupled presentation, logic, and data layers.
D. Service-Oriented Architecture (SOA) with a central Enterprise Service Bus (ESB).
Correct Answer: Microservices architecture with services packaged in containers and managed by an orchestrator.
Explanation:
Cloud-native applications are designed to leverage cloud benefits. The microservices architecture, where the application is broken into small, independent, containerized services, is a cornerstone of this approach. It allows for independent scaling, deployment (CI/CD), and resilience (fault isolation), which directly addresses the stated goals.
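A single microservice in this style can be sketched with nothing but the standard library: a small process that owns one capability and exposes it over HTTP. The service name, route, and payload below are illustrative:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal "catalog" microservice: it owns one capability and exposes
# it over HTTP, independently of any other service in the system.
class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"service": "catalog", "items": ["book", "pen"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CatalogHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or test) talks to it only over its HTTP interface.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/items") as resp:
    data = json.loads(resp.read())
server.shutdown()
assert data["service"] == "catalog"
```

In a real cloud-native deployment each such service would be packaged in its own container and scaled, deployed, and restarted independently by an orchestrator.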
25. How does an AI-powered code assistant like GitHub Copilot fundamentally differ from traditional IntelliSense or autocomplete features found in most IDEs?
Advanced and Future Techniques: AI in software development (GitHub Copilot, Amazon CodeWhisperer)
Medium
A. It guarantees that the generated code is completely free of bugs and security vulnerabilities.
B. It generates entire blocks of code, including complex logic and functions, based on natural language comments and context, rather than just suggesting names of existing variables or methods.
C. It runs entirely on the local machine without requiring an internet connection to function.
D. It only works for Python and JavaScript, whereas IntelliSense is language-agnostic.
Correct Answer: It generates entire blocks of code, including complex logic and functions, based on natural language comments and context, rather than just suggesting names of existing variables or methods.
Explanation:
Traditional autocomplete suggests syntax-aware completions based on the existing codebase (e.g., variable names, function signatures). AI assistants like Copilot use large language models trained on vast amounts of code to synthesize entirely new code blocks, algorithms, and even documentation based on the broader context and natural language prompts.
26. A marketing department wants to build a simple internal application for tracking campaign leads. They decide to use a Low-Code Application Platform (LCAP) to accelerate development. What is the most significant trade-off they accept by choosing this approach over traditional custom development?
Advanced and Future Techniques: Low-code / No-code platforms
Medium
A. Significantly higher initial upfront development costs.
B. A much longer time-to-market compared to writing the application from scratch.
C. The inability to create a functional user interface without expert designers.
D. Reduced control over the underlying architecture and potential limitations in customization for highly complex or unique requirements.
Correct Answer: Reduced control over the underlying architecture and potential limitations in customization for highly complex or unique requirements.
Explanation:
The primary trade-off of low-code/no-code platforms is sacrificing fine-grained control for speed and simplicity. While excellent for standard business applications, they can be restrictive when dealing with unique business logic, complex integrations, or specific performance/scalability requirements, a phenomenon often called 'hitting the platform's wall'.
27. A software company is seeking ISO 9001 certification. An auditor is reviewing their documentation and practices. What is the primary focus of the ISO 9001 standard in this context?
Quality management & standards: ISO 9001
Medium
A. Mandating the use of specific programming languages and development tools.
B. Ensuring the company has a consistent, documented, and followed quality management system for its processes.
C. Evaluating the individual skill level and certifications of each developer in the company.
D. Certifying that the final software product is 100% free of defects.
Correct Answer: Ensuring the company has a consistent, documented, and followed quality management system for its processes.
Explanation:
ISO 9001 is a process-oriented standard. It does not certify the quality of a specific product but rather certifies that the organization has a robust and consistent Quality Management System (QMS) in place. It's about ensuring processes are defined, controlled, and improved, with the expectation that good processes lead to good products.
28. A software development team is in the initial phases of a project. They are using a tool to create data flow diagrams (DFDs), entity-relationship diagrams (ERDs), and to manage the requirements specification document. Which category of CASE tools are they primarily using?
Computer-Aided Software Engineering (CASE tools)
Medium
A. Upper CASE tools
B. Reverse Engineering tools
C. Lower CASE tools
D. Integrated CASE (I-CASE) tools
Correct Answer: Upper CASE tools
Explanation:
CASE tools are often categorized based on which phase of the SDLC they support. Upper CASE tools are used in the early phases of development, such as requirements analysis and design. Tools for creating DFDs, ERDs, and managing specifications fall squarely into this category. Lower CASE tools focus on later phases like coding, debugging, and testing.
29. A developer following the Personal Software Process (PSP) meticulously logs their time and defects. After completing a module, their data shows an unusually high rate of logic errors caught during personal code review. According to PSP principles, what is the most appropriate next step for this developer?
Quality management & standards: Six Sigma & PSP
Medium
A. Discard the module and rewrite it from scratch using a different approach.
B. Analyze the defect data to identify the root cause of the logic errors and update their personal design/coding checklist to prevent similar errors in the future.
C. Switch to a different programming language that offers better static analysis capabilities.
D. Immediately request a peer review from a senior developer for the module.
Correct Answer: Analyze the defect data to identify the root cause of the logic errors and update their personal design/coding checklist to prevent similar errors in the future.
Explanation:
A core principle of PSP is using personal data for process improvement. The first step after identifying a pattern of defects is to analyze the collected data to understand why the errors are occurring (root cause analysis) and then modify the personal process (e.g., design checklists, coding standards) to prevent them in the future.
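The data-driven step can be sketched simply: tally the personal defect log and let the dominant category drive the checklist update. The log entries below are invented for illustration:

```python
from collections import Counter

# Hypothetical PSP defect log: (defect type, phase in which it was injected).
defect_log = [
    ("logic", "design"), ("logic", "code"), ("syntax", "code"),
    ("logic", "design"), ("interface", "design"), ("logic", "code"),
]

# Tally defects by type; the dominant category is the improvement target.
by_type = Counter(d_type for d_type, _ in defect_log)
worst_type, count = by_type.most_common(1)[0]
print(worst_type, count)  # logic errors dominate, so update the design checklist
```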
30. A company relies on a critical 20-year-old system written in a legacy programming language. A new maintenance team is assigned to it. Besides the outdated technology itself, what is one of the most significant challenges they will likely face?
Software maintenance: types & challenges
Medium
A. The system being too simple and not having enough features to justify a maintenance team.
B. The system performance being too fast for modern hardware, causing timing issues.
C. Lack of original developers, poor or non-existent documentation, and undocumented business rules embedded in the code.
D. An overabundance of automated regression tests, making changes slow.
Correct Answer: Lack of original developers, poor or non-existent documentation, and undocumented business rules embedded in the code.
Explanation:
A defining challenge of maintaining legacy systems is the loss of domain and system knowledge. The original developers are often gone, documentation is typically sparse or outdated, and crucial business logic is often only discoverable by analyzing the source code itself. This makes any change risky, time-consuming, and difficult.
31. An organization wants to improve its software testing and verification processes independently, without having to overhaul its entire project management framework at the same time. Which CMMI representation would be more suitable for this targeted improvement approach?
Quality management & standards: SEI CMMI
Medium
A. The Staged Representation, because it provides a single, clear roadmap through maturity levels.
B. Both representations are equally suitable for this purpose.
C. Neither representation supports improving individual process areas; an all-or-nothing approach is required.
D. The Continuous Representation, because it allows an organization to improve specific process areas based on its unique business goals.
Correct Answer: The Continuous Representation, because it allows an organization to improve specific process areas based on its unique business goals.
Explanation:
The Continuous Representation allows an organization to select specific process areas (like Verification or Validation) and improve their capability level (from 0 to 5) independently of other areas. This provides flexibility for targeted improvements. The Staged Representation, in contrast, groups process areas into predefined maturity levels, requiring an organization to master a whole set of processes before moving to the next level.
32. A large software company wants to establish a systematic software reuse program to reduce development costs and time across multiple projects. What is the most crucial foundational step in implementing such a program?
Software reuse & Component-Based Software Development (CBSD)
Medium
A. Establish a process for identifying, classifying, certifying, and cataloging potentially reusable assets from existing and new projects.
B. Immediately purchase a large library of third-party components to show commitment.
C. Mandate that all developers write 100% reusable code for all new projects starting next month.
D. Rewrite all existing high-value applications using a component-based architecture.
Correct Answer: Establish a process for identifying, classifying, certifying, and cataloging potentially reusable assets from existing and new projects.
Explanation:
A successful reuse program isn't just about using components; it's about managing them. The foundational step is to create a robust process for the 'supply side' of reuse: identifying what is worth reusing, documenting and certifying its quality, and making it available in a trusted, searchable repository or library. Without this, any reuse effort will be ad-hoc and ineffective.
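The 'supply side' described above can be sketched as a small certified-asset catalog. The class names, fields, and sample asset are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ReusableAsset:
    name: str
    version: str
    domain: str
    certified: bool = False
    keywords: list = field(default_factory=list)

class AssetCatalog:
    """A searchable repository that only accepts certified assets."""

    def __init__(self):
        self._assets = []

    def register(self, asset: ReusableAsset) -> None:
        if not asset.certified:
            raise ValueError(f"{asset.name}: only certified assets may be catalogued")
        self._assets.append(asset)

    def search(self, keyword: str) -> list:
        return [a.name for a in self._assets if keyword in a.keywords]

catalog = AssetCatalog()
catalog.register(ReusableAsset("auth-lib", "2.1", "security", True, ["login", "oauth"]))
assert catalog.search("login") == ["auth-lib"]
```

The certification gate in `register` is the point of the sketch: reuse only pays off when consumers can trust what they find in the catalog.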
33. A software team is experiencing a high number of post-release critical defects. They decide to adopt a Six Sigma approach. In the 'Measure' phase of the DMAIC (Define, Measure, Analyze, Improve, Control) cycle, what activity would they most likely perform?
Quality management & standards: Six Sigma & PSP
Medium
A. Collect historical data on defect types, their point of origin (e.g., requirements, design, code), their severity, and their frequency to establish a performance baseline.
B. Brainstorm potential solutions like mandatory code reviews or new automated testing tools.
C. Implement the chosen solution and deploy the new process to the team.
D. Define the project charter and get stakeholder agreement on what constitutes a 'critical defect'.
Correct Answer: Collect historical data on defect types, their point of origin (e.g., requirements, design, code), their severity, and their frequency to establish a performance baseline.
Explanation:
The 'Measure' phase of DMAIC is focused on collecting data to establish a baseline and understand the current state of the process. In this scenario, the team needs to gather concrete metrics about the defects to quantify the problem before they can analyze its root cause in the 'Analyze' phase. Defining the problem (Option D) occurs in 'Define', brainstorming solutions (Option B) happens in 'Improve', and deploying the chosen solution (Option C) belongs to 'Improve' and 'Control'.
34. In the context of cloud-native development using a microservices architecture, what is the primary role of a container orchestration platform like Kubernetes?
Advanced and Future Techniques: Cloud-native software development
Medium
A. To write application code in a platform-agnostic language like Java or Go.
B. To replace the need for traditional relational databases with a more scalable NoSQL alternative.
C. To provide a secure, private registry for storing container images, like Docker Hub or AWS ECR.
D. To automate the deployment, scaling, load balancing, and self-healing of containerized applications across a cluster of machines.
Correct Answer: To automate the deployment, scaling, load balancing, and self-healing of containerized applications across a cluster of machines.
Explanation:
While containers (e.g., Docker) package an application and its dependencies, an orchestrator (e.g., Kubernetes) is needed to manage these containers at scale. It handles critical operational tasks like scheduling containers onto nodes, service discovery, load balancing, self-healing (restarting failed containers), and automated rollouts/rollbacks, which are essential for running resilient applications in the cloud.
35. A junior developer on a team heavily uses an AI code generation tool to write boilerplate code and complex algorithms. What is a significant quality assurance risk the team lead must mitigate?
Advanced and Future Techniques: AI in software development (GitHub Copilot, Amazon CodeWhisperer)
Medium
A. The generated code may introduce subtle bugs, security vulnerabilities, or reflect outdated practices from its training data, requiring rigorous human review and testing.
B. The AI tool might refuse to generate code for proprietary or patented algorithms.
C. The tool will significantly slow down the overall development process due to the time taken to write prompts and review suggestions.
D. The AI-generated code is always less performant than human-written code, requiring manual optimization.
Correct Answer: The generated code may introduce subtle bugs, security vulnerabilities, or reflect outdated practices from its training data, requiring rigorous human review and testing.
Explanation:
AI code generators are trained on vast public codebases, which inevitably include code with bugs, security flaws, or non-optimal patterns. The generated code is a suggestion, not a guaranteed-correct solution. Relying on it without careful human review, static analysis, and robust testing can lead to the introduction of quality and security issues. The developer remains accountable for the code they commit.
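A concrete illustration of the kind of subtle defect to watch for. The suggested helper below is a hypothetical example of plausible AI output, not from any real tool:

```python
# A plausibly AI-suggested helper with a subtle bug: it computes the
# average correctly for normal input but crashes on an empty list,
# a boundary case the suggestion may silently ignore.
def mean_suggested(values):
    return sum(values) / len(values)  # ZeroDivisionError on []

# Human review adds the boundary check (and a test) before committing.
def mean_reviewed(values):
    return sum(values) / len(values) if values else 0.0

assert mean_reviewed([2, 4, 6]) == 4.0
assert mean_reviewed([]) == 0.0
```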
36. What is the central and most critical feature of an Integrated CASE (I-CASE) tool suite that distinguishes it from a collection of separate Upper and Lower CASE tools?
Computer-Aided Software Engineering (CASE tools)
Medium
A. Built-in version control functionalities similar to Git.
B. A modern and intuitive graphical user interface.
C. The ability to generate application code in multiple programming languages.
D. A central repository or data dictionary that maintains consistency and traceability between all software artifacts (e.g., requirements, design models, code).
Correct Answer: A central repository or data dictionary that maintains consistency and traceability between all software artifacts (e.g., requirements, design models, code).
Explanation:
The defining characteristic of I-CASE tools is the central repository. This repository stores all information related to the project in a structured way. It ensures that a change in one artifact (e.g., a data element in a design model) can be automatically reflected or flagged in related artifacts (e.g., source code, requirements documents), thus maintaining consistency and traceability throughout the entire SDLC.
37. A business analyst needs to create a workflow automation that sends an email notification whenever a new record is added to a specific online spreadsheet. A software developer needs to build a custom data-driven web portal that requires integration with a legacy system via a custom-built API. Which platform types are most appropriate for each user respectively?
Advanced and Future Techniques: Low-code / No-code platforms
Medium
A. Low-code for the analyst, No-code for the developer.
B. No-code for both the analyst and the developer.
C. No-code for the analyst, Low-code for the developer.
D. Low-code for both the analyst and the developer.
Correct Answer: No-code for the analyst, Low-code for the developer.
Explanation:
No-code platforms are designed for non-technical users (like the business analyst) to solve problems using purely visual, drag-and-drop interfaces without writing any code. Low-code platforms are aimed at developers to accelerate their work; they provide visual development tools but also allow them to 'drop down' into code to handle custom logic, complex integrations (like calling a custom API), and extend the platform's native capabilities.
38. From a management and team morale perspective, what is a commonly cited non-technical challenge associated with staffing a dedicated software maintenance team?
Software maintenance: types & challenges
Medium
A. Maintenance tasks are often too complex and intellectually demanding for junior developers.
B. Maintenance work is often perceived as less prestigious or exciting than new development, which can lead to lower motivation and difficulty in attracting top talent.
C. There are too many opportunities for creative problem-solving in maintenance, leading to 'analysis paralysis'.
D. The hardware and software required for maintenance is typically more expensive than that for new development.
Correct Answer: Maintenance work is often perceived as less prestigious or exciting than new development, which can lead to lower motivation and difficulty in attracting top talent.
Explanation:
In many software engineering cultures, working on new 'greenfield' projects is seen as more glamorous and career-advancing than maintaining existing systems. This perception can make it difficult to staff maintenance teams with highly motivated and skilled engineers, as the work can be viewed as 'second-class', leading to morale and retention issues.
Incorrect! Try again.
39While both SEI CMMI and ISO 9001 are process improvement models, what is a key difference in their primary focus and origin?
Quality management & standards: ISO 9001, SEI CMMI, Six Sigma & PSP
Medium
A.CMMI originated specifically for improving software and systems engineering processes with a detailed set of prescribed practices, whereas ISO 9001 is a generic quality management standard applicable to any industry.
B.ISO 9001 mandates five specific maturity levels that an organization must achieve, while CMMI is more flexible.
C.CMMI certification is performed by government auditors, while ISO 9001 certification is performed by private, non-profit organizations.
D.ISO 9001 is exclusively for hardware manufacturing, while CMMI is only for software development.
Correct Answer: CMMI originated specifically for improving software and systems engineering processes with a detailed set of prescribed practices, whereas ISO 9001 is a generic quality management standard applicable to any industry.
Explanation:
The key distinction is their scope and origin. CMMI was developed by the Software Engineering Institute (SEI) specifically to be a model for improving software and systems development processes, and it is very detailed about what practices to implement. ISO 9001 is a general-purpose standard for a Quality Management System that can be applied to any organization, from a software company to a manufacturing plant or a hospital, and is less prescriptive about specific practices.
Incorrect! Try again.
40A developer has created a software module for currency conversion. To be considered a high-quality, reusable component in a Component-Based Development context, what is the most important characteristic it must possess?
Software reuse & Component-Based Software Development (CBSD)
Medium
A.It must be written in the most modern programming language available at the time.
B.It must be highly coupled with the specific data structures of the application it was first developed for.
C.It must contain the entire business logic for the financial application in a single, monolithic module.
D.It must have a well-defined, stable interface and be independent of the specific context or application in which it is used.
Correct Answer: It must have a well-defined, stable interface and be independent of the specific context or application in which it is used.
Explanation:
The essence of a reusable component is that it's a self-contained 'black box' that can be plugged into different systems. This requires it to be loosely coupled and have a clear, stable interface (its 'contract'). The component should not make assumptions about the larger system it will be part of (i.e., be context-independent) to be truly and easily reusable. High coupling (Option B) is the antithesis of reusability.
Incorrect! Try again.
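As a minimal sketch of what such a context-independent component might look like (the class and its interface are hypothetical, not from any real library): the host application supplies the exchange rates, and the component exposes only a small, stable contract with no knowledge of the application's own data structures.

```python
from decimal import Decimal


class CurrencyConverter:
    """A self-contained, context-independent component.

    Its only contract is the constructor plus convert(); it makes no
    assumptions about the host application it is plugged into.
    """

    def __init__(self, rates):
        # rates: mapping of (from_currency, to_currency) -> Decimal rate,
        # injected by the host rather than hard-coded into the component
        self._rates = dict(rates)

    def convert(self, amount, src, dst):
        if src == dst:
            return Decimal(amount)
        return Decimal(amount) * self._rates[(src, dst)]


# Usage: any host can supply its own rates; coupling stays at the interface.
fx = CurrencyConverter({("USD", "EUR"): Decimal("0.90")})
fx.convert("100", "USD", "EUR")  # → Decimal('90.00')
```

Because the component depends only on what is passed through its interface, it can be dropped unchanged into a second application with a different rate source.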
41An organization is CMMI Level 4 certified and uses Statistical Process Control (SPC) to monitor its software defect injection rate. They observe that a particular module consistently shows a high number of defects, but the process remains statistically 'in control' (i.e., within control limits). To progress to CMMI Level 5, what is the most critical next step?
Quality management & standards: ISO 9001, SEI CMMI, Six Sigma & PSP
Hard
A.Perform a causal analysis to identify the root cause of defects specifically in that module and implement process changes to eliminate it.
B.Replace the development team working on the problematic module with a more experienced team.
C.Tighten the upper and lower control limits for the process to force an improvement.
D.Introduce more rigorous peer reviews for only the high-defect module to catch errors earlier.
Correct Answer: Perform a causal analysis to identify the root cause of defects specifically in that module and implement process changes to eliminate it.
Explanation:
CMMI Level 5 (Optimizing) is distinguished from Level 4 (Quantitatively Managed) by its focus on continuous process improvement and defect prevention, not just control. While the process is statistically in control at Level 4, Level 5 requires understanding why the mean defect rate is what it is and proactively working to lower it. Causal Analysis and Resolution (CAR) is a key process area for this. Tightening limits without understanding the cause is counterproductive. Replacing the team or adding reviews are reactive solutions, not a systemic process improvement as mandated by Level 5.
Incorrect! Try again.
42A system is constructed using Component-Based Software Development (CBSD), integrating three third-party binary components (C1, C2, C3). C1 is upgraded, which changes its method signatures (syntactic change) but preserves its overall functionality. C2 is upgraded, keeping the same API but altering its non-functional behavior (e.g., performance under load). C3 remains unchanged. Which type of composition problem is most likely to arise from this scenario?
Software reuse & Component-Based Software Development (CBSD)
Hard
A.Component Versioning Hell, as C3 may have an implicit dependency on the older version of C1.
B.Interface Incompatibility, as the system's calls to C1 will now fail at compile-time or link-time.
C.Property Mismatch, specifically an emergent property issue, where the system's overall performance degrades unpredictably due to C2's changes.
D.Architectural Mismatch, where C1's new dependencies conflict with the system's runtime environment.
Correct Answer: Property Mismatch, specifically an emergent property issue, where the system's overall performance degrades unpredictably due to C2's changes.
Explanation:
This question tests the nuanced understanding of CBSD composition challenges. The change in C1 is a straightforward Interface Incompatibility, which is usually easy to detect and fix. The more insidious and 'harder' problem is the Property Mismatch from C2. The component still 'works' according to its API contract, but its changed non-functional properties (like performance, memory usage, or security posture) can negatively affect the entire system's behavior in ways that are difficult to predict or test for. This is a classic example of a problematic emergent property in a component-based system.
Incorrect! Try again.
43According to Meir Lehman's Laws of Software Evolution, a software system that undergoes continuous corrective and adaptive maintenance without proactive restructuring will exhibit increasing complexity and deteriorating structure over time. This phenomenon leads to a point where the cost and effort of making further changes become prohibitive. Which law most directly describes this principle of decaying quality and increasing entropy?
Software maintenance: types & challenges
Hard
A.Law of Continuing Change (I)
B.Law of Increasing Complexity (II)
C.Law of Conservation of Organizational Stability (IV)
D.Law of Self Regulation (III)
Correct Answer: Law of Increasing Complexity (II)
Explanation:
Lehman's Second Law, the Law of Increasing Complexity, states that as an E-type system (a system that solves a problem in a real-world domain) evolves, its complexity increases unless work is done to maintain or reduce it. This directly addresses the concept of entropy and structural decay due to continuous changes without refactoring. The Law of Continuing Change simply states that systems must change to remain useful. The Law of Conservation of Organizational Stability deals with the rate of change, and Self Regulation describes the feedback mechanisms, but Increasing Complexity is the core law describing the inevitable decay without proactive maintenance (i.e., perfective maintenance like refactoring).
Incorrect! Try again.
44A distributed system designed using cloud-native principles experiences a network partition. According to the CAP theorem, the system must trade off between Availability and Consistency. If the system is designed to handle financial transactions where data integrity is paramount, and it chooses to remain consistent, what is the most likely behavior observed by a user trying to initiate a transaction on the smaller, partitioned side of the network?
Advanced and Future Techniques: Cloud-native software development
Hard
A.The transaction is accepted and placed in a queue, to be reconciled later, leading to eventual consistency.
B.The system allows the transaction to proceed but operates in a read-only mode, preventing any state changes.
C.The transaction is accepted, but the user is shown a 'pending' status indefinitely, with a risk of data loss if the partition is permanent.
D.The system returns an error or times out, refusing to accept the transaction until the partition is resolved.
Correct Answer: The system returns an error or times out, refusing to accept the transaction until the partition is resolved.
Explanation:
The CAP theorem states that a distributed system can only provide two of the following three guarantees: Consistency (C), Availability (A), and Partition Tolerance (P). In a real-world distributed system, Partition Tolerance is a must. Therefore, the choice is between C and A. For a financial system requiring strong consistency (a CP system), when a partition occurs, it must sacrifice availability to ensure that no inconsistent data is written. This means the partitioned nodes that cannot communicate with the majority quorum will refuse to accept writes, resulting in an error or timeout for the user. Accepting the transaction for later reconciliation would be an AP (Availability + Partition Tolerance) system prioritizing availability over immediate consistency.
Incorrect! Try again.
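The CP behavior described above can be sketched with a toy quorum check (the class and method names are illustrative, not from any real database): a node that cannot reach a majority of the cluster refuses the write rather than risk inconsistency.

```python
class Unavailable(Exception):
    """Raised when a CP node sacrifices availability during a partition."""


class CPNode:
    def __init__(self, cluster_size, reachable_peers):
        self.cluster_size = cluster_size
        self.reachable_peers = reachable_peers  # peers this node can still contact
        self.log = []

    def _has_quorum(self):
        # Count this node itself plus the peers it can reach.
        return (self.reachable_peers + 1) > self.cluster_size // 2

    def write(self, txn):
        # A consistency-first (CP) node refuses writes without a majority
        # quorum, so a client on the minority side sees an error or timeout.
        if not self._has_quorum():
            raise Unavailable("no quorum: partition in progress")
        self.log.append(txn)
        return "committed"


majority = CPNode(cluster_size=5, reachable_peers=2)  # sees 3 of 5 nodes
minority = CPNode(cluster_size=5, reachable_peers=1)  # sees 2 of 5 nodes
majority.write("debit $100")  # → 'committed'
# minority.write("debit $100") would raise Unavailable
```

An AP system would instead accept the write on both sides and reconcile later, which is exactly the eventual-consistency behavior the question rules out for financial transactions.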
45A team is using the Personal Software Process (PSP) and has a developer whose collected data shows a consistently high 'Defect Removal Yield' of over 95% during the code review phase. However, integration testing reveals a significant number of defects originating from this developer's code. What is the most likely cause of this discrepancy?
Quality management & standards: ISO 9001, SEI CMMI, Six Sigma & PSP
Hard
A.The developer is exceptionally skilled at finding and fixing their own defects during personal reviews.
B.The developer's personal review process is effective at finding certain types of defects (e.g., logic errors) but is systematically missing others (e.g., interface or integration-related defects).
C.The 'Defect Type Standard' used by the developer for logging is too granular, over-counting minor stylistic issues as defects.
D.The developer is 'gaming the metric' by injecting and then removing simple, easy-to-find defects to inflate their yield percentage.
Correct Answer: The developer's personal review process is effective at finding certain types of defects (e.g., logic errors) but is systematically missing others (e.g., interface or integration-related defects).
Explanation:
This question assesses a deep understanding of process data analysis in PSP. A high Defect Removal Yield (the percentage of injected defects removed before the first compile) is desirable, but it's only meaningful relative to the total number of defects injected. The fact that many defects are found later in integration testing points to a blind spot in the developer's personal review process. It's not that the process is ineffective, but rather that it is ineffective for a specific class of errors: those that only become apparent when the code interacts with other system components. This is a more complex and realistic failure mode of PSP than simply gaming the metrics or having a bad standard.
Incorrect! Try again.
46A developer uses an AI code assistant like GitHub Copilot to generate a complex data processing function. The generated code is syntactically correct, passes all existing unit tests, and appears functionally correct on the surface. However, a security audit later reveals a subtle data-leakage vulnerability where, under a specific edge-case input, the function logs sensitive information to a publicly accessible monitoring service. This scenario primarily highlights which fundamental risk of relying on generative AI for code?
Advanced and Future Techniques: AI in software development (GitHub Copilot, Code Whisperer)
Hard
A.The unit tests were poorly designed and did not cover the security-related edge cases.
B.The developer lacks the expertise to manually write secure code.
C.The AI assistant has a deterministic bug that inserts logging statements incorrectly.
D.The AI model's training data included non-secure code, and it probabilistically reproduced a bad pattern without understanding the security context.
Correct Answer: The AI model's training data included non-secure code, and it probabilistically reproduced a bad pattern without understanding the security context.
Explanation:
This is the core challenge with LLM-based code generators. They are incredibly powerful pattern-matching systems, not sentient reasoners. The model doesn't 'understand' security or data sensitivity. It generates code that is statistically likely based on the vast corpus of public code it was trained on, which unfortunately includes many examples of insecure practices. The vulnerability is a probabilistic artifact of the training data, not a deterministic bug in the AI. While poor unit tests and developer skill are contributing factors, the fundamental origin of the insecure code is the nature of the generative model itself.
Incorrect! Try again.
47An organization adopts a powerful Integrated CASE (I-CASE) tool that provides a central repository and supports the entire software lifecycle from requirements analysis to code generation and testing. Despite the tool's capabilities, the development team's productivity plummets, and the project falls behind schedule. What is the most likely cause of this paradoxical outcome?
Computer-Aided Software Engineering (CASE tools)
Hard
A.The hardware provided to the developers is insufficient to run the resource-intensive I-CASE tool.
B.The central repository becomes a performance bottleneck as too many developers try to access it simultaneously.
D.The process formalism imposed by the I-CASE tool is a poor fit for the organization's existing, more agile culture and workflow.
Correct Answer: The process formalism imposed by the I-CASE tool is a poor fit for the organization's existing, more agile culture and workflow.
Explanation:
This is a classic problem with large, prescriptive I-CASE tools, often called 'methodology in a box'. These tools are built around a specific, often rigid, software development process (like a strict waterfall or structured analysis model). If an organization has an informal, iterative, or agile culture, forcing them to conform to the tool's rigid process can create massive overhead, stifle creativity, and ultimately reduce productivity. This process mismatch is a more profound and common cause of failure for I-CASE adoption than technical issues like code quality or performance, which are often solvable.
Incorrect! Try again.
48A business-critical application built on a low-code platform needs to be extended with a feature that requires a custom, high-performance, multi-threaded algorithm not supported by the platform's visual components. The platform provides an 'escape hatch' allowing the integration of external code. What is the most significant architectural challenge the team will face when implementing this integration?
Advanced and Future Techniques: Low-code / No-code platforms
Hard
A.The versioning and dependency management of the external code library, which is not handled by the low-code platform's deployment mechanisms.
B.The performance overhead of the API calls between the low-code environment and the custom code will negate the algorithm's high performance.
C.The difficulty of debugging the custom code since it cannot be run within the low-code platform's integrated debugger.
D.Maintaining transactional integrity and a consistent state model between the declarative, managed environment of the platform and the imperative, unmanaged external code.
Correct Answer: Maintaining transactional integrity and a consistent state model between the declarative, managed environment of the platform and the imperative, unmanaged external code.
Explanation:
This question targets the architectural friction at the boundary of low-code and pro-code. Low-code platforms manage state, transactions, and data consistency automatically within their declarative model. When you 'escape' to external imperative code, that code operates outside this managed environment. The most difficult challenge is ensuring that operations performed by the custom code are transactionally safe and that its state changes are correctly synchronized back into the platform's state model without causing race conditions or data corruption. The other options are valid challenges, but this state/transactional management issue is the most fundamental architectural hurdle.
Incorrect! Try again.
49A large legacy system is maintained by a team using a feature branching workflow. A developer implements a new feature (perfective maintenance) on a long-lived branch. In parallel, another developer applies a critical security patch (corrective maintenance) to the main branch. When the feature branch is finally merged, the security patch is inadvertently reverted. This scenario best exemplifies which specific software maintenance challenge?
Software maintenance: types & challenges
Hard
A.Merge Ambiguity and Regression
B.Code Obfuscation
C.Ripple Effect
D.Semantic Decay
Correct Answer: Merge Ambiguity and Regression
Explanation:
This is a sophisticated and common maintenance problem in modern development workflows. It's not a simple 'ripple effect' where one change breaks another functionally. Instead, it's a tooling and process failure where the merge algorithm cannot understand the intent of the two changes. The merge tool sees conflicting lines and the developer might resolve it by choosing the version from their feature branch, unknowingly undoing the critical fix. This leads to a regression—the reintroduction of a previously fixed bug. 'Semantic Decay' refers to code losing its meaning over time, which is related but less specific. 'Code Obfuscation' is about code being hard to read.
Incorrect! Try again.
50A company is certified for both ISO 9001:2015 and SEI CMMI-DEV v2.0 at Maturity Level 3. An auditor finds that while the company has well-defined, organization-wide processes for software development (as required by CMMI L3), individual project teams frequently tailor these processes so aggressively that the resulting project-level process barely resembles the organizational standard. From the perspective of an ISO 9001 audit, what is the most significant non-conformance?
Quality management & standards: ISO 9001, SEI CMMI, Six Sigma & PSP
Hard
A.Violation of the ISO 9001 principle of 'Customer Focus', as inconsistent processes may lead to inconsistent quality.
B.Failure to meet the CMMI Level 3 specific practice 'Establish the Organization's Set of Standard Processes'.
C.A breakdown in the 'Plan-Do-Check-Act' (PDCA) cycle, as the 'Do' phase is not following the 'Plan' established by the organizational process.
D.The project is not adhering to the quantitative management objectives required for higher maturity levels.
Correct Answer: A breakdown in the 'Plan-Do-Check-Act' (PDCA) cycle, as the 'Do' phase is not following the 'Plan' established by the organizational process.
Explanation:
This question requires synthesizing knowledge of both standards. While the issue is also a CMMI failure, the question asks for the ISO 9001 perspective. ISO 9001 is fundamentally built on the PDCA cycle. The organizational standard process represents the 'Plan'. By not following it, the 'Do' phase is non-conformant. This leads to an inability to 'Check' the process effectiveness meaningfully and 'Act' on improvements systemically. This breakdown of the core quality management cycle is the most fundamental ISO 9001 non-conformance. 'Customer Focus' is a principle, but the PDCA breakdown is a more direct and actionable audit finding related to the Quality Management System's operation.
Incorrect! Try again.
51Which of the following describes the most significant difference between a software framework (e.g., Spring, Ruby on Rails) and a software library (e.g., a math library, a JSON parser)?
Software reuse & Component-Based Software Development (CBSD)
Hard
A.Frameworks are domain-specific, while libraries are always general-purpose.
B.Frameworks are typically larger and contain more lines of code than libraries.
C.A framework employs 'Inversion of Control' (IoC), where the framework calls the developer's code, whereas with a library, the developer's code calls the library.
D.Libraries are linked statically, while frameworks are linked dynamically.
Correct Answer: A framework employs 'Inversion of Control' (IoC), where the framework calls the developer's code, whereas with a library, the developer's code calls the library.
Explanation:
This is a key architectural distinction. When you use a library, you are in control. Your code calls functions/methods in the library to perform a task. When you use a framework, the framework is in control. It defines the application's overall structure and lifecycle, and it calls your custom code at specific points (e.g., via callbacks, event handlers, or dependency injection). This principle is known as Inversion of Control or the 'Hollywood Principle' ('Don't call us, we'll call you'). The other options are generalizations that are not always true: frameworks can be small, libraries can be dynamic or domain-specific.
Incorrect! Try again.
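The distinction can be made concrete with a toy sketch (the MiniFramework class is invented for illustration). With a library such as json, your code is in control and calls the library; with a framework, you register a handler and the framework's dispatch loop calls your code back.

```python
import json


# Library style: your code is in control and calls the library.
def handle_with_library(raw):
    data = json.loads(raw)  # you call the library
    return data["user"]


# Framework style: the framework owns the control flow and calls your
# handler at the right moment (Inversion of Control).
class MiniFramework:
    def __init__(self):
        self._routes = {}

    def route(self, path):
        def register(handler):
            self._routes[path] = handler  # you hand your code to the framework
            return handler
        return register

    def dispatch(self, path, payload):
        return self._routes[path](payload)  # the framework calls YOUR code


app = MiniFramework()

@app.route("/greet")
def greet(payload):
    return f"hello, {payload['user']}"

app.dispatch("/greet", {"user": "ada"})  # → 'hello, ada'
```

The registration-then-callback shape is the 'Hollywood Principle' in miniature: greet never decides when it runs; the framework does.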
52A team is designing a cloud-native application and is following the '12-Factor App' methodology. To comply with Factor IX, 'Disposability,' which design choice is most critical for their microservices?
Advanced and Future Techniques: Cloud-native software development
Hard
A.Using a service mesh to handle retries for failed service-to-service communication.
B.Implementing a comprehensive health check endpoint for each service.
C.Ensuring services have a fast startup time and can be gracefully shut down via a SIGTERM signal.
D.Externalizing all configuration into the environment.
Correct Answer: Ensuring services have a fast startup time and can be gracefully shut down via a SIGTERM signal.
Explanation:
Factor IX, Disposability, states that processes can be started or stopped at a moment's notice, which is essential for rapid elastic scaling, robust deployments, and fast recovery from crashes. (Statelessness and the share-nothing model belong to Factor VI, 'Processes', and are what make disposability practical.) A fast startup time minimizes the delay in scaling up or recovering. A graceful shutdown (e.g., by handling SIGTERM to finish current work and release resources) prevents data corruption and ensures a clean state. While health checks (related to robustness) and externalized config (Factor III) are important 12-Factor principles, fast startup and graceful shutdown are the direct implementation of disposability.
Incorrect! Try again.
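A minimal Python sketch of the graceful-shutdown half of disposability, assuming a POSIX environment where SIGTERM can be delivered (the flag-and-drain pattern shown is one common idiom, not the only one):

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Don't die mid-task: set a flag so the worker loop can finish its
    # current unit of work, release resources, and then exit cleanly.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the platform (e.g., a container orchestrator) asking the
# process to stop by sending SIGTERM to ourselves.
os.kill(os.getpid(), signal.SIGTERM)

# A real worker loop would check the flag between tasks:
#   while not shutting_down:
#       process_next_task()
```

After the signal is delivered, shutting_down is True and the loop drains instead of being killed with work in flight.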
53In a Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control) project for software quality, the 'Measure' phase establishes a baseline process capability. If the process for delivering features has a sigma level of 2, what does this imply about the process's performance?
Quality management & standards: ISO 9001, SEI CMMI, Six Sigma & PSP
Hard
A.The process is producing approximately 308,537 defects per million opportunities (DPMO) and is not capable of meeting typical customer requirements.
B.The process is operating within the customer's specification limits 95% of the time.
C.The process is highly capable and produces only 3.4 defects per million opportunities (DPMO).
D.The process has a Cpk (Process Capability Index) value greater than 1.33, indicating it is a capable process.
Correct Answer: The process is producing approximately 308,537 defects per million opportunities (DPMO) and is not capable of meeting typical customer requirements.
Explanation:
This question tests the practical meaning of sigma levels. A common misconception is to confuse different sigma values. A 6-sigma process corresponds to 3.4 DPMO (under the conventional 1.5-sigma long-term shift). A 2-sigma process is very poor, corresponding to a 69.1% yield or 308,537 DPMO. This level of performance is far from capable for most business applications. A Cpk of 1.33 is generally considered the minimum for a capable process, which corresponds to roughly a 4-sigma level. A 2-sigma process ships a large fraction of its output outside customer specification limits.
Incorrect! Try again.
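The figures can be checked with the standard normal CDF and the conventional 1.5-sigma long-term shift; a minimal sketch:

```python
import math


def dpmo(sigma_level, shift=1.5):
    """Long-term defects per million opportunities for a given sigma level,
    using the conventional 1.5-sigma shift between short- and long-term
    performance."""
    z = sigma_level - shift
    # One-sided tail beyond the specification limit: P(Z > z)
    tail = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return tail * 1_000_000


dpmo(2)  # ≈ 308,537 DPMO, i.e. roughly a 69.1% yield
dpmo(6)  # ≈ 3.4 DPMO
```

So a sigma level of 2 really does mean nearly a third of all opportunities become defects, which is why it is considered far from capable.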
54What is the primary function of a repository in an Integrated CASE (I-CASE) environment, distinguishing it from a simple version control system like Git?
Computer-Aided Software Engineering (CASE tools)
Hard
A.To store the source code and track changes using a directed acyclic graph.
B.To enforce a strict waterfall development model by locking artifacts from previous phases.
C.To act as a central data store for all software engineering artifacts (e.g., requirements, design models, code, test cases) and maintain traceability links and semantic relationships between them.
D.To automatically generate full applications from high-level graphical models.
Correct Answer: To act as a central data store for all software engineering artifacts (e.g., requirements, design models, code, test cases) and maintain traceability links and semantic relationships between them.
Explanation:
The key concept of a CASE repository (or encyclopedia) is its role as a rich, integrated database of all project information, not just source code files. Unlike Git, which primarily tracks changes to text files, a CASE repository understands the semantics of the artifacts. It knows that a specific UML class in a design model is implemented by a particular source code file and is tested by a set of test cases. This enables powerful capabilities like impact analysis, traceability, and ensuring consistency across the entire lifecycle, which is far beyond the scope of a version control system.
Incorrect! Try again.
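A toy sketch of the idea (all artifact names and the Repository API are illustrative): the repository records typed artifacts and the semantic links between them, which turns impact analysis into a simple graph traversal that a plain version control system cannot perform.

```python
class Repository:
    """Minimal CASE-repository sketch: typed artifacts plus traceability links."""

    def __init__(self):
        self.artifacts = {}  # id -> artifact kind
        self.links = {}      # id -> set of ids it is realized or tested by

    def add(self, art_id, kind):
        self.artifacts[art_id] = kind
        self.links.setdefault(art_id, set())

    def link(self, src, dst):
        self.links[src].add(dst)

    def impact(self, art_id):
        """Transitively collect every artifact affected by a change to art_id."""
        seen, stack = set(), [art_id]
        while stack:
            for nxt in self.links.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen


repo = Repository()
repo.add("REQ-7", "requirement")
repo.add("CLS-Payment", "design-class")
repo.add("payment.py", "code")
repo.add("TC-42", "test-case")
repo.link("REQ-7", "CLS-Payment")
repo.link("CLS-Payment", "payment.py")
repo.link("payment.py", "TC-42")

repo.impact("REQ-7")  # → {'CLS-Payment', 'payment.py', 'TC-42'}
```

Changing requirement REQ-7 immediately surfaces the design class, source file, and test case that may need rework, which is the consistency and traceability capability the explanation describes.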
55A team is tasked with performing adaptive maintenance on a 15-year-old monolithic application to make it compliant with new data privacy regulations (like GDPR). The original developers are gone, and documentation is sparse. The most significant initial challenge they will face is a lack of:
Software maintenance: types & challenges
Hard
A.Program Comprehension, as they must reverse-engineer business logic and data flows before they can determine what changes are needed.
B.Maintainability Index, as they have no metrics to gauge the code quality.
C.Scalability, as the monolithic architecture cannot handle the new processing load required by the regulations.
D.Testability, due to tightly coupled components and a lack of unit tests.
Correct Answer: Program Comprehension, as they must reverse-engineer business logic and data flows before they can determine what changes are needed.
Explanation:
Before any maintenance activity can begin on a poorly documented legacy system, the maintainers must first understand how it works. This process, known as program comprehension, is the most critical and time-consuming initial step. They need to figure out where and how personal data is stored, processed, and transmitted. While Testability, Maintainability Index, and Scalability are all valid and significant challenges, they are secondary to the fundamental problem of not understanding the system. You cannot test, measure, or scale a system whose logic and structure you do not comprehend.
Incorrect! Try again.
56Consider two approaches to software reuse: 'White-box reuse' (e.g., class inheritance) and 'Black-box reuse' (e.g., component composition). A development team chooses white-box reuse by creating a deep inheritance hierarchy for their application's domain model. What is the most significant long-term maintenance risk associated with this decision?
Software reuse & Component-Based Software Development (CBSD)
Hard
A.The 'Fragile Base Class' problem, where changes to a base class can have unforeseen and breaking impacts on a large number of subclasses.
B.The inability to replace a base class implementation without modifying all subclasses.
C.The increased difficulty of unit testing subclasses in isolation from their parent classes.
D.The performance overhead associated with dynamic dispatch in deep inheritance hierarchies.
Correct Answer: The 'Fragile Base Class' problem, where changes to a base class can have unforeseen and breaking impacts on a large number of subclasses.
Explanation:
The Fragile Base Class problem is a classic and severe issue with deep inheritance (a form of white-box reuse). Because subclasses are tightly coupled to the implementation details of their parents, a seemingly safe modification in a base class can break the assumptions or invariants of its descendants in subtle ways. This makes the entire hierarchy rigid and difficult to maintain. Black-box reuse (composition) avoids this by depending only on stable interfaces, not on implementation, promoting the 'composition over inheritance' principle.
Incorrect! Try again.
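The Fragile Base Class problem can be demonstrated in a few lines (a variant of the classic instrumented-collection example; all names are hypothetical). The subclass double-counts because the base class's add_all happens to delegate to add, an implementation detail the subclass depends on without knowing it:

```python
class Container:
    def __init__(self):
        self._items = []

    def add(self, x):
        self._items.append(x)

    def add_all(self, xs):
        for x in xs:
            self.add(x)  # implementation detail: delegates to add()


class CountingContainer(Container):
    """Tries to count insertions, but is coupled to the base's internals."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def add(self, x):
        self.count += 1
        super().add(x)

    def add_all(self, xs):
        self.count += len(xs)
        super().add_all(xs)  # base calls self.add(), which counts AGAIN


c = CountingContainer()
c.add_all([1, 2, 3])
c.count  # → 6, not 3: the subclass silently breaks
```

If a later release of Container stopped delegating add_all to add, the count would change again with no change to the subclass, which is exactly why black-box composition against a stable interface is the safer form of reuse.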
57From an IT governance perspective, what is the most profound challenge posed by the proliferation of citizen-developer-built applications on no-code platforms within a large enterprise?
Advanced and Future Techniques: Low-code / No-code platforms
Hard
A.The creation of disconnected data silos and inconsistent business logic that is difficult to manage, secure, and integrate at an enterprise level.
B.The high aggregate cost of licensing multiple no-code platforms across different business units.
C.The potential for vendor lock-in, making it difficult to migrate applications off the no-code platform.
D.The lack of performance and scalability in applications built by non-professional developers.
Correct Answer: The creation of disconnected data silos and inconsistent business logic that is difficult to manage, secure, and integrate at an enterprise level.
Explanation:
This is a strategic, governance-level problem. When individual business units build their own applications without central oversight (a form of Shadow IT), they often solve local problems effectively. However, at the enterprise level, this leads to a fragmented technology landscape. Data becomes trapped in platform-specific silos, business rules are duplicated and become inconsistent, and there is no holistic view of enterprise data or processes. This fragmentation undermines data integrity, security, and the ability to perform enterprise-wide analytics or process automation. While cost, performance, and lock-in are real issues, the architectural and data fragmentation is the most profound long-term governance challenge.
Incorrect! Try again.
58Comparing an AI pair programmer like GitHub Copilot with a traditional static analysis tool (linter), what is the fundamental difference in how they identify potential code issues?
Advanced and Future Techniques: AI in software development (GitHub Copilot, Code Whisperer)
Hard
A.Static analysis tools run asynchronously in a CI/CD pipeline, while AI tools provide real-time feedback in the IDE.
B.Static analysis focuses on security vulnerabilities, while AI tools focus on logical bugs and performance.
C.Static analysis tools operate deterministically based on a predefined set of rules and code patterns, while AI tools identify issues based on probabilistic models learned from vast datasets of existing code.
D.AI tools can fix the code automatically, whereas static analysis tools can only report the issues.
Correct Answer: Static analysis tools operate deterministically based on a predefined set of rules and code patterns, while AI tools identify issues based on probabilistic models learned from vast datasets of existing code.
Explanation:
This question gets to the core of the underlying technology. A traditional linter or static analyzer has a human-written rule set (e.g., 'a for loop variable should not be modified inside the loop'). Its analysis is deterministic and explainable. An AI tool like Copilot doesn't have explicit rules. It has learned the statistical likelihood of code patterns from its training data. It might flag code as 'unusual' or suggest a 'better' alternative because the existing code has a low probability compared to common patterns it has seen. This probabilistic nature is the key differentiator; it can find novel or stylistic issues but can also have false positives or 'hallucinate' problems because it lacks a formal, rule-based understanding of the code's semantics.
Incorrect! Try again.
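The deterministic side of this contrast is easy to illustrate. Below is a toy rule in the spirit of the example in the explanation ("a for loop variable should not be modified inside the loop"), written against Python's standard `ast` module; real linters such as pylint implement many rules of this shape. The rule either fires or it does not, and the reason it fires is fully explainable, unlike a probabilistic suggestion from a model.

```python
import ast


def check_loop_var_reassigned(source):
    """Deterministic rule: report any for-loop variable that is
    reassigned inside its own loop body."""
    issues = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.For) and isinstance(node.target, ast.Name):
            loop_var = node.target.id
            for stmt in ast.walk(node):
                if (isinstance(stmt, ast.Assign)
                        and any(isinstance(t, ast.Name) and t.id == loop_var
                                for t in stmt.targets)):
                    issues.append(f"line {stmt.lineno}: loop variable "
                                  f"'{loop_var}' reassigned inside loop")
    return issues


code = "for i in range(10):\n    i = 0\n"
print(check_loop_var_reassigned(code))
# ["line 2: loop variable 'i' reassigned inside loop"]
```

An AI assistant, by contrast, has no such rule table: it would flag (or quietly rewrite) the same code only because that pattern is statistically rare in its training data.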
59In a microservices architecture, a 'Saga' pattern is used to manage distributed transactions. One service fails midway through a saga, requiring compensating transactions to be executed to roll back the changes made by preceding services. What is the primary reason for using a Saga pattern instead of a traditional two-phase commit (2PC) protocol in this environment?
Advanced and Future Techniques: Cloud-native software development
Hard
A.Two-phase commit requires synchronous, blocking communication and holds resource locks across services, which severely reduces the availability and scalability of the system.
B.Two-phase commit protocols are proprietary and not supported by modern cloud infrastructure.
C.Two-phase commit is less secure than the Saga pattern for cross-service communication.
D.The Saga pattern guarantees ACID (Atomicity, Consistency, Isolation, Durability) properties across services, whereas 2PC does not.
Correct Answer: Two-phase commit requires synchronous, blocking communication and holds resource locks across services, which severely reduces the availability and scalability of the system.
Explanation:
This is a critical architectural trade-off in distributed systems. 2PC provides strong consistency (ACID guarantees) but at a very high cost. It requires a central transaction coordinator and forces all participating services to lock their resources and wait for the coordinator's signal to commit or abort. This synchronous blocking is antithetical to the goals of high availability and independent scalability in a microservices architecture. The Saga pattern sacrifices strong consistency (offering eventual consistency through compensations) to gain availability and loose coupling, which is a much better fit for the cloud-native/microservices paradigm.
Incorrect! Try again.
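The compensation mechanism can be sketched as a minimal orchestrated saga (hypothetical order-processing steps, not a real framework): each local transaction commits immediately and registers a compensating action, and on failure the completed steps are undone in reverse order. Note that no locks are held between steps, which is exactly what 2PC cannot offer.

```python
def run_saga(steps):
    """steps: list of (action, compensation) callables.
    Runs actions in order; on failure, runs the compensations
    of the already-completed steps in reverse order."""
    completed = []
    try:
        for action, compensation in steps:
            action()                      # local transaction commits here
            completed.append(compensation)
    except Exception:
        for compensation in reversed(completed):
            compensation()                # roll back via compensation
        return False
    return True


log = []

def ship():
    raise RuntimeError("carrier unavailable")  # mid-saga failure

steps = [
    (lambda: log.append("reserve inventory"),
     lambda: log.append("release inventory")),
    (lambda: log.append("charge payment"),
     lambda: log.append("refund payment")),
    (ship,
     lambda: log.append("cancel shipment")),
]

ok = run_saga(steps)
print(ok, log)
# False ['reserve inventory', 'charge payment', 'refund payment', 'release inventory']
```

The trade-off from the explanation is visible in the output: the system was briefly in an intermediate state (payment charged, shipment pending) and was restored by compensations, eventual consistency, rather than being atomically locked for the duration as 2PC would require.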
60A key philosophical difference between ISO 9001 and SEI CMMI is their primary focus. Which statement best captures this distinction?
Quality management & standards: ISO 9001, SEI CMMI, Six Sigma & PSP
Hard
A.ISO 9001 certification is granted to an entire organization, whereas CMMI appraisal applies only to specific projects or development teams.
B.CMMI is focused entirely on software development, whereas ISO 9001 is focused on manufacturing processes.
C.ISO 9001 is a process improvement model, while CMMI is a process compliance model.
D.ISO 9001 specifies what a quality system should achieve in a generic business context, while CMMI specifies how to achieve it through detailed software and systems engineering practices.
Correct Answer: ISO 9001 specifies what a quality system should achieve in a generic business context, while CMMI specifies how to achieve it through detailed software and systems engineering practices.
Explanation:
This is a high-level, analytical comparison. ISO 9001 is a generic quality management standard applicable to any industry. It defines the requirements for a Quality Management System (e.g., 'you must have a process for document control') but does not prescribe the specifics of that process. CMMI, on the other hand, is a capability and maturity model specifically for development. It provides a much more detailed and prescriptive framework of specific goals and practices (e.g., 'for Requirements Management, you must maintain bidirectional traceability from requirements to work products'). In essence, ISO 9001 sets the destination, while CMMI provides a detailed roadmap for getting there in a development context.