1. What is the primary goal of software project management?
Project management basics
Easy
A.To use the newest available technology in the project
B.To write the most elegant and efficient code possible
C.To create the most comprehensive documentation
D.To deliver the software project on time, within budget, and with the required quality
Correct Answer: To deliver the software project on time, within budget, and with the required quality
Explanation:
The core responsibility of project management is to balance the constraints of time (schedule), cost (budget), and scope (features/quality) to achieve the project's goals successfully.
2. The 'Iron Triangle' of project management consists of which three constraints?
Project management basics
Easy
A.People, Process, and Technology
B.Scope, Time, and Cost
C.Requirements, Design, and Testing
D.Planning, Execution, and Delivery
Correct Answer: Scope, Time, and Cost
Explanation:
The Iron Triangle, or Triple Constraint, illustrates the fundamental trade-offs in project management: Scope (what the project will deliver), Time (the schedule), and Cost (the budget). Changing one affects the others.
3. What is a 'milestone' in the context of project planning?
Project planning & monitoring
Easy
A.A daily task assigned to a developer
B.A significant event or point of progress in the project timeline
C.A meeting to discuss project status
D.A bug reported by the testing team
Correct Answer: A significant event or point of progress in the project timeline
Explanation:
A milestone represents a major achievement or a key decision point in a project. It has zero duration and is used to mark progress, such as 'Design Phase Completed' or 'User Acceptance Testing Started'.
4. Which of the following is a bar chart that provides a visual representation of a project schedule over time?
Scheduling Techniques
Easy
A.PERT Chart
B.Gantt Chart
C.Flowchart
D.Data Flow Diagram
Correct Answer: Gantt Chart
Explanation:
A Gantt chart is a popular project management tool that displays tasks as horizontal bars along a timeline, showing their start dates, end dates, and durations, which helps in planning and tracking project progress.
5. What is the primary purpose of Software Configuration Management (SCM)?
Software Configuration Management (SCM)
Easy
A.To manage and control changes to software artifacts
B.To automate the testing process
C.To design the software architecture
D.To estimate the project's total cost
Correct Answer: To manage and control changes to software artifacts
Explanation:
SCM is the discipline of managing the evolution of software systems. Its main goal is to control changes, maintain integrity, and provide traceability for all project artifacts (code, documentation, etc.) throughout the development lifecycle.
6. A system that records changes to a file or set of files over time so that you can recall specific versions later is called a:
Software Configuration Management (SCM)
Easy
A.Integrated Development Environment (IDE)
B.Version Control System (VCS)
C.Database Management System (DBMS)
D.Compiler
Correct Answer: Version Control System (VCS)
Explanation:
A Version Control System, such as Git or SVN, is a fundamental tool of SCM that tracks revisions, allows for collaboration, and enables teams to revert to previous states of their work.
7. COCOMO is an acronym for which of the following?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Easy
A.Configuration Control Model
B.Component Costing Model
C.Constructive Cost Model
D.Comprehensive Code Model
Correct Answer: Constructive Cost Model
Explanation:
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm, which uses formulas based on project size (like lines of code) to predict effort and duration.
8. In the context of DevOps, what does 'CI' in CI/CD stand for?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Easy
A.Continuous Integration
B.Code Interaction
C.Configuration Item
D.Constant Improvement
Correct Answer: Continuous Integration
Explanation:
CI stands for Continuous Integration, the practice of automating the integration of code changes from multiple contributors into a single software project. It's a fundamental DevOps practice.
9. In project management, what does PERT stand for?
Scheduling Techniques
Easy
A.Path Estimation and Reporting Tool
B.Process Estimation and Review Technique
C.Program Evaluation and Review Technique
D.Project Execution and Reporting Tool
Correct Answer: Program Evaluation and Review Technique
Explanation:
PERT is a project scheduling technique used to analyze the tasks involved in completing a given project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project.
10. Which software estimation technique is based on the system's functionality from the user's point of view, making it independent of programming language?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Easy
A.Lines of Code (LOC)
B.COCOMO
C.PERT
D.Function Points (FP)
Correct Answer: Function Points (FP)
Explanation:
Function Point analysis measures software size by quantifying the functionality provided to the user based on logical design and user requirements, rather than counting physical lines of code.
11. The activity of tracking a project's progress and comparing it against the planned schedule and budget is known as:
Project planning & monitoring
Easy
A.Project Initiation
B.Requirements Gathering
C.Risk Analysis
D.Project Monitoring
Correct Answer: Project Monitoring
Explanation:
Project monitoring and control is a continuous process of observing project execution to identify potential problems in a timely manner and take corrective action when necessary.
12. What is the primary benefit of Continuous Integration (CI)?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Easy
A.To eliminate the need for project managers
B.To automatically deploy code to production
C.To guarantee that the software is bug-free
D.To find and fix integration bugs earlier
Correct Answer: To find and fix integration bugs earlier
Explanation:
By frequently merging code changes into a central repository and running automated builds and tests, CI helps teams detect integration issues as soon as they are introduced, making them easier and less costly to fix.
13. What does the 'Critical Path' in the Critical Path Method (CPM) represent?
Scheduling Techniques
Easy
A.The longest sequence of tasks that determines the project's minimum duration
B.The shortest possible route to complete the project
C.The tasks that have the highest cost
D.The list of the most important tasks
Correct Answer: The longest sequence of tasks that determines the project's minimum duration
Explanation:
The critical path is the longest-duration path through a network diagram. Any delay in a task on this path will directly delay the entire project's completion date.
14. In SCM, what is a 'baseline'?
Software Configuration Management (SCM)
Easy
A.The first line of code written for a project
B.The final version of the software released to the public
C.A developer's local copy of the code
D.A formally accepted version of a configuration item, fixed at a specific time
Correct Answer: A formally accepted version of a configuration item, fixed at a specific time
Explanation:
A baseline acts as a stable reference point in development. It's a snapshot of key project artifacts (e.g., requirements, code) that has been reviewed and approved and can only be changed through a formal change control process.
15. Who is the person with overall responsibility for the successful planning, execution, and closing of a project?
Project management basics
Easy
A.Project Manager
B.Client/Stakeholder
C.Lead Developer
D.Quality Analyst
Correct Answer: Project Manager
Explanation:
The Project Manager is the key individual responsible for leading the project team to achieve the project goals within the given constraints.
16. In GitHub Actions, workflows are defined using which file format?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Easy
A.XML (.xml)
B.JSON (.json)
C.YAML (.yml or .yaml)
D.Markdown (.md)
Correct Answer: YAML (.yml or .yaml)
Explanation:
GitHub Actions workflows are configured using YAML syntax. These workflow files must be stored in the .github/workflows directory of your repository.
17. The Use Case Points (UCP) estimation method is most suitable for projects that use which development approach?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Easy
A.Waterfall model
B.Procedural programming
C.Agile with no formal requirements
D.Object-Oriented and Use Case-driven development
Correct Answer: Object-Oriented and Use Case-driven development
Explanation:
UCP is specifically designed to estimate project size and effort based on the number and complexity of use cases and actors, which are central artifacts in object-oriented analysis and design.
18. Which of the following is a well-known distributed version control system?
Software Configuration Management (SCM)
Easy
A.Microsoft VSS (Visual SourceSafe)
B.Git
C.CVS (Concurrent Versions System)
D.SVN (Subversion)
Correct Answer: Git
Explanation:
Git is a distributed VCS, meaning every developer has a full copy of the entire repository history. SVN, CVS, and VSS are examples of older, centralized version control systems.
19. A Work Breakdown Structure (WBS) is used in project planning to:
Project planning & monitoring
Easy
A.Decompose the project into smaller, more manageable components or tasks
B.Estimate the total cost of the project
C.Assign developers to specific roles
D.Identify all the risks associated with the project
Correct Answer: Decompose the project into smaller, more manageable components or tasks
Explanation:
The WBS is a key project deliverable that organizes the team's work into manageable sections. It is a hierarchical decomposition of the total scope of work to be carried out by the project team.
20. What is the primary input for the Basic COCOMO model to estimate effort?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Easy
A.The project deadline
B.The number of use cases
C.The estimated size of the software in thousands of lines of code (KLOC)
D.The number of developers on the team
Correct Answer: The estimated size of the software in thousands of lines of code (KLOC)
Explanation:
The Basic COCOMO model relies on an estimate of the program's size, typically measured in KLOC (Kilo Lines of Code) or DSI (Delivered Source Instructions), as its main driver for calculating development effort.
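As an illustration, Boehm's Basic COCOMO effort equation for an organic-mode project, Effort = 2.4 × (KLOC)^1.05, can be evaluated directly. This sketch uses Boehm's published organic-mode coefficients; the 32 KLOC size is a made-up example value, not taken from the question:

```python
# Basic COCOMO effort estimate, organic mode.
# a = 2.4 and b = 1.05 are Boehm's organic-mode coefficients;
# the size below is a hypothetical example.
a, b = 2.4, 1.05
kloc = 32  # estimated size in thousands of lines of code

effort_pm = a * kloc ** b  # effort in Person-Months
print(round(effort_pm, 1))  # roughly 91 Person-Months
```

Note how effort grows slightly faster than linearly with size, since the exponent b is greater than 1.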
21. In a project network diagram, the path with zero slack is known as the critical path. If a non-critical task is delayed by an amount less than its total slack, what is the impact on the project's completion date?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Medium
A.The project will be delayed by the same amount of time.
B.The project completion date will not be affected.
C.The project will be completed earlier.
D.The critical path will change.
Correct Answer: The project completion date will not be affected.
Explanation:
Slack (or float) is the amount of time a task can be delayed without affecting subsequent tasks or the overall project completion date. As long as the delay is within the task's available slack, the project's final deadline remains unchanged.
22. A team is calculating Function Points (FP) for a new module. The initial Unadjusted Function Point (UFP) count is 350. The sum of all 14 General System Characteristics (GSCs) ratings is 40. Given that the Value Adjustment Factor is computed as VAF = 0.65 + (0.01 × ΣGSC), what is the final Adjusted Function Point (AFP) count?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Medium
A.420
B.332.5
C.367.5
D.350
Correct Answer: 367.5
Explanation:
First, calculate the VAF: VAF = 0.65 + (0.01 × 40) = 0.65 + 0.40 = 1.05. Then, calculate the Adjusted Function Points (AFP) by multiplying the UFP by the VAF: AFP = 350 × 1.05 = 367.5. A VAF greater than 1.0 indicates the system is more complex than average.
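The arithmetic as a runnable sketch, using the values from the question:

```python
# Adjusted Function Point calculation.
ufp = 350                    # Unadjusted Function Points
gsc_sum = 40                 # sum of the 14 General System Characteristics

vaf = 0.65 + 0.01 * gsc_sum  # Value Adjustment Factor
afp = ufp * vaf              # Adjusted Function Points

print(vaf)  # 1.05
print(afp)  # 367.5
```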
23. A development team uses a branching strategy where a new branch is created for every new feature. Once the feature is complete and tested, it is merged back into the main development branch. What is the primary advantage of this SCM practice?
Software Configuration Management (SCM)
Medium
A.It automatically deploys the feature to production upon merging.
B.It eliminates the need for a central repository.
C.It isolates development work, preventing unstable code from destabilizing the main branch.
D.It guarantees that merge conflicts will never occur.
Correct Answer: It isolates development work, preventing unstable code from destabilizing the main branch.
Explanation:
This practice, known as Feature Branching, is a core concept in modern SCM. Its main purpose is to allow developers to work on new features or bug fixes in an isolated environment without affecting the stability and integrity of the main or release branch.
24. Consider the following GitHub Actions workflow snippet. Under what condition will the 'deploy' job run?
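A minimal workflow consistent with the explanation (the job names 'build' and 'deploy', the needs directive, and the branch condition come from the explanation; the trigger and run steps are illustrative placeholders):

```yaml
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh   # placeholder build step

  deploy:
    needs: build                          # wait for 'build' to succeed
    if: github.ref == 'refs/heads/main'   # only for events on the main branch
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh  # placeholder deploy step
```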
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Medium
A.It will run only if the 'build' job succeeds and the workflow was triggered by a push to the 'main' branch.
B.It will run concurrently with the 'build' job on every push.
C.It will run whenever there is a push to the 'main' branch, even if the 'build' job fails.
D.It will run every time the 'build' job succeeds, regardless of the branch.
Correct Answer: It will run only if the 'build' job succeeds and the workflow was triggered by a push to the 'main' branch.
Explanation:
The needs: build directive ensures that the 'deploy' job waits for the 'build' job to complete successfully. The if: github.ref == 'refs/heads/main' condition adds a second requirement: the event that triggered the workflow must have occurred on the 'main' branch. Both conditions must be met.
25. A project is reported as "90% complete" for three consecutive months. This phenomenon, known as the '90% syndrome', most likely indicates a failure in which project management activity?
Project planning & monitoring
Medium
A.Risk identification.
B.Stakeholder communication.
C.Defining and tracking small, measurable work packages.
D.Resource allocation.
Correct Answer: Defining and tracking small, measurable work packages.
Explanation:
The '90% syndrome' often occurs when tasks are too large and ill-defined. Progress is easy to report at the beginning, but the remaining 10% contains all the unforeseen complexities. Breaking work down into smaller, concrete, and measurable packages (a key principle of a Work Breakdown Structure) makes progress tracking more accurate and avoids this issue.
26. A project manager is using PERT for a task with high uncertainty. The estimates are: Optimistic (O) = 3 days, Most Likely (M) = 6 days, Pessimistic (P) = 15 days. What is the PERT expected time (TE), and what does the large range between O and P suggest?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Medium
A.7 days; suggests the task is on the critical path.
B.7 days; suggests high uncertainty and risk for the task.
C.8 days; suggests the task requires more resources.
D.6 days; suggests the estimates are unreliable.
Correct Answer: 7 days; suggests high uncertainty and risk for the task.
Explanation:
The PERT expected time is calculated using the formula TE = (O + 4M + P) / 6. Plugging in the values: TE = (3 + 4×6 + 15) / 6 = 42 / 6 = 7 days. A wide range between the optimistic (3) and pessimistic (15) estimates is a clear indicator of high uncertainty and risk associated with the task.
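The same three-point calculation as a sketch, including the PERT standard deviation (P − O) / 6 as a rough spread measure:

```python
# PERT expected time for the estimates in the question.
O, M, P = 3, 6, 15         # optimistic, most likely, pessimistic (days)

te = (O + 4 * M + P) / 6   # weighted average: (3 + 24 + 15) / 6
sigma = (P - O) / 6        # wider O..P range -> larger sigma -> more risk

print(te)     # 7.0
print(sigma)  # 2.0
```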
27. In the context of the COCOMO model, what is the primary reason that the effort for a 'Semi-Detached' project is estimated to be higher than for an 'Organic' project of the same size (KLOC)?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Medium
A.The project is developed for a government contract.
B.The development team has mixed experience levels and less familiarity with the problem domain.
Correct Answer: The development team has mixed experience levels and less familiarity with the problem domain.
Explanation:
COCOMO modes are based on the nature of the project and the team. 'Organic' projects involve small, experienced teams working on familiar problems. 'Semi-Detached' projects involve teams with a mix of experience levels, facing requirements that are less rigid than 'Embedded' systems but more complex than 'Organic' ones, thus leading to higher effort.
28. What is the key difference in how merge conflicts are typically handled in a centralized version control system (like SVN) versus a distributed version control system (like Git)?
Software Configuration Management (SCM)
Medium
A.In SVN, merge conflicts are automatically resolved by the server, while in Git they must always be resolved manually.
B.Centralized systems prevent merge conflicts from ever happening, unlike distributed systems.
C.In Git, conflicts are resolved locally by the developer before pushing, whereas in SVN, conflicts are often discovered and resolved on the central server during a commit.
D.In Git, only the project administrator can resolve conflicts, while in SVN any developer can.
Correct Answer: In Git, conflicts are resolved locally by the developer before pushing, whereas in SVN, conflicts are often discovered and resolved on the central server during a commit.
Explanation:
In a DVCS like Git, merging happens on the developer's local repository. The developer resolves conflicts and tests the result before sharing the changes. In a CVCS like SVN, a developer's commit can fail if their local copy is out of date. They must update, resolve conflicts, and then attempt the commit again. The conflict resolution is tied to the act of synchronizing with the central server.
29. The 'Iron Triangle' in project management highlights the trade-offs between Scope, Time, and Cost. If a client insists on adding a major new feature (increasing Scope) but is unwilling to increase the budget (Cost), what is the most likely trade-off the project manager must make?
Project management basics
Medium
A.The project will be canceled.
B.The project's delivery date (Time) will have to be extended.
C.The team size will be reduced.
D.The quality of the final product will have to be decreased.
Correct Answer: The project's delivery date (Time) will have to be extended.
Explanation:
According to the Iron Triangle, if one constraint (Scope) is changed, at least one other constraint (Time or Cost) must also be adjusted to maintain balance. Since Cost is fixed, the only remaining variable that can be logically adjusted to accommodate more work is Time. Decreasing quality is another possible outcome, but extending the timeline is the direct trade-off.
30. A team wants to ensure their code is always in a deployable state. They set up a pipeline that automatically builds and runs unit and integration tests every time a developer pushes a change to the central repository. This practice is best described as:
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Medium
A.Infrastructure as Code (IaC)
B.Continuous Delivery (CD)
C.Continuous Deployment (CD)
D.Continuous Integration (CI)
Correct Answer: Continuous Integration (CI)
Explanation:
Continuous Integration (CI) is the practice of frequently merging all developer working copies to a shared mainline several times a day. Each merge triggers an automated build and test sequence. The goal is to detect integration errors as quickly as possible. It does not necessarily include automatic deployment to production, which is part of Continuous Deployment/Delivery.
31. When would a project manager choose Use Case Points (UCP) over Function Points (FP) for software effort estimation?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Medium
A.In the early stages of an object-oriented project where requirements are defined as use cases rather than detailed functions.
B.When estimating maintenance effort for a legacy system written in COBOL.
C.When the project is a real-time embedded system with strict hardware constraints.
D.When a very precise, low-level estimation is required after the detailed design is complete.
Correct Answer: In the early stages of an object-oriented project where requirements are defined as use cases rather than detailed functions.
Explanation:
Use Case Points are specifically designed for estimation based on use cases, which are common in object-oriented analysis and design. They are particularly useful early in the lifecycle when detailed functional decompositions (required for FPs) are not yet available, but the system's actors and their interactions (use cases) have been identified.
32. A Work Breakdown Structure (WBS) is a key project planning artifact. What is its primary role in the context of project monitoring and control?
Project planning & monitoring
Medium
A.It defines the communication plan for stakeholders.
B.It lists all potential project risks and mitigation strategies.
C.It serves as the foundation for detailed cost and schedule tracking against a baseline.
D.It specifies the sequential order of all project tasks.
Correct Answer: It serves as the foundation for detailed cost and schedule tracking against a baseline.
Explanation:
The WBS decomposes the total project scope into manageable work packages. Each work package can then have its own budget, schedule, and resource assignments. This hierarchical structure allows project managers to accurately track progress, measure performance (e.g., using Earned Value Management), and control costs and schedules at various levels of detail.
33. What is the primary purpose of establishing a 'baseline' in an SCM process?
Software Configuration Management (SCM)
Medium
A.To automatically generate project documentation from the source code.
B.To create a stable, formally approved point-in-time snapshot of the system that can be used as a reference for future work.
C.To lock the entire codebase, preventing any further changes from being made.
D.To measure the performance of the development team.
Correct Answer: To create a stable, formally approved point-in-time snapshot of the system that can be used as a reference for future work.
Explanation:
A baseline is a formally reviewed and agreed-upon version of a software configuration item (e.g., code, documentation, design). It serves as a fixed reference point for all subsequent changes. Any changes after a baseline is established must go through a formal change control process, ensuring stability and traceability.
34. A project manager is using a Gantt chart for a project. They observe that one task has been marked as a 'milestone'. What does this typically represent on the chart?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Medium
A.A task that can be performed in parallel with any other task.
B.A task that has a high risk of failure.
C.The longest task in the entire project.
D.A significant project event or deliverable with zero duration.
Correct Answer: A significant project event or deliverable with zero duration.
Explanation:
In project management and Gantt charts, a milestone is a specific point in time that marks the completion of a major phase, deliverable, or decision. It is represented as a diamond or a single point on the timeline because it has no duration itself; it is a marker of progress.
35. In the intermediate COCOMO model, cost drivers are used to adjust the nominal effort estimate. If a project requires 'High' software reliability, how would the corresponding cost driver typically affect the final effort estimation?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Medium
A.It will increase the estimated effort.
B.It will decrease the estimated effort.
C.It will have no effect on the estimated effort.
D.It will only affect the project schedule, not the effort.
Correct Answer: It will increase the estimated effort.
Explanation:
Cost drivers are multipliers that adjust the nominal effort. A requirement for 'High' reliability (RELY) means more rigorous testing, fault tolerance, and verification activities are needed. This increases the complexity and work required, so the effort multiplier for this driver will be greater than 1.0, thus increasing the final estimated effort.
36. A project manager is faced with a situation where a key technology has been deprecated, and the team lacks skills in its replacement. The manager arranges for team training and creates a contingency plan. This set of activities falls primarily under which project management knowledge area?
Project management basics
Medium
A.Risk Management
B.Cost Management
C.Scope Management
D.Quality Management
Correct Answer: Risk Management
Explanation:
This scenario involves identifying a potential problem (deprecated technology, skill gap), analyzing its impact, and planning a response (training, contingency plan). This entire process—from identification to response planning—is the core of project Risk Management.
37. What is the fundamental difference between Continuous Delivery and Continuous Deployment?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Medium
A.Continuous Deployment automatically deploys every passed build to production, while Continuous Delivery requires a manual approval step for production deployment.
B.Continuous Deployment uses tools like GitHub Actions, while Continuous Delivery uses tools like Jenkins.
C.Continuous Delivery is focused on testing, while Continuous Deployment is focused on building.
D.Continuous Delivery automatically deploys to a staging environment, while Continuous Deployment deploys to a testing environment.
Correct Answer: Continuous Deployment automatically deploys every passed build to production, while Continuous Delivery requires a manual approval step for production deployment.
Explanation:
Both practices ensure that software can be released at any time. The key differentiator is the final step. In Continuous Delivery, the release to production is a manual, business decision. In Continuous Deployment, the process is fully automated, and every change that passes all automated tests is deployed to production automatically.
38. An agile team is using story points for estimation and tracks their 'velocity' (the number of story points completed per sprint). The product owner tries to force the team to increase its velocity in the next sprint. Why is this approach generally considered a bad practice?
Project planning & monitoring
Medium
A.Increasing velocity always leads to a higher bug count.
B.It violates the agile principle of not communicating with the product owner.
C.Velocity is a fixed number that cannot be changed.
D.It can lead the team to compromise on quality or inflate future estimates to meet the target.
Correct Answer: It can lead the team to compromise on quality or inflate future estimates to meet the target.
Explanation:
Velocity is an indicator of a team's past performance, used for future planning, not a target to be manipulated. Forcing a team to increase velocity often results in perverse incentives, such as cutting corners on testing (reducing quality) or 'gaming the system' by assigning more points to tasks without an actual increase in work delivered (story point inflation).
39. Consider the following project activities and their dependencies:
- A (5 days)
- B (3 days), depends on A
- C (4 days), depends on A
- D (6 days), depends on B
- E (2 days), depends on C
What is the Critical Path for this project?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Medium
A.B -> D
B.A -> C -> E
C.A -> D
D.A -> B -> D
Correct Answer: A -> B -> D
Explanation:
To find the critical path, we calculate the duration of all possible paths from start to finish:
Path 1: A -> B -> D = 5 + 3 + 6 = 14 days.
Path 2: A -> C -> E = 5 + 4 + 2 = 11 days.
The critical path is the longest path, as it determines the minimum time to complete the project. Therefore, A -> B -> D is the critical path with a duration of 14 days.
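The path enumeration from the explanation can be checked with a short sketch over the activity table in the question:

```python
# Enumerate the two start-to-finish paths and find the longest (critical) one.
durations = {"A": 5, "B": 3, "C": 4, "D": 6, "E": 2}
paths = [["A", "B", "D"], ["A", "C", "E"]]  # from the dependency list

lengths = {"->".join(p): sum(durations[t] for t in p) for p in paths}
critical = max(lengths, key=lengths.get)

print(lengths)   # {'A->B->D': 14, 'A->C->E': 11}
print(critical)  # A->B->D
```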
40. A developer modifies three different files to fix a single bug. Which SCM action is most appropriate for grouping these changes together before sharing them with the team?
Software Configuration Management (SCM)
Medium
A.Committing all three files as a single logical changeset with a descriptive message.
B.Creating a separate branch for each file modification.
C.Uploading the three files to a shared network drive.
D.Baselining each file individually after it is changed.
Correct Answer: Committing all three files as a single logical changeset with a descriptive message.
Explanation:
A key principle of version control is the 'atomic commit'. All changes related to a single logical unit of work (like a bug fix or a new feature) should be grouped into a single commit or changeset. This makes the project history easier to understand, track, and revert if necessary.
41. A project activity has an optimistic time (O) of 8 days, a most likely time (M) of 11 days, and a pessimistic time (P) of 20 days. The project manager wants to find the probability of completing this specific activity in 10 days or less. Assuming a normal distribution for the activity duration, what is the approximate probability?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Hard
A.Approximately 50.0%
B.Approximately 34.1%
C.Approximately 18.7%
D.Approximately 2.3%
Correct Answer: Approximately 18.7%
Explanation:
This requires a multi-step PERT calculation. First, calculate the expected time (TE) and the standard deviation (σ).
Expected Time: TE = (O + 4M + P) / 6 = (8 + 4×11 + 20) / 6 = 72 / 6 = 12 days.
Standard Deviation: σ = (P − O) / 6 = (20 − 8) / 6 = 2 days.
Now, calculate the Z-score for the target completion time of 10 days: Z = (10 − 12) / 2 = −1.0.
Using a standard Z-table, a Z-score of -1.0 corresponds to a cumulative probability of approximately 0.1587 or 15.87%. The closest answer is 18.7%, which might reflect slight variations in distribution assumptions or rounding, but it's the only one in the correct ballpark. The core analysis shows the probability is significantly less than 50%.
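The full calculation can be verified with Python's standard library instead of a Z-table:

```python
from statistics import NormalDist

# PERT estimates from the question (days).
O, M, P = 8, 11, 20
te = (O + 4 * M + P) / 6   # expected time: 12.0 days
sigma = (P - O) / 6        # standard deviation: 2.0 days

# Probability of finishing in 10 days or less, assuming a normal distribution.
prob = NormalDist(mu=te, sigma=sigma).cdf(10)
print(round(prob, 4))  # 0.1587
```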
42. A project is estimated using the intermediate COCOMO model. The nominal effort is calculated as 533 Person-Months. The team has 'Very High' analyst capability (ACAP = 0.71) but 'Very Low' virtual machine experience (VEXP = 1.21). All other 13 cost drivers are nominal (value = 1.00). What is the most significant consequence of these two non-nominal drivers on the final effort estimation?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Hard
A.The final effort will be significantly increased because the VEXP driver's penalty outweighs the ACAP driver's benefit.
B.The final effort will be moderately reduced, as the two drivers' effects nearly cancel each other out.
C.The effort will remain approximately 533 PM because the Effort Adjustment Factor (EAF) will be close to 1.0.
D.The final effort will be significantly reduced because the high analyst capability has a stronger multiplicative effect than the low VM experience.
Correct Answer: The final effort will be significantly reduced because the high analyst capability has a stronger multiplicative effect than the low VM experience.
Explanation:
The analysis requires understanding the multiplicative nature of the Effort Adjustment Factor (EAF). The EAF is the product of all 15 cost drivers. Here, EAF = 0.71 (ACAP) × 1.21 (VEXP) × (1.00)^13 = 0.8591. The adjusted effort is 533 × 0.8591 ≈ 458 Person-Months. This is a significant reduction from the initial estimate of 533 PM. The key insight is that cost drivers are multiplicative, not additive. A driver like ACAP with a value of 0.71 provides a 29% reduction, while VEXP with a value of 1.21 adds a 21% penalty. The reduction outweighs the penalty, leading to an overall significant decrease in the estimated effort.
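A minimal sketch of this adjustment (the driver names and the 533 PM figure come from the question; nominal drivers are omitted since they multiply by 1.00):

```python
# Intermediate COCOMO: adjusted effort = nominal effort * EAF,
# where EAF is the product of all cost-driver multipliers.
initial_effort = 533.0                   # Person-Months (nominal)
drivers = {"ACAP": 0.71, "VEXP": 1.21}   # the two non-nominal drivers

eaf = 1.0
for value in drivers.values():
    eaf *= value                         # multiplicative, not additive

adjusted = initial_effort * eaf
print(round(eaf, 4), round(adjusted, 1))  # 0.8591 457.9
```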
Incorrect! Try again.
43A large-scale project with a strict quarterly release cycle and multiple feature teams working in parallel decides to adopt a branching strategy. Team A is developing a high-risk, long-running feature (Feature X), while Team B is working on short-lived bug fixes and minor enhancements. Which branching strategy would best accommodate these conflicting development cadences while minimizing merge conflicts and maintaining a stable main branch?
Software Configuration Management (SCM)
Hard
A.Trunk-Based Development with short-lived feature branches and feature toggles.
B.Gitflow, using develop for integration, feature branches for all new work, and release branches for stabilization.
C.GitHub Flow, where every change goes through a pull request to main and is deployed immediately.
D.A single shared main branch where all teams commit directly to avoid branching overhead.
Correct Answer: Gitflow, using develop for integration, feature branches for all new work, and release branches for stabilization.
Explanation:
Gitflow is specifically designed for projects with scheduled releases. The long-running develop branch allows for integration without destabilizing main (production). Team A can work on its long-running feature/X branch for months, merging into develop when ready. Team B can use short-lived feature branches or hotfix branches for their work. The release branch provides a dedicated place to stabilize the quarterly release without blocking new development from being merged into develop. Trunk-Based Development would be difficult with the long-running feature unless it's meticulously hidden behind feature toggles, which adds complexity. Direct commits to main or GitHub Flow are unsuitable for a project with a strict, non-continuous release cycle and long-running, potentially unstable features.
Incorrect! Try again.
44A GitHub Actions workflow is triggered by the pull_request_target event to run tests that require access to production-level secrets (e.g., an API key for a staging environment). A contributor from a forked repository opens a pull request that maliciously modifies the test script to exfiltrate these secrets. What is the outcome?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Hard
A.The workflow requires a manual approval from a maintainer before it can run with secrets.
B.The workflow runs, but the modified script is ignored; GitHub Actions runs the version of the workflow from the base repository's branch, not the PR branch.
C.The workflow fails because GitHub prevents workflows triggered by pull_request_target from accessing secrets.
D.The workflow runs successfully, and the secrets are compromised because pull_request_target runs in the context of the base repository and has access to its secrets.
Correct Answer: The workflow runs successfully, and the secrets are compromised because pull_request_target runs in the context of the base repository and has access to its secrets.
Explanation:
This question highlights a critical security vulnerability in GitHub Actions. The pull_request_target event is dangerous because it runs the workflow definition from the base repository with the base repository's permissions and secrets; if that workflow then checks out and executes code from the head of the pull request (as it must in order to test the contribution), the malicious code from the PR runs with the permissions and secret access of the base repository. Option B is incorrect because only the workflow file comes from the base; the rest of the code, including scripts the workflow runs, comes from the PR. Option C is incorrect because pull_request_target is specifically designed to have access to secrets for tasks like labeling. Option A describes a feature (environments with protection rules) that must be configured explicitly; it is not the default behavior. The default behavior is dangerously permissive, leading to secret exfiltration.
Incorrect! Try again.
45A project has a Budget at Completion (BAC) of $200,000. After three months, the project manager reports the following Earned Value Management (EVM) metrics: Planned Value (PV) = $75,000, Earned Value (EV) = $60,000, and Actual Cost (AC) = $80,000. Assuming the current cost and schedule variances are expected to continue for the rest of the project, what is the most likely Estimate at Completion (EAC)?
Project planning & monitoring
Hard
A.EAC = $200,000
B.EAC = $180,000
C.EAC = $215,000
D.EAC = $266,667
Correct Answer: EAC = $266,667
Explanation:
This requires selecting the correct EAC formula based on the project's performance context. The key phrase is "current cost and schedule variances are expected to continue".
First, calculate the Cost Performance Index (CPI) and Schedule Performance Index (SPI).
CPI = EV / AC = $60,000 / $80,000 = 0.75 (The project is over budget).
SPI = EV / PV = $60,000 / $75,000 = 0.80 (The project is behind schedule).
There are several formulas for EAC. Since the prompt states that current variances are typical for the rest of the project, the most appropriate formula is EAC = BAC / CPI.
EAC = $200,000 / 0.75 = $266,666.67.
This formula projects that the current rate of cost overrun (25% over budget for every unit of work) will persist until the end. Other formulas, like EAC = AC + (BAC - EV), assume future work will be done at the budgeted rate, which contradicts the problem statement.
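The arithmetic above can be sketched as follows (figures from the question):

```python
# Earned Value Management: compute CPI, SPI, and the EAC that assumes
# the current cost performance continues (EAC = BAC / CPI).
BAC, PV, EV, AC = 200_000, 75_000, 60_000, 80_000

cpi = EV / AC   # 0.75 -> over budget
spi = EV / PV   # 0.80 -> behind schedule

eac = BAC / cpi
print(round(cpi, 2), round(spi, 2), round(eac))  # 0.75 0.8 266667
```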
Incorrect! Try again.
46When calculating Unadjusted Function Points (UFP), an application has two External Inquiries (EI) that query the same database table (an Internal Logical File, or ILF). The first EI retrieves 3 Data Element Types (DETs) and the second EI retrieves 10 DETs. The ILF itself contains 25 DETs. How should these components be rated for complexity according to standard Function Point Analysis rules?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Hard
A.Both EIs are rated 'Average' complexity, and the ILF is rated 'Low' complexity.
B.Both EIs are rated 'Low' complexity, and the ILF is 'Average' complexity.
C.The first EI is 'Low', the second is 'Average', and the ILF is 'Average'.
D.The first EI is 'Low', the second is 'Average', and the ILF is 'Low'.
Correct Answer: The first EI is 'Low', the second is 'Average', and the ILF is 'Average'.
Explanation:
This question requires detailed knowledge of the IFPUG complexity matrices.
For the ILF: It has 1 Record Element Type (RET) and 25 DETs. The complexity matrix for ILFs/EIFs is: Low (1-19 DETs), Average (20-50 DETs), High (>50 DETs). Since 25 DETs falls in the 20-50 range, the ILF is 'Average'.
For the EIs, complexity depends on the number of File Types Referenced (FTRs) and DETs. For 0-1 FTR: Low (1-4 DETs), Average (5-15 DETs), High (>15 DETs).
For 2-3 FTRs: Low (1-4 DETs), Average (5-15 DETs), High (>15 DETs).
For >3 FTRs: Low (1-3 DETs), Average (4-12 DETs), High (>12 DETs).
First EI: 1 FTR, 3 DETs. This falls into the 'Low' category.
Second EI: 1 FTR, 10 DETs. This falls into the 'Average' category.
Therefore, the correct classification is: EI-1 is Low, EI-2 is Average, and the ILF is Average.
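The lookup logic above can be sketched directly from the matrices quoted in this explanation (a simplified rendering of the rules as stated here; consult the IFPUG counting manual for the authoritative tables):

```python
# Complexity ratings per the matrices quoted in the explanation above.
def ilf_complexity(dets: int) -> str:
    # ILF/EIF: Low (1-19 DETs), Average (20-50), High (>50)
    if dets <= 19:
        return "Low"
    return "Average" if dets <= 50 else "High"

def ei_complexity(ftrs: int, dets: int) -> str:
    # 0-3 FTRs: Low (1-4 DETs), Average (5-15), High (>15)
    # >3 FTRs:  Low (1-3 DETs), Average (4-12), High (>12)
    low_max, avg_max = (4, 15) if ftrs <= 3 else (3, 12)
    if dets <= low_max:
        return "Low"
    return "Average" if dets <= avg_max else "High"

print(ei_complexity(1, 3), ei_complexity(1, 10), ilf_complexity(25))
# Low Average Average
```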
Incorrect! Try again.
47In a Critical Path Method (CPM) network diagram, two parallel paths merge into a single activity 'F'. Path 1 consists of activities A->C->E with durations 4, 5, and 3 days respectively. Path 2 consists of activities B->D with durations 6 and 7 days. Activity F cannot start until both E and D are complete. What is the total float (slack) for activity C, assuming it has a simple Finish-to-Start dependency with A and E?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Hard
A.0 days
B.13 days
C.2 days
D.1 day
Correct Answer: 1 day
Explanation:
To find the float of activity C, we must first identify the critical path.
Path 1 duration: Duration(A) + Duration(C) + Duration(E) = 4 + 5 + 3 = 12 days.
Path 2 duration: Duration(B) + Duration(D) = 6 + 7 = 13 days.
The critical path is the longest path to the merge point, which is Path 2 (B->D) with a duration of 13 days. Activity F can only start on day 14 (after 13 full days have passed).
Now, let's analyze Path 1. Its total duration is 12 days. This means it has 1 day of slack before it would delay the start of activity F. This 1 day of slack is shared among all activities on Path 1 (A, C, and E). Therefore, the total float for activity C is 1 day. It can be delayed by 1 day without affecting the project's overall completion time.
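The float arithmetic above can be sketched in Python (a minimal two-path model for this specific network, not a general CPM solver):

```python
# Total float of an activity = (longest path through the network)
# minus (longest path passing through that activity).
durations = {"A": 4, "C": 5, "E": 3, "B": 6, "D": 7}
path1 = ["A", "C", "E"]   # merges into F
path2 = ["B", "D"]        # merges into F

len1 = sum(durations[t] for t in path1)   # 12 days
len2 = sum(durations[t] for t in path2)   # 13 days
critical = max(len1, len2)                # the critical path length

float_C = critical - len1  # slack shared by A, C, and E
print(len1, len2, float_C)  # 12 13 1
```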
Incorrect! Try again.
48You are designing a GitHub Actions workflow that builds a Docker image, runs tests against it, and then pushes it to a container registry, but only if the tests pass and the commit is on the main branch. The workflow has three jobs: build, test, and push. The test job depends on the build job. The push job depends on the test job. How would you configure the push job to meet these requirements?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Hard
A.needs: test with if: success() && github.ref_name == 'main'
B.needs: test with if: github.event_name == 'push' && github.ref_name == 'main'
C.needs: test with if: needs.test.result == 'success' && github.ref_name == 'main'
D.needs: test with if: always() && github.ref == 'refs/heads/main'
Correct Answer: needs: test with if: github.event_name == 'push' && github.ref_name == 'main'
Explanation:
This question tests the nuances of GitHub Actions conditional execution. The default behavior is that a job only runs if all its dependencies (needs) succeed. Therefore, checking needs.test.result == 'success' or success() is redundant for the success path; the job won't even start if test fails. The primary challenge is to restrict the push to the main branch.
Option A (success() && ...) is redundant.
Option C (needs.test.result == 'success' && ...) is also redundant for the same reason.
Option D (always() && ...) is incorrect because always() would cause the job to run even if the test job failed.
Option B is the most robust and idiomatic way. It correctly specifies the dependency with needs: test. The if condition correctly checks that the trigger event was a push and that the branch name (ref_name) is main. The implicit success dependency from needs: test handles the test-passing requirement automatically.
Incorrect! Try again.
49A project is suffering from 'integration hell,' where merging features developed in long-lived branches at the end of a release cycle is taking weeks. The SCM manager proposes a shift to Trunk-Based Development (TBD). What is the most critical prerequisite cultural and technical practice the team must adopt for TBD to be successful and not lead to a constantly broken main branch?
Software Configuration Management (SCM)
Hard
A.Mandating that all developers use the same IDE and development environment.
B.Enforcing a strict code-review process with at least three senior developer approvals for every commit.
C.Adopting a more complex branching model like Gitflow to manage the chaos.
D.Implementing a comprehensive, fast, and highly reliable automated testing suite that runs on every proposed change.
Correct Answer: Implementing a comprehensive, fast, and highly reliable automated testing suite that runs on every proposed change.
Explanation:
Trunk-Based Development relies on frequent, small commits to the main branch (the trunk). To prevent this trunk from being constantly broken, there must be a strong safety net. The most critical part of this safety net is a robust automated testing suite (including unit, integration, and sometimes end-to-end tests) that is executed automatically on every commit or pull request before it is merged. This CI (Continuous Integration) practice ensures that changes don't break existing functionality. While code reviews (Option B) are important, they are not sufficient to catch all regressions, and requiring three senior approvals per commit would become a bottleneck. Uniform IDEs (Option A) are a minor convenience, not a critical prerequisite. Adopting Gitflow (Option C) is the exact opposite of the proposed solution.
Incorrect! Try again.
50A team is developing a novel medical imaging analysis tool using machine learning. The user interface requirements are well-understood and stable, but the core ML algorithm's feasibility and performance are highly uncertain and require extensive experimentation. Which project management lifecycle model is most appropriate for this scenario?
Project management basics
Hard
A.Spiral model, because it explicitly manages high technical risk through prototyping and risk analysis in each iteration.
B.Scrum, because it allows for iterative development of the entire product.
C.V-Model, as it emphasizes verification and validation at each stage of development.
D.Waterfall model, because the UI requirements are stable.
Correct Answer: Spiral model, because it explicitly manages high technical risk through prototyping and risk analysis in each iteration.
Explanation:
The key challenge in this project is the high technical risk and uncertainty related to the core algorithm, even though other parts of the project are stable. The Spiral model is uniquely suited for this. Each loop of the spiral involves identifying objectives, evaluating alternatives, identifying and resolving risks (e.g., building a prototype of the ML algorithm), and planning the next iteration. This allows the team to tackle the high-risk component through experimentation and prototyping before committing to a full-scale implementation. Waterfall is unsuitable due to the high uncertainty. While Scrum is iterative, it doesn't have the same explicit, formal risk-assessment phase in each cycle as the Spiral model. The V-Model is a variation of Waterfall and shares its rigidity in the face of uncertainty.
Incorrect! Try again.
51A system is being estimated using Use Case Points (UCP). An 'API Gateway' is identified as an actor because it initiates requests to the system. A 'Logging Service' is also identified as an actor because the system sends it data. The API Gateway is a complex, message-based system, while the Logging Service is a simple 'fire-and-forget' REST endpoint. According to the standard UCP methodology, how should these actors be weighted?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Hard
A.API Gateway is 'Complex' (weight 3), and Logging Service is 'Simple' (weight 1).
B.API Gateway is 'Average' (weight 2), and Logging Service is 'Simple' (weight 1).
C.Both should be weighted as 'Complex' (weight 3) because they are non-human systems.
D.Both should be weighted as 'Simple' (weight 1) as they are secondary to the primary human user.
Correct Answer: API Gateway is 'Complex' (weight 3), and Logging Service is 'Simple' (weight 1).
Explanation:
This question tests the subtle rules for weighting actors in UCP. Actor complexity is not just about being human vs. system. It's about the interaction protocol.
A 'Simple' actor (weight 1) is another system with a defined API, like a REST service. The Logging Service fits this perfectly.
An 'Average' actor (weight 2) is another system that interacts through a protocol like TCP/IP or a database.
A 'Complex' actor (weight 3) is a human interacting through a GUI or a system with a complex, message-based protocol. The API Gateway, being a complex message-based system, correctly fits the 'Complex' category. Therefore, weighting the API Gateway as complex and the Logging Service as simple is the most accurate application of the UCP rules.
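The weighting above can be sketched as a small lookup (the weights 1/2/3 are the standard UCP values; the two actor classifications are taken from this explanation):

```python
# Unadjusted Actor Weight (UAW) contribution for the two system actors.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

actors = {
    "Logging Service": "simple",   # defined API (fire-and-forget REST)
    "API Gateway": "complex",      # complex message-based protocol
}

uaw = sum(ACTOR_WEIGHTS[kind] for kind in actors.values())
print(uaw)  # 4
```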
Incorrect! Try again.
52A GitHub Actions workflow has a build job that produces a large binary artifact. A subsequent test job, which runs on a different runner, needs this binary. To optimize the workflow, you use actions/upload-artifact in the build job and actions/download-artifact in the test job. Another job, deploy, also needs the same binary. How should the deploy job acquire the binary to be most efficient in terms of time and cost?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Hard
A.The build job should push the artifact to an external storage like S3, and deploy should pull from there.
B.It should run the build script again within the deploy job to regenerate the binary locally.
C.It should access the binary directly from the build job's runner filesystem.
D.It should use actions/download-artifact to download the artifact uploaded by the build job.
Correct Answer: It should use actions/download-artifact to download the artifact uploaded by the build job.
Explanation:
This question assesses the understanding of data persistence and sharing between jobs in GitHub Actions. Jobs run on separate, clean virtual machine instances and cannot access each other's filesystems directly (ruling out Option C). The intended mechanism for sharing data between jobs within the same workflow run is artifacts. The build job uploads the artifact once. Both the test and deploy jobs can then download and use this same artifact. This 'build once, use many times' pattern is highly efficient. Regenerating the binary (Option B) is wasteful of compute time and resources. While using external storage like S3 (Option A) is a valid pattern for sharing artifacts between different workflow runs or projects, it adds unnecessary complexity and latency for sharing data within a single workflow run compared to the built-in artifact mechanism.
Incorrect! Try again.
53A project's baseline plan is meticulously crafted. During execution, the client requests a significant change that adds new features. The project manager correctly follows the change control process, gets the change approved by the Configuration Control Board (CCB), and updates the project plan. This action is known as:
Project planning & monitoring
Hard
A.Re-baselining
B.Scope creep
C.Re-planning
D.Gold plating
Correct Answer: Re-baselining
Explanation:
This question differentiates between related but distinct project management concepts. 'Scope creep' (Option B) is the uncontrolled expansion of project scope without adjustments to time, cost, and resources. Since the change was formally approved and the plan was updated, it's not scope creep. 'Gold plating' (Option D) is when the team adds extra features that were not requested by the client, which is not the case here. 'Re-planning' (Option C) is a general term, but 'Re-baselining' (Option A) is the specific, correct term for formally updating the project's performance measurement baseline (the original scope, schedule, and cost plan) to reflect an approved major change. After re-baselining, EVM metrics will be calculated against this new baseline.
Incorrect! Try again.
54A project manager is using a Gantt chart to manage a complex software project with a resource-constrained team of 3 developers. The chart shows four parallel tasks (T1, T2, T3, T4), each requiring one developer and scheduled to start on the same day. However, since there are only 3 developers, one task must be delayed. This situation reveals which fundamental limitation of a basic Gantt chart?
Scheduling Techniques (CPM, PERT, Gantt Charts)
Hard
A.It cannot display the critical path of the project.
B.It does not inherently manage or visualize resource allocations and constraints.
C.It is unable to represent probabilistic task durations like PERT.
D.It fails to show the percentage of a task that has been completed.
Correct Answer: It does not inherently manage or visualize resource allocations and constraints.
Explanation:
While basic Gantt charts are excellent for visualizing schedules and dependencies (Task A must finish before Task B), they do not, by default, track resource allocation. The chart might show that four tasks can happen in parallel from a dependency standpoint, but it doesn't know that there are only three resources (developers) available to perform them. This leads to an unrealistic plan. Advanced project management tools can perform 'resource leveling', which adjusts the schedule shown on the Gantt chart to respect resource constraints, but this is an additional feature layered on top. The fundamental limitation of the chart itself is its lack of inherent resource management. Option A is true of a basic Gantt chart, but it is not what the scenario highlights. Option C is also a limitation, but not the one illustrated by the developer shortage. Option D is incorrect; progress bars showing percent complete are a common Gantt chart feature.
Incorrect! Try again.
55A team is deciding between Function Points (FP) and Use Case Points (UCP) for an early-stage estimate of a large business application. The team has a set of high-level business requirements but has not yet defined the detailed data structures or user interface. They have, however, documented the primary user interactions and system actors. In this specific scenario, what is the most significant advantage of UCP over FP?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Hard
A.UCP is better suited for early-stage estimation when detailed internal data structures (ILFs/EIFs) are unknown, as it focuses on externally visible functionality described by use cases.
B.UCP provides a more accurate final estimate because it is based on the COCOMO model.
C.UCP is simpler to calculate because it does not have any technical or environmental adjustment factors.
D.FP is always superior because it is an ISO standard and is not dependent on the quality of use case descriptions.
Correct Answer: UCP is better suited for early-stage estimation when detailed internal data structures (ILFs/EIFs) are unknown, as it focuses on externally visible functionality described by use cases.
Explanation:
This question requires a comparative analysis of the two methods' applicability. Function Point analysis requires identifying and classifying data functions (Internal Logical Files, External Interface Files) and transactional functions (External Inputs, External Outputs, External Inquiries). This requires a relatively detailed understanding of the system's data model. In the early stages described, this information is not available. Use Case Points, however, are derived from use cases and actors, which represent user interactions and system boundaries. This information is available. Therefore, UCP is much better suited for estimation early in the lifecycle when requirements are captured as use cases but before detailed design. Option B is incorrect (UCP is not based on COCOMO). Option C is incorrect; UCP has both a Technical Complexity Factor (TCF) and an Environmental Complexity Factor (ECF). Option D is an overgeneralization; FP is not 'always' superior, and its suitability depends on the information available.
Incorrect! Try again.
56In the context of SCM, what is the most precise definition of 'baseline drift' and what is its primary cause in large projects?
Software Configuration Management (SCM)
Hard
A.The divergence of a feature branch from the main branch, caused by not merging frequently.
B.A shift in the project's requirements mandated by the client after the project has started.
C.The gradual decrease in code quality over time, caused by inexperienced developers.
D.The uncontrolled and undocumented incorporation of changes into an established baseline, often caused by an informal or bypassed change control process.
Correct Answer: The uncontrolled and undocumented incorporation of changes into an established baseline, often caused by an informal or bypassed change control process.
Explanation:
This question tests the precise definition of a key SCM term. A 'baseline' is a formally agreed-upon version of a configuration item (e.g., requirements document, code version) that serves as a reference point for further development. 'Baseline drift' specifically refers to the erosion of this reference point through a series of small, unapproved, or poorly documented changes. The primary cause is the failure to adhere to the formal change control process, where changes are proposed, reviewed, approved, and then incorporated to create a new baseline. Option A describes branch divergence, a related but different problem. Option C is about code quality, not configuration control. Option B describes a formal change request, which, if handled correctly, leads to re-baselining, not drift.
Incorrect! Try again.
57Consider the following GitHub Actions workflow snippet designed to build and test code: on: [push, pull_request]. The development team observes that when they push a new commit to a branch for which a pull request is already open, the workflow runs twice. Why does this happen and what is the most effective way to prevent this redundant run?
Introduction to CI/CD tools (GitHub Actions, GitHub CI/CD workflows)
Hard
A.The push event triggers a run on the branch, and the pull_request event also triggers a run for the PR update. The fix is to trigger only on pull_request and on push to the main branch specifically, like: on: push: branches: [main] pull_request: branches: [main].
B.One workflow is running on the base repository and one on the fork. The fix is to use pull_request_target instead.
C.This is a bug in GitHub Actions; workflows should not run twice. The only fix is to contact GitHub support.
D.The first run is for the push event on the branch, and the second is for the pull_request event, which is updated by the new commit. To fix this, add a condition: if: github.event_name == 'push' && github.ref == 'refs/heads/main' || github.event_name == 'pull_request'.
Correct Answer: The push event triggers a run on the branch, and the pull_request event also triggers a run for the PR update. The fix is to trigger only on pull_request and on push to the main branch specifically, like: on: push: branches: [main] pull_request: branches: [main].
Explanation:
This is a common and subtle issue in GitHub Actions. When you have on: [push, pull_request], a commit pushed to a branch with an open PR triggers two events: the push event on the branch itself, and the pull_request event (synchronize activity type) on the PR. This results in two workflow runs. The goal is typically to run tests for every PR commit, and also for every commit that lands on main.
Option A provides the most idiomatic and effective solution: it configures the workflow to run for pushes only to the main branch, and for any updates to pull requests that target the main branch. This eliminates the duplicate run on feature branches while maintaining CI coverage for both pre-merge and post-merge scenarios. Option D's complex if condition is a less clean way to achieve the same effect and can have unintended side effects. Option B is incorrect; both runs occur in the base repository's context, and pull_request_target addresses a different (security-related) scenario. Option C misdiagnoses the problem; the double run is documented behavior, not a bug.
Incorrect! Try again.
58A project is reported to have a Schedule Performance Index (SPI) of 1.2 and a Cost Performance Index (CPI) of 0.8. Which statement provides the most accurate and insightful interpretation of the project's status?
Project planning & monitoring
Hard
A.The project is behind schedule but under budget, so the variances will likely balance out over time.
B.The project is ahead of schedule and under budget, indicating excellent performance.
C.The project is performing more work than planned but is spending more money than budgeted for the work accomplished, indicating a potential 'crash' effort.
D.The project's performance cannot be determined without knowing the Planned Value (PV).
Correct Answer: The project is performing more work than planned but is spending more money than budgeted for the work accomplished, indicating a potential 'crash' effort.
Explanation:
This question requires a deep interpretation of EVM metrics, not just a superficial definition.
SPI = EV / PV = 1.2. Since SPI > 1, it means Earned Value (EV) is greater than Planned Value (PV). The team has completed more work than was scheduled to be done by this point. The project is ahead of schedule.
CPI = EV / AC = 0.8. Since CPI < 1, it means Earned Value (EV) is less than Actual Cost (AC). The project is spending more money than the value of the work it has accomplished. The project is over budget.
Combining these two insights: the team is working faster than planned, but at a high cost per unit of work. This pattern is a classic indicator of 'crashing' the schedule: adding extra resources (e.g., overtime, more staff) to accelerate work, which gets the project ahead of schedule but at a cost premium. Option C captures this complex dynamic. Option B is wrong on budget (CPI < 1 means over budget). Option A is wrong on both counts. Option D is incorrect; the two indices alone are sufficient for this interpretation.
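The interpretation rule above can be sketched as a minimal classifier (thresholds at 1.0; function name is illustrative):

```python
# SPI > 1 -> ahead of schedule; CPI < 1 -> over budget.
def interpret(spi: float, cpi: float) -> str:
    schedule = ("ahead of schedule" if spi > 1
                else "behind schedule" if spi < 1 else "on schedule")
    cost = ("over budget" if cpi < 1
            else "under budget" if cpi > 1 else "on budget")
    return f"{schedule}, {cost}"

print(interpret(1.2, 0.8))  # ahead of schedule, over budget
```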
Incorrect! Try again.
59During a Function Point count, a 'User Profile' screen is analyzed. This screen allows a user to view their data (retrieved from a 'Users' ILF), update their data (updating the 'Users' ILF), and also displays a list of recent orders (retrieved from an 'Orders' EIF, as it's owned by another application). This single screen corresponds to which set of transactional functions?
Cost estimation methods (Function Points, Use Case Points, COCOMO (intro))
Hard
A.One External Input (EI) for the entire screen.
B.One External Inquiry (EQ) to view user data and recent orders, and one External Input (EI) to update.
C.One External Inquiry (EQ) to view user data, one External Output (EO) to display orders, and one External Input (EI) to update.
D.One External Inquiry (EQ) to view, and one External Input (EI) to update.
Correct Answer: One External Inquiry (EQ) to view user data and recent orders, and one External Input (EI) to update.
Explanation:
This question tests the ability to correctly decompose a user interface into its fundamental transactional functions according to IFPUG rules. The key is that a transactional function is the smallest unit of activity that is meaningful to the user and leaves the business in a consistent state.
Viewing Data: The user initiates a request to see their profile. The system retrieves data from two different files (ILF and EIF) and presents it. This entire 'read' operation, even with multiple data sources, is typically counted as a single External Inquiry (EQ) because it's a single, user-initiated request for information. An EO requires derived data, which isn't specified here, making EQ the better fit for a direct retrieval and presentation.
Updating Data: The user's action of saving changes is a distinct business process that modifies an ILF. This is a classic External Input (EI).
Therefore, the screen comprises two distinct transactional functions: one EQ for the initial data load/view, and one EI for the update action. Option C is a common mistake; EO is for derived data/reports, whereas EQ is for direct queries, which is more applicable here.
Incorrect! Try again.
60A project manager is conducting a risk assessment and identifies a potential positive risk (an opportunity): a new, more efficient open-source library could be released mid-project, which could significantly reduce development time for a key module. What is the most appropriate risk response strategy according to the PMBOK Guide for this opportunity?
Project management basics
Hard
A.Mitigate: Take steps to increase the likelihood of the library being released on time.
B.Exploit: Actively change the project plan to depend entirely on this new library, assuming it will be released.
C.Enhance: Allocate resources to a small R&D task to monitor the library's progress and build a prototype integration, increasing the potential benefit if the opportunity occurs.
D.Accept: Do nothing and simply take advantage of the library if it happens to be released and is stable.
Correct Answer: Enhance: Allocate resources to a small R&D task to monitor the library's progress and build a prototype integration, increasing the potential benefit if the opportunity occurs.
Explanation:
This question requires knowledge of the specific risk response strategies for positive risks (opportunities).
Exploit is the most aggressive strategy, seeking to make the opportunity definitely happen. This is too risky as the project would be completely dependent on an external event it doesn't control.
Mitigate is a strategy for negative risks (threats), not opportunities.
Accept is a passive strategy. While valid, it's not the most appropriate proactive strategy.
Enhance is the correct strategy here. It involves taking proactive steps to increase the probability and/or impact of the opportunity. By allocating a small amount of time for R&D and prototyping, the team isn't fully dependent on the library but is perfectly positioned to integrate it quickly and effectively if it becomes available, thus maximizing (enhancing) the potential benefit (reduced development time).