STE Question Bank Chapter 4
“Chapter 4”
2 MARKS QUESTIONS:
1. What are the different causes of software defects?
Miscommunication of requirements
Unrealistic deadlines
Lack of design/coding experience
Human errors
Lack of version control
Third-party tool issues
Last-minute changes
Poor testing skills.
Static Techniques: Code reviews and inspections without executing the code.
Dynamic Techniques: Running test cases to identify bugs.
Operational Techniques: Defects found in production environments.
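For illustration, the sketch below (a hypothetical average() function and unit test, not taken from the question bank) shows a dynamic technique in action: the code is executed against test data, whereas a static technique would review or inspect the same function without running it.

```python
import unittest

def average(values):
    """Unit under test (hypothetical example)."""
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    # Dynamic technique: the code is actually executed with test inputs.
    # A static technique (review/inspection) examines average() without running it.
    def test_average_of_three_values(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_average_of_single_value(self):
        self.assertEqual(average([10]), 10)

if __name__ == "__main__":
    unittest.main()
```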
4 MARKS QUESTIONS:
Severity-wise: This classification is based on the severity of the defect's impact on the
system:
o Major: A defect causing a significant departure from requirements or failure
in functionality.
o Minor: A defect that doesn't significantly impact the product's execution.
o Fatal: A defect that causes the system to crash or other applications to
malfunction.
Work product-wise: Defects arising from specific project deliverables:
o System Study Document (SSD): Defects found in the initial system analysis.
o Functional Specification Document (FSD): Defects in the functional design.
o Source Code: Errors in the actual code implementation.
Type of Errors-wise: Defects based on error type:
o Comments Error: Inadequate or misleading comments in the code.
o Computation Error: Incorrect calculation or logic errors.
o Data Error: Incorrect data population or retrieval in the database.
o Interface Error: Issues with internal or external interfaces between modules.
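As a rough sketch of how these classifications might be recorded in a defect log (all names such as Defect, Severity, and ErrorType are hypothetical, not part of any prescribed tool):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):          # severity-wise classification
    MINOR = "minor"
    MAJOR = "major"
    FATAL = "fatal"

class ErrorType(Enum):         # type-of-errors-wise classification
    COMMENTS = "comments error"
    COMPUTATION = "computation error"
    DATA = "data error"
    INTERFACE = "interface error"

@dataclass
class Defect:
    defect_id: int
    work_product: str          # work product-wise: "SSD", "FSD", "Source Code"
    severity: Severity
    error_type: ErrorType
    description: str

# Example record: a fatal computation error found in the source code.
d = Defect(101, "Source Code", Severity.FATAL, ErrorType.COMPUTATION,
           "Division by zero crashes the billing module")
print(d.severity.value, d.error_type.value)
```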
A diagram of the defect prevention process would show a loop of activities, including planning, analyzing, reviewing, and improving processes.
4. Explain defect life cycle to identify the status of a defect with a proper labeled
diagram.
The defect life cycle outlines the stages a defect goes through from discovery to closure: a defect is typically reported as New, then Assigned to a developer, Opened and Fixed, sent for Retest, and finally Verified and Closed; a defect that fails retesting is Reopened, while invalid or postponed reports may be marked Rejected or Deferred.
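A simplified sketch of these statuses as a state machine (the transition table below reflects the commonly taught life cycle and is only an assumption, not a prescribed model):

```python
from enum import Enum

class DefectStatus(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    OPEN = "open"
    FIXED = "fixed"
    RETEST = "retest"
    VERIFIED = "verified"
    REOPENED = "reopened"
    CLOSED = "closed"

# Allowed moves between statuses in this simplified life cycle.
TRANSITIONS = {
    DefectStatus.NEW:      {DefectStatus.ASSIGNED},
    DefectStatus.ASSIGNED: {DefectStatus.OPEN},
    DefectStatus.OPEN:     {DefectStatus.FIXED},
    DefectStatus.FIXED:    {DefectStatus.RETEST},
    DefectStatus.RETEST:   {DefectStatus.VERIFIED, DefectStatus.REOPENED},
    DefectStatus.VERIFIED: {DefectStatus.CLOSED},
    DefectStatus.REOPENED: {DefectStatus.OPEN},
    DefectStatus.CLOSED:   set(),
}

def move(current: DefectStatus, target: DefectStatus) -> DefectStatus:
    """Change a defect's status only along a valid life-cycle edge."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"invalid transition: {current.value} -> {target.value}")
    return target

status = move(DefectStatus.NEW, DefectStatus.ASSIGNED)   # new -> assigned
```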
“Chapter 5”
2 MARKS QUESTIONS:
1. Describe any two limitations of testing:
Reduces time: Automated tests are faster and can run repeatedly.
Improves accuracy: Eliminates human error and provides consistent results.
Efficient testing: Allows for complex tests to be run more frequently.
Static Testing Tools: Analyze code without executing it (e.g., flow analyzers,
coverage analyzers) to ensure correctness.
Dynamic Testing Tools: Execute code and test its behavior with live data (e.g., test
drivers, emulators).
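To illustrate what a dynamic testing tool such as a test driver automates, here is a minimal hand-written driver (the discount() function and its test data are hypothetical):

```python
# A tiny test driver: it feeds input data to the unit under test, runs it,
# and reports pass/fail -- the job that dynamic testing tools automate at scale.

def discount(price, percent):              # unit under test (hypothetical)
    return price - price * percent / 100

test_data = [
    # (price, percent, expected)
    (100, 10, 90.0),
    (200, 0, 200.0),
    (50, 50, 25.0),
]

for price, percent, expected in test_data:
    actual = discount(price, percent)
    verdict = "PASS" if abs(actual - expected) < 1e-9 else "FAIL"
    print(f"discount({price}, {percent}) = {actual}, expected {expected}: {verdict}")
```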
5. What do you mean by software metrics? List any three types of metrics:
Software metrics are measurements used to assess the quality and performance of software
processes and products.
Three types of metrics: product metrics, process metrics, and progress metrics (see the sketch below for one common example).
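For example, defect density (defects per thousand lines of code) is a commonly used product metric; a minimal sketch with made-up figures:

```python
def defect_density(defects_found: int, size_loc: int) -> float:
    """Defects per KLOC (thousand lines of code)."""
    return defects_found / (size_loc / 1000)

# Hypothetical figures: 45 defects found in a 30,000-line product.
print(defect_density(defects_found=45, size_loc=30_000))   # 1.5 defects per KLOC
```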
4 MARKS QUESTIONS:
Time-Consuming: Manual testing requires significant time and effort, especially for
large and complex applications. Covering all possible test cases takes a long time,
making it inefficient for frequent releases.
Less Accuracy: Since manual testing involves human intervention, there's a higher
risk of errors or overlooking bugs. This can lead to unreliable results, especially when
running repetitive tests.
Impractical for Performance Testing: Manually simulating the behavior of
thousands of users or testing for performance under heavy loads is nearly impossible.
Automation tools can handle such complex scenarios better.
Repetitive and Tedious: Running the same tests repeatedly, such as in regression
testing, can become monotonous for testers and may lead to mistakes due to fatigue or
inattention.
Advantages:
o Human Insight: Manual testing allows testers to find issues related to user
experience and interface design, which automated tools might miss. Human
judgment is critical in identifying aesthetic and usability issues.
o Flexibility: Manual testing offers greater flexibility and adaptability in
handling unplanned changes or conducting exploratory testing without
predefined test scripts.
o Simplicity: Manual testing doesn't require special technical knowledge or
tools for basic tests, making it suitable for smaller projects or early-stage
development.
Disadvantages:
o Time-Intensive: Manual testing for large and complex systems takes
significantly more time than automated testing, especially for repetitive tasks
like regression testing.
o Prone to Errors: Human testers are more likely to make mistakes or miss
issues, especially when performing repetitive tasks.
o Limited Scalability: Manual testing becomes impractical when testing for
performance, load, or handling massive amounts of data. Automation is better
suited for such large-scale scenarios.
3. Differentiate between manual testing and automation testing:
Saves Time: Automation allows repetitive tests like regression tests to be executed
quickly, saving a significant amount of time compared to manual testing.
Improves Accuracy: Automated tests eliminate the risk of human errors that can
occur during manual testing, ensuring that the same tests are performed consistently.
Increases Test Coverage: Automation enables the execution of large and complex
test suites, allowing more extensive coverage of different scenarios within a shorter
period.
Enhances Productivity: Automation frees up testers to focus on more complex or
exploratory testing tasks by handling repetitive and routine tests automatically. It
allows testing to be done even during off-hours, ensuring faster feedback loops.
Increased Efficiency: Automated tools execute tests much faster than manual testing.
They can run a large number of tests quickly and repeatedly, which improves overall
testing efficiency.
Improved Consistency: Automation ensures that tests are executed the same way
each time, reducing variability in test execution and ensuring consistent results.
Long-term Cost Savings: While automation tools may have a higher upfront cost for
setup, they save money in the long run by reducing manual labor and allowing for
faster test execution.
Handles Complex Scenarios: Automated tools can handle complex testing scenarios,
such as performance or load testing, which are difficult or impossible to achieve
manually. They can simulate thousands of virtual users and execute tests across
multiple environments simultaneously.
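A minimal sketch of such a repeatable, data-driven automated test, assuming pytest as the automation tool and a hypothetical apply_gst() function as the unit under test:

```python
import pytest

def apply_gst(amount, rate=0.18):          # hypothetical unit under test
    return round(amount * (1 + rate), 2)

# The same cases run identically on every execution, which is what gives
# automation its speed, consistency, and suitability for regression testing.
@pytest.mark.parametrize("amount, expected", [
    (100, 118.0),
    (250, 295.0),
    (0, 0.0),
])
def test_apply_gst(amount, expected):
    assert apply_gst(amount) == expected
```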
6. Define metrics and measurement. Explain the need for software measurement:
Product Metrics: These metrics measure the characteristics of the software product
itself, such as its size, complexity, functionality, or performance.
Process Metrics: These metrics focus on the effectiveness and efficiency of the
software development process, such as the time taken to complete tasks or the number
of defects detected during each phase.
Progress Metrics: These metrics track the progress of the project, such as the number
of tests completed, the number of bugs fixed, or the percentage of tasks completed.
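A rough sketch of how such progress metrics could be computed from raw test-execution counts (all figures and field names below are hypothetical):

```python
def progress_report(executed: int, planned: int, passed: int,
                    defects_fixed: int, defects_reported: int) -> dict:
    """Simple progress metrics derived from test and defect counts."""
    return {
        "execution_%": round(100 * executed / planned, 1),
        "pass_%": round(100 * passed / executed, 1) if executed else 0.0,
        "fix_%": round(100 * defects_fixed / defects_reported, 1) if defects_reported else 0.0,
    }

print(progress_report(executed=180, planned=200, passed=171,
                      defects_fixed=38, defects_reported=45))
# {'execution_%': 90.0, 'pass_%': 95.0, 'fix_%': 84.4}
```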