STE (Software Testing) – Short Syllabus Notes


Unit – I: Basics of Software Testing and Testing Methods

1.1 Software Testing, Objectives of Testing

• Definition: Software testing is the process of checking whether a program works correctly and of finding problems (bugs).
• Objective: Ensure the software works as expected, is as free of defects as possible, and meets user requirements.

1.2 Failure, Error, Fault, Defect, Bug Terminology

• Failure: The program does not behave as expected when it is run.
• Error: A mistake made by the programmer.
• Fault/Defect/Bug: A problem in the code that can cause incorrect results. An error introduces a fault, and executing the fault produces a failure (see the sketch below).
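A tiny Python sketch of how the three terms connect (the function and numbers are made up for illustration):

# Hypothetical example: the programmer's ERROR (a wrong assumption)
# produces a FAULT in the code, and executing the fault causes a FAILURE.

def average(numbers):
    # FAULT: divides by a hard-coded 2 instead of len(numbers);
    # the ERROR was assuming the list always has two elements.
    return sum(numbers) / 2

print(average([10, 20]))      # 15.0 -- the fault stays hidden
print(average([10, 20, 30]))  # FAILURE: prints 30.0 instead of 20.0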

1.3 Test Case, When to Start and Stop Testing of Software (Entry and Exit Criteria)

• Test Case: A document describing the steps, inputs, and expected results used to check whether the program works (a minimal sketch follows this list).
• Entry Criteria: The conditions under which testing should start (e.g., the code is ready and the test environment is available).
• Exit Criteria: The conditions under which testing can stop (e.g., no major bugs remain and the planned tests have been run).
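As a rough illustration, a test case can be written down as structured data; the field names below are hypothetical, and real templates usually add preconditions, priority, and so on:

# A minimal, hypothetical test-case record (field names are illustrative).
test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "steps": ["Open the login page",
              "Enter a valid username and password",
              "Click the Login button"],
    "input": {"username": "alice", "password": "secret123"},
    "expected_result": "User lands on the home page",
    "actual_result": None,   # filled in when the test is executed
    "status": "Not Run",     # becomes Pass or Fail after execution
}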

1.4 Verification and Validation (V Model), Quality Assurance, Quality Control

• Verification: Checking that the software conforms to its design and requirements ("Are we building the product right?").
• Validation: Ensuring the final product works as the user wants ("Are we building the right product?").
• V Model: A development model in which each development phase has a corresponding testing phase.
• Quality Assurance (QA): Ensuring processes are in place for producing high-quality software.
• Quality Control (QC): Checking the final product for defects.

1.5 Methods of Testing: Static and Dynamic Testing

• Static Testing: Examining the software without running the code (e.g., code reviews).
• Dynamic Testing: Running the software to find errors.

1.6 White Box Testing

• Testing the internal code and logic. Includes:
o Inspections: Formal review of the code against a checklist.
o Walkthroughs: The author walks team members through the code.
o Technical Reviews: Formal peer reviews of the code.
o Functional Testing: Checking that software features behave correctly.
o Code Coverage Testing: Ensuring the tests exercise all of the code (a branch-coverage sketch follows this list).
o Code Complexity Testing: Measuring how complex the code is (e.g., cyclomatic complexity).
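A minimal sketch of the coverage idea (the function is made up; real projects measure this with tools such as coverage.py):

def classify(age):
    if age >= 18:          # branch 1
        return "adult"
    return "minor"         # branch 2

# One test exercises only branch 1, leaving the "minor" path untested:
assert classify(21) == "adult"

# A second test exercises branch 2, reaching full branch coverage.
# Cyclomatic complexity here is 2 (one decision + 1), which matches
# the minimum number of tests needed to cover every branch.
assert classify(12) == "minor"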

1.7 Black Box Testing

• Testing the software without knowledge of the internal code. Includes (see the sketch after this list):
o Requirement-Based Testing: Deriving tests directly from the requirements.
o Boundary Value Analysis: Testing values at the edges of an input range (e.g., smallest/largest valid values).
o Equivalence Partitioning: Dividing inputs into groups that should behave the same, so fewer cases need to be tested.
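A minimal sketch of both techniques, assuming a hypothetical field that accepts integers from 1 to 100:

def accepts(value):
    return 1 <= value <= 100

# Equivalence Partitioning: three classes -- below range, in range,
# above range -- tested with one representative value each.
assert accepts(-5) is False   # class: below range
assert accepts(50) is True    # class: in range
assert accepts(200) is False  # class: above range

# Boundary Value Analysis: values at and around each edge of the range.
for value, expected in [(0, False), (1, True), (2, True),
                        (99, True), (100, True), (101, False)]:
    assert accepts(value) is expected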

Unit – II: Types and Levels of Testing

2.1 Levels of Testing

• Testing happens at different stages of development.

2.1.1 Unit Testing

• Testing individual parts (units) of the program in isolation.
• Driver/Stub: Dummy programs used to test a unit in isolation; a driver calls the unit under test, while a stub stands in for a unit it depends on (see the sketch below).
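A minimal sketch with made-up names: the stub replaces a dependency that is not ready, and the driver is the test code that calls the unit:

def get_tax_rate_stub(region):
    # Stub: returns a fixed, known value instead of calling a real service.
    return 0.25

def compute_total(price, region, get_tax_rate):
    # Unit under test: depends on a tax-rate lookup that may not exist yet.
    return price * (1 + get_tax_rate(region))

def test_compute_total():
    # Driver: supplies inputs, injects the stub, and checks the result.
    assert compute_total(100.0, "EU", get_tax_rate_stub) == 125.0

test_compute_total()
print("unit test passed")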

2.2 Integration Testing

• Checking whether the parts of the software work together (a bottom-up sketch follows this list).
• Types:
o Top-Down Integration: Testing starts from the top-level modules, with stubs standing in for lower-level ones.
o Bottom-Up Integration: Testing starts from the lower-level modules, with drivers exercising them.
o Bi-Directional (Sandwich) Integration: Combines both approaches.
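A minimal bottom-up sketch with hypothetical modules: the lower-level unit is tested first on its own, then the upper level is tested with the real lower-level code instead of a stub:

def parse_amount(text):             # lower-level module
    return round(float(text), 2)

def apply_discount(text, percent):  # upper-level module, calls parse_amount
    return parse_amount(text) * (100 - percent) / 100

# Step 1 (unit level): a driver exercises the lower module alone.
assert parse_amount("19.999") == 20.0

# Step 2 (integration): the upper module runs with the real lower module.
assert apply_discount("200.00", 25) == 150.0
print("integration tests passed")
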
2.3 Testing on Web Applications

• Testing applications that run on the internet (e.g., in a web browser).
• Performance Testing: Checking how fast and efficiently the software responds (a minimal timing sketch follows).
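A minimal response-time sketch (the URL is a placeholder; real performance testing normally uses dedicated tools such as Apache JMeter):

import time
import urllib.request

URL = "https://example.com/"  # placeholder for the application under test

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    response.read()
elapsed = time.perf_counter() - start

print(f"GET {URL} took {elapsed:.3f} s")
assert elapsed < 2.0, "response slower than the assumed 2-second target"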

2.4 Stress Testing, Security Testing, Client-Server Testing

• Stress Testing: Checking how the software behaves under heavy load, beyond its normal capacity (a minimal sketch follows this list).
• Security Testing: Ensuring the software protects data and resists attacks.
• Client-Server Testing: Testing applications deployed in a client-server setup.
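A minimal stress-testing sketch: many concurrent requests against the system, here simulated with a local function standing in for a real server endpoint:

from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(i):
    time.sleep(0.01)          # simulate the work done per request
    return f"ok-{i}"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(500)))
elapsed = time.perf_counter() - start

assert all(r.startswith("ok") for r in results)
print(f"500 requests with 50 workers finished in {elapsed:.2f} s")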

Acceptance Testing

• Checking whether the software meets user needs. Types:
o Alpha Testing: Testing done at the developer's site, typically by internal users or testers.
o Beta Testing: Testing done by real users in their own environment.
o GUI Testing: Checking the visual parts of the software (screens, menus, controls).

Unit – III: Test Management

3.1 Test Planning

• Creating a plan for testing:
o Setting goals, defining testing methods, identifying resources, and organizing tasks and schedules.

3.2 Test Management

• Managing the test infrastructure (tools, processes) and the people involved in testing.

3.3 Test Process

• Steps involved in testing:
o Baseline: Establishing an approved base version of the test plan.
o Specification: Writing detailed test cases.

3.4 Test Reporting

• Recording test results and summarizing the outcomes in reports.

Unit – IV: Defect Management

4.1 Defect Classification, Defect Management Process

• Defect Classification: Grouping defects by severity, impact, or priority.
• Defect Management: The steps used to identify, report, track, and fix defects.

4.2 Defect Life Cycle, Defect Template

• Defect Life Cycle: The stages a defect passes through (e.g., new, assigned, fixed, retested, closed).
• Defect Template: A standard form for recording defect details (a minimal sketch follows).
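A minimal, hypothetical defect record in code form; real templates usually add fields such as environment, build number, reporter, and attachments:

from dataclasses import dataclass, field

LIFE_CYCLE = ["New", "Assigned", "Fixed", "Retested", "Closed"]

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: str                 # e.g., Critical / Major / Minor
    priority: str                 # e.g., High / Medium / Low
    steps_to_reproduce: list = field(default_factory=list)
    status: str = "New"           # moves through LIFE_CYCLE as work progresses

bug = Defect("BUG-42", "Login fails for names with spaces",
             severity="Major", priority="High",
             steps_to_reproduce=["Open the login page",
                                 "Enter the user name 'ann smith'",
                                 "Click Login"])
print(bug.status)  # -> New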

4.3 Estimate Expected Impact of a Defect, Techniques for Finding Defects, Reporting Defects

• Impact of Defects: An estimate of how much damage a defect would cause if it reached users.
• Techniques for Finding Defects: Tools or methods used to identify issues (e.g., reviews, testing, static analysis).
• Reporting Defects: Recording defects and communicating them to the developers.

Unit – V: Testing Tools and Measurements

5.1 Manual Testing and Need for Automated Tools

• Manual Testing: Testing performed by people without automation tools.
• Automated Testing: Using tools to run tests faster and more consistently.

5.2 Advantages and Disadvantages of Using Tools

• Advantages: Faster execution, consistent results, reduced human error.
• Disadvantages: Tools can be expensive and require time to learn and maintain.

5.3 Selecting a Testing Tool

• Choosing the right tool based on project needs, budget, and team skills.

5.4 When to Use Automated Tools, Testing Using Automated Tools

• Use automated tools for repetitive or large-scale tests (a parametrized-test sketch follows).
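A sketch of automating a repetitive check with pytest's parametrize feature (the validation function is made up; run with: pytest <file>):

import pytest

def is_valid_pin(pin):
    # Hypothetical function under test: a PIN is exactly four digits.
    return pin.isdigit() and len(pin) == 4

@pytest.mark.parametrize("pin, expected", [
    ("1234", True),
    ("0000", True),
    ("12a4", False),   # non-digit character
    ("123", False),    # too short
    ("12345", False),  # too long
])
def test_is_valid_pin(pin, expected):
    assert is_valid_pin(pin) is expected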

5.5/5.6 Metrics and Measurement

• Metrics: Standards used to measure testing performance (a short sketch follows this list). Types include:
o Product Metrics: Measure product quality (e.g., defect density).
o Process Metrics: Measure the efficiency of the testing process.
o Object-Oriented Metrics: Metrics for programs built with object-oriented designs.
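A small sketch of two widely used metrics; the formulas are standard, but the numbers are made up for illustration:

defects_found = 18
size_kloc = 12.0                  # product size in thousands of lines of code
tests_run, tests_passed = 240, 228

defect_density = defects_found / size_kloc     # product metric: defects/KLOC
pass_rate = 100.0 * tests_passed / tests_run   # process metric: % tests passing

print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.50
print(f"Test pass rate: {pass_rate:.1f}%")                   # 95.0%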
