UNIT 4 OOSE
Testing - Unit testing - Black box testing - White box testing - Integration and System testing - Regression testing - Debugging - Program analysis - Symbolic execution - Model checking - Case study
Testing
Testing is the process of evaluating a software application to ensure that it performs correctly,
meets the requirements, and is free from bugs or defects. The main purpose of testing is to
identify errors in the software before it is released to users. It is an important part of the
software development life cycle (SDLC).
2. Test Planning
Testers plan how testing will be done. This includes deciding on resources, tools, schedule,
and testing techniques.
Example:
For the library system, the test plan might say:
• Functional testing will be manual
• Testers will check features like "issue book" and "return book"
• Testing will start next Monday and finish in 2 weeks
5. Test Execution
Testers run the test cases on the actual software to see if it behaves as expected.
Example:
The tester logs in, searches for "Harry Potter," clicks "Issue," and checks if the system
behaves as described in the test case.
7. Bug Fixes
The development team fixes the bugs reported by testers. After fixing, testers re-test the
software to confirm the bug is resolved.
Example:
The developer updates the code to prevent the crash. The tester runs the test again to make
sure the "Issue Book" function now works without errors.
8. Software Release
Once all major bugs are fixed and testing is complete, the software is released to the users
or client.
Example:
The library system passes all tests, so it is deployed for real-time use in schools or colleges.
Objectives of Testing
Testing is done to achieve several important goals, such as detecting defects early, verifying that the software meets its requirements, and building confidence that the system is ready for release.
Types of Testing
Testing can be broadly classified into the following types:
a. Manual Testing
• Performed by human testers without automation tools.
• Testers follow a set of test cases and observe the application’s behavior.
• Useful for exploratory, usability, or ad-hoc testing.
• Example: Imagine you're testing a login page on a website. A tester manually types
in different usernames and passwords (valid and invalid), clicks the "Login" button,
and observes the results.
b. Automated Testing
• Uses software tools to run tests automatically.
• Faster and more reliable for repetitive tests.
• Examples of tools: Selenium, JUnit, TestNG, etc.
• Example: Suppose you have an e-commerce website and want to test whether the "Add to
Cart" button works on every product page. A test script written with a tool like Selenium can
automatically open each product page, click the button, and verify that the item was added to
the cart; a minimal script sketch is shown below.
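A minimal sketch of such an automated check, using the Selenium WebDriver Java API, is given below. The URL and the element ids "add-to-cart" and "cart-count" are assumptions made only for illustration.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AddToCartCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();                        // opens a Chrome browser session
        try {
            driver.get("https://shop.example.com/product/123");      // assumed product page URL
            driver.findElement(By.id("add-to-cart")).click();        // assumed button id
            String count = driver.findElement(By.id("cart-count")).getText(); // assumed cart badge id
            System.out.println("Cart count after click: " + count);  // expect "1" when the cart started empty
        } finally {
            driver.quit();                                            // always close the browser
        }
    }
}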
Unit testing
• Unit testing is a type of software testing that focuses on individual units or
components of a software system.
• The purpose of unit testing is to validate that each unit of the software works as
intended and meets the requirements.
• Unit testing is typically performed by developers, and it is performed early in the
development process before the code is integrated and tested as a whole system.
• Unit tests are automated and are run each time the code is changed to ensure that new
code does not break existing functionality.
Methods
• Equivalence Partitioning → divide the input data into equivalence classes and test one representative value from each class.
• Boundary Value Analysis → test values at the boundaries of the input ranges (minimum, maximum, and just inside/outside each boundary).
• Decision Table Testing → testing systems based on a combination of inputs.
• Error Guessing → Error Guessing is a technique based on the tester’s experience to
guess the areas of the application that might be prone to errors.
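As a small illustration of equivalence partitioning and boundary value analysis, the JUnit 5 sketch below tests a hypothetical AgeValidator.isValidAge(int age) method that is assumed to accept ages 18 to 60; the class, method, and valid range are assumptions.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class AgeValidatorTest {
    // Equivalence classes: below range, inside range, above range
    @Test
    void rejectsAgeBelowRange()  { assertFalse(AgeValidator.isValidAge(10)); }
    @Test
    void acceptsAgeInsideRange() { assertTrue(AgeValidator.isValidAge(35)); }
    @Test
    void rejectsAgeAboveRange()  { assertFalse(AgeValidator.isValidAge(75)); }

    // Boundary values: just outside, on, and just inside each boundary
    @Test
    void checksLowerBoundary() {
        assertFalse(AgeValidator.isValidAge(17));
        assertTrue(AgeValidator.isValidAge(18));
    }
    @Test
    void checksUpperBoundary() {
        assertTrue(AgeValidator.isValidAge(60));
        assertFalse(AgeValidator.isValidAge(61));
    }
}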
1. Functional Testing
This type of testing verifies what the system does. It focuses on checking the application’s
behavior against the defined functional requirements.
Purpose:
To ensure that each function of the software operates according to the requirements
specification.
Examples:
• Login Page – The system should allow login with valid credentials and reject invalid
ones.
• Registration Form – Fields like email, password, and phone number should be
validated.
• Payment Processing – The system should correctly handle transactions and display
confirmation.
• Form Submission – Clicking the “Submit” button should save or process the data as
intended.
Different levels of testing:
Under functional Black Box Testing, different levels of testing can be conducted:
1. Unit Testing (Typically White Box, but can have Black Box Approach)
Definition:
Unit testing involves testing individual components or functions of a software system in
isolation to verify that they work correctly.
Black Box Approach:
Although unit testing is usually white box (done by developers), it can also be done using a
black box approach, where:
• Each unit (like a function or method) is treated as a "black box."
• Testers supply inputs and verify if the output is correct, without knowing the internal
logic.
Example:
Suppose there's a function calculateTotal(price, quantity). The tester gives values like (100, 2)
and checks if the output is 200 — without knowing how the function computes it internally.
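A black-box unit test for this example could look like the following JUnit 5 sketch; the class name Billing is assumed, and only inputs and expected outputs are checked, not the internal logic.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculateTotalTest {
    @Test
    void totalIsPriceTimesQuantity() {
        // Input (100, 2) should give 200, as described above
        assertEquals(200, Billing.calculateTotal(100, 2));
    }

    @Test
    void totalIsZeroForZeroQuantity() {
        assertEquals(0, Billing.calculateTotal(100, 0));
    }
}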
2. Integration Testing (Black Box Perspective)
Integration testing checks whether different modules or components of a system work
together as intended.
Black Box Approach:
• Testers treat each module as a black box and verify the data flow between them.
• Focus is on interfaces, data exchange, and interactions between modules.
• No need to know how each module works internally.
Example:
In an online shopping system:
• The order module sends data to the payment module.
• Black box integration testing will check if the order total is correctly passed and if the
payment module responds appropriately.
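A black-box integration test for this interaction might look like the sketch below; OrderModule, PaymentModule, PaymentResult, and their methods are hypothetical names used only to illustrate checking the data flow across the interface.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class OrderPaymentIntegrationTest {
    @Test
    void orderTotalIsPassedToPaymentModule() {
        OrderModule order = new OrderModule();           // hypothetical module
        PaymentModule payment = new PaymentModule();     // hypothetical module

        order.addItem("Harry Potter", 1, 450.0);         // title, quantity, unit price
        double total = order.getTotal();

        PaymentResult result = payment.charge(total);    // data flows across the interface

        assertEquals(450.0, total, 0.01);                // order side computed the total correctly
        assertTrue(result.isSuccessful());               // payment side responded appropriately
    }
}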
3. System Testing
System testing tests the entire software system as a whole, to ensure that it meets
the specified requirements.
Black Box Approach:
• The entire application is treated as a black box.
• Testers evaluate end-to-end functionality, performance, security, and behavior under
different conditions.
• No knowledge of internal code or logic is required.
Example:
For a banking application, system testing might include:
• Logging in with valid and invalid credentials.
• Transferring money.
• Viewing account balances.
All from a user's perspective, without knowing how these features are implemented.
2. Non-Functional Testing
This type of testing verifies how well the system performs, rather than what it does. It focuses
on aspects like performance, usability, and reliability.
Purpose:
To evaluate the software's quality attributes and ensure it performs well under expected
conditions.
2. Cross-Browser Compatibility
Ensures the website or app looks and works the same across different web browsers.
Example: A button that works on Chrome should also respond properly on Firefox and Edge,
with no layout issues or errors.
3. Usability Testing
Evaluates how user-friendly, intuitive, and easy the software is for real users.
Example: Testing whether users can easily find the “Checkout” button on an e-commerce site
without confusion or extra steps.
4. Reliability Testing
Checks if the software runs consistently and without crashing over time or repeated use.
Example: A video streaming app should continue playing videos for hours without freezing or
crashing, even if used daily.
5. Reporting:
o If any failures or bugs are found, they are reported, and the software is sent back
to the development team for fixes.
o Re-testing is done to ensure the issues have been resolved.
----------------------------------------------------------------------------------------------------------------
White Box Testing
White Box Testing is a method where the tester knows how the system or code works internally.
The goal is to check the internal logic, code structure, and flow of the program. It's also called
clear box or glass box testing because the tester can "see through" the code.
It focuses on:
• Path Coverage: Testing all possible paths through the code.
• Loop Testing: Checking how loops behave (e.g., zero times, once, many times).
• Condition Testing: Verifying each condition (like if, while, etc.) for true and false
outcomes.
• Statement Coverage: Making sure each line of code is executed at least once during
testing.
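The sketch below shows what these coverage goals mean for a small piece of code; the method sumOfPositives is hypothetical.

// Method under test (hypothetical)
static int sumOfPositives(int[] values) {
    int sum = 0;
    for (int v : values) {        // loop testing: run with 0, 1, and many elements
        if (v > 0) {              // condition testing: needs a true case and a false case
            sum += v;
        }
    }
    return sum;
}

// White-box test ideas:
//  - Statement coverage: sumOfPositives(new int[]{5}) executes every line.
//  - Condition coverage: an array such as {5, -3} makes the if condition both true and false.
//  - Loop testing: {} (zero iterations), {5} (one iteration), {1, 2, 3, 4} (many iterations).
//  - Path coverage: combine the cases above so every distinct path through the loop and branch is exercised.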
4. Execute Tests
• Run the test cases directly against the code (often using unit testing tools like JUnit,
NUnit, etc.).
• Debug and analyze the output of each test case.
6. Document Results
• Record which tests passed or failed.
• Report bugs with details like:
o Input used
o Actual vs expected output
o Location in code (if known)
This structured process helps find logic errors, unreachable code, incorrect conditions, and
boundary-related bugs early in development.
----------------------------------------------------------------------------------------------------------------
Integration Testing
Integration testing is a type of software testing where individual software modules are
combined and tested as a group. The primary objective of integration testing is to identify issues
that might occur when the different modules or components of a software system interact with
each other.
7. Final Validation
• Perform final checks to ensure overall system stability.
• Confirm that data flows correctly between modules and no major defects remain.
----------------------------------------------------------------------------------------------------------------
System Testing
System Testing is a level of software testing where a complete and integrated software
application is tested as a whole.
It verifies the system’s compliance with specified requirements, both functional and non-
functional.
• It checks the entire software system (including hardware, software, and network)
from end to end.
• This testing simulates a real-world environment where the system will be used.
Goal: To ensure that the system works as a single, cohesive unit that performs its intended
functions.
Process:
1. Requirement Analysis
Understand system requirements and features to be tested. Identify critical functionality to
ensure comprehensive testing.
Example: "Users should be able to add products to the shopping cart."
2. Test Planning
Create a testing strategy, define the scope, and assign resources. Plan the testing schedule and
necessary tools.
Example: Plan to test login, cart, and payment functions.
5. Test Execution
Run the test cases, verifying that the system functions as expected. Track the results and note
any defects.
Example: Log in to the website and verify that the homepage appears correctly.
8. Test Closure
Finalize testing and document the results. Prepare reports and ensure all issues have been
resolved.
Example: "95 tests passed, 5 defects fixed, prepare final report."
9. Post-Testing Activities
Ensure the system is ready for User Acceptance Testing (UAT) and release to production.
Perform final checks.
Example: Verify that all critical defects are fixed and the system is ready for UAT.
==================================================================
Regression Testing
Regression testing is the process of testing a software application after making changes to
ensure that the new changes haven't broken or affected the existing features of the
application. The goal is to make sure that the software still works as expected after changes
such as bug fixes, new features, or upgrades.
Key Aspects of Regression Testing:
1. Preserves Existing Functionality: The main aim of regression testing is to ensure
that new code doesn't disrupt the current working of the application.
2. Tests Changes in Code: Whenever a part of the software is modified or a new feature
is added, regression testing checks to see if these changes have caused any unexpected
issues with the existing parts of the system.
3. Detects Side Effects: Sometimes, changes in one part of the system might have side
effects on other parts. Regression testing helps to identify these issues early.
4. Repeated Testing: Regression tests are often repeated frequently as new updates are
made to the software, making it an ongoing process throughout the software
development lifecycle.
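A minimal sketch of such a repeatedly run suite is given below (JUnit 5); CheckoutService and its methods are hypothetical, and the @Tag annotation simply lets the whole regression suite be selected and re-run after every change.

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

@Tag("regression")   // run this suite after every bug fix, new feature, or upgrade
class CheckoutRegressionTest {

    @Test
    void orderTotalStillIncludesTax() {
        CheckoutService checkout = new CheckoutService();        // hypothetical class
        assertEquals(110.0, checkout.totalWithTax(100.0), 0.01); // existing behavior must not change
    }

    @Test
    void confirmationEmailIsStillQueuedAfterPayment() {
        CheckoutService checkout = new CheckoutService();
        checkout.pay("order-42", 110.0);
        assertTrue(checkout.isConfirmationQueued("order-42"));   // guards against side effects of the change
    }
}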
4. Evaluate Results
After running the tests, compare the current results with the expected outcomes.
Example:
After running the payment test case, if the actual result is:
• Expected Result: The user gets a confirmation email after a successful transaction.
• Actual Result: The user does not receive the email.
This indicates a failure, which could be related to the changes made in the payment
gateway.
5. Fix Issues
Developers address any issues that were found during regression testing. After fixing the
problems, the tests need to be rerun.
Example:
The developer identifies that an email service was disrupted during the payment gateway
update. Once the issue is fixed, the test case is rerun, and now the user receives the
confirmation email.
==================================================================
Debugging
Debugging is the process of identifying, isolating, and fixing issues or bugs in a software
program to ensure it functions as intended. It involves systematically analyzing code,
identifying errors, and applying corrective measures to remove or handle these errors.
Debugging can include checking for syntax errors, logical flaws, or runtime issues that affect
the performance or output of the program. The goal is to make the code robust, efficient, and
error-free.
Example:
• Imagine you're using a calculator app, and when you try to add two numbers, the result
is wrong. You check the app's logic and discover that it mistakenly performs a multiplication
operation instead of the addition.
• To debug, you review the app's logic, find the error in the code that handles operations,
and correct it to ensure it adds the numbers properly. After fixing the error, the calculator
now returns the correct result.
Debugging Techniques:
1. Print Debugging (Manual Tracing)
This technique involves inserting print statements into the program to monitor the values of
variables, the flow of execution, or the state of objects. It is one of the simplest ways to
observe what the program is doing at runtime.
Real-time Example:
In a library management system, the availability status of a book is not updating after it is
issued. By printing the book object's status before and after issuing, the developer can verify
if the method is working correctly.
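A sketch of this technique is shown below, assuming hypothetical Library, Book, and Member classes where Book exposes a getStatus() method.

// Print debugging: trace the book's status before and after issuing it
void issueAndTrace(Library library, Book book, Member member) {
    System.out.println("Before issue: " + book.getStatus());   // expected "AVAILABLE"
    library.issueBook(book, member);                            // method under suspicion
    System.out.println("After issue:  " + book.getStatus());   // expected "ISSUED"
    // If this still prints "AVAILABLE", issueBook() is not updating the status.
}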
2. Debugger Tool (Interactive Debugging)
A debugger is a tool provided by most modern IDEs that allows developers to pause
execution, inspect object properties, step through code line by line, and monitor program
flow. It is highly effective for tracing logic and control issues.
Real-time Example:
In an e-commerce application, the total price in the shopping cart is displaying incorrectly.
The developer sets breakpoints in the discount and tax calculation functions and steps
through the code to find where the computation is failing.
3. Unit Testing
Unit testing involves writing specific test cases to verify that individual units (like classes or
methods) perform as expected. These tests can be automated and are useful for detecting bugs
early during development.
Real-time Example:
In a passport automation system, unit tests are created to check whether the user input
validation accepts only correct formats and rejects invalid or incomplete entries.
4. Logging
Logging records the internal operations of a program during execution. Developers can
analyze logs to understand what happened before an error occurred, making it easier to debug
issues in production or complex systems.
Real-time Example:
In a hospital management system, a doctor’s login occasionally fails. Reviewing the logs
reveals that the authentication service throws a null value error because the department
information was missing during login.
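A small example of such logging, using the standard java.util.logging API and a hypothetical authentication service, is shown below.

import java.util.logging.Level;
import java.util.logging.Logger;

class AuthService {
    private static final Logger LOG = Logger.getLogger(AuthService.class.getName());

    boolean login(String userId, String department) {   // hypothetical signature
        LOG.info("Login attempt for user " + userId);
        if (department == null) {
            // This is the kind of record that later reveals the missing-department problem
            LOG.log(Level.SEVERE, "Department is null for user " + userId);
            return false;
        }
        LOG.info("Login successful for user " + userId);
        return true;
    }
}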
5. Code Review and Pair Debugging
This involves collaborating with another developer to examine the code for bugs, design
flaws, or improper use of object-oriented principles. It helps in identifying issues that tools
might miss.
Real-time Example:
In a school management system, a team member discovers during code review that the
Teacher class was unnecessarily inheriting from the Student class, which could lead to logical
inconsistencies.
6. Static Code Analysis
Static analysis uses tools to examine source code without executing it. These tools check for
syntax issues, bad coding practices, and violations of object-oriented design rules.
Real-time Example:
In a banking application, static analysis detects that certain transaction classes are not
releasing database connections, leading to potential memory leaks and performance issues.
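The kind of defect described here can be sketched as follows; the fix uses Java's try-with-resources so the connection is always released (the query and table name are illustrative only).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class TransactionDao {
    // Flagged by static analysis: if the query throws, the connection is never closed (resource leak).
    double balanceLeaky(String url) throws SQLException {
        Connection con = DriverManager.getConnection(url);
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1");
        rs.next();
        return rs.getDouble(1);
    }

    // Corrected version: try-with-resources closes ResultSet, Statement, and Connection automatically.
    double balance(String url) throws SQLException {
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1")) {
            rs.next();
            return rs.getDouble(1);
        }
    }
}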
Steps in debugging:
1. Identify the Bug
In this step, you first need to figure out that something is wrong. Bugs are often reported
through user feedback, test failures, or monitoring logs. It is crucial to recognize patterns that
point to an issue, like unexpected behavior, incorrect output, or crashes.
Real-time Example:
In a student registration system, a user encounters a "null pointer exception" when
submitting the student form. The error is identified when the form submission is attempted
and the application crashes or shows an unexpected result. The system log or error message
indicates a null pointer exception, signaling that something in the form submission is causing
the issue.
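A simplified sketch of code that produces this kind of error is shown below; StudentForm and its getDepartment() method are hypothetical and used only to illustrate the failure and the fix identified during debugging.

class RegistrationController {
    // Crashes with a NullPointerException when no department was selected in the form
    String submit(StudentForm form) {
        // form.getDepartment() returns null if the user skipped the field
        String deptName = form.getDepartment().getName();   // NullPointerException thrown here
        return "Registered under " + deptName;
    }

    // Defensive version written after the bug is identified
    String submitSafely(StudentForm form) {
        if (form.getDepartment() == null) {
            return "Please select a department before submitting.";
        }
        return "Registered under " + form.getDepartment().getName();
    }
}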
==================================================================
Program Analysis
Program analysis is the process of examining a program's structure and behavior, either without executing it (static analysis) or while it runs (dynamic analysis), in order to detect defects, verify correctness, and assess quality.
1. Static Analysis
Static analysis examines the source code without executing it. It detects problems such as syntax errors, unused variables, coding-standard violations, and potential resource leaks, usually with the help of automated tools.
2. Dynamic Analysis
Dynamic analysis involves running the program and observing its behavior at runtime. This
helps identify issues that cannot be detected through static analysis, such as memory leaks,
race conditions, and performance issues. Key aspects include:
• Execution monitoring: Tracking how the program behaves during execution, often
using debugging tools or profilers.
• Testing: Running unit tests, integration tests, and system tests to validate the
program’s behavior.
• Memory analysis: Monitoring the program's memory usage and identifying potential
memory leaks or excessive resource consumption.
3. Behavioral Analysis
This type of analysis focuses on how the system behaves in response to specific inputs or
actions. It involves:
• Use case analysis: Identifying and analyzing use cases to ensure the system meets the
functional requirements.
• State transitions: Examining state machines or sequence diagrams to understand how
the system transitions between different states or behaviors.
• Object interaction: Analyzing how objects communicate, interact, and collaborate to
perform tasks.
5. Formal Methods
In some cases, formal methods are used to rigorously prove the correctness of a program.
This involves using mathematical techniques to model the system and verify that it behaves
as expected under all conditions. Techniques include:
• Formal specification: Writing precise mathematical descriptions of the system’s
behavior.
• Model checking: Automatically checking if a model satisfies certain properties or
specifications.
6. Impact Analysis
Impact analysis involves determining the effects of changes in the program, particularly in
object-oriented systems where changes in one class or method can ripple through the system.
It includes:
• Change propagation: Identifying which parts of the code may be affected by a
change in a class or method.
• Regression testing: Ensuring that changes to the program do not introduce new bugs
by re-running previous tests.
7. Tools and Techniques
There are various tools and techniques that can assist in program analysis in OOSE:
• UML tools: Tools like Enterprise Architect or Visual Paradigm help in modeling and
visualizing the program’s structure and behavior.
• Static analysis tools: Tools like SonarQube, Checkstyle, and PMD analyze the code
without executing it to find potential issues.
• Profilers: Tools like VisualVM or JProfiler monitor memory usage, CPU usage, and
thread activity during runtime.
• Testing frameworks: JUnit, TestNG, and other testing frameworks help in
performing unit testing, integration testing, and system testing.
==================================================================
Symbolic Execution
Symbolic execution is a program analysis technique in which the program is run with symbolic values in place of concrete inputs. As execution proceeds along each path, the technique builds a path condition (a set of constraints over the symbolic inputs); a constraint solver can then generate concrete inputs that exercise each path, which makes symbolic execution useful for systematic test-case generation and for uncovering corner-case defects.
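A small illustration is given below; for this hypothetical method, the inputs are treated as symbols X and Y, and each execution path accumulates its own path condition.

// Symbolic execution treats x and y as symbols (X, Y) and collects one path condition per path.
int classify(int x, int y) {
    if (x > y) {
        if (x > 10) return 2;   // path condition: X > Y  AND  X > 10
        return 1;               // path condition: X > Y  AND  X <= 10
    }
    return 0;                   // path condition: X <= Y
}
// A constraint solver can then produce one concrete test per path,
// e.g. (x=12, y=3), (x=5, y=3), and (x=1, y=4).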
==================================================================
Model Checking
Model checking is an automated verification technique that exhaustively explores the reachable states of a system model to check whether a given specification holds in every possible execution.
Key Concepts
a) System Model (State Transition System)
• The system is represented as a graph of states and transitions.
• A state is a snapshot of all variable values at a particular moment.
• A transition represents a change from one state to another.
b) Specification (Properties)
• Specifications are written in temporal logic.
• Two commonly used logics:
o LTL (Linear Temporal Logic): Focuses on sequences of events.
o CTL (Computation Tree Logic): Explores branching paths of executions.
c) Model Checking Tool
• A model checker automatically verifies if the specification holds in the model.
• If it doesn’t, a counterexample trace is given to help fix the error.
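For example (assuming a simple access-control model, and using the conventional temporal operators G "always", F "eventually", A "on all paths", X "in the next state"), properties handed to a model checker typically look like this:

LTL:  G (request -> F grant)          // every request is eventually granted
CTL:  AG (locked -> AX !door_open)    // whenever the system is locked, the door cannot open in the next step
// If a property fails, the model checker returns a counterexample trace,
// i.e. a concrete sequence of states and transitions that violates it.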