
UNIT 4

Testing - Unit testing - Black box testing- White box testing - Integration and
System testing- Regression testing - Debugging - Program analysis – Symbolic
execution - Model Checking-Case Study

Testing
Testing is the process of evaluating a software application to ensure that it performs correctly,
meets the requirements, and is free from bugs or defects. The main purpose of testing is to
identify errors in the software before it is released to users. It is an important part of the
software development life cycle (SDLC).

The 8 Phases of the Software Testing Life Cycle (STLC)


1. Requirement Analysis
In this phase, the testing team studies and understands the requirements of the software to
determine what needs to be tested.
Example:
If you're building a library management system, testers will review requirements like "users
should be able to search for books," "admin can add/remove books," etc.

2. Test Planning
Testers plan how testing will be done. This includes deciding on resources, tools, schedule,
and testing techniques.
Example:
For the library system, the test plan might say:
• Functional testing will be manual
• Testers will check features like "issue book" and "return book"
• Testing will start next Monday and finish in 2 weeks

3. Test Design and Review


Testers design test scenarios and test cases. A senior tester or QA lead reviews them for
accuracy and completeness.
Example:
Test case:
• Test if a student can borrow a book when logged in.
• Expected result: The system reduces the available book count by one and shows
"Book issued successfully."

4. Test Case Preparation


All test cases are documented in detail, including inputs, steps, and expected outputs.
Example:
Test Case ID: TC_01
• Step 1: Login as student
• Step 2: Search for a book
• Step 3: Click "Issue"
• Expected: System shows confirmation message and updates database

5. Test Execution
Testers run the test cases on the actual software to see if it behaves as expected.
Example:
The tester logs in, searches for "Harry Potter," clicks "Issue," and checks if the system
behaves as described in the test case.

6. Test or Bug Report


If a test fails, testers log a bug report with details like steps to reproduce the error,
screenshots, and severity.
Example:
Bug Report:
• Issue: "Issue Book" button crashes the app.
• Severity: High
• Steps: Search book > Click Issue > App crashes

7. Bug Fixes
The development team fixes the bugs reported by testers. After fixing, testers re-test the
software to confirm the bug is resolved.
Example:
The developer updates the code to prevent the crash. The tester runs the test again to make
sure the "Issue Book" function now works without errors.

8. Software Release
Once all major bugs are fixed and testing is complete, the software is released to the users
or client.
Example:
The library system passes all tests, so it is deployed for real-time use in schools or colleges.

Objectives of Testing
Testing is done to achieve several important goals. Below are the main objectives
along with real-life examples to help you understand each one better:

1. To Find Defects or Bugs in the Software


The primary purpose of testing is to identify any mistakes or issues in the software code that
could cause it to behave incorrectly.
Example: In a mobile banking app, a bug might allow the same transaction to be
processed twice. Testing helps detect and fix this issue before users are affected.

2. To Check if the Software Works as per the Requirements


Testing ensures that the software functions exactly as it was designed and expected by the
client or user.
Example: If a client asks for a weather app that shows temperature in Celsius, but
the app shows it in Fahrenheit, testing will catch this mismatch with the requirement.

3. To Ensure the Software is Usable, Secure, and Efficient


Testing also checks the overall quality of the software in terms of usability, security, and
performance.
Usability Example: In an e-learning app, the "Submit" button should be easy to find and
use. If users struggle to navigate, testing will reveal this.
Security Example: Testing ensures that passwords are encrypted and unauthorized users
can’t log in to a system.
Efficiency Example: A photo editing app should load and apply filters quickly. Testing
measures how fast and resource-friendly the app is.

4. To Verify that the Final Product is of High Quality


Testing helps ensure the final product is reliable, free from major issues, and ready for users.
Example: Before releasing an online shopping website, all modules (search, cart,
checkout, payment) are tested to make sure they work smoothly together, providing a good
user experience.

Types of Testing
Testing can be broadly classified into the following types:
a. Manual Testing
• Done by humans without using any tools.
• Testers follow a set of test cases and observe the application’s behavior.
• Useful for exploratory, usability, or ad-hoc testing.
• Example: Imagine you're testing a login page on a website. A tester manually types
in different usernames and passwords (valid and invalid), clicks the "Login" button,
and observes the results.
b. Automated Testing
• Uses software tools to run tests automatically.
• Faster and more reliable for repetitive tests.
• Examples of tools: Selenium, JUnit, TestNG, etc.
• Suppose you have an e-commerce website and want to test whether the "Add to
Cart" button works on every product page. Writing a test script using a tool like
Selenium can automatically open each product page, click the button, and verify that
the item was added to the cart.
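As a rough illustration of the Selenium scenario above, here is a minimal Java sketch; the URL and element IDs are hypothetical placeholders, not part of any real site.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AddToCartTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        // Hypothetical product pages of the shop under test
        String[] productUrls = {
            "https://shop.example.com/products/1",
            "https://shop.example.com/products/2"
        };
        for (String url : productUrls) {
            driver.get(url);                                  // open the product page
            driver.findElement(By.id("add-to-cart")).click(); // press the button
            // Check that the cart badge updated after the click
            String count = driver.findElement(By.id("cart-count")).getText();
            System.out.println(url + " -> cart count: " + count);
        }
        driver.quit();
    }
}
```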
Unit testing
• Unit testing is a type of software testing that focuses on individual units or
components of a software system.
• The purpose of unit testing is to validate that each unit of the software works as
intended and meets the requirements.
• Unit testing is typically performed by developers, and it is performed early in the
development process before the code is integrated and tested as a whole system.
• Unit tests are automated and are run each time the code is changed to ensure that new
code does not break existing functionality.

Types of Unit Testing


Unit testing can be carried out in two ways:
a. Manual Unit Testing
• The developer exercises an individual unit by hand: supplying inputs, calling the function
or method, and checking the output against the expected result.
• Useful for quick, exploratory checks while a unit is being written.
b. Automated Unit Testing
• Test cases for each unit are written as code using frameworks such as JUnit or TestNG
and run automatically.
• Faster and more reliable for repetitive tests, and the whole suite can be re-run every time
the code changes to catch regressions early.

Workflow of Unit Testing

1. Create Test Cases


o In this step, the developer writes test cases for individual units (functions or
methods).
o Each test case includes input values, the expected output, and a way to
check the actual result.
o Example: For a function add(a, b), you may create a test case like add(2, 3)
should return 5 (see the JUnit sketch after this list).
2. Review
o Test cases are then reviewed by peers or team members to ensure they are
correct, complete, and meaningful.
o Reviewing helps catch errors in test logic and ensures the test cases truly
represent all required scenarios (positive and negative cases).
3. Baseline
o Once reviewed, the test cases are finalized and approved. This version
becomes the baseline.
o From now on, these tests serve as a reference standard for verifying changes
in the code.
o Any future changes to test cases must be properly tracked.
4. Execute Test Cases
o The final step is to run the test cases using a testing tool or manually.
o The outputs are compared to the expected results.
o If all tests pass, the unit is considered correct. If any test fails, the issue must
be fixed and tested again.
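As referenced in step 1, a minimal JUnit 5 sketch of the add(a, b) test case might look like this; the add method is an assumed stand-in for the real unit under test.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    // Hypothetical unit under test
    static int add(int a, int b) { return a + b; }

    @Test
    void addReturnsSumOfTwoPositiveNumbers() {
        assertEquals(5, add(2, 3));   // expected output for input (2, 3)
    }

    @Test
    void addHandlesNegativeNumbers() {
        assertEquals(-1, add(2, -3)); // a simple negative case
    }
}
```

Because such tests are code, they can be re-run automatically on every change, which is exactly what step 4 relies on.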

Unit Testing Techniques:


The two main unit testing techniques are described below.
Black Box Testing
Black Box Testing is a software testing method where the internal structure, design, or code
of the software being tested is not known to the tester. The focus is entirely on the inputs and
outputs of the software system. It is also known as Closed Box Testing, Data-driven Testing,
or Behavioural Testing.

White Box Testing


• It is a way of testing software in which the tester has knowledge about the internal
structure of the code.
• It includes:
o Path coverage
o Loop testing
o Condition testing
o Statement coverage
• Also known as clear box testing / glass box testing

Methods
• Equivalence Testing → divide the input domain into partitions that are expected to behave
the same, and test one representative value from each.
• Boundary Value Analysis → test values at and just around the boundaries of the valid
input range, where errors are most likely.
• Decision Table Testing → testing systems based on combinations of inputs laid out in a table.
• Error Guessing → a technique based on the tester’s experience to
guess the areas of the application that might be prone to errors.

Black box testing


Black Box Testing is a software testing method in which the tester evaluates the
functionality of an application without knowing the internal code, structure, or
implementation details. The focus is solely on inputs and expected outputs. It's like testing a
machine by pressing buttons and checking results, without opening it up to see how it works.
Types of Black Box Testing

1. Functional Testing

This type of testing verifies what the system does. It focuses on checking the application’s
behavior against the defined functional requirements.
Purpose:
To ensure that each function of the software operates according to the requirements
specification.
Examples:
• Login Page – The system should allow login with valid credentials and reject invalid
ones.
• Registration Form – Fields like email, password, and phone number should be
validated.
• Payment Processing – The system should correctly handle transactions and display
confirmation.
• Form Submission – Clicking the “Submit” button should save or process the data as
intended.
Different levels of testing:
Under functional Black Box Testing, different levels of testing can be conducted:

1. Unit Testing (Typically White Box, but can have Black Box Approach)
Definition:
Unit testing involves testing individual components or functions of a software system in
isolation to verify that they work correctly.
Black Box Approach:
Although unit testing is usually white box (done by developers), it can also be done using a
black box approach, where:
• Each unit (like a function or method) is treated as a "black box."
• Testers supply inputs and verify if the output is correct, without knowing the internal
logic.
Example:
Suppose there's a function calculateTotal(price, quantity). The tester gives values like (100, 2)
and checks if the output is 200 — without knowing how the function computes it internally.
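A minimal black-box sketch of that test in JUnit 5 (the body of calculateTotal is shown only so the example compiles; the tester would treat it as hidden):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculateTotalBlackBoxTest {
    // Implementation shown only so the sketch runs; a black-box
    // tester sees just the signature and the observed outputs
    static int calculateTotal(int price, int quantity) {
        return price * quantity;
    }

    @Test
    void totalForPrice100AndQuantity2Is200() {
        assertEquals(200, calculateTotal(100, 2)); // input (100, 2) must yield 200
    }
}
```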
2. Integration Testing (Black Box Perspective)
Integration testing checks whether different modules or components of a system work
together as intended.
Black Box Approach:
• Testers treat each module as a black box and verify the data flow between them.
• Focus is on interfaces, data exchange, and interactions between modules.
• No need to know how each module works internally.
Example:
In an online shopping system:
• The order module sends data to the payment module.
• Black box integration testing will check if the order total is correctly passed and if the
payment module responds appropriately.

3. System Testing
System testing tests the entire software system as a whole, to ensure that it meets
the specified requirements.
Black Box Approach:
• The entire application is treated as a black box.
• Testers evaluate end-to-end functionality, performance, security, and behavior under
different conditions.
• No knowledge of internal code or logic is required.
Example:
For a banking application, system testing might include:
• Logging in with valid and invalid credentials.
• Transferring money.
• Viewing account balances.
All from a user's perspective, without knowing how these features are implemented.

4. User Acceptance Testing (UAT)


User Acceptance Testing is the final phase of testing where the software is tested
by the end users or clients to ensure it meets their needs and expectations.
Black Box Approach:
• Conducted by non-technical users or clients, who see the application as a black box.
• They test real-world use cases to decide whether the software is ready for deployment.
Example:
In a hotel booking system:
• Users test if they can search hotels, book rooms, receive confirmation, and cancel
bookings.
• They don't care how the booking logic is coded; they only care that it works correctly.
2. Non-Functional Testing

This type of testing verifies how well the system performs, rather than what it does. It focuses
on aspects like performance, usability, and reliability.
Purpose:
To evaluate the software's quality attributes and ensure it performs well under expected
conditions.

Different levels of testing:


Under non-functional Black Box Testing, different levels of testing can be conducted:
1. Performance Testing
Checks how quickly and efficiently the application responds under various conditions.
Example: A shopping website should load the homepage within 2 seconds—even if 500 users
are browsing at the same time.

2. Cross-Browser Compatibility
Ensures the website or app looks and works the same across different web browsers.
Example: A button that works on Chrome should also respond properly on Firefox and Edge,
with no layout issues or errors.

3. Usability Testing
Evaluates how user-friendly, intuitive, and easy the software is for real users.
Example: Testing whether users can easily find the “Checkout” button on an e-commerce site
without confusion or extra steps.

4. Reliability Testing
Checks if the software runs consistently and without crashing over time or repeated use.
Example: A video streaming app should continue playing videos for hours without freezing or
crashing, even if used daily.

3. Regression Testing Technique


Regression Testing is a software testing technique used to ensure that new changes or
updates in the code haven’t accidentally broken existing features of the application. It helps
maintain software stability after enhancements, bug fixes, or configuration changes.
Example:
Suppose a developer fixes a bug in the payment module of an online store.
Regression testing will re-check:
• Login
• Product search
• Adding items to cart
• Checkout
Even though they were not changed, to make sure nothing got broken due to the new
fix.

Steps in Black Box Testing:


1. Requirement Analysis:
o Review the software requirements to understand the functionality that the
system is supposed to provide.
o This includes gathering functional specifications, user stories, and system
behavior descriptions.
2. Test Case Design:
o Based on the requirements, the tester designs test cases that cover a wide range
of input scenarios, including:
▪ Valid inputs: Testing the system with expected, correct input values.
▪ Invalid inputs: Providing incorrect, unexpected input to test how the
system handles errors.
▪ Boundary inputs: Testing edge cases and limits (e.g., maximum and
minimum values).
▪ Performance and stress testing: Examining how the system handles
load or extreme conditions.
3. Test Execution:
o The test cases are executed in the actual software environment to verify the
system's output based on the provided inputs.
o The tester does not need access to the internal code; the test is run from the
user's perspective.
4. Output Validation:
o The outputs produced by the system are compared against expected results or
defined success criteria.
o Any discrepancies or errors are logged for further investigation.

5. Reporting:
o If any failures or bugs are found, they are reported, and the software is sent back
to the development team for fixes.
o Re-testing is done to ensure the issues have been resolved.

----------------------------------------------------------------------------------------------------------------
White Box Testing

White Box Testing is a method where the tester knows how the system or code works internally.
The goal is to check the internal logic, code structure, and flow of the program. It's also called
clear box or glass box testing because the tester can "see through" the code.

It focuses on:
• Path Coverage: Testing all possible paths through the code.
• Loop Testing: Checking how loops behave (e.g., zero times, once, many times).
• Condition Testing: Verifying each condition (like if, while, etc.) for true and false
outcomes.
• Statement Coverage: Making sure each line of code is executed at least once during
testing.

Methods of White Box Testing


1. Equivalence Testing
• Idea: Divide the input data into groups or partitions where the program is expected to
behave similarly.
• Example: Suppose a form accepts ages between 18 and 60. You can divide input like:
o Below 18 → Invalid group
o 18–60 → Valid group
o Above 60 → Invalid group
• You then test only one value from each group (e.g., 16, 30, 65) instead of testing every
possible age.
2. Boundary Value Analysis
• Idea: Errors often happen at the edges of input ranges, so test values at and around
the boundaries.
• Example: If valid input is 1 to 100:
o Test values like 0, 1, 2, 99, 100, 101
• This helps find off-by-one or boundary condition errors (a combined sketch covering
equivalence and boundary tests appears after this list).
3. Decision Table Testing
• Idea: Used when an application behaves differently for various combinations of
inputs.
• You make a table that lists:
o All possible input conditions
o Expected outcomes for each combination
• Example: Suppose login requires correct username and password:
Username Correct | Password Correct | Result
Yes              | Yes              | Login success
Yes              | No               | Fail
No               | Yes              | Fail
No               | No               | Fail
This ensures all combinations are tested (see the second sketch after this list).
4. Error Guessing
• Idea: Based on the tester’s experience, they guess where errors are likely.
• There's no formal process — it’s about intuition and past experience.
• Example:
o Guess that users might enter negative numbers in a price field.
o Try special characters in a name field.
• Experienced testers can often find hidden bugs this way.
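To make these methods concrete, here are two minimal JUnit 5 sketches. The first combines equivalence partitioning and boundary value analysis for the 18–60 age example above; isValidAge is a hypothetical stand-in for the real form logic.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AgeValidationTest {
    // Hypothetical validator: accepts ages 18..60 inclusive
    static boolean isValidAge(int age) { return age >= 18 && age <= 60; }

    @ParameterizedTest
    // One representative per equivalence class, plus boundary values
    @CsvSource({
        "16, false",  // below-range class
        "30, true",   // valid class representative
        "65, false",  // above-range class
        "17, false",  // just below lower boundary
        "18, true",   // lower boundary
        "60, true",   // upper boundary
        "61, false"   // just above upper boundary
    })
    void checksAgePartitionsAndBoundaries(int age, boolean expected) {
        assertEquals(expected, isValidAge(age));
    }
}
```

The second sketch drives the login decision table shown under method 3, one test row per input combination; the login routine is likewise illustrative.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class LoginDecisionTableTest {
    // Hypothetical rule: login succeeds only when both checks pass
    static String login(boolean userOk, boolean passOk) {
        return (userOk && passOk) ? "Login success" : "Fail";
    }

    @ParameterizedTest
    @CsvSource({          // one row per decision-table entry
        "true,  true,  Login success",
        "true,  false, Fail",
        "false, true,  Fail",
        "false, false, Fail"
    })
    void coversAllInputCombinations(boolean userOk, boolean passOk, String expected) {
        assertEquals(expected, login(userOk, passOk));
    }
}
```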

White Box Testing Process:


1. Understand the Code
• The tester studies the source code and design of the program.
• They identify key elements: loops, conditions, functions, logic paths, and how data
flows through the application.

2. Create Test Cases


Based on code analysis, the tester creates detailed test cases. Here's how different techniques
are applied:
a. Path Coverage
• Identify all possible paths in the code (like all if, else, and loop branches).
• Write test cases to ensure every path is executed at least once.
b. Condition Testing
• For every decision point (like if or while), test both true and false conditions.
• Ensures logic behaves correctly in both scenarios.
c. Loop Testing
• For each loop (like for, while):
o Test with 0 iterations (loop not executed).
o Test with 1 iteration.
o Test with many iterations.
o Test with maximum allowed iterations if there’s a limit.
d. Statement Coverage
• Make sure each line of code (statement) is executed at least once during testing.
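A short sketch of how the loop cases in (c) and the coverage goal in (d) turn into tests, assuming a simple sum method as the unit under test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.List;
import org.junit.jupiter.api.Test;

class LoopTestingExample {
    // Unit under test: sums a list with a single loop
    static int sum(List<Integer> values) {
        int total = 0;
        for (int v : values) { // exercised 0, 1, and many times below
            total += v;
        }
        return total;          // every statement runs in at least one test
    }

    @Test void zeroIterations() { assertEquals(0, sum(List.of())); }            // loop body never runs
    @Test void oneIteration()   { assertEquals(7, sum(List.of(7))); }           // loop runs once
    @Test void manyIterations() { assertEquals(10, sum(List.of(1, 2, 3, 4))); } // loop runs many times
}
```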

3. Apply Specific White Box Methods


Use one or more of these methods depending on the situation:
a. Equivalence Partitioning
• Divide input data into equivalent groups.
• Pick representative values from each group for testing.
b. Boundary Value Analysis
• Identify input limits (min, max, just above, just below).
• Create tests focused around these edge values.
c. Decision Table Testing
• Build a table showing combinations of inputs and expected outputs.
• Create test cases for each row of the table to cover all logic combinations.
d. Error Guessing
• Based on experience, testers guess possible points of failure.
• Create test cases around commonly missed or error-prone areas (like blank inputs,
special characters, wrong formats).

4. Execute Tests
• Run the test cases directly against the code (often using unit testing tools like JUnit,
NUnit, etc.).
• Debug and analyze the output of each test case.

5. Fix Bugs and Retest


• If issues are found, developers fix the bugs.
• The tester then reruns test cases to confirm that the fixes work and haven’t broken
anything else.

6. Document Results
• Record which tests passed or failed.
• Report bugs with details like:
o Input used
o Actual vs expected output
o Location in code (if known)

This structured process helps find logic errors, unreachable code, incorrect conditions, and
boundary-related bugs early in development.

----------------------------------------------------------------------------------------------------------------
Integration Testing
Integration testing is a type of software testing where individual software modules are
combined and tested as a group. The primary objective of integration testing is to identify issues
that might occur when the different modules or components of a software system interact with
each other.

Purpose of Integration Testing:


The main purpose of integration testing is to:
1. Ensure that different parts of the system communicate effectively with each other.
2. Verify that data flows correctly between components.
3. Identify any issues that might occur when modules are combined, such as
miscommunication or incorrect behavior.
4. Check that combined modules perform as expected in a real-world, end-to-end
environment.

Types of integration testing

1. Incremental Integration Testing


Modules are integrated step-by-step and tested gradually, making it easier to identify and fix
issues early in development.
a) Top-Down Approach
Integration begins from the top-level module and progresses downward. If lower
modules are not ready, stubs are used to simulate their behavior.
Example: Suppose you're testing an e-commerce app:
o You start with the Checkout module (top-level).
o If Payment and Cart modules aren't ready, you use stubs to simulate them.
o As these modules are developed, you replace stubs with real modules and test
again.
b) Bottom-Up Approach
Testing starts with the lowest-level modules and moves upward. Drivers are used to
mimic higher modules until they are available.
Example: In the same e-commerce app:
• Begin testing the Inventory and Database modules first.
• Use drivers to simulate calls from the Search or Cart modules.
• Gradually integrate higher-level modules and test upward.
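A minimal Java sketch of the top-down case: the Checkout module is tested against a stub standing in for the unfinished Payment module (all names here are illustrative, not from a real codebase).

```java
// Interface the real Payment module will eventually implement
interface PaymentService {
    boolean pay(double amount);
}

// Stub: simulates the unfinished Payment module with canned behaviour
class PaymentServiceStub implements PaymentService {
    @Override
    public boolean pay(double amount) {
        return amount > 0; // always "succeeds" for valid amounts
    }
}

// Top-level module under test, wired to the stub for now
class Checkout {
    private final PaymentService payment;
    Checkout(PaymentService payment) { this.payment = payment; }

    String placeOrder(double total) {
        return payment.pay(total) ? "Order confirmed" : "Payment failed";
    }
}

class IntegrationDemo {
    public static void main(String[] args) {
        Checkout checkout = new Checkout(new PaymentServiceStub());
        System.out.println(checkout.placeOrder(49.99)); // Order confirmed
    }
}
```

Once the real Payment module is ready, it replaces the stub behind the same interface. A driver in the bottom-up approach is the mirror image: a small throwaway program, like the main method above, that calls a finished low-level module.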

2. Non-Incremental Integration Testing (Big-Bang Approach)


All modules are combined after development and tested at once. No stubs or drivers are used.
Example: In a Flight Booking System, modules such as "Search", "Booking", "Payment", and
"User Profile" are all integrated together only after all are completed.

Process of Integration Testing


Integration Testing is the process of testing how different software modules work together after
being combined. It ensures that individually tested units (components or modules) interact
correctly when integrated. Here's a step-by-step overview of the general process:

1. Define the Integration Strategy


Choose the appropriate integration testing approach:
• Incremental (Top-Down or Bottom-Up)
• Non-Incremental (Big-Bang)
This decision depends on factors like project size, module dependencies, and development
progress.

2. Identify Module Interfaces


• Determine how modules communicate (e.g., function calls, APIs, data flow).
• Define input and output for each module.
• Prepare integration test cases based on these interactions.

3. Prepare Test Data and Environment


• Set up the test environment with necessary hardware, software, and tools.
• Create or simulate any missing components using stubs (for Top-Down) or drivers (for
Bottom-Up).
4. Begin Integration and Testing
For Incremental Testing:
• Top-Down: Start with top-level modules and integrate downward.
• Bottom-Up: Begin with bottom-level modules and integrate upward.
At each step:
• Integrate the next module.
• Execute integration test cases.
• Log and fix any defects.
• Retest after fixing.
For Big-Bang Testing:
• Wait until all modules are ready.
• Integrate all at once.
• Test the complete system as a whole.
5. Defect Logging and Resolution
• Record any issues or mismatches in module interactions.
• Developers fix the issues, and the test is re-executed to ensure the problem is resolved.

6. Continue Until Full Integration


• Repeat the integration and testing process until all modules are connected and working
together as intended.
• Verify that the system behaves correctly from end to end.

7. Final Validation
• Perform final checks to ensure overall system stability.
• Confirm that data flows correctly between modules and no major defects remain.

----------------------------------------------------------------------------------------------------------------

System Testing
System Testing is a level of software testing where a complete and integrated software
application is tested as a whole.
It verifies the system’s compliance with specified requirements, both functional and
non-functional.
• It checks the entire software system (including hardware, software, and network)
from end to end.
• This testing simulates a real-world environment where the system will be used.
Goal: To ensure that the system works as a single, cohesive unit that performs its intended
functions.

When is System Testing Done?


System testing is performed after Integration Testing and before Acceptance Testing.
1. Unit Testing: Tests individual components or functions.
2. Integration Testing: Tests interactions between integrated components or systems.
3. ➤ System Testing: Now the full system is tested.
4. Acceptance Testing: The customer or end-user tests to see if the software meets their
expectations.

Why is System Testing Important?


System Testing is crucial because it:
• Validates complete functionality: Ensures all parts of the system work together.
• Catches issues early: Identifies bugs before the product is released to customers.
• Checks requirements compliance: Verifies if the software meets technical, functional,
and business requirements.
• Improves user satisfaction: Ensures that the product works as users expect.
Skipping this phase can lead to critical failures in production, such as login issues, payment
failures, or security breaches.
Example: Online Shopping Website
Suppose you're testing an e-commerce site. System Testing would involve:
• Login: Can users register and log in successfully?
• Cart: Can users add/remove items and see the updated total?
• Payment: Does the payment gateway process transactions correctly?
• Order Confirmation: Is a receipt sent after a successful purchase?
• Search and Filters: Can users find products and apply filters?
• Logout: Does the session end securely?

Types of Testing Included in System Testing


System Testing is comprehensive and includes various types of testing:
1. Functional Testing
• Tests whether the application behaves as per the functional requirements.
• Example: Clicking the "Buy Now" button adds the product to the cart.
2. Performance Testing
• Checks how the system performs under load.
• Subtypes include:
o Load Testing – How it behaves under expected traffic.
o Stress Testing – How it handles traffic beyond limits.
• Example: Does the site remain responsive when 10,000 users log in at once?
3. Security Testing
• Ensures the system is secure from threats like hacking or data leakage.
• Example: Can unauthorized users access admin pages?
4. User Interface (UI) Testing
• Verifies that the UI is intuitive, responsive, and error-free.
• Example: Are buttons correctly aligned and do they function properly?
5. Compatibility Testing
• Tests how the software behaves in different environments:
o Browsers (Chrome, Firefox, Safari)
o Operating Systems (Windows, macOS, Linux)
o Devices (Desktop, Tablet, Mobile)
• Example: Does the checkout page look correct on both Android and iPhone?

Process:
1. Requirement Analysis
Understand system requirements and features to be tested. Identify critical functionality to
ensure comprehensive testing.
Example: "Users should be able to add products to the shopping cart."
2. Test Planning
Create a testing strategy, define the scope, and assign resources. Plan the testing schedule and
necessary tools.
Example: Plan to test login, cart, and payment functions.

3. Test Case Design


Write specific test cases to validate each feature and its expected behavior. Ensure all
functional requirements are covered.
Example: "Test logging in with valid credentials."

4. Test Environment Setup


Set up the necessary hardware, software, and network configurations for testing. Ensure
compatibility across all platforms.
Example: Set up the app on Chrome, Firefox, and Safari browsers.

5. Test Execution
Run the test cases, verifying that the system functions as expected. Track the results and note
any defects.
Example: Log in to the website and verify that the homepage appears correctly.

6. Defect Reporting and Tracking


Log any defects found during testing and ensure they are fixed. Track their resolution through
the development process.
Example: Report the issue: "The payment button doesn't work."

7. Test Result Analysis


Review test outcomes to check if the system meets the requirements. Identify issues that need
fixing before release.
Example: "90% of tests passed, but login failed in some cases."

8. Test Closure
Finalize testing and document the results. Prepare reports and ensure all issues have been
resolved.
Example: "95 tests passed, 5 defects fixed, prepare final report."

9. Post-Testing Activities
Ensure the system is ready for User Acceptance Testing (UAT) and release to production.
Perform final checks.
Example: Verify that all critical defects are fixed and the system is ready for UAT.

==================================================================
Regression Testing
Regression testing is the process of testing a software application after making changes to
ensure that the new changes haven't broken or affected the existing features of the
application. The goal is to make sure that the software still works as expected after changes
such as bug fixes, new features, or upgrades.
Key Aspects of Regression Testing:
1. Preserves Existing Functionality: The main aim of regression testing is to ensure
that new code doesn't disrupt the current working of the application.
2. Tests Changes in Code: Whenever a part of the software is modified or a new feature
is added, regression testing checks to see if these changes have caused any unexpected
issues with the existing parts of the system.
3. Detects Side Effects: Sometimes, changes in one part of the system might have side
effects on other parts. Regression testing helps to identify these issues early.
4. Repeated Testing: Regression tests are often repeated frequently as new updates are
made to the software, making it an ongoing process throughout the software
development lifecycle.

The Process of Regression Testing:

1. Identify the Test Cases to Reuse


After a software update or bug fix, you need to identify which of the previously written test
cases are still relevant to ensure the older features haven't been broken.
Example:
Suppose you're testing an e-commerce website. If a change was made to the checkout
process, you would reuse test cases that cover:
• Login functionality (to ensure users can still log in).
• Cart functionality (to ensure items added to the cart are still correct).
• Payment functionality (to ensure payments are processed correctly).

2. Select Areas to Test


Focus on areas directly impacted by the change, but also consider testing related or indirectly
affected areas.
Example:
In the e-commerce website, after fixing a bug in the payment gateway, you would primarily
focus on:
• Payment processing tests (e.g., successful and failed transactions).
But you would also test related features such as:
• Order confirmation page (to ensure it still works correctly after the payment
process).
• Email notifications (to ensure users receive correct confirmation emails after
payment).
3. Run the Test Cases
Execute the selected test cases on the affected and related areas to verify the system's
behavior.
Example:
Running a test case that checks if a payment goes through successfully:
• Test Case: A user purchases a product using a credit card.
• Expected Result: The payment is successfully processed, and the user receives an
order confirmation.
• Actual Result: The payment is processed correctly, and the user receives
confirmation—pass.

4. Evaluate Results
After running the tests, compare the current results with the expected outcomes.
Example:
After running the payment test case, if the actual result is:
• Expected Result: The user gets a confirmation email after a successful transaction.
• Actual Result: The user does not receive the email.
This indicates a failure, which could be related to the changes made in the payment
gateway.

5. Fix Issues
Developers address any issues that were found during regression testing. After fixing the
problems, the tests need to be rerun.
Example:
The developer identifies that an email service was disrupted during the payment gateway
update. Once the issue is fixed, the test case is rerun, and now the user receives the
confirmation email.

6. Repeat the Process


Regression testing is an ongoing process, especially when more updates or bug fixes are
applied.
Example:
After adding a new feature like product reviews, new tests are added to check the
functionality. Then, regression tests are rerun to ensure that the previously working features
(like payment processing) still function correctly without issues.
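One common way to keep this repetition manageable is to tag the stable test cases and rerun the tagged set after every change. A minimal JUnit 5 sketch, with hypothetical stand-ins for the real application calls:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class StoreRegressionTest {
    @Tag("regression")
    @Test
    void loginStillWorks() {
        assertTrue(login("user", "secret")); // unchanged feature, re-checked
    }

    @Tag("regression")
    @Test
    void paymentStillProcesses() {
        assertTrue(processPayment(100));     // area affected by the fix
    }

    // Hypothetical application calls standing in for the real modules
    static boolean login(String user, String pass) { return true; }
    static boolean processPayment(int amount)      { return true; }
}
```

With Maven Surefire, for example, mvn test -Dgroups=regression runs only the tests tagged "regression".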

Types of Regression Testing:


1. Corrective Regression Testing:
This type of testing is done when no new functionality has been added, but the
application has undergone bug fixes or optimizations. The main aim is to ensure the bug
fixes don’t affect the existing functionality.
2. Progressive Regression Testing:
This is done when new features are added to the software. The new features might
affect existing code, so progressive regression testing ensures that everything,
including the new and old features, works together.
3. Retest-all Regression Testing:
In this type, all the tests are re-executed after any change, regardless of whether the
change is related to the feature being tested. It's a thorough approach but can be
time-consuming.
4. Selective Regression Testing:
Only the affected areas of the application and their related modules are tested. This is
a quicker option than retesting everything.
5. Partial Regression Testing:
This involves testing only the parts of the application that were directly impacted by
the changes, not the whole system. It's used when changes are small and limited in
scope.
6. Complete Regression Testing:
This involves running all test cases, even those that may not be related to the changes,
to ensure the software's overall integrity.

==================================================================

Debugging

Debugging is the process of identifying, isolating, and fixing issues or bugs in a software
program to ensure it functions as intended. It involves systematically analyzing code,
identifying errors, and applying corrective measures to remove or handle these errors.
Debugging can include checking for syntax errors, logical flaws, or runtime issues that affect
the performance or output of the program. The goal is to make the code robust, efficient, and
error-free.
Example:
• Imagine you're using a calculator app, and when you try to add two numbers, the result
is wrong. You check the app's logic and discover that it mistakenly performs multiplication
instead of addition.
• To debug, you review the app's logic, find the error in the code that handles operations,
and correct it to ensure it adds the numbers properly. After fixing the error, the calculator
now returns the correct result.
Debugging Techniques:
1. Print Debugging (Manual Tracing)
This technique involves inserting print statements into the program to monitor the values of
variables, the flow of execution, or the state of objects. It is one of the simplest ways to
observe what the program is doing at runtime.
Real-time Example:
In a library management system, the availability status of a book is not updating after it is
issued. By printing the book object's status before and after issuing, the developer can verify
if the method is working correctly.
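A tiny sketch of that print-debugging session, with a hypothetical Book class:

```java
class Book {
    private boolean available = true;

    void issue() {
        // Print-debugging: observe the state before and after the operation
        System.out.println("before issue: available=" + available);
        available = false;
        System.out.println("after issue:  available=" + available);
    }

    public static void main(String[] args) {
        new Book().issue(); // prints the status change to the console
    }
}
```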
2. Debugger Tool (Interactive Debugging)
A debugger is a tool provided by most modern IDEs that allows developers to pause
execution, inspect object properties, step through code line by line, and monitor program
flow. It is highly effective for tracing logic and control issues.
Real-time Example:
In an e-commerce application, the total price in the shopping cart is displaying incorrectly.
The developer sets breakpoints in the discount and tax calculation functions and steps
through the code to find where the computation is failing.
3. Unit Testing
Unit testing involves writing specific test cases to verify that individual units (like classes or
methods) perform as expected. These tests can be automated and are useful for detecting bugs
early during development.
Real-time Example:
In a passport automation system, unit tests are created to check whether the user input
validation accepts only correct formats and rejects invalid or incomplete entries.
4. Logging
Logging records the internal operations of a program during execution. Developers can
analyze logs to understand what happened before an error occurred, making it easier to debug
issues in production or complex systems.
Real-time Example:
In a hospital management system, a doctor’s login occasionally fails. Reviewing the logs
reveals that the authentication service throws a null value error because the department
information was missing during login.
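A minimal java.util.logging sketch of that login scenario; the class and field names are hypothetical:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class AuthService {
    private static final Logger LOG = Logger.getLogger(AuthService.class.getName());

    boolean login(String user, String department) {
        LOG.info("login attempt: user=" + user);
        if (department == null) {
            // The log records why authentication failed, for later analysis
            LOG.log(Level.SEVERE, "department missing for user " + user);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        new AuthService().login("dr.smith", null); // reproduces the logged failure
    }
}
```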
5. Code Review and Pair Debugging
This involves collaborating with another developer to examine the code for bugs, design
flaws, or improper use of object-oriented principles. It helps in identifying issues that tools
might miss.
Real-time Example:
In a school management system, a team member discovers during code review that the
Teacher class was unnecessarily inheriting from the Student class, which could lead to logical
inconsistencies.
6. Static Code Analysis
Static analysis uses tools to examine source code without executing it. These tools check for
syntax issues, bad coding practices, and violations of object-oriented design rules.
Real-time Example:
In a banking application, static analysis detects that certain transaction classes are not
releasing database connections, leading to potential memory leaks and performance issues.

Steps in Debugging:
1. Identify the Bug
In this step, you first need to figure out that something is wrong. Bugs are often reported
through user feedback, test failures, or monitoring logs. It is crucial to recognize patterns that
point to an issue, like unexpected behavior, incorrect output, or crashes.
Real-time Example:
In a student registration system, a user encounters a "null pointer exception" when
submitting the student form. The error is identified when the form submission is attempted
and the application crashes or shows an unexpected result. The system log or error message
indicates a null pointer exception, signaling that something in the form submission is causing
the issue.

2. Reproduce the Bug


Once the bug is identified, the next step is to replicate it. Reproducing the bug helps confirm
the specific conditions under which the error occurs, allowing for more precise debugging.
This can involve running the program with the same inputs, conditions, or user actions that
triggered the problem.
Real-time Example:
In the student registration system, the "null pointer exception" is reproduced by filling out
the form with incomplete data, such as leaving the student name field blank. By doing this,
the developer confirms that the issue happens only when the name field is empty.

3. Analyze the Root Cause


After reproducing the bug, the next step is to analyze the root cause. This involves using
debugging tools, logs, or even manually checking the program’s flow to find exactly where
things are going wrong. The goal is to track down the faulty part of the code, such as
incorrect logic, object mismanagement, or a broken method.
Real-time Example:
In the student registration system, after reproducing the bug, the developer traces the error
using the logs or a debugger. The logs reveal that the Student object is not being properly
initialized before its method setName() is called. Without initialization, the system tries to
access the setName() method on a null Student object, resulting in the "null pointer
exception."
4. Fix the Bug
Once the root cause is identified, the next step is to fix the bug. This can involve correcting
logic errors, adjusting object lifecycles (like proper initialization), or redesigning object
interactions. The fix should address the underlying cause of the issue without introducing
new problems.
Real-time Example:
In the student registration system, the developer initializes the Student object before calling
the setName() method. This ensures that the object is properly created, eliminating the null
pointer error. The fix may also involve adding validation to check that the name field is not
empty before proceeding with the form submission.

5. Test the Fix


After the fix, it’s crucial to test the solution to ensure the bug is resolved. This involves
rerunning the scenario that caused the bug, as well as other related tests, to verify that the fix
works and no new issues have been introduced.
Real-time Example:
In the student registration system, the developer re-runs the form submission with an empty
student name to verify that the error no longer occurs. The form now successfully validates
the input and displays the appropriate error message if the name is empty, preventing the
previous bug from resurfacing.

6. Document the Bug and Fix


Once the bug is fixed, documentation should be added to record the nature of the issue and
how it was resolved. This helps other developers understand the problem and solution, and
also provides a record for future reference. It’s common to add comments in the code or write
a bug report in the repository.
Real-time Example:
In the student registration system, the developer adds a comment in the
StudentFormController.java file:
“Fixed null pointer in StudentFormController.java – added object initialization in init()
method before calling setName().”
Additionally, a bug report or ticket in the project management system could be updated with
the description of the issue and resolution, ensuring transparency and clarity for the
development team.

7. Monitor Post-Fix Behavior


After applying the fix and testing it, it’s important to monitor the system during real usage to
ensure that no issues arise again. This involves reviewing logs or continuing to check for any
unexpected behavior post-deployment. Even after a fix, unforeseen side effects can occur, so
it's important to keep an eye on the system’s performance and error logs.
Real-time Example:
In the student registration system, after deploying the fix, the developer monitors the logs
and user feedback to ensure that no further errors occur during form submission. If the system
logs show successful form submissions and no more null pointer exceptions, the fix is
confirmed as successful.

==================================================================

Program analysis

In Object-Oriented Software Engineering (OOSE), program analysis refers to the process of
examining and understanding a program to assess its behavior, structure, and correctness. It
involves analyzing the relationships between the program's components, such as classes,
methods, and objects, to ensure the system meets its design goals and works as expected.

Here’s a detailed breakdown of program analysis in the context of OOSE:


1. Static Analysis
Static analysis examines the program without executing it. It focuses on the structure and
syntax of the code, looking for potential issues like bugs, performance bottlenecks, or
violations of coding standards. The key activities include:
• Code inspection: Reviewing the source code to identify errors or inefficiencies.
• Class analysis: Identifying class responsibilities, relationships, and dependencies.
Tools like UML (Unified Modeling Language) diagrams help visualize this.
• Dependency analysis: Analyzing how classes, methods, and objects interact and
depend on each other. This is crucial to understanding coupling and cohesion in the
design.
• Code metrics: Calculating various metrics like cyclomatic complexity, class
coupling, and depth of inheritance to evaluate code maintainability and
understandability.

2. Dynamic Analysis
Dynamic analysis involves running the program and observing its behavior at runtime. This
helps identify issues that cannot be detected through static analysis, such as memory leaks,
race conditions, and performance issues. Key aspects include:
• Execution monitoring: Tracking how the program behaves during execution, often
using debugging tools or profilers.
• Testing: Running unit tests, integration tests, and system tests to validate the
program’s behavior.
• Memory analysis: Monitoring the program's memory usage and identifying potential
memory leaks or excessive resource consumption.
3. Behavioral Analysis
This type of analysis focuses on how the system behaves in response to specific inputs or
actions. It involves:
• Use case analysis: Identifying and analyzing use cases to ensure the system meets the
functional requirements.
• State transitions: Examining state machines or sequence diagrams to understand how
the system transitions between different states or behaviors.
• Object interaction: Analyzing how objects communicate, interact, and collaborate to
perform tasks.

4. Refactoring and Optimization


Part of program analysis in OOSE involves improving the program’s design and performance
through refactoring. This includes:
• Code smell detection: Identifying areas of the code that may indicate poor design,
such as large methods or classes that violate the Single Responsibility Principle
(SRP).
• Design pattern application: Analyzing opportunities to apply design patterns to
improve the structure and flexibility of the system.
• Performance optimization: Identifying and fixing bottlenecks to improve the overall
performance, such as optimizing algorithms or reducing database queries.

5. Formal Methods
In some cases, formal methods are used to rigorously prove the correctness of a program.
This involves using mathematical techniques to model the system and verify that it behaves
as expected under all conditions. Techniques include:
• Formal specification: Writing precise mathematical descriptions of the system’s
behavior.
• Model checking: Automatically checking if a model satisfies certain properties or
specifications.

6. Impact Analysis
Impact analysis involves determining the effects of changes in the program, particularly in
object-oriented systems where changes in one class or method can ripple through the system.
It includes:
• Change propagation: Identifying which parts of the code may be affected by a
change in a class or method.
• Regression testing: Ensuring that changes to the program do not introduce new bugs
by re-running previous tests.
7. Tools and Techniques
There are various tools and techniques that can assist in program analysis in OOSE:
• UML tools: Tools like Enterprise Architect or Visual Paradigm help in modeling and
visualizing the program’s structure and behavior.
• Static analysis tools: Tools like SonarQube, Checkstyle, and PMD analyze the code
without executing it to find potential issues.
• Profilers: Tools like VisualVM or JProfiler monitor memory usage, CPU usage, and
thread activity during runtime.
• Testing frameworks: JUnit, TestNG, and other testing frameworks help in
performing unit testing, integration testing, and system testing.

==================================================================

Symbolic Execution

Symbolic execution is a technique used in program analysis where, instead of executing a
program with actual inputs, the program is executed with symbolic inputs. These symbolic
inputs represent all possible values that the actual inputs could take. The goal of symbolic
execution is to explore how a program behaves under various inputs, to identify bugs,
vulnerabilities, or other interesting behaviors.

Working of Symbolic Execution


Symbolic execution simulates the execution of a program not with actual inputs, but
with symbolic representations of inputs. The goal is to explore multiple possible execution
paths of the program in a single analysis.

Step 1: Represent Inputs Symbolically


Instead of assigning a fixed value to inputs (like 5 or 10), symbolic execution
assigns symbolic variables (like X, Y, etc.).
• These symbols can take any value.
• All variables and expressions in the program are tracked as functions of these
symbols.
Purpose: This allows the analysis to cover all possible input values at once.

Step 2: Execute Instructions Symbolically


Each statement in the program is interpreted with symbolic inputs.
• For example, if a program computes an output using inputs, symbolic execution will
compute the output as a symbolic expression involving symbolic variables.
As the program proceeds, it maintains:
• The symbolic state: A mapping of all program variables to symbolic expressions.
• The program counter: A pointer to the current instruction being executed.
• The path condition (PC): A list of conditions (constraints) that must be satisfied for
the program to follow the current path.
Step 3: Handle Branching (Conditional Statements)
Whenever a conditional decision point (like an if or while statement) is encountered,
symbolic execution splits the execution into multiple paths, one for each possible outcome
of the condition.
For each path:
• The path condition is updated to reflect the condition required for that path to be
followed.
• The symbolic execution proceeds separately along each path, maintaining its own
symbolic state and path condition.
This builds a tree of possible execution paths.

Step 4: Maintain Path Conditions


Each execution path has an associated path condition (PC) — a logical formula representing
the constraints on inputs required for that path to be taken.
Example path conditions might be:
• X > 0 (input X must be greater than 0)
• Y ≤ 5 (input Y must be 5 or less)
Symbolic execution updates the path condition each time a branch is taken, and each path
in the execution tree accumulates a different set of constraints.
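A small worked example: for the illustrative function below, symbolic execution treats x and y as symbols X and Y and derives one path condition per path (recorded in the comments).

```java
class PathExample {
    // Inputs x and y are treated as symbols X and Y by the analysis
    static int classify(int x, int y) {
        if (x > 0) {              // branch: the path condition gains X > 0 or X <= 0
            if (y <= 5) {
                return x + y;     // path A1, PC: X > 0 AND Y <= 5
            }
            return x - y;         // path A2, PC: X > 0 AND Y > 5
        }
        return 0;                 // path B,  PC: X <= 0
    }
}
```

Feeding each path condition to a constraint solver (Step 6) yields one concrete input per path, e.g. (X=1, Y=3) for path A1, which is how symbolic execution generates a branch-covering test suite.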

Step 5: Detect Errors and Special Conditions


During symbolic execution, if the program reaches a point where a known error may occur
(like division by zero, buffer overflow, etc.), the engine checks the path condition:
• If the path condition is satisfiable, that means there exists some input that can cause
this error.
• Symbolic execution then reports a potential bug, along with the input values that
could trigger it.
Step 6: Use a Constraint Solver
At the end of symbolic execution, or for any path of interest:
• The symbolic execution engine sends the path condition to a constraint solver (like
Z3 or STP).
• The solver checks whether the condition is satisfiable.
o If YES, it returns concrete input values that satisfy the path.
o These inputs can be used to generate test cases or reproduce bugs.
Step 7: Output and Usefulness
From symbolic execution, you can obtain:
• A list of execution paths.
• Corresponding path conditions.
• Concrete test inputs for each path.
• Warnings about errors that occur under certain conditions.
Why Symbolic Execution is Useful
a. Automated Test Generation
Symbolic execution can automatically generate inputs that exercise different paths through a
program. This helps achieve high test coverage without manual effort.
b. Bug Detection
Because it considers all possible inputs and paths, symbolic execution can discover bugs such
as:
• Dividing by zero
• Accessing memory that does not exist
• Logical errors in conditional statements
c. Security Analysis
Symbolic execution is widely used in security to identify vulnerabilities. It helps find inputs
that might cause the program to:
• Crash
• Leak sensitive data
• Allow unauthorized access

Applications of Symbolic Execution


• Software testing: To automatically generate test inputs that cover different program
paths.
• Formal verification: To mathematically prove that a program meets its
specifications.
• Security analysis: To detect vulnerabilities such as buffer overflows or assertion
violations.
• Debugging: To find inputs that lead to specific errors or unexpected behavior.
==================================================================

Model Checking
Model Checking is a process that involves:
• Creating a formal model of the system (usually as a finite state machine).
• Writing properties or specifications the system should satisfy (using temporal
logic).
• Automatically verifying whether the model satisfies the specifications by exploring all
possible states and transitions.
If the model satisfies the specification, it is considered correct. If not, the model checker
provides a counterexample to show the path where the error occurs.

Key Concepts
a) System Model (State Transition System)
• The system is represented as a graph of states and transitions.
• A state is a snapshot of all variable values at a particular moment.
• A transition represents a change from one state to another.
b) Specification (Properties)
• Specifications are written in temporal logic.
• Two commonly used logics:
o LTL (Linear Temporal Logic): Focuses on sequences of events.
o CTL (Computation Tree Logic): Explores branching paths of executions.
c) Model Checking Tool
• A model checker automatically verifies if the specification holds in the model.
• If it doesn’t, a counterexample trace is given to help fix the error.

Steps in Model Checking (with Examples)


Step 1: Model the System
The system is represented as a finite state machine (FSM) – a graph with:
• States: Possible configurations of the system.
• Transitions: How the system moves from one state to another.
Example:
Traffic light system with 4 states:
State | NS Light | EW Light
S1    | Green    | Red
S2    | Yellow   | Red
S3    | Red      | Green
S4    | Red      | Yellow
Transitions: S1 → S2 → S3 → S4 → S1 → ...

Step 2: Write the Specification


The property to be verified is written using Temporal Logic:
• LTL (Linear Temporal Logic): For sequence-based verification
• CTL (Computation Tree Logic): For tree-structured paths
Example:
Safety Property: “Both directions should never be green at the same time”
Written in LTL as:
G ¬(NS_Green ∧ EW_Green)
(G = globally, ¬ = not, ∧ = and)

Step 3: Explore the State Space


The model checker explores all reachable states and paths from the initial state. This is
called state space exploration.
Example:
It checks:
• S1 → S2 → S3 → S4 → ...
• Are any of these states violating the rule?
• Are there loops, deadlocks, or illegal combinations?

Step 4: Verify the Property


The tool verifies whether the temporal logic property holds in every possible execution
path.
Example:
• It checks every state to ensure: NS_Green ∧ EW_Green is never true
• If all states satisfy the rule → Property is verified.

Step 5: If Failed, Give Counterexample


If any state violates the property, the model checker provides a counterexample trace,
showing the exact sequence of states that led to the error.
Example:
Let’s say the developer made a mistake and added a new state:
State | NS Light | EW Light
S5    | Green    | Green
Then the tool will output:
Initial State → S1 → S2 → S5
Violation Found: NS and EW are green at S5
This helps in debugging and fixing the design.
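Because this model is finite, the whole check can even be sketched in a few lines of Java: enumerate the reachable states and assert the safety property in each. This is a toy explicit-state checker for the table above, not a substitute for a real model checker such as SPIN or NuSMV.

```java
import java.util.Map;

class TrafficLightModelCheck {
    // States S1..S4 and the transition relation S1 -> S2 -> S3 -> S4 -> S1
    static final Map<String, String> NEXT =
        Map.of("S1", "S2", "S2", "S3", "S3", "S4", "S4", "S1");
    static final Map<String, Boolean> NS_GREEN =
        Map.of("S1", true, "S2", false, "S3", false, "S4", false);
    static final Map<String, Boolean> EW_GREEN =
        Map.of("S1", false, "S2", false, "S3", true, "S4", false);

    public static void main(String[] args) {
        String state = "S1";
        // Explore the reachable state space (here: a single cycle)
        for (int step = 0; step < NEXT.size(); step++) {
            if (NS_GREEN.get(state) && EW_GREEN.get(state)) {
                System.out.println("Violation found at " + state); // counterexample
                return;
            }
            state = NEXT.get(state);
        }
        System.out.println("Property G !(NS_Green & EW_Green) holds");
    }
}
```

Adding the faulty state S5 to the tables would make the program print the violating state, mirroring the counterexample trace a real tool reports.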

4. Types of Properties Checked


• Safety Property: Something bad never happens.
E.g., "The system should never enter a deadlock."
• Liveness Property: Something good eventually happens.
E.g., "Every request should get a response."
• Fairness Property: Every process eventually gets its turn.
E.g., "No user gets permanently blocked."
Applications
• Hardware circuit verification (e.g., CPU, RAM controllers)
• Communication protocols (e.g., Bluetooth, TCP)
• Railway signalling systems
• Medical device software (e.g., pacemakers)
• Real-time embedded systems (e.g., automotive ECUs)
