
SQA Mtt2

Chapter 3
SQA System Architecture

 SQA ensures software quality by following a structured set of activities.

 It is divided into six components:

1. Pre-project components – Ensuring proper resource allocation, schedules, and budgets.

2. Project life cycle activities assessment – Includes reviews, expert opinions, and software testing for error detection.

3. Infrastructure error prevention and improvement – Organizational measures to reduce errors based on past experience.

4. Software quality management – Ensures control of development and maintenance activities to prevent budget and schedule failures.

5. Standardization, certification, and SQA system assessment – Implementation of international standards for quality assurance.

6. Organizing for SQA (Human components) – Involves managers, testers, the SQA unit, and professionals responsible for implementing SQA methods.

2. Pre-Project Components

Purpose:

 Ensures that project commitments are well-defined in terms of resources, schedule, and
budget.

 Guarantees that development and quality plans are prepared correctly.

Main Activities:

1. Contract Review:

o Evaluates whether customer requirements are properly defined.

o Checks project feasibility, schedules, risk assessment, and whether the development team has adequate resources and skills.

o Identifies conflicts between customer expectations and project limitations.

2. Development and Quality Plans:

o Development Plan: Defines project schedule, required resources, software reuse, and risk management.

o Quality Plan: Specifies quality goals, testing criteria, and verification methods.

3. Project Life Cycle Activities Assessment

Purpose:
 Ensures that errors are detected early during development and maintenance.

SQA Components in Project Life Cycle:

1. Reviews:

o Design Reviews (DRs): Conducted by senior experts to assess software architecture and system design.

o Peer Reviews: Includes:

 Inspections: Formal reviews to find errors.

 Walkthroughs: Informal discussions among developers to identify potential issues.

2. Expert Opinions:

o External domain experts provide insights when in-house expertise is insufficient.

o Helps resolve disputes and design issues efficiently.

3. Software Testing:

o Conducted at various levels:

 Unit Testing: Checks individual components.

 Integration Testing: Ensures modules work together.

 System Testing: Verifies the complete system's performance.

 Acceptance Testing: Confirms that software meets user requirements.

4. Infrastructure Error Prevention and Improvement

Purpose:

 Establishes organizational measures to reduce software errors.

 Implements lessons from past projects to improve development quality.

Key Methods:

 Process standardization to prevent common mistakes.

 Automated tools for code review and error detection.

 Regular training programs for developers on software best practices.

5. Software Quality Management

Purpose:

 Ensures early intervention in development and maintenance activities.

 Reduces risks of exceeding budget and schedule.

Key Activities:

 Tracking project progress and identifying deviations.


 Implementing corrective measures before defects become critical.

 Ensuring documentation and process control to maintain software reliability.

6. Standardization, Certification, and SQA System Assessment

Purpose:

 Applies international standards for software quality.

 Ensures adherence to ISO 9001, CMMI (Capability Maturity Model Integration), and IEEE
standards.

Certification & Assessment:

 Conducts internal audits to measure compliance.

 Implements corrective actions to align with quality benchmarks.

 Periodic SQA system evaluations to assess effectiveness.

7. Organizing for SQA (Human Components)

Purpose:

 Defines roles and responsibilities of professionals involved in SQA.

Key Participants:

 Project Managers – Oversee quality implementation.

 Testing Personnel – Conduct software testing.

 SQA Unit – Ensures adherence to standards and reviews.

 SQA Trustees & Committee Members – Guide quality assurance strategies.

 Forum Members – Discuss and implement improvements.

Chapter 4: Defect Removal Effectiveness


 Software quality metrics help in assessing the effectiveness of software development and
testing processes.

 Two important metrics:

o Defect Removal Density (DRD) – Measures defect detection efficiency.


o Cyclomatic Complexity – Measures the complexity of software code.

 Cyclomatic Complexity is a software metric used to measure the logical complexity of a program.

 Developed by Thomas McCabe in 1976.

 Helps in determining the number of independent paths in the program.

Cyclomatic Complexity Formula

 The formula for Cyclomatic Complexity (CC):

CC = E - N + 2P

o E = Number of edges in the control flow graph.

o N = Number of nodes in the control flow graph.

o P = Number of connected components (usually 1 for a single program).
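
Worked example (an assumed illustration, not taken from the notes): for a function containing a single if/else decision, the control flow graph has 5 edges and 5 nodes, so CC = 5 - 5 + 2(1) = 2, i.e. two independent paths (condition true, condition false). A short Python sketch of the calculation:

# Hypothetical control flow graph for a function with one if/else decision.
edges = [
    ("entry", "decision"),
    ("decision", "then"),   # condition true
    ("decision", "else"),   # condition false
    ("then", "exit"),
    ("else", "exit"),
]
nodes = {"entry", "decision", "then", "else", "exit"}

E = len(edges)   # 5
N = len(nodes)   # 5
P = 1            # single connected program

cc = E - N + 2 * P
print("Cyclomatic Complexity:", cc)   # prints 2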

1. Software Development Methodologies

Different models are used in software development to ensure high quality and efficiency:

1.1 The Software Development Life Cycle (SDLC) Model


 A linear sequential model that covers the full development process.

 Begins with requirement gathering and ends with system operation and maintenance.

 Provides a structured framework for software development.

 Also referred to as the Waterfall model.

1.2 The Prototyping Model

 Focuses on building a prototype first to understand user requirements better.

 The prototype is iteratively refined until the final product is achieved.

 Helps in reducing risks and gathering early feedback.

1.3 The Spiral Model

 Developed by Boehm (1988, 1998).

 Combines iterative development, risk analysis, and customer feedback.

 Suitable for large and complex projects with high chances of failure.
1.4 The Object-Oriented Model

 Emphasizes reuse of software components (objects).

 Uses a software library for integrating pre-developed components.

 Advantages:

o Cost-effective: Reusing components reduces development costs.

o Higher quality: Reused components have fewer defects.

o Faster development: Less effort needed for coding and testing.

2. Factors Affecting Quality Assurance (QA) Activities

QA activities vary depending on project size, complexity, and available resources.

2.1 Project Factors

 Size of the project: Larger projects require more rigorous QA.


 Technical complexity: Complex projects need specialized testing.

 Extent of reusable components: More reuse leads to fewer defects.

 Impact of failure: Critical systems (e.g., medical software) need extensive QA.

2.2 Team Factors

 Skill level: Experienced developers ensure better quality.

 Familiarity with the project: Teams familiar with the system can detect defects easily.

 Availability of staff: More QA staff ensures better defect detection.

3. Verification, Validation, and Qualification

Quality assurance ensures that the software meets requirements through three key processes:

3.1 Verification

 Evaluates if the product meets specifications at each development phase.

 Ensures internal consistency (e.g., checking if a module follows design documents).

3.2 Validation

 Ensures that the final product meets user requirements.

 Focuses on functional correctness and user needs.

3.3 Qualification

 Determines if the system is fit for operational use.

 Includes real-world testing before deployment.

4. Model for Defect Removal Effectiveness and Cost

A defect removal model helps measure how efficiently defects are detected and fixed.

4.1 Key Data Used in the Model

4.1.1 Defect Origin Distribution

 Defects are introduced at different development stages.

 Studies (e.g., IBM research) show most defects occur early in development.

4.1.2 Defect Removal Effectiveness

 Every QA activity filters out a percentage of defects.

 Some defects are missed or improperly fixed, leading to carry-over defects.

4.1.3 Cost of Defect Removal

 Early defect detection is cheaper.


 Fixing a defect in later stages (e.g., after release) is much more expensive.
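
The following is a minimal sketch of such a model, using made-up defect-origin counts and removal-effectiveness figures purely for illustration (they are not the IBM figures referred to above). Each phase adds new defects to those carried over from earlier phases, and each QA activity filters out a fraction of the accumulated pool:

# Illustrative defect removal model (all numbers are assumed).
phases = [
    # (phase, defects introduced, removal effectiveness of that phase's QA activity)
    ("Requirements", 40, 0.50),
    ("Design",       30, 0.60),
    ("Coding",       25, 0.70),
    ("System test",   5, 0.80),
]

carried_over = 0.0
for name, introduced, effectiveness in phases:
    pool = carried_over + introduced        # defects present when QA is applied
    removed = pool * effectiveness          # defects filtered out in this phase
    carried_over = pool - removed           # defects escaping to the next phase
    print(f"{name:12s} pool={pool:5.1f} removed={removed:5.1f} escaped={carried_over:5.1f}")

print(f"Defects escaping to the customer: {carried_over:.1f}")

Because escaped defects are caught later, where (as noted above) removal is far more expensive, improving the effectiveness of the early filters gives the largest cost reduction.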

Chapter 5: Software Testing


1. Introduction to Software Testing

 Software Testing is the first quality assurance tool applied before software deployment.

 It consumes the most resources in software quality assurance.

 24% of project budgets and 27% of project time are allocated to testing.

2. Definition and Objectives

 Classic Definition: "Testing is the process of executing a program with the intention of finding
errors."

 IEEE Definition: "The process of operating a system or component under specified conditions, observing results, and making evaluations."

 Formal Definition: A systematic process executed by a specialized team to examine software units or entire packages.

Objectives

1. Direct Objectives:

o Identify and detect errors.

o Improve software quality by correcting errors.

o Perform tests efficiently within time and budget constraints.

2. Indirect Objectives:

o Maintain a record of errors for error prevention.

o Assist in corrective and preventive actions.

3. Software Testing Life Cycle (STLC)

The STLC is a stepwise approach ensuring software meets quality standards.


Phases of STLC

1. Requirement Analysis

o Understand functional and non-functional requirements.

o Identify testable requirements.

2. Test Planning

o Define scope, objectives, and resources.

o Identify tools, timelines, and responsibilities.

3. Test Case Development

o Write test cases covering possible scenarios.

o Prepare test scripts.

4. Test Environment Setup

o Prepare the system to simulate the end-user environment.

o Ensure test data is realistic.

5. Test Execution

o Run test cases and compare results with expected outcomes.

o Identify and report defects.

6. Test Closure

o Analyze test results.

o Document findings for future improvements.


4. Software Testing Strategies

Testing is performed at different levels and follows specific strategies.

Approaches to Testing

1. Testing Entire Software at Once

o The entire software is tested after completion.

o Not suitable for large projects.

2. Incremental Testing

o Tests individual modules and integrates them step by step.

o Two approaches:

 Top-down testing: Tests the main module first, then lower-level modules.

 Bottom-up testing: Tests lower-level modules first, then integrates with higher modules.

Use of Stubs and Drivers

 Stub: Replaces missing lower-level modules (used in top-down testing).

 Driver: Replaces missing higher-level modules (used in bottom-up testing).

5. Software Test Classification

Software tests are classified based on concepts and requirements.

Types of Testing

1. Functional Testing

o Ensures software meets requirements.

o Types: Unit Testing, Integration Testing, System Testing, Acceptance Testing.

2. Non-Functional Testing

o Focuses on software performance, security, usability.

o Types: Performance Testing, Load Testing, Stress Testing.


3. Manual vs. Automated Testing

o Manual Testing: Performed by testers manually.

o Automated Testing: Uses scripts and tools (e.g., Selenium, JUnit).

6. Levels of Testing

1. Unit Testing

o Tests individual software components.

o Uses stubs and drivers.

2. Integration Testing

o Combines tested modules and checks interactions.

o Uses top-down or bottom-up approaches.

3. System Testing

o Tests the entire software in a real environment.

4. Acceptance Testing

o Verifies software meets user requirements before deployment.

7. Software Testing Techniques

White Box Testing

 Tests the internal structure of code.

 Includes branch testing, path testing, statement coverage.
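
A small hypothetical example of why branch coverage is stronger than statement coverage (the function and values are invented for illustration): a single test with is_member=True executes every statement of apply_discount, yet branch coverage also demands a test in which the if condition is false.

# Hypothetical function under white box testing.
def apply_discount(price, is_member):
    discount = 0.0
    if is_member:
        discount = 0.25
    return price * (1 - discount)

# Statement coverage: this one test already executes every line...
assert apply_discount(100.0, True) == 75.0

# ...but branch coverage also requires the path where the 'if' is not taken.
assert apply_discount(100.0, False) == 100.0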

Black Box Testing

 Tests software without knowledge of internal code.

 Focuses on input/output behavior.

Grey Box Testing

 Combination of White and Black Box Testing.

 Used when testers have partial knowledge of the code.

8. Regression Testing

 Ensures new changes don’t affect existing functionality.

 Common in agile development.


9. Alpha and Beta Testing

 Alpha Testing: Conducted by developers in a controlled environment.

 Beta Testing: Performed by real users before release.

MTT2 Previous year paper


(a) Suggest an imaginary project ideally suitable for prototyping.

Answer:
An imaginary project suitable for prototyping is a "Smart Restaurant Ordering System."

 Description: The system allows customers to place food orders through a mobile app with an
AI-based chatbot for recommendations.

 Why Prototyping?

o Helps in getting user feedback on the UI/UX.

o Identifies missing functionalities before full development.

o Reduces risks by validating core features.

(b) Distinguish between direct and indirect testing objectives.

Aspect | Direct Testing Objectives | Indirect Testing Objectives
Purpose | Identify software errors and defects. | Improve overall software quality.
Example | Finding bugs in a login system. | Analyzing test reports for process improvement.
Focus | Ensuring correctness of code. | Enhancing maintainability and reliability.
Application | Functional testing, unit testing. | Regression testing, quality assurance.

(c) What are Test Cases? Explain any two test cases that might be used for a food ordering app.

Answer:
A test case is a set of conditions and inputs designed to check whether a software application meets
the expected results.

Test Cases for a Food Ordering App:

1. Login Functionality Test Case

o Test Case ID: TC001

o Description: Verify that users can log in with valid credentials.

o Test Steps:
1. Open the app.

2. Enter a valid email and password.

3. Click "Login."

o Expected Result: User is successfully logged in.

2. Order Placement Test Case

o Test Case ID: TC002

o Description: Verify that users can place an order successfully.

o Test Steps:

1. Select a food item from the menu.

2. Add it to the cart.

3. Proceed to checkout and complete payment.

o Expected Result: Order confirmation message is displayed.
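
For illustration, test case TC001 could be automated roughly as follows with pytest; app_login is an invented stand-in for the app's real login service, so the names and behaviour are assumptions, not part of the question:

# Hypothetical pytest automation of TC001 (valid login) plus a negative variant.
def app_login(email, password):
    # Dummy implementation for the sketch: one known valid credential pair.
    return email == "user@example.com" and password == "Secret123"

def test_tc001_login_with_valid_credentials():
    # Steps 1-3: open the app, enter valid credentials, click "Login".
    assert app_login("user@example.com", "Secret123") is True   # expected: logged in

def test_login_rejects_invalid_password():
    assert app_login("user@example.com", "wrong") is False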

(d) Why is Beta Testing performed when Alpha Testing has already been done to test the software?

Answer:
Even after Alpha Testing, Beta Testing is performed because:

 Real-World Environment: Alpha Testing is conducted in a controlled environment, while Beta Testing is done by real users in different conditions.

 User Feedback: Helps in collecting valuable feedback from actual users to improve usability
and performance.

 Uncovered Bugs: Identifies issues that might not have been caught in Alpha Testing.

 Performance Testing: Ensures the software can handle large-scale usage.

Q2. Explain the differences between verification, validation, and qualification.

Aspect | Verification | Validation | Qualification
Definition | Ensures software meets requirements before execution. | Ensures software meets customer expectations after execution. | Ensures software is certified for release.
Focus | Process-oriented (Are we building the product right?). | Product-oriented (Are we building the right product?). | Compliance-oriented (Does it meet standards?).
Techniques Used | Reviews, walkthroughs, inspections. | Testing, user acceptance testing (UAT). | Compliance checks, audits, certifications.
Example | Checking if design documents match specifications. | Testing a mobile app for user experience. | Ensuring an e-commerce website follows PCI DSS standards.

Q3. Take an example and explain the working of Stub and Driver.

Answer:
Stubs and Drivers are used in Incremental Integration Testing to simulate missing components.

Example: Online Banking System

 Suppose a banking application has three modules:

1. Login System

2. Account Balance Module

3. Fund Transfer Module

Case 1: Using a Stub (Top-Down Testing)

 If the Fund Transfer Module is incomplete, a Stub is created to simulate its expected
response.

 The Login System and Account Balance Module interact with this Stub.

Case 2: Using a Driver (Bottom-Up Testing)

 If the Login System is not ready but the Fund Transfer Module needs testing, a Driver is
created.

 The Driver simulates user input and calls the Fund Transfer Module for testing.

Key Differences:

Aspect | Stub | Driver
Used in | Top-Down Testing | Bottom-Up Testing
Replaces | Missing lower modules | Missing higher modules
Example | A fake function simulating payment processing. | A temporary script that mimics user login.
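
A minimal Python sketch of both cases, using simplified stand-ins for the banking modules described above (class names and behaviour are invented for illustration):

# Case 1 (top-down): the real Fund Transfer Module is not ready, so a stub
# returns a canned, predictable response to the higher-level module under test.
class FundTransferStub:
    def transfer(self, from_acc, to_acc, amount):
        return {"status": "SUCCESS", "amount": amount}   # no real logic

class AccountBalanceModule:
    def __init__(self, transfer_service):
        self.transfer_service = transfer_service

    def pay(self, from_acc, to_acc, amount):
        # Higher-level module calling the (stubbed) lower-level module.
        return self.transfer_service.transfer(from_acc, to_acc, amount)

assert AccountBalanceModule(FundTransferStub()).pay("A1", "A2", 100)["status"] == "SUCCESS"

# Case 2 (bottom-up): the Login System / UI is not ready, so a driver script
# supplies the inputs and calls the completed Fund Transfer Module directly.
class FundTransferModule:
    def transfer(self, from_acc, to_acc, amount):
        return amount > 0          # real (simplified) business rule

def driver():
    module = FundTransferModule()
    print("transfer of 250:", module.transfer("A1", "A2", 250))   # True
    print("transfer of -10:", module.transfer("A1", "A2", -10))   # False

driver()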

Q4. What are the main types of automated tests? Explain.

Answer:
Automated testing ensures fast and efficient software testing. The main types include:

1. Unit Testing

o Tests individual components of software.

o Example: Testing a login function using JUnit (Java) or PyTest (Python).


2. Integration Testing

o Verifies communication between different modules.

o Example: Checking how a payment gateway interacts with a shopping cart (a short sketch of this appears after this list).

3. Functional Testing

o Ensures the software meets user requirements.

o Example: Testing a checkout process in an e-commerce website.

4. Regression Testing

o Ensures new updates do not break existing features.

o Example: Running automated test scripts after adding a new feature.

5. Performance Testing

o Measures speed, scalability, and stability.

o Example: Using JMeter to check how many users a website can handle.

6. Security Testing

o Identifies vulnerabilities.

o Example: Running penetration tests using Burp Suite.
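
As a sketch of the integration-testing idea in item 2 above, the interaction between a shopping cart and a payment gateway can be checked with a mocked gateway; ShoppingCart and its charge call are invented names for this illustration, not a real library's API:

# Hypothetical integration-style test: verify that checkout sends the cart
# total to the payment gateway. unittest.mock stands in for the real gateway.
from unittest.mock import Mock

class ShoppingCart:
    def __init__(self, gateway):
        self.gateway = gateway
        self.items = []

    def add(self, price):
        self.items.append(price)

    def checkout(self):
        total = sum(self.items)
        return self.gateway.charge(total)   # interaction under test

def test_checkout_charges_gateway_with_total():
    gateway = Mock()
    gateway.charge.return_value = "OK"

    cart = ShoppingCart(gateway)
    cart.add(120)
    cart.add(80)

    assert cart.checkout() == "OK"
    gateway.charge.assert_called_once_with(200)   # modules interact as expected

test_checkout_charges_gateway_with_total()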
