
Fundamentals of Software Testing

Independent Test Engineering and Monitoring Solutions
Contents

 Why Do Testing
 What Is Testing
 What To Test
 Defects
 Why Do Defects Arise
 Effects of a Defect
 Causes of Defects
 Definition
 Principles
 Testing and Quality
 How Much Testing Is Enough
 Evolution of Testing @ Microsoft
 Software Testing... Then
 Software Testing... Now
 Food for Thought
 Attributes of a Tester
 Testing at Capgemini
 Focus on Testing
 Testing Roles in CG

Client or Topic | Financial Services


In collaboration with All work described was performed by Capgemini or a Capgemini affiliate
Partner logo Insert "Title, Author, Date" © 2007 Capgemini - All rights reserved 1
Why Do Testing

Software testing is focused on finding defects in the final product.


Testing aims to avoid the effects of defects, which may originate in human mistakes or in environmental conditions.

Testing is necessary:
To avoid the effects of defects.
To avoid failures of the software.

It is human nature to make mistakes, but certain conditions make people more likely to make them, e.g.:

Time pressure (deadlines)
Complexity of the requirement or technology
Lack of experience or skill
Lack of information
Frequent changes

What Is Testing?

Software testing is a process of verifying and validating that a software application or program
 Meets the business and technical requirements that guided its design and development, and
 Works as expected.
 Testing also identifies important defects, flaws, or errors in the application code that must be fixed.

Software testing has three main purposes: verification, validation, and defect finding.
♦ The verification process confirms that the software meets its technical specifications. A
“specification” is a description of a function in terms of a measurable output value given a
specific input value under specific preconditions.
♦ The validation process confirms that the software meets the business requirements. A simple
example of a business requirement is “After choosing a branch office name, information about
the branch’s customer account managers will appear in a new window. The window will present
manager identification and summary information about each manager’s customer base: <list of
data elements>.” Other requirements provide details on how the data will be summarized,
formatted and displayed.
♦ A defect is a variance between the expected and actual result. The defect’s ultimate source may
be traced to a fault introduced in the specification, design, or development (coding) phases.
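As a small sketch of verification and defect detection, a test compares the actual output against the expected result derived from the specification. The branch-summary function and values below are purely illustrative, not from any real system:

```python
# Hypothetical example: a defect is a variance between expected and actual.

def summarize_customer_base(accounts):
    """Return account count and total balance for one manager (illustrative)."""
    return {"count": len(accounts), "total": sum(accounts)}

expected = {"count": 3, "total": 600}            # derived from the specification
actual = summarize_customer_base([100, 200, 300])

# Verification step: a measurable output value for a specific input
# under specific preconditions, as in the definition above.
assert actual == expected, f"Defect: expected {expected}, got {actual}"
print("output matches the specification")
```

If the assertion failed, the variance would be reported as a defect and traced back to a fault introduced in the specification, design, or coding phase.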

What To Test
• Test what is important.
• Focus on the core functionalities.
• Concentrate on the application’s capabilities in common usage situations before going on to unlikely situations.
Testing can involve some or all of the following factors.
♦ Business requirements
♦ Functional design requirements
♦ Technical design requirements
♦ Regulatory requirements
♦ Programmer code
♦ Systems administration standards and restrictions
♦ Corporate standards
♦ Professional or trade association best practices
♦ Hardware configuration
♦ Cultural issues and language differences

Defects

Why Do Defects Arise

Causes of Software Defects

Minimal or no proper documentation of business requirements

Insufficient time window for development

Lack of domain knowledge

Programming language constraints

Effects of a Defect

Leads to injury or death
Leads to loss of time
Leads to loss of money
Leads to a bad reputation
Environmental factors can also cause defects, e.g.:
Pollution (ex: mobile)
Radiation (ex: electromagnetic radiation)

Reference: ISTQB Foundation Syllabus

Definition of a Defect

1.
Human beings make errors (mistakes),
which produce defects (faults, bugs) in the code;
when the faulty code is executed, it may cause a failure.
2.
Not all defects lead to failures.
Error: a deviation detected at the same level/stage at which it was introduced.
Bug/Fault/Defect: a deviation identified by another person at a different stage.
Failure: the client or end user encounters the defect in operation.

Reference: ISTQB Foundation Syllabus
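The error → defect → failure chain above can be sketched in a few lines of code (a deliberately contrived example, not from the syllabus):

```python
# The programmer's error (mistake): typing `>` instead of `>=`.
# That mistake leaves a defect (fault, bug) in the code below.

def is_adult(age):
    return age > 18  # defect: the intended rule is that 18-year-olds ARE adults

# The defect causes a failure only when the faulty condition is exercised:
print(is_adult(25))  # True  - defect present, but no failure for this input
print(is_adult(18))  # False - failure: the end user sees the wrong result
```

This also illustrates point 2: the defect sits silently in the code, and only some inputs turn it into a failure.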

Principles

Principles of Testing

Principle 1 – Testing shows presence of defects


Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even if no
defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible


Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial
cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing
efforts.
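A quick back-of-the-envelope calculation shows why exhaustive testing is infeasible. The field sizes below are invented for a hypothetical payment screen:

```python
# Input combinations multiply, so the total explodes quickly.
field_values = {
    "country":  200,     # distinct country codes (assumed)
    "currency": 150,     # distinct currency codes (assumed)
    "amount":   10**9,   # representable amount values (assumed)
    "date":     36500,   # dates across a 100-year range
}

total = 1
for n in field_values.values():
    total *= n

print(f"combinations for one screen: {total:,}")
# ~1.1e18 combinations: even at a million tests per second, running them all
# would take tens of thousands of years - hence risk-based test selection.
```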

Principle 3 – Early testing


Testing activities should start as early as possible in the software or system development life
cycle, and should be focused on defined objectives.

Principle 4 – Defect clustering


A small number of modules contain most of the defects discovered during pre-release testing, or
are responsible for the most operational failures.

Principle 5 – Pesticide paradox


If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.

Principle 6 – Testing is context dependent


Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy


Finding and fixing defects does not help if the system built is unusable and does not fulfill the
users’ needs and expectations.

Principles (contd.)

Test planning and control


Test planning is the activity of verifying the mission of testing, defining the objectives of testing
and the specification of test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting
the status, including deviations from the plan. It involves taking actions necessary to meet the
mission and objectives of the project. In order to control testing, it should be monitored
throughout the project. Test planning takes into account the feedback from monitoring and control
activities.
Test planning and control tasks are defined in Chapter 5.

Test analysis and design


Test analysis and design is the activity where general testing objectives are transformed into
tangible test conditions and test cases.
Test analysis and design has the following major tasks:
Reviewing the test basis (such as requirements, architecture, design, interfaces).
Evaluating testability of the test basis and test objects.
Identifying and prioritizing test conditions based on analysis of test items, the specification,
behaviour and structure.
Designing and prioritizing test cases.
Identifying necessary test data to support the test conditions and test cases.
Designing the test environment set-up and identifying any required infrastructure and tools.

Principles (contd.)

Test implementation and execution


Test implementation and execution is the activity where test procedures or scripts are specified
by combining the test cases in a particular order and including any other information needed for
test execution, the environment is set up, and the tests are run.
Test implementation and execution has the following major tasks:
Developing, implementing and prioritizing test cases.
Developing and prioritizing test procedures, creating test data and, optionally, preparing test
harnesses and writing automated test scripts.
Creating test suites from the test procedures for efficient test execution.
Verifying that the test environment has been set up correctly.
Executing test procedures either manually or by using test execution tools, according to the
planned sequence.
Logging the outcome of test execution and recording the identities and versions of the software
under test, test tools and testware.
Comparing actual results with expected results.
Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a
defect in the code, in specified test data, in the test document, or a mistake in the way the test
was executed).
Repeating test activities as a result of action taken for each discrepancy.
For example, re-execution of a test that previously failed in order to confirm a fix (confirmation
testing), execution of a corrected test, and/or execution of tests in order to ensure that defects
have not been introduced in unchanged areas of the software or that defect fixing did not uncover
other defects (regression testing).
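The execution tasks above - running procedures in the planned sequence, comparing actual with expected results, logging outcomes, and reporting discrepancies as incidents - can be sketched minimally. All names and data here are illustrative:

```python
def add(a, b):
    return a + b  # the (trivial) software under test

test_suite = [
    # (test id, inputs, expected result)
    ("TC-001", (2, 3), 5),
    ("TC-002", (-1, 1), 0),
    ("TC-003", (0, 0), 0),
]

incidents = []
for test_id, args, expected in test_suite:           # planned sequence
    actual = add(*args)
    outcome = "PASS" if actual == expected else "FAIL"
    print(f"{test_id}: {outcome}")                   # log the outcome
    if outcome == "FAIL":
        incidents.append((test_id, expected, actual))  # report as incident

print(f"{len(incidents)} incident(s) to analyze")
```

A real test management tool would also record software versions, testware identities, and environment details alongside each outcome.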

Principles (contd.)

Evaluating exit criteria and reporting


Evaluating exit criteria is the activity where test execution is assessed against the defined
objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
Checking test logs against the exit criteria specified in test planning.
Assessing if more tests are needed or if the exit criteria specified should be changed.
Writing a test summary report for stakeholders.
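Checking test logs against exit criteria can be as simple as the following sketch; the thresholds and counts are invented for illustration:

```python
# Hypothetical test log at the end of a test level.
test_log = {"passed": 96, "failed": 2, "blocked": 2}
open_severe_defects = 0

executed = sum(test_log.values())
pass_rate = test_log["passed"] / executed

# Example exit criteria from test planning: >= 95% pass rate
# and no open severe defects.
exit_met = pass_rate >= 0.95 and open_severe_defects == 0

print(f"pass rate: {pass_rate:.0%}, exit criteria met: {exit_met}")
# If not met, either more tests are needed or the criteria must be revisited,
# and the result feeds into the test summary report for stakeholders.
```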

Test closure activities


Test closure activities collect data from completed test activities to consolidate experience,
testware, facts and numbers. They occur, for example, when a software system is released, a test
project is completed (or cancelled), a milestone has been achieved, or a maintenance release has
been completed.
Test closure activities include the following major tasks:
Checking which planned deliverables have been delivered, the closure of incident reports or raising
of change records for any that remain open, and the documentation of the acceptance of the
system.
Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
Handover of testware to the maintenance organization.
Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
Levels of test independence:
Tests designed by the person(s) who wrote the software under test (low level of independence).
Tests designed by another person(s) (e.g. from the development team).
Tests designed by a person(s) from a different organizational group (e.g. an independent test
team) or test specialists (e.g. usability or performance test specialists).
Tests designed by a person(s) from a different organization or company (i.e. outsourcing or
certification by an external body).

Testing and Quality

Testing and Quality
Testing helps to measure the quality of software in terms of the number of defects found.
Tests also reveal important information regarding non-functional attributes such as reliability,
security, performance, etc.
Quality needs to be validated and verified.
The delivered system must meet its specification. Checking that we have the right specification
('is this the right specification?') is validation; checking that the system conforms to that
specification ('is the system correct to specification?') is verification.
The customer, project team and other stakeholders should agree on and set expectations.

The customer's definition of quality should be understood.

What we as software developers and testers may see as quality - that the software meets its defined
specification, is technically excellent and has few bugs in it - may not provide a quality solution for our
customers.
e.g. if our customers find they have spent more money than they wanted, or that the software
doesn't help them carry out their tasks, they won't be impressed by the technical excellence of
the solution.
These attributes or characteristics can serve as a framework or checklist of areas to consider
for coverage.

How Much Testing is Enough

We have a choice: test everything, test nothing, or test some of the software.

The following points need to be considered:


Define test approach

Assessing and managing risk is one of the most important activities in any project, and a key reason for testing.

Deciding how much testing is enough should take account of the level of risk, including technical and business risks related to
the product and project constraints such as time and budget.

Carry out a risk assessment to decide how much testing to do

Vary the testing effort based on the level of risk in different areas

Additionally, testing should provide sufficient information to stakeholders to make informed decisions about the release of the software
or system we're testing, for the next development step or handover to customers.
The effort put into quality assurance and testing activities needs to be tailored to the risks and costs associated with the project.
Because budget, time and testing resources are limited, we need to decide how to focus our testing, based on the risks.

Evolution of Testing @ Microsoft

Approach:
 Up to late 1980s - Testing by developers, ad-hoc, some help from outside
contractors
 Since 1990/91 - Testing is a separate discipline, comparable to development

Milestones:
1984 - “Software test Engineers” - a new designation introduced
1984 - Separate testing group
1986 - Director of Testing (Dave Morre) of part time
1993 - Director of Testing ( Roger Sherman) full time
1995 - One tester for every developer

Software testing - Then ...

 Ad-hoc
 Need-driven
 Ignored completely
 No theoretical basis, i.e. no mathematical models, etc.
 Totally manual
 Testing jobs perceived as low-status compared to other disciplines

Software testing - Now !

 Organized body of knowledge


• Independent Testing teams
• Testing Models
• Testing knowledge Groups

 A specialized engineering discipline in great demand


 Testing industry grew by 750% from 2001 to 2003
and is expected to grow phenomenally
 Commercial tools, mathematical and scientific models available
 IEEE standards and certification programs such as CSTE
 Automation solutions for the entire testing lifecycle
Food for Thought

What do you think should be the attributes of a good test engineer?

Attributes of a good tester

 A good test engineer has a 'test to break' attitude


 Need to take a different view, a different mindset (“What if it isn’t?”,
“What could go wrong?”)
 An ability to understand the point of view of the customer
 A passion for quality and attention to detail
 Notices little things that others miss or ignore (sees the symptom, not just the bug)
 Ability to communicate fault information to both technical (developers) and non-technical (managers and customers) audiences
 Tact and diplomacy for maintaining a cooperative relationship with developers
 Ability to work under the worst time pressure (at the end of the cycle)
 Patience

Testing at CapGemini

Capgemini: Focus on Testing

 Fastest-growing service line in Capgemini

 Team of 300+ test engineers
 Assessed at CMM Level 5 within a year of being established
 Presence in most of the existing Capgemini accounts
 High focus on competency building
 Performs almost all types of testing, including performance monitoring and public website testing
 Offers security testing

Testing Roles in the Testing Team

An effective testing team includes a mixture of members with:


 Testing expertise
 Tools expertise
 Domain/Technology expertise
 Management expertise

Roles within the testing team:


 Test Manager
 Onsite coordinator
 Business Analyst
 Automation Test Designer
 Test Engineer/Test Executor

Testing Roles in Testing Teams

(Career-ladder chart, starting from Entry Level - graphic not reproduced)
Feedback Please!

 Please give feedback on relevant points (e.g. course material is not provided, so that item is not applicable).

 We expect genuine feedback; it will help us improve. Please cite instances where necessary.

Questions?
Thank You!
