Manual Testing FAQs: What Is Software Quality Assurance?
Why does software have bugs?
• Miscommunication or no communication
• Software complexity
• Programming errors
What is Verification?
“Verification” checks whether we are building the system right. Verification
typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings.
What is Validation?
“Validation” checks whether we are building the right system. Validation typically
involves actual testing and takes place after verifications are completed.
What is a ‘walkthrough’?
A ‘walkthrough’ is an informal meeting for evaluation or informational purposes.
Little or no preparation is usually required.
• Black box testing - not based on any knowledge of internal design or code.
Tests are based on requirements and functionality.
• Unit testing - the most ‘micro’ scale of testing; tests particular functions or
code modules. Typically done by the programmer and not by testers, as it requires
detailed knowledge of the internal program design and code. Not always easily done
unless the application has a well-designed architecture with tight code; it may require
developing test driver modules or test harnesses. (A minimal sketch follows this list.)
• System testing - Black box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
• End-to-end testing - similar to system testing; the ‘macro’ end of the test
scale; involves testing of a complete application environment in a situation that
mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if
appropriate.
• Performance testing - term often used interchangeably with ’stress’ and ‘load’
testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in
requirements documentation or QA or Test Plans.
• Security testing - testing how well the system protects against unauthorized
internal or external access, willful damage, etc; may require sophisticated testing
techniques.
• Mutation testing - a method for determining if a set of test data or test cases
is useful, by deliberately introducing various code changes (‘bugs’) and retesting with
the original test data/cases to determine if the ‘bugs’ are detected. Proper
implementation requires large computational resources. (The sketch after this list
also illustrates the mutation idea.)
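
To make the unit-testing and mutation-testing bullets above concrete, here is a
minimal sketch in Python. The ‘apply_discount’ function and its tests are
hypothetical, invented only for illustration:

    import unittest

    def apply_discount(price, percent):
        """Return price reduced by the given percentage (hypothetical example code)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    class ApplyDiscountTest(unittest.TestCase):
        """Unit tests written by the programmer against the function's contract."""

        def test_typical_discount(self):
            self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(80.0, 0), 80.0)

        def test_out_of_range_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(50.0, 150)

    if __name__ == "__main__":
        unittest.main()

For mutation testing, a tool (or a developer working by hand) would deliberately
plant a bug - for example, changing ‘1 - percent / 100’ to ‘1 + percent / 100’ - and
rerun the suite with the original test cases; if no test fails, the test set is too
weak to detect that class of error.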
What is SEI? CMM? ISO? IEEE? ANSI? Will it help?
• SEI = ‘Software Engineering Institute’ at Carnegie Mellon University; initiated
by the U.S. Defense Department to help improve software development processes.
• CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a model of 5
levels of organizational ‘maturity’ - Level 1 (Initial), Level 2 (Repeatable), Level 3
(Defined), Level 4 (Managed), and Level 5 (Optimizing) - that determine effectiveness
in delivering quality software. It is geared to large organizations such as large U.S.
Defense Department contractors. However, many of the QA processes involved are
appropriate to any organization, and if reasonably applied can be helpful.
Organizations can receive CMM ratings by undergoing assessments by qualified
auditors. (Perspective on CMM ratings: during 1992-1996, 533 organizations were
assessed. Of those, 62% were rated at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5. The median size of organizations was 100 software
engineering/maintenance personnel; 31% of organizations were U.S. federal
contractors. For those rated at Level 1, the most problematical key process area was
Software Quality Assurance.)
• ISO = ‘International Organization for Standardization’ - the ISO 9001, 9002,
and 9003 standards concern quality systems that are assessed by outside auditors,
and they apply to many kinds of production and manufacturing organizations, not
just software. The most comprehensive is 9001, and this is the one most often used
by software development organizations. It covers documentation, design,
development, production, testing, installation, servicing, and other processes. ISO
9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software
development organizations. The U.S. version of the ISO 9000 series standards is
exactly the same as the international version and is called the ANSI/ASQ Q9000
series; it can be purchased directly from the ASQ (American Society for Quality) or
ANSI. To be ISO 9001 certified, a third-party auditor assesses an organization, and
certification is typically good for about 3 years, after which a complete reassessment
is required. Note that ISO 9000 certification does not necessarily indicate quality
products - it indicates only that documented processes are followed. (Publication of
revised ISO standards was expected in late 2000; see http://www.iso.ch/ for the
latest information.)
• IEEE = ‘Institute of Electrical and Electronics Engineers’ - among other things,
creates standards such as ‘IEEE Standard for Software Test Documentation’
(IEEE/ANSI Standard 829), ‘IEEE Standard for Software Unit Testing’ (IEEE/ANSI
Standard 1008), ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI
Standard 730), and others.
• ANSI = ‘American National Standards Institute’, the primary industrial
standards body in the U.S.; publishes some software-related standards in
conjunction with the IEEE and ASQ (American Society for Quality).
What is the ‘software life cycle’?
The life cycle begins when an application is first conceived and ends when it is no
longer in use. It includes aspects such as initial concept, requirements analysis,
functional design, internal design, documentation planning, test planning, coding,
document preparation, integration, testing, maintenance, updates, retesting,
phase-out, and other aspects.
What makes a good test engineer?
A good test engineer has a ‘test to break’ attitude, an ability to take the point of view
of the customer, a strong desire for quality, and an attention to detail. Tact and
diplomacy are useful in maintaining a cooperative relationship with developers, and
an ability to communicate with both technical (developers) and non-technical
(customers, management) people is useful. Previous software development
experience can be helpful as it provides a deeper understanding of the software
development process, gives the tester an appreciation for the developers’ point of
view, and reduces the learning curve in automated test tool programming. Judgement
skills are needed to assess high-risk areas of an application on which to focus testing
efforts when time is limited.
What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, a QA
engineer must be able to understand the entire software development process and
how it can fit into the business approach and goals of the organization.
Communication skills and the ability to understand various sides of issues are
important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well as
to see ‘what’s missing’ is important for inspections and reviews.
What’s a ‘test plan’?
A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test plan
is a useful way to think through the efforts needed to validate the acceptability of a
software product. The completed document will help people outside the test group
understand the ‘why’ and ‘how’ of product validation. It should be thorough enough
to be useful, but not so thorough that no one outside the test group will read it. The
items included depend on the particular project.
What if the software is so buggy it can’t really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting
whatever bugs or blocking-type problems initially show up, with the focus being on
critical bugs. Since this type of problem can severely affect schedules and indicates
deeper problems in the software development process (such as insufficient unit
testing or integration testing, poor design, improper build or release procedures,
etc.), managers should be notified and provided with some documentation as
evidence of the problem.
What if there isn’t enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it’s rarely
possible to test every possible aspect of an application, every possible combination of
events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgement skills,
common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include the following (a toy prioritization sketch follows the list):
• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
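
One informal way to turn the answers to these questions into a testing priority list is
a simple likelihood-times-impact score, as in the Python sketch below. The
application areas and the ratings are purely illustrative assumptions, not
recommendations:

    # Rank candidate test areas by risk = likelihood x impact (1-5 scales).
    # The areas and ratings below are hypothetical; real values come from the
    # team's answers to the risk-analysis questions above.
    areas = {
        "payment processing": {"likelihood": 3, "impact": 5},  # complex, large financial impact
        "login and security": {"likelihood": 2, "impact": 5},  # bad publicity if broken
        "report formatting":  {"likelihood": 2, "impact": 2},
        "help screens":       {"likelihood": 1, "impact": 1},
    }

    ranked = sorted(areas.items(),
                    key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                    reverse=True)
    for name, rating in ranked:
        print(f"{name:20s} risk score = {rating['likelihood'] * rating['impact']}")

Areas with the highest scores get tested first when time runs short.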
How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and ‘browser’
clients. Consideration should be given to the interactions between HTML pages,
TCP/IP communications, Internet connections, firewalls, applications that run in web
pages (such as applets, JavaScript, plug-in applications), and applications that run
on the server side (such as CGI scripts, database interfaces, logging applications,
dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers
and browsers, various versions of each, small but sometimes significant differences
between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can
become a major ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time),
and what kind of performance is required under such loads (such as web server
response time, database query response times)? What kinds of tools will be needed
for performance testing (such as web load testing tools, other tools already in house
that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What
kind of connection speeds will they be using? Are they intra-organization (thus with
likely high connection speeds and similar browsers) or Internet-wide (thus with a
wide variety of connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How
much?
• What kinds of security (firewalls, encryption, passwords, etc.) will be required,
and what is it expected to do? How can it be tested?
• How reliable are the site’s Internet connections required to be? And how does
that affect backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site’s content,
and what are the requirements for maintaining, tracking, and controlling page
content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations
will be allowed for targeted browsers?
• Will there be any standards or requirements for page appearance and/or
graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often? (A
minimal link-checker sketch follows this section.)
• Can testing be done on the production system, or will a separate test system
be required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world Internet ‘traffic congestion’ problems to be
accounted for in testing?
• How extensive or customized are the server logging and reporting
requirements? Are they considered an integral part of the system and do they
require testing?
• How are CGI programs, applets, JavaScripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup
‘comp.security.announce’ and links concerning web site security in the ‘Other
Resources’ section. Some usability guidelines to consider - these are subjective and
may or may not apply to a given situation (note: more information on usability
testing issues can be found in articles about web site usability in the ‘Other
Resources’ section):
• Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site,
so that it’s clear to the user that they’re still within the site.
• Pages should be as browser-independent as possible, or pages should be
provided or generated based on the browser type.
• All pages should have links external to the page; there should be no dead-end
pages.
• The page owner, revision date, and a link to a contact person or organization
should be included on each page.
Many new web site test tools are appearing, and more than 180 of them are listed in
the ‘Web Test Tools’ section.
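
As a small illustration of the link-validation question above, here is a sketch of an
internal link checker that uses only the Python standard library. The starting URL is
a placeholder, and a production checker would also need politeness delays,
robots.txt handling, and a more robust HTML parser:

    # Minimal link-checker sketch: fetch one page, extract its <a href> links,
    # and report the HTTP status of each. Purely illustrative.
    from html.parser import HTMLParser
    from urllib.error import HTTPError, URLError
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(start_url):
        collector = LinkCollector()
        with urlopen(start_url, timeout=10) as page:
            collector.feed(page.read().decode("utf-8", errors="replace"))
        for href in collector.links:
            url = urljoin(start_url, href)
            if urlparse(url).scheme not in ("http", "https"):
                continue  # skip mailto:, javascript:, etc.
            try:
                with urlopen(url, timeout=10) as response:
                    status = response.status
            except HTTPError as err:
                status = err.code          # e.g. 404 for a dead link
            except URLError as err:
                status = f"unreachable ({err.reason})"
            print(status, url)

    if __name__ == "__main__":
        check_links("http://example.com/")  # placeholder start page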
What are the basic steps of the testing process?
• Test Plan
• Design Test Cases (a toy sketch covering this and the following three steps
appears after the list)
• Execute Tests
• Evaluate Results
• Document Test Results
• Causal Analysis / Preparation of Validation Reports
• Regression Testing / Follow-up on reported bugs
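
As a toy illustration of the design/execute/evaluate/document steps above, the sketch
below runs a table of test cases against a stand-in function and prints a pass/fail
record for each. The function under test and the cases are hypothetical:

    # Each test case: (id, inputs, expected result). Designing the table is the
    # "Design Test Cases" step; the loop is "Execute Tests"; the comparison is
    # "Evaluate Results"; the printed log is "Document Test Results".
    def add(a, b):  # stand-in for the component under test
        return a + b

    test_cases = [
        ("TC-01", (2, 3), 5),
        ("TC-02", (-1, 1), 0),
        ("TC-03", (0, 0), 0),
    ]

    for case_id, args, expected in test_cases:
        actual = add(*args)
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"{case_id}: {verdict} (expected {expected}, got {actual})")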
Structural Testing
Test case selection that is based on an analysis of the internal structure of the
component - testing by looking only at the code. Sometimes also called ‘Code Based
Testing’ or ‘white box’ testing. Obviously you need to be a programmer and you need
to have the source code to do this. (A minimal sketch follows.)
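
A minimal sketch of the idea: the test inputs below are chosen by reading the code so
that every branch of a hypothetical function is exercised at least once:

    # Structural (white box) test selection: one input per branch, derived
    # from the code's structure rather than from external requirements.
    def classify_temperature(celsius):
        if celsius < 0:
            return "freezing"
        elif celsius < 25:
            return "moderate"
        else:
            return "hot"

    assert classify_temperature(-5) == "freezing"   # covers the celsius < 0 branch
    assert classify_temperature(10) == "moderate"   # covers the celsius < 25 branch
    assert classify_temperature(30) == "hot"        # covers the else branch
    print("all branches exercised")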
Test Case
Operational Testing
Validation
Verification
The process of evaluating a system or component to determine whether the products
of the given development phase satisfy the conditions imposed at the start of that
phase.
Control Flow
CAST (Computer Aided Software Testing)
Metrics