Q1. What Is Verification?: Glossary and Technical FAQs
As for changing requirements, in some fast-changing business environments continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them and request them anyway. The changes require redesign of the software and rescheduling of resources, some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too. Bug tracking can itself introduce errors, because keeping track of many changes is complex. Time pressures cause problems as well: scheduling software projects is not easy and often requires a lot of guesswork, and when deadlines loom and the crunch comes, mistakes will be made. Code documentation is tough to maintain, and poorly documented code is tough to modify; the result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code. Developers may get kudos for quickly turning out code, may feel they cannot have job security if everyone can understand the code they write, or may believe that if the code was hard to write, it should be hard to read. Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs, and poorly documented tools can create additional bugs.
Q11. Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication. 1. Requirements are poorly written when they are unclear, incomplete, too general, or not testable; this guarantees problems. 2. The schedule is unrealistic if too much work is crammed into too little time. 3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes. 4. It is extremely common for new features to be added after development is underway. 5. Miscommunication means either that the developers don't know what is needed or that customers have unrealistic expectations, and therefore problems are guaranteed.
Q13. Give me five solutions to problems that occur during software development.
A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication. 1. Ensure the requirements are solid: clear, complete, detailed, cohesive, attainable and testable. All players should agree to the requirements. Use prototypes to help nail down requirements. 2. Have realistic schedules. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out. 3. Do adequate testing. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing. 4. Avoid new features. Stick to the initial requirements as much as possible. Once development has begun, be prepared to defend the design against changes and additions, and be prepared to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on. 5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools and change management tools. Ensure documentation is available and up-to-date. Use electronic documentation, not paper. Promote teamwork and cooperation.
Good test engineers have a "test to break" attitude. They take the point of view of the customer, have a strong desire for quality and pay attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful: it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view and reduces the learning curve in automated test tool programming.
increase productivity; able to promote cooperation between Software and Test/QA Engineers, have the people skills needed to promote improvements in QA processes, have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people; as well as able to run meetings and keep them focused.
Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.
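For instance, here is a minimal sketch, in Python, of the kind of test case that might be drafted early in the cycle. The requirement ("an account is locked after three failed login attempts") and the Account stub are invented for illustration only; the point is that writing the assertions forces questions the requirement may leave unanswered.

# A minimal sketch (not from this FAQ) of early test-case design exposing
# requirement gaps. Account is a stub standing in for the real application.
class Account:
    def __init__(self, password):
        self.password = password
        self.failed_attempts = 0
        self.locked = False

    def login(self, password):
        if self.locked or password != self.password:
            self.failed_attempts += 1
            self.locked = self.failed_attempts >= 3
            return False
        return True

def test_account_locks_after_three_failed_logins():
    account = Account(password="secret")
    for _ in range(3):
        assert account.login("wrong") is False
    assert account.locked is True
    # The next question the requirement never answers: should a correct
    # password work once the account is locked? Spotting that gap during
    # design is exactly the benefit of preparing test cases early.

test_account_locks_after_three_failed_logins()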
Q27. What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the considerations listed under "What if there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc" testing, or write up a limited test plan based on the risk analysis.
Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes; then let management or the customers decide whether the changes are warranted (after all, that's their job). Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes. Design some flexibility into automated test scripts (one way to do this is sketched below). Focus initial automated testing on the aspects of the application that are most likely to remain unchanged. Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs. Design some flexibility into test cases; this is not easily done, and the best bet is to minimize the detail in the test cases or set up only higher-level, generic test plans. Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.
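As a rough illustration of the "flexibility in automated test scripts" point, the Python sketch below keeps test data and expected results in a table rather than hard-coding them in the script, so a requirements change usually means editing data rather than code. The compute_discount function and the discount rule are hypothetical stand-ins; in practice the table would likely live in a CSV file or spreadsheet.

# A minimal data-driven test script sketch (all names and rules invented).
def compute_discount(order_total):
    # Stand-in for the real application logic under test.
    return round(0.10 * order_total, 2) if order_total >= 100 else 0.0

test_table = [
    {"order_total": 50.0,  "expected_discount": 0.0},
    {"order_total": 100.0, "expected_discount": 10.0},
    {"order_total": 250.0, "expected_discount": 25.0},
]

failures = [case for case in test_table
            if compute_discount(case["order_total"]) != case["expected_discount"]]
print("failed cases:", failures)   # an empty list means every case passed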
Q29. What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine whether an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If it is not removed, design information will be needed to determine added testing or regression-testing needs. Management should be made aware of any significant added risks resulting from the unexpected functionality. If the functionality affects only minor areas, such as small improvements in the user interface, it may not be a significant risk.
Q31. What if the organization is growing so fast that fixed QA processes are impossible?
A: This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than to hire good people (i.e. hire Rob Davis), ruthlessly prioritize quality issues and maintain focus on the customer. Everyone in the organization should be clear on what quality means to the customer.
Q33. Why do you recommend that we test during the design phase?
A: Because testing during the design phase can prevent defects later on. We recommend verifying three things... 1. Verify the design is good: efficient, compact, testable and maintainable. 2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, the starting state of each module and how to guarantee the state of each module). 3. Verify the design incorporates enough memory and I/O devices, and a quick enough runtime for the final product.
regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
team also works with the client/customer/project manager to develop the acceptance criteria.
Provide documentation required by the FDA, FAA, other regulatory agencies and your customers; Save money by discovering defects early in the design process, before failures occur in production or in the field; Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.
A: One software testing methodology is the use of a three-step process of: 1. Creating a test strategy; 2. Creating a test plan/design; and 3. Executing tests.
This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and in ongoing maintenance of his customers' applications.
Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases. It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.

Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios, and include the specific data that will be used for testing the process or transaction. Test procedures or scripts may cover multiple test scenarios. Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope (a minimal example is sketched below).

Test data is captured and base-lined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment. Some output data is also base-lined for future comparison. Base-lined data is used to support future application maintenance via regression testing.

A pretest meeting is held to assess the readiness of the application, the environment and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
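The traceability idea can be as simple as a table mapping requirement identifiers to the test cases that cover them. The sketch below uses invented REQ/TC identifiers purely for illustration; it also shows how such a table makes uncovered requirements easy to spot.

# A minimal requirements-to-test traceability matrix sketch (identifiers invented).
traceability = {
    "REQ-001": ["TC-101", "TC-102"],   # requirement covered by two test cases
    "REQ-002": ["TC-201"],
    "REQ-003": [],                     # flagged: no test case yet
}

untested = [req for req, cases in traceability.items() if not cases]
print("Requirements with no test coverage:", untested)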
Inputs for this process:
- Approved Test Strategy Document.
- Test tools, or automated test tools, if applicable.
- Previously developed scripts, if applicable.
- Test documentation problems uncovered as a result of testing.
- A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.

Outputs for this process:
- Approved documents of test scenarios, test cases, test conditions and test data.
- Reports of software design issues, given to software developers for correction.
The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies and anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.

A pass/fail criterion is used to determine the severity of a problem, and results are recorded in a test summary report (a minimal sketch of such a result record appears after the input and output lists below). The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.

After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is migrated to the production environment only after the Project Manager's formal acceptance. The test team reviews test document problems identified during testing and updates documents where appropriate.

Inputs for this process:
- Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
- Test tools, including automated test tools, if applicable.
- Developed scripts.
- Changes to the design, i.e. Change Request Documents.
- Test data.
- Availability of the test team and project team.
- General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
- Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
- Test Readiness Document.
- Document updates.

Outputs for this process:
- Log and summary of the test results, usually part of the Test Report. This needs to be approved and signed off with revised testing deliverables.
- Changes to the code, also known as test fixes.
- Test document problems uncovered as a result of testing, e.g. Requirements Document and Design Document problems.
- Reports on software design issues, given to software developers for correction, e.g. bug reports on code issues.
- Formal record of test incidents, usually part of problem tracking.
- Base-lined package, also known as tested source and object code, ready for migration to the next level.
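As a rough sketch of the kind of result record described above (actual versus expected outcome, pass/fail, and a severity assigned per the customer's risk assessment), the Python fragment below uses invented field names and test identifiers; it is not a prescribed format.

# A minimal test-result record sketch feeding a test summary report.
from dataclasses import dataclass

@dataclass
class TestResult:
    test_id: str
    expected: str
    actual: str
    severity: str = "none"      # e.g. "critical", "major", "minor" when failed

    @property
    def passed(self) -> bool:
        return self.expected == self.actual

results = [
    TestResult("TC-101", expected="login accepted", actual="login accepted"),
    TestResult("TC-102", expected="account locked", actual="error 500", severity="critical"),
]
summary = {"passed": sum(r.passed for r in results),
           "failed": sum(not r.passed for r in results)}
print(summary)   # counts that would go into the test summary report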
Q79. What is the difference between performance testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.
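As a rough illustration of that distinction, the Python sketch below drives a hypothetical endpoint with a fixed number of concurrent threads. At an expected level of concurrency every request should succeed (load testing); raising the thread count until errors become the expected result moves into stress-testing territory. The URL and user counts are made up.

# A minimal load-versus-stress sketch (endpoint and numbers are hypothetical).
import concurrent.futures
import urllib.request

URL = "http://example.test/health"        # hypothetical endpoint under test

def hit(url):
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status

def run_load(concurrent_users):
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        statuses = list(pool.map(hit, [URL] * concurrent_users))
    return statuses.count(200), len(statuses)

# ok, total = run_load(50)      # load test: expect ok == total
# ok, total = run_load(5000)    # stress test: errors are the expected result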
Q80. What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.
Q81. What is the difference between volume testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is gray area in between stress testing and load testing.
Q102. What is the difference between a software fault and a software failure?
A: A software failure occurs when the software does not do what the user expects. A software fault, on the other hand, is a hidden programming error. A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, when the software is ported to a different hardware platform, when it is ported to a different compiler, or when the software gets extended.
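A small, made-up Python illustration of the distinction: the fault (no guard against an empty input) is always present in the code, but a failure is only observed once the faulty line is executed under the triggering condition.

# A minimal fault-versus-failure sketch (function and data invented).
def average_response_time(samples):
    return sum(samples) / len(samples)   # the fault: division by zero if samples is empty

print(average_response_time([120, 80, 100]))   # 100.0; the fault is present, no failure observed
try:
    average_response_time([])                   # the triggering condition is finally met
except ZeroDivisionError as failure:
    print("observed failure:", failure)         # the hidden fault becomes a visible failure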
Data Complexity Metric (DV). Data Complexity Metric quantifies the complexity of a module's structure as it relates to data-related variables. It is the number of independent paths through data logic and, therefore, a measure of the testing effort with respect to data-related variables.
Tested Data Complexity Metric (TDV). Tested Data Complexity Metric quantifies the complexity of a module's structure as it relates to data-related variables. It is the number of independent paths through data logic that have been tested.
Data Reference Metric (DR). Data Reference Metric measures references to data-related variables independently of control flow. It is the total number of times that data-related variables are used in a module.
Tested Data Reference Metric (TDR). Tested Data Reference Metric is the total number of tested references to data-related variables.
Maintenance Severity Metric (maint_severity). Maintenance Severity Metric measures how difficult it is to maintain a module.
Data Reference Severity Metric (DR_severity). Data Reference Severity Metric measures the level of data intensity within a module. It is an indicator of high levels of data-related code; a module is data intense if it contains a large number of data-related variables.
Data Complexity Severity Metric (DV_severity). Data Complexity Severity Metric measures the level of data density within a module. It is an indicator of high levels of data logic in test paths; a module is data dense if it contains data-related variables in a large proportion of its structures.
Global Data Severity Metric (gdv_severity). Global Data Severity Metric measures the potential impact of testing data-related basis paths across modules. It is based on global data test paths.
McCabe Object-Oriented Software Metrics; Encapsulation:
Percent Public Data (PCTPUB). PCTPUB is the percentage of public and protected data within a class.
Access to Public Data (PUBDATA). PUBDATA indicates the number of accesses to public and protected data.
McCabe Object-Oriented Software Metrics; Polymorphism:
Percent of Unoverloaded Calls (PCTCALL). PCTCALL is the number of non-overloaded calls in a system.
Number of Roots (ROOTCNT). ROOTCNT is the total number of class hierarchy roots within a program.
Fan-in (FANIN). FANIN is the number of classes from which a class is derived.
McCabe Object-Oriented Software Metrics; Quality:
Maximum v(G) (MAXV). MAXV is the maximum cyclomatic complexity value for any single method within a class.
Maximum ev(G) (MAXEV). MAXEV is the maximum essential complexity value for any single method within a class.
Hierarchy Quality (QUAL). QUAL counts the number of classes within a system that are dependent upon their descendants.
Depth (DEPTH). Depth indicates at what level a class is located within its class hierarchy.
Lack of Cohesion of Methods (LOCM). LOCM is a measure of how the methods of a class interact with the data in the class.
Number of Children (NOC). NOC is the number of classes that are derived directly from a specified class.
Response For a Class (RFC). RFC is a count of the methods implemented within a class plus the number of methods accessible to an object of this class type due to inheritance.
Weighted Methods Per Class (WMC). WMC is a count of the methods implemented within a class.
Halstead Software Metrics (a small worked example of the basic formulas follows this list):
Program Length. Program length is the total number of operator occurrences plus the total number of operand occurrences.
Program Volume. Program volume is the minimum number of bits required for coding the program.
Program Level and Program Difficulty. Program level and program difficulty measure how easily a program is comprehended.
Intelligent Content. Intelligent content shows the complexity of a given algorithm independent of the language used to express the algorithm.
Programming Effort. Programming effort is the estimated mental effort required to develop a program.
Error Estimate. Error estimate calculates the number of errors in a program.
Programming Time. Programming time is the estimated amount of time to implement an algorithm.
Line Count Software Metrics:
Lines of Code
Lines of Comment
Lines of Mixed Code and Comments
Lines Left Blank
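As a small worked example of the basic Halstead formulas named above (program length, volume, difficulty and effort), the Python sketch below applies them to invented operator and operand counts for a hypothetical small function. The counts are illustrative; only the formulas follow the standard Halstead definitions.

# A minimal Halstead metrics calculation sketch (counts are made up).
import math

n1, n2 = 8, 10      # distinct operators, distinct operands
N1, N2 = 25, 30     # total operator occurrences, total operand occurrences

length = N1 + N2                         # program length
vocabulary = n1 + n2
volume = length * math.log2(vocabulary)  # minimum bits required to encode the program
difficulty = (n1 / 2) * (N2 / n2)        # program difficulty
effort = difficulty * volume             # estimated mental effort to develop the program

print(f"length={length}, volume={volume:.1f}, difficulty={difficulty:.1f}, effort={effort:.1f}")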
Q119. How can I learn to use WinRunner, without any outside help?
A: I suggest you read all you can, and that includes reading product description pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on. Then the next step is getting some hands-on experience on how to use WinRunner. If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use WinRunner, with little or no outside help.
Q120. To learn to use WinRunner, should I sign up for a course at a nearby educational institution?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools. In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutions. Classroom education, especially non-degree courses at local community colleges, tends to be cheap.
Q121. I don't have a lot of money. How can I become a good tester with little or no cost to me?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools.
A: Software Configuration Management (SCM) is the control, and the recording, of changes that are made to the software and documentation throughout the software development life cycle (SDLC). SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches and changes made to them, and to keep track of who makes the changes. Rob Davis has experience with a full range of CM tools and concepts, and can easily adapt to an organization's software tool and process needs.
Q127. Which of these roles are the best and most popular?
A: As a yardstick of popularity, if we count the number of applicants and resumes, Tester roles tend to be the most popular. Less popular are the roles of System Administrator, Test/QA Team Lead, and Test/QA Manager. The "best" job is the job that makes YOU happy; the best job is the one that works for YOU, using the skills, resources, and talents YOU have. To find the best job, you need to experiment and "play" different roles. Persistence, combined with experimentation, will lead to success.
component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.
Q138. What is the difference between user documentation and user manual?
A: When a distinction is made between those who operate a computer system and those who use it for its intended purpose, separate user documentation and user manuals are created: operators get user documentation, and users get user manuals.
of the names and values of variables accessed and changed during the execution of a computer program.