SE(BCS601)unit-II
SRS Document
A Software Requirement Specification (SRS), as the name suggests, is a complete specification
and description of the requirements of the software that need to be fulfilled for the successful development
of the software system. These requirements can be functional as well as non-functional, depending upon
the type of requirement. Interaction between the customers and the contractors over the SRS is important
because it helps both sides reach a common understanding of what the system must do.
The important parts of the Software Requirements Specification (SRS) document are:
1. Functional requirements of the system
2. Non-functional requirements of the system, and
3. Goals of implementation
These are explained as follows.
Functional Requirements
The functional requirements part discusses the functionalities required from the system.
1. The system is considered to perform a set of high-level functions Fi. The functional view
of the system is shown in the diagram below.
2. Each function Fi of the system can be considered as a transformation of a set of input data Ii into the
corresponding set of output data Oi.
The user can get some meaningful piece of work done using a high-level function.
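The idea of a function Fi transforming input data Ii into output data Oi can be sketched in code. This is an illustrative example only; the catalogue data and the search_book function are hypothetical:

```python
# Hypothetical illustration: a high-level function Fi viewed as a
# transformation from input data Ii to output data Oi.

def search_book(catalogue, title):
    """F1: transform (catalogue, title) -> list of matching records."""
    return [book for book in catalogue if title.lower() in book["title"].lower()]

catalogue = [
    {"title": "Software Engineering", "author": "Sommerville"},
    {"title": "Clean Code", "author": "Martin"},
]

# Input Ii = (catalogue, "clean"); Output Oi = the matching records
print(search_book(catalogue, "clean"))  # [{'title': 'Clean Code', 'author': 'Martin'}]
```

Each functional requirement in the SRS can be described this way: the inputs it consumes, the transformation it performs, and the outputs it produces.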
Non-functional Requirements
Non-functional requirements deal with the characteristics of the system which cannot be expressed as
functions – such as the maintainability of the system, the portability of the system, the usability of the system, etc.
Non-functional requirements may include:
1. Reliability issues
2. Accuracy of results
3. Human-computer interface issues
4. Constraints on the system implementation, etc.
Goals of Implementation
The goals of implementation part documents some general suggestions regarding development. These
suggestions guide trade-offs among design goals.
1. The goals of implementation section might document issues such as revisions to the
system functionalities that may be needed in the future, new devices to be supported in the
future, reusability issues, etc.
2. These are the things that the developers might keep in mind during development so
that the developed system can meet some aspects that are not required immediately.
What is Requirement Engineering?
Requirements engineering is a broad domain that focuses on being the connector between modeling,
analysis, design, and construction. It is the process that defines, identifies, manages, and develops
requirements in a software engineering design process. This process uses tools, methods, and principles
to describe the system’s behavior and the constraints that come along with it.
Requirements engineering is an essential practice every business must follow in order to build and
release a project successfully, as it is the foundation of planning and implementation.
2. Elicitation
This is the second phase of the requirements analysis process. This phase focuses on gathering the
requirements from the stakeholders. One should be careful in this phase, as the requirements are what
establishes the key purpose of a project. Understanding the kind of requirements needed from the
customer is very crucial for a developer. In this process, mistakes can happen, such as not
implementing the right requirements or forgetting a part altogether. The right people must be involved in
this phase. The following problems can occur in the elicitation phase:
Problem of Scope: The requirements given are of unnecessary detail, ill-defined, or not possible to
implement.
Problem of Understanding: Not having a clear-cut understanding between the developer and
customer when putting out the requirements needed. Sometimes the customer might not know what
they want or the developer might misunderstand one requirement for another.
Problem of Volatility: Requirements changing over time can cause difficulty in leading a project.
It can lead to loss and wastage of resources and time.
3. Elaboration
This is the third phase of the requirements analysis process. This phase builds on the inception and
elicitation phases: it takes the requirements that have been stated and gathered in the first two phases
and refines them, expanding and exploring them further. The main task in this phase is to engage in
modeling activities and develop a prototype that elaborates on the features and constraints using the
necessary tools and functions.
4. Negotiation
This is the fourth phase of the requirements analysis process. This phase emphasizes discussing what is
needed and what is to be eliminated. In the negotiation phase, the developer and the customer discuss
how to go about the project with limited business resources. Customers are asked to prioritize the
requirements and anticipate the conflicts that may arise. The risks of all the requirements are taken into
consideration and negotiated in a way that leaves both the customer and the developer satisfied with
regard to further implementation. The following are discussed in the negotiation phase:
Availability of Resources.
Delivery Time.
Scope of requirements.
Project Cost.
Estimations on development.
5. Specification
This is the fifth phase of the requirements analysis process. This phase specifies the following:
Written document.
A set of models.
A collection of use cases.
A prototype.
In the specification phase, the requirements engineer gathers all the requirements and develops a
working model. This final working product will be the basis of any functions, features or constraints to
be observed. The models used in this phase include ER (Entity Relationship) diagrams, DFD (Data
Flow Diagram), FDD (Function Decomposition Diagrams), and Data Dictionaries.
A software specification document is submitted to the customer in a language that he/she will
understand, to give a glimpse of the working model.
6. Validation
This is the sixth phase of the requirements analysis process. This phase focuses on checking for errors
and debugging. In the validation phase, the developer scans the specification document and checks for
the following:
All the requirements have been stated and met correctly
Errors have been debugged and corrected.
Work product is built according to the standards.
This requirements validation mechanism is known as a formal technical review. The review team that
works together to validate the requirements includes software engineers, customers, users, and other
stakeholders. Everyone in this team takes part in checking the specification, examining it for errors,
missing information, anything that has to be added, or anything unrealistic or problematic. Some of the
validation techniques are the following:
Requirements reviews/inspections.
Prototyping.
Test-case generation.
Automated consistency analysis.
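As a rough illustration of automated consistency analysis, a small script can scan a requirements list for duplicate IDs and empty descriptions. The requirement format and field names here are assumptions, not part of any standard tool:

```python
# Hypothetical sketch of automated consistency analysis: scan a
# requirements list for duplicate IDs and empty descriptions.

def check_consistency(requirements):
    problems = []
    seen = set()
    for req in requirements:
        if req["id"] in seen:
            problems.append(f"duplicate id: {req['id']}")
        seen.add(req["id"])
        if not req["text"].strip():
            problems.append(f"empty description: {req['id']}")
    return problems

reqs = [
    {"id": "R1", "text": "The system shall log in users."},
    {"id": "R2", "text": ""},
    {"id": "R1", "text": "The system shall log out users."},
]
print(check_consistency(reqs))  # flags R2 (empty) and the second R1 (duplicate)
```

Real requirements-management tools perform richer checks (traceability, conflicting constraints), but the principle is the same: mechanically verify properties the review team would otherwise inspect by hand.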
7. Requirements Management
This is the last phase of the requirements analysis process. Requirements management is a set of
activities where the entire team takes part in identifying, controlling, tracking, and establishing the
requirements for the successful and smooth implementation of the project.
In this phase, the team is responsible for managing any changes that may occur during the project. As
new requirements emerge, the team must take responsibility for managing and prioritizing them:
deciding where each one fits in the project, how the change will affect the overall system, and how to
address and deal with the change. Based on this phase, the working model is analyzed carefully and
made ready for delivery to the customer.
Conclusion
Requirements engineering is crucial for successful software projects. It involves gathering,
refining, and documenting requirements to ensure clarity and feasibility, and it makes sure that the
project goals align with those of the stakeholders. Effective requirements engineering reduces risks and
guides smooth project execution.
8. What is Data Flow Diagram (DFD)?
DFD is the abbreviation for Data Flow Diagram. The flow of data in a system or process is
represented by a Data Flow Diagram (DFD). It also gives insight into the inputs and outputs of each
entity and the process itself. A Data Flow Diagram (DFD) does not have a control flow, and no loops or
decision rules are present. Specific operations, depending on the type of data, can be explained by a
flowchart. It is a graphical tool, useful for communicating with users, managers, and other personnel. It
is useful for analyzing existing as well as proposed systems.
It should be pointed out that a DFD is not a flowchart. In drawing the DFD, the designer has to specify
the major transforms in the path of the data flowing from the input to the output. DFDs can be
hierarchically organized, which helps in progressively partitioning and analyzing large systems.
It provides an overview of:
What data the system processes.
What transformations are performed.
What data are stored.
What results are produced, etc.
Data Flow Diagram can be represented in several ways. The Data Flow Diagram (DFD) belongs to
structured-analysis modeling tools. Data Flow diagrams are very popular because they help us to
visualize the major steps and data involved in software-system processes.
Characteristics of Data Flow Diagram (DFD)
Below are some characteristics of Data Flow Diagram (DFD):
Graphical Representation: A Data Flow Diagram (DFD) uses different symbols and notations to
represent data flow within a system, which simplifies a complex model.
Problem Analysis: Data Flow Diagrams (DFDs) are very useful for understanding a system and can
be effectively used during analysis. DFDs are quite general and are not
limited to problem analysis for software requirements specification.
Abstraction: A Data Flow Diagram (DFD) provides an abstraction of a complex model, i.e., a DFD hides
unnecessary implementation details and shows only the flow of data and processes within the
information system.
Hierarchy: A Data Flow Diagram (DFD) provides a hierarchy of a system. A high-level diagram, i.e. the
0-level diagram, provides an overview of the entire system, while lower-level diagrams like the 1-level DFD
and beyond provide a detailed data flow of each individual process.
Data Flow: The primary objective of a Data Flow Diagram (DFD) is to visualize the data flow
between external entities, processes, and data stores. Data flow is represented by an arrow symbol.
Ease of Understanding: A Data Flow Diagram (DFD) can be easily understood by both technical and
non-technical stakeholders.
Modularity: Modularity can be achieved using a Data Flow Diagram (DFD) as it breaks a complex
system into smaller modules or processes. This makes the analysis and design of a system easier.
Types of Data Flow Diagram (DFD)
There are two types of Data Flow Diagram (DFD)
1. Logical Data Flow Diagram
2. Physical Data Flow Diagram
Logical Data Flow Diagram (DFD)
A logical data flow diagram mainly focuses on the system's processes. It illustrates how data flows in the
system, concentrating on high-level processes and data flow
without diving deep into technical implementation details. Logical DFDs are used in various
organizations for the smooth running of a system. For example, in a banking software system, a logical
DFD is used to describe how data moves from one entity to another.
Logical Data Flow Diagram of Online Grocery Store
1-Level DFD
This level provides a more detailed view of the system by breaking down the major processes identified
in the level 0 DFD into sub-processes. Each sub-process is depicted as a separate process on the level 1
DFD. The data flows and data stores associated with each sub-process are also shown. In 1-level DFD,
the context diagram is decomposed into multiple bubbles/processes. In this level, we highlight the main
functions of the system and breakdown the high-level process of 0-level DFD into subprocesses.
Level 1 DFD of Railway Reservation System
2-level DFD
This level provides an even more detailed view of the system by breaking down the sub-processes
identified in the level 1 DFD into further sub-processes. Each sub-process is depicted as a separate
process on the level 2 DFD. The data flows and data stores associated with each sub-process are also
shown.
Rules for Data Flow Diagram (DFD)
Following are the rules of DFD:
Data can flow from:
o Terminator or External Entity to Process
o Process to Terminator or External Entity
o Process to Data Store
o Data Store to Process
o Process to Process
Data cannot flow from:
o Terminator or External Entity to Terminator or External Entity
o Terminator or External Entity to Data Store
o Data Store to Terminator or External Entity
o Data Store to Data Store
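The flow rules above can be encoded directly and checked mechanically. This is a minimal sketch, not a standard tool; the component-kind names are our own:

```python
# Sketch: encode the DFD flow rules listed above as a set of allowed
# (source, target) component-kind pairs, then check a proposed flow.

ALLOWED = {
    ("entity", "process"),   # external entity -> process
    ("process", "entity"),   # process -> external entity
    ("process", "store"),    # process -> data store
    ("store", "process"),    # data store -> process
    ("process", "process"),  # process -> process
}

def flow_allowed(source_kind, target_kind):
    return (source_kind, target_kind) in ALLOWED

print(flow_allowed("entity", "process"))  # True  - allowed
print(flow_allowed("store", "store"))     # False - store to store is forbidden
print(flow_allowed("entity", "store"))    # False - entity to store is forbidden
```

A diagramming or CASE tool applies essentially this check every time the user draws an arrow, which is how malformed DFDs are caught early.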
Advantages of Data Flow Diagram (DFD)
It helps us to understand the functioning and the limits of a system.
It is a graphical representation which is very easy to understand, as it helps visualize the contents.
A Data Flow Diagram represents a detailed and well-explained diagram of the system components.
It is used as part of the system documentation.
Data Flow Diagrams can be understood by both technical and nontechnical people because they are
very easy to understand.
Disadvantages of Data Flow Diagram (DFD)
At times a Data Flow Diagram (DFD) can confuse programmers regarding the system.
A Data Flow Diagram takes a long time to generate, and many times for this reason analysts
are denied permission to work on it.
How to Draw Data Flow Diagram?
Following are the steps to Draw Data Flow Diagram
Understand the System
Identify External Entities
Identify Processes
Identify Data Stores
Use Standard Symbols
Create Level 0 Diagram
Based on the complexity, draw further level diagrams like Level 1, Level 2, and so on
Identify Data Flows
Number Processes and Data Stores
Review and Validate
Conclusion
Data Flow Diagrams (DFDs) are visual maps that provide a clear understanding of how information
moves within an information system. A Data Flow Diagram consists of four components: processes,
which represent the system's functionality; external entities, which represent the end users; data stores,
which represent a database or data warehouse; and data flows, which represent how data flows among
these three components. DFDs help everyone, from computer experts to regular users, as they provide a
clear understanding of how a system works and how its different parts interact. By using DFDs, people
can work together effectively to analyze, design, and communicate about systems.
The Entity Relationship Model is a model for identifying entities (like student, car or company) to be
represented in the database and representation of how those entities are related. The ER data model
specifies enterprise schema that represents the overall logical structure of a database graphically.
Components of ER Diagram
ER Model consists of Entities, Attributes, and Relationships among Entities in a Database System.
Components of ER Diagram
What is Entity?
An Entity may be an object with a physical existence – a particular person, car, house, or employee – or
it may be an object with a conceptual existence – a company, a job, or a university course.
What is Entity Set?
An Entity is an object of an Entity Type, and the set of all entities is called an entity set. For example, E1 is
an entity having Entity Type Student, and the set of all students is called the Entity Set. In an ER diagram,
an Entity Type is represented as:
Entity Set
We can represent an entity set in an ER Diagram, but we cannot represent an individual entity, because an
entity is a row in the relation, while an ER Diagram is a graphical representation of the data model.
Types of Entity
There are two types of entity:
1. Strong Entity
A Strong Entity is a type of entity that has a key Attribute. Strong Entity does not depend on other
Entity in the Schema. It has a primary key, that helps in identifying it uniquely, and it is represented by
a rectangle. These are called Strong Entity Types.
2. Weak Entity
An entity type normally has a key attribute that uniquely identifies each entity in the entity set. But some
entity types exist for which a key attribute cannot be defined. These are called Weak Entity Types.
For Example, A company may store the information of dependents (Parents, Children, Spouse) of an
Employee. But the dependents can’t exist without the employee. So Dependent will be a Weak Entity
Type and Employee will be Identifying Entity type for Dependent, which means it is Strong Entity
Type .
A weak entity type is represented by a Double Rectangle. The participation of weak entity types is
always total. The relationship between the weak entity type and its identifying strong entity type is
called identifying relationship and it is represented by a double diamond.
Strong Entity and Weak Entity
What are Attributes?
Attributes are the properties that define the entity type. For example, Roll_No, Name, DOB, Age,
Address, and Mobile_No are the attributes that define entity type Student. In ER diagram, the attribute
is represented by an oval.
Attribute
Types of Attributes
1. Key Attribute
The attribute which uniquely identifies each entity in the entity set is called the key attribute. For
example, Roll_No will be unique for each student. In an ER diagram, the key attribute is represented by an
oval with the attribute name underlined.
Key Attribute
2. Composite Attribute
An attribute composed of several other attributes is called a composite attribute. For example, the
Address attribute of the Student entity type consists of Street, City, State, and Country. In an ER diagram,
the composite attribute is represented by an oval comprising other ovals.
Composite Attribute
3. Multivalued Attribute
An attribute consisting of more than one value for a given entity. For example, Phone_No (can be more
than one for a given student). In ER diagram, a multivalued attribute is represented by a double oval.
Multivalued Attribute
4. Derived Attribute
An attribute that can be derived from other attributes of the entity type is known as a derived attribute.
e.g.; Age (can be derived from DOB). In ER diagram, the derived attribute is represented by a dashed
oval.
Derived Attribute
The Complete Entity Type Student with its Attributes can be represented as:
Entity and Attributes
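The Student entity type and its attribute kinds can be mirrored in code as a rough sketch. The field names follow the example above; the fixed reference date is an assumption made to keep the derived age deterministic:

```python
# Illustrative sketch: the Student entity type with a key attribute
# (roll_no), a composite attribute (address), a multivalued attribute
# (phone_nos) and a derived attribute (age, computed from dob).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Address:               # composite attribute: street, city, state
    street: str
    city: str
    state: str

@dataclass
class Student:
    roll_no: int             # key attribute: unique per student
    name: str
    dob: date
    address: Address
    phone_nos: list = field(default_factory=list)  # multivalued attribute

    @property
    def age(self):           # derived attribute: computed from dob
        today = date(2024, 1, 1)  # fixed date so the example is deterministic
        return today.year - self.dob.year  # simplified: ignores month/day

s = Student(1, "Asha", date(2004, 5, 10),
            Address("MG Road", "Pune", "MH"), ["9999999999"])
print(s.age)  # 20
```

Note that the derived attribute is not stored; it is recomputed from dob on every access, exactly as a derived attribute would be handled in a database.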
Entity-Relationship Set
A set of relationships of the same type is known as a relationship set. The following relationship set
depicts S1 as enrolled in C2, S2 as enrolled in C1, and S3 as enrolled in C3.
Relationship Set
1. Unary Relationship: When only ONE entity set participates in a relationship, the relationship is
called a unary relationship. For example, an employee supervises another employee.
Unary Relationship
2. Binary Relationship: When there are TWO entities set participating in a relationship, the
relationship is called a binary relationship. For example, a Student is enrolled in a Course.
Binary Relationship
3. Ternary Relationship: When there are three entity sets participating in a relationship, the
relationship is called a ternary relationship.
4. N-ary Relationship: When there are n entities set participating in a relationship, the relationship is
called an n-ary relationship.
What is Cardinality?
The number of times an entity of an entity set participates in a relationship set is known as cardinality .
Cardinality can be of different types:
1. One-to-One: When each entity in each entity set can take part only once in the relationship, the
cardinality is one-to-one. Let us assume that a male can marry one female and a female can marry one
male. So the relationship will be one-to-one.
The total number of tables that can be used in this case is 2.
2. One-to-Many: When an entity in one entity set can be related to more than one entity in the other
entity set, while each entity in the other set takes part only once, the cardinality is one-to-many. Let us
assume that one surgeon department can accommodate many doctors. So the cardinality will be 1 to M:
one department has many doctors.
The total number of tables that can be used in this case is 3.
one to many cardinality
3. Many-to-One: When entities in one entity set can take part only once in the relationship set and
entities in other entity sets can take part more than once in the relationship set, cardinality is many to
one. Let us assume that a student can take only one course but one course can be taken by many
students. So the cardinality will be n to 1. It means that for one course there can be n students but for
one student, there will be only one course.
The total number of tables that can be used in this is 3.
many to one cardinality
In this case, each student is taking only 1 course but 1 course has been taken by many students.
4. Many-to-Many: When entities in all entity sets can take part more than once in the relationship, the
cardinality is many-to-many. Let us assume that a student can take more than one course and one course
can be taken by many students. So the relationship will be many-to-many.
The total number of tables that can be used in this case is 3.
many to many cardinality
In this example, student S1 is enrolled in C1 and C3, and course C3 is taken by S1, S3, and S4. So it
is a many-to-many relationship.
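The three-table mapping for this many-to-many case can be sketched with simple in-memory tables; the data matches the S1/S3/S4 example above:

```python
# Sketch: a many-to-many Student-Course relationship mapped to three
# tables - one per entity set plus a junction table for the
# 'Enrolled in' relationship set.
students = {"S1": "Asha", "S3": "Ravi", "S4": "Meera"}
courses  = {"C1": "DBMS", "C3": "Networks"}

# junction table: one row per (student, course) pair
enrolled_in = [("S1", "C1"), ("S1", "C3"), ("S3", "C3"), ("S4", "C3")]

# which students are enrolled in C3?
print([s for s, c in enrolled_in if c == "C3"])  # ['S1', 'S3', 'S4']

# which courses has S1 taken?
print([c for s, c in enrolled_in if s == "S1"])  # ['C1', 'C3']
```

In a relational database these would be three actual tables, with the junction table holding foreign keys to both entity tables; that is why a many-to-many relationship needs 3 tables.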
Participation Constraint
Participation Constraint is applied to the entity participating in the relationship set.
1. Total Participation – Each entity in the entity set must participate in the relationship. If each
student must enroll in a course, the participation of students will be total. Total participation is shown
by a double line in the ER diagram.
2. Partial Participation – The entity in the entity set may or may NOT participate in the relationship.
If some courses are not enrolled by any of the students, the participation in the course will be partial.
The diagram depicts the ‘Enrolled in’ relationship set with Student Entity set having total participation
and Course Entity set having partial participation.
Every student in the Student Entity set participates in a relationship but there exists a course C4 that is
not taking part in the relationship.
How to Draw ER Diagram?
The very first step is identifying all the entities, placing each in a rectangle, and labeling them
accordingly.
The next step is to identify the relationships between them and place them accordingly using
diamonds, making sure that relationships are not connected to each other.
Attach attributes to the entities properly.
Remove redundant entities and relationships.
Add proper colors to highlight the data present in the database.
Conclusion
An Entity-Relationship (ER) model is a way to visually represent the structure of a database. It shows
how different entities (like objects or concepts) are connected and interact with each other through
relationships. The model uses diagrams to represent entities as rectangles and relationships as
diamonds, making it easier to design and understand databases.
1. Quality Planning
Quality planning involves defining specific quality standards for projects and determining the necessary
processes to achieve these standards. This stage sets the foundation for what the quality goals are and how
they will be measured. It includes establishing quality policies, objectives, and criteria for accepting
software products.
4. Quality Management
This encompasses all activities related to maintaining and enhancing the quality of the software. It
includes:
Leadership Engagement: Ensuring that the organization’s leadership understands and supports quality
initiatives.
Resource Management: Allocating and managing resources effectively to maintain quality standards.
Risk Management: Identifying, analyzing, and mitigating risks that could impact software quality.
Metrics and measurements are crucial for assessing the effectiveness of SQA activities. Common metrics
include:
Defect Density: The number of defects confirmed in software divided by the size of the software.
Code Coverage: A measure of how much code is executed during testing, which helps in understanding
the extent of testing.
Customer Satisfaction: Feedback from users about the software’s performance and features.
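A minimal sketch of the first two metrics, using made-up numbers:

```python
# Hedged sketch of the defect density and code coverage metrics above;
# the figures are illustrative, not from a real project.

def defect_density(defects, kloc):
    """Confirmed defects per thousand lines of code (KLOC)."""
    return defects / kloc

def code_coverage(executed_lines, total_lines):
    """Fraction of the code exercised by the test suite."""
    return executed_lines / total_lines

print(defect_density(30, 12.0))   # 2.5 defects per KLOC
print(code_coverage(850, 1000))   # 0.85 -> 85% coverage
```

Tracked over releases, a falling defect density and a rising coverage figure are simple, quantifiable signals that SQA activities are working.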
Preventive Actions: Steps taken to eliminate the causes of potential nonconformities or defects.
Corrective Actions: Actions taken to eliminate the causes of detected nonconformities or defects.
7. Continuous Improvement
The principle of continuous improvement, often referred to by its Japanese name, "Kaizen," is integral to
SQA. This involves ongoing efforts to improve all processes, based on feedback and iterative learning.
Techniques like retrospectives, post-mortem analyses, and process refinement sessions are used to
analyze successes and failures and to implement lessons learned.
Software Quality Assurance (SQA) involves various processes and techniques that help ensure the quality
of software throughout the development lifecycle. These methodologies are designed to prevent defects,
ensure functionality meets specified requirements, and maintain a high standard of software performance.
Here’s an in-depth look at some key SQA processes and techniques:
1. Code Reviews
Code reviews are a critical SQA activity where other developers (peers) review the source code written
by a developer before it merges into the main branch. This practice aims to catch errors early in the
development phase, promote a higher code quality standard, and share knowledge across the team.
2. Automated Testing
Automated testing uses software tools to run tests on the software automatically, checking for errors,
defects, and functional mismatches. This can include:
Integration Testing: Testing combined parts of an application to determine if they function together
correctly.
System Testing: Testing the complete and integrated software product to evaluate the system’s
compliance with its specified requirements.
Automated testing is valuable because it can be executed quickly and repeatedly, which is crucial for
continuous integration and delivery pipelines.
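A minimal automated test might look like the following sketch, assuming a hypothetical apply_discount function; in a CI/CD pipeline such tests run automatically on every change:

```python
# Sketch of an automated test using Python's unittest framework.
# apply_discount is a hypothetical function invented for this example.
import unittest

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200, 150)

if __name__ == "__main__":
    # exit=False so the script continues after the test run
    unittest.main(argv=["discount-tests"], exit=False)
```

Because the suite is code, it can be re-run in seconds after every commit, which is what makes regression checking in continuous integration practical.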
CI/CD is a method to frequently deliver apps to customers by introducing automation into the stages of
app development. The main concepts attributed to CI/CD are continuous integration, continuous delivery,
and continuous deployment. CI/CD is intended to automate building, testing, and releasing software so
that changes reach users quickly and safely.
Static Analysis: This technique involves analyzing the code without executing it. It is used to detect
coding errors and security lapses, and to check compliance with coding guidelines.
Dynamic Analysis: Unlike static analysis, dynamic analysis involves executing code. It provides insights
into the system's behavior and verifies that the system performs expected tasks under varying conditions.
5. Risk-Based Testing
Risk-based testing prioritizes testing of features and functions in the software application based on the
risk of failure, the importance and likelihood of failure, and the impact of failure. This approach helps to
optimize test efforts towards the most critical areas of the system.
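Risk-based prioritization is often done by scoring each feature as likelihood times impact and testing the highest scores first. A small sketch, with illustrative feature names and scores:

```python
# Sketch: rank features for testing by risk score = likelihood x impact
# (both on a 1-5 scale); the names and numbers are made up.
features = [
    {"name": "payment",  "likelihood": 4, "impact": 5},
    {"name": "reports",  "likelihood": 2, "impact": 2},
    {"name": "login",    "likelihood": 3, "impact": 4},
]

for f in features:
    f["risk"] = f["likelihood"] * f["impact"]

ordered = sorted(features, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in ordered])  # ['payment', 'login', 'reports']
```

The test effort is then allocated top-down through this list, so the riskiest areas get the deepest testing within the available budget.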
TDD is a software development approach in which tests are written before the code that needs to be
tested. The process follows a simple cycle: write a failing test, write just enough code to make it pass,
and then refactor while keeping the test passing.
This technique ensures that the software is tested at the function level and that all functionalities are
covered by the tests, which improves code quality.
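The cycle can be illustrated with a tiny sketch: the test is written first, then just enough code to pass it. The slugify function here is hypothetical:

```python
# Sketch of the TDD cycle: the test exists first and drives the code.

# Step 1 (red): write a test for a function that doesn't exist yet;
# running it now would fail with a NameError.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up the implementation while keeping the
# test green, re-running it after every change.
test_slugify()
print("test passed")
```

The discipline is that no production code is written without a failing test demanding it, which is why TDD yields near-complete test coverage as a by-product.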
7. Performance Testing
This type of testing is performed to determine how a system performs in terms of responsiveness and
stability under a particular workload. It can involve load testing, stress testing, and spike testing, among
others, to ensure the software application behaves as expected under varied conditions.
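As a toy sketch of the idea, one can time repeated calls to an operation and report the average response time; real performance tools exercise a deployed system under realistic load:

```python
# Toy load-test sketch: time many repeated calls to an operation and
# report the average response time. Dedicated tools apply this idea to
# whole systems with concurrent virtual users.
import time

def operation():
    # stand-in for a request handler or service call
    return sum(i * i for i in range(1000))

n = 1000
start = time.perf_counter()
for _ in range(n):
    operation()
elapsed = time.perf_counter() - start

print(f"average time per call: {elapsed / n * 1000:.3f} ms")
```

Load testing raises n (and concurrency) to the expected workload, stress testing pushes beyond it, and spike testing jumps the load suddenly; all three compare measurements like this against the stated responsiveness requirements.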
Software Quality Frameworks are structured methodologies that guide the development process towards
achieving high-quality software. Here’s a breakdown of three prominent frameworks:
1. ISO 9000 Models:
Focus: These are generic quality management standards developed by the International Organization for
Standardization (ISO). They are not specific to software development but can be adapted for this purpose.
Core Principles: The ISO 9000 family emphasizes a process-oriented approach to quality management.
It focuses on establishing documented processes, continuous improvement, and customer satisfaction.
Benefits: Implementing ISO 9000 standards can lead to improved software quality, consistency, and
efficiency in development processes. It can also demonstrate a commitment to quality to stakeholders.
Limitations: ISO 9000 is not specifically designed for software development, and achieving certification
can be resource-intensive.
Verification and Validation is the process of investigating whether a software system satisfies
specifications and standards and fulfills the required purpose. Verification and Validation both play an
important role in developing good software. Verification helps in examining whether the
product is built right according to the requirements, while validation helps in examining whether the right
product is built to meet user needs. In this section, we will learn the difference between Verification and
Validation.
What is Verification?
Verification is the process of checking that the software achieves its goals without any bugs. It ensures
that the product being developed is built correctly and fulfills the requirements that we have.
Verification is static testing. Verification asks: "Are we building the product right?"
What is Validation?
Validation is the process of checking whether the software product is up to the mark, in other words,
whether it meets the high-level requirements. It checks whether what we are developing is the right
product, comparing the actual product against the expected one. Validation is dynamic testing.
Validation asks: "Are we building the right product?"
Differences between Verification and Validation
Bug: Verification can find bugs at an early stage of development, whereas validation can only find the
bugs that could not be found by the verification process.
Another Terminology: Verification is also termed white box testing or static testing, as the work
product goes through reviews, whereas validation can be termed black box testing or dynamic testing,
as the work product is executed.
Performance: Verification finds about 50 to 60% of the defects, whereas validation finds about 20 to
30% of the defects.
A Software Quality Framework is a model for software quality that ensures quality by connecting and
integrating the different views of software quality. This section focuses on discussing the Software
Quality Framework.
What is a Software Quality Framework?
Software Quality Framework connects the customer view with the developer’s view of software quality
and it treats software as a product.
1. The software product view describes the characteristics of a product that bear on its ability to
satisfy stated and implied needs.
2. This is a framework that describes all the different concepts relating to quality in a common way
measured by a qualitative scale that can be understood and interpreted commonly.
Therefore, the most influential factor for the developers is the customer perception. This framework
connects the developer with the customer to derive a common interpretation of quality.
Developers View
Validation and verification are two independent methods used together to check that a software product
meets the requirements and fulfills its intended purpose. Validation checks that the product
design satisfies the intended usage, and verification checks for errors in the software.
1. The primary concern for developers is in the design and engineering processes involved in
producing software.
2. Quality can be measured by the degree of conformance to predetermined requirements and
standards, and deviations from these standards can lead to poor quality and low reliability.
3. While validation and verification are used by the developers to improve the software, the two
methods don’t represent a quantifiable quality measurement.
4. The developer’s view of software quality and the customer’s view of software quality are both
different things.
For example, the customer understands or describes the quality of operation as meeting the requirements,
while the developers use different factors to describe software quality. The developer view of
quality in the software is influenced by many factors. This model stresses 3 primary ones:
The code: It is measured by its correctness and reliability.
The data: It is measured by the application's integrity.
Maintainability: It has different measures; the simplest is the mean time to change.
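The simplest maintainability measure named above, mean time to change, is just an average over recorded change requests. The hours below are made-up figures:

```python
# Sketch of the mean-time-to-change maintainability measure: average
# the effort spent on each recorded change request (illustrative data).
change_hours = [4.0, 6.5, 3.5, 10.0]  # hours per completed change

mean_time_to_change = sum(change_hours) / len(change_hours)
print(mean_time_to_change)  # 6.0
```

A rising mean time to change across releases is an early warning that the codebase is becoming harder to maintain.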
Users View
When users acquire software, they always expect high-quality software. When end users
develop their own software, quality is judged differently. End-user programming is
programming to achieve the result of a program primarily for personal, rather than public, use.
1. The important distinction here is that software itself is not primarily intended for use by many users
with varying needs.
2. For example, a teacher may write a spreadsheet to track student’s test scores.
3. In these end-user programming situations, the program is a means to an end that could be used to
accomplish a goal.
4. In contradiction to end-user programming, professional programming has the goal of producing
software for others to use.
5. For example, the moment a novice Web developer moves from designing a web page for himself to
designing a Web page for others, the nature of this activity has changed.
6. Users find software quality in the fit between their goals and the software's functionality.
7. The better this fit, the more likely the user will be satisfied with the software.
8. When the quality is bad, developers must meet user needs or face diminishing demand for their
software.
Therefore, the user understands quality as fitness for purpose. Avoiding complexity and keeping
software simple, considerably lessens the implementation risk of software. In some instances, users
abandoned the implementation of a complex software because the software developers were expecting
the users to change their business and to go with the way the software works.
Product View
The product view describes quality as correlated to the inherent characteristics of the product. Product
quality is defined as the set of characteristics and features of a product that contribute to its
ability to fulfill given requirements.
1. Product quality can be measured by the value-based view, which sees quality as dependent on
the amount a customer is willing to pay for the product.
2. According to users, a high-quality product is one that satisfies their expectations and
preferences while meeting their requirements.
3. End-user satisfaction is reflected in how easy the product is to learn, use, and upgrade, and in the
positive ratings users give when asked to rate the product.